CN114812573A - Monocular visual feature fusion-based vehicle positioning method and readable storage medium - Google Patents

Monocular visual feature fusion-based vehicle positioning method and readable storage medium

Info

Publication number
CN114812573A
CN114812573A
Authority
CN
China
Prior art keywords
visual
image
displacement
data
visual feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210428550.8A
Other languages
Chinese (zh)
Inventor
彭祥军
康轶非
罗毅
闫耀威
王宽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd filed Critical Chongqing Changan Automobile Co Ltd
Priority to CN202210428550.8A
Publication of CN114812573A
Legal status: Pending


Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 — Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 — Navigation specially adapted for navigation in a road network
    • G01C21/28 — Navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 — Map- or contour-matching
    • G01C21/38 — Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 — Creation or updating of map data
    • G01C21/3833 — Creation or updating of map data characterised by the source of data
    • G01C21/3841 — Data obtained from two or more sources, e.g. probe vehicles

Abstract

The invention relates to the technical field of intelligent driving vehicles, in particular to a vehicle positioning method based on monocular visual feature fusion and a readable storage medium. The method comprises the following steps: recovering the scale of either the front-view or rear-view visual image from the odometer data, and mapping to generate a corresponding first vehicle running track and first visual feature map; recovering the scale of the other visual image from the first vehicle running track, and creating a corresponding second visual feature map; fusing the first visual feature map and the second visual feature map to generate a fused visual feature map; and matching feature points between the front-view and/or rear-view visual images and the fused visual feature map to realize relocation. The invention also discloses a readable storage medium. The invention achieves scale recovery for both the front and rear views and fusion of the vehicle motion trajectories, thereby improving the accuracy of monocular visual mapping and positioning.

Description

Vehicle positioning method based on monocular visual feature fusion and readable storage medium
Technical Field
The invention relates to the technical field of intelligent driving vehicles, in particular to a vehicle positioning method based on monocular visual feature fusion and a readable storage medium.
Background
Monocular vision Simultaneous Localization and Mapping (SLAM) refers to creating a map consistent with the real environment using a single vision sensor (such as a camera) while simultaneously determining the sensor's own position within that map. However, the image data measured by a single vision sensor can only provide the relative distance relationships between objects and cannot provide their true three-dimensional spatial information. Monocular vision SLAM therefore suffers from scale uncertainty (Scale Ambiguity).
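To make the scale ambiguity concrete (the notation below is an illustrative addition, not taken from the patent): with a pinhole model, an observation of landmark $X$ from pose $(R, t)$ is $u \simeq K(RX + t)$, where $\simeq$ denotes equality up to projective scale. Scaling every landmark and translation by the same factor $s > 0$ gives

$$K\big(R(sX) + st\big) = s\,K(RX + t) \simeq K(RX + t),$$

so all image observations are unchanged and a single camera cannot determine $s$; the odometer data used below supply this missing metric scale.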
Due to the scale uncertainty of monocular vision SLAM, positioning accuracy is low when a vehicle is positioned using a SLAM map constructed by monocular vision SLAM. Chinese patent publication No. CN109887032B therefore discloses a "vehicle positioning method and system based on monocular vision feature fusion". In that method, during mapping, the scale of the monocular vision SLAM map is determined from the actual moving distance of the vehicle in the SLAM initialization process, and the scale and other map information are continuously optimized from the actual moving distance; during map-based positioning, the target image is matched against the feature points in the SLAM map to determine the relocation pose of the vehicle in the SLAM map, the conversion relationship between the vehicle body pose measured by the vehicle positioning module and the visual pose of the monocular camera is obtained, and this conversion relationship is continuously optimized using multiple vehicle body poses and visual poses.
The vehicle positioning method in this existing scheme can provide true scale information for the monocular vision SLAM map and thereby improve vehicle positioning accuracy. However, the applicant finds that the above prior solution performs vehicle positioning with a single monocular vision sensor, whereas improving the robustness of vehicle positioning requires at least two monocular vision sensors, one at the front and one at the rear. If the vehicle positioning method of the prior art is simply applied to front and rear monocular vision sensors, mapping yields two feature maps and two vehicle motion trajectories (constructed by the front and rear vision sensors respectively); the scales recovered from the front and rear views, and the two trajectories, generally do not coincide, yet the downstream module needs only one vehicle motion trajectory. How to design a method that achieves scale recovery for both the front and rear views and fusion of the vehicle motion trajectories, so as to improve mapping and positioning accuracy, is therefore an urgent technical problem.
Disclosure of Invention
Aiming at the defects of the prior art, the technical problem to be solved by the invention is: how to provide a vehicle positioning method based on monocular visual feature fusion that achieves scale recovery for the front and rear views and fusion of the vehicle motion trajectories, thereby improving the accuracy of monocular visual mapping and positioning.
In order to solve the technical problems, the invention adopts the following technical scheme:
The vehicle positioning method based on monocular visual feature fusion comprises the following steps:
S1: acquiring odometer data of a target vehicle and the front-view and rear-view visual images;
S2: recovering the scale of either the front-view or rear-view visual image from the odometer data, and mapping to generate a corresponding first vehicle running track and first visual feature map;
S3: recovering the scale of the other of the front-view and rear-view visual images from the first vehicle running track, and creating a corresponding second visual feature map;
S4: fusing the first visual feature map and the second visual feature map to generate a fused visual feature map;
S5: matching feature points between the front-view and/or rear-view visual images and the fused visual feature map to realize relocation of the target vehicle in the fused visual feature map.
Preferably, step S1 specifically includes the following steps:
S101: obtaining displacement data and angle data of the target vehicle through a wheel speed sensor and an inertial sensor, and obtaining front-view and rear-view vision sensing images through the front-view and rear-view vision sensors;
S102: eliminating abnormal data from the displacement data, the angle data and the front-view and rear-view vision sensing images;
S103: time-synchronizing the displacement data, the angle data and the front-view and rear-view vision sensing images to generate corresponding image timestamps;
S104: performing dead reckoning on the displacement data and the angle data to generate corresponding odometer data;
S105: de-distorting the front-view and rear-view vision sensing images to generate the front-view and rear-view visual images.
Preferably, step S2 specifically includes the following steps:
S201: performing pure-vision initialization on the visual image;
S202: obtaining the pure visual displacement between corresponding image frames through the epipolar constraint;
S203: performing displacement interpolation on the odometer data based on the image timestamps to obtain the real displacement between corresponding image frames;
S204: calculating the mapping relation between the pure visual displacement and the real displacement so as to recover the scale of the visual image.
Preferably, in step S2, the mapping process includes the following steps:
S211: mapping based on the scale-recovered visual image to obtain the corresponding three-dimensional feature points and visual poses;
S212: adding the odometer data as a constraint to the optimization of the visual poses;
S213: performing loop detection and global optimization on the visual poses to generate the corresponding first vehicle running track and first visual feature map.
Preferably, in step S212, the pose change between corresponding image frames is obtained by a visual odometer, and the displacement between corresponding image frames is constrained by the three-dimensional feature points, the image-frame reprojection, and the interpolated displacement of the odometer data.
Preferably, step S3 specifically includes the following steps:
S301: performing pure-vision initialization on the visual image;
S302: obtaining the pure visual displacement between corresponding image frames through the epipolar constraint;
S303: searching the corresponding first vehicle running track based on the image timestamps and performing displacement interpolation to obtain the real displacement between corresponding image frames;
S304: calculating the mapping relation between the pure visual displacement and the real displacement so as to recover the scale of the visual image.
Preferably, in step S3, the mapping process includes the following steps:
S311: mapping based on the scale-recovered visual image to obtain the corresponding three-dimensional feature points and visual poses;
S312: adding the first vehicle running track as a constraint to the optimization of the visual poses;
S313: performing loop detection and global optimization on the visual poses to generate the corresponding second visual feature map.
Preferably, in step S5, the rear-view visual image is preferentially used for positioning, and the front-view visual image is used for positioning when the rear view fails.
Preferably, in step S5, the corresponding odometer data is incorporated to implement relocation and position updating of the target vehicle in the fused visual feature map.
The invention also discloses a readable storage medium, on which a computer management program is stored, wherein the computer management program realizes the steps of the vehicle positioning method based on monocular visual feature fusion when being executed by a processor.
Compared with the prior art, the vehicle positioning method has the following beneficial effects:
The invention can relocate the vehicle in the map through both the front-view and rear-view visual images, thereby improving the robustness of vehicle positioning.
In the invention, the scale of one visual image (e.g. the rear view) is recovered from the odometer data, and the scale of the other (e.g. the front view) is then recovered from the resulting vehicle running track, so the scales of the visual feature maps built from the front and rear views stay consistent; fusing the two feature maps then resolves the non-coincidence of the front and rear vehicle motion trajectories. Scale recovery for both views and fusion of the vehicle motion trajectories are thus achieved, improving the accuracy of monocular visual mapping and positioning.
The invention preferentially uses the rear view for monocular visual positioning, which, combined with the vehicle running track, greatly reduces the computational demand on the processor and makes the positioning result more robust. In addition, the invention can switch to the front-view camera for visual positioning when the rear view fails (for example, when the rear-view camera is occluded or damaged).
Drawings
For purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made in detail to the present invention as illustrated in the accompanying drawings, in which:
FIG. 1 is a logic block diagram of a monocular visual feature fusion based vehicle positioning method.
Detailed Description
The following is further detailed by the specific embodiments:
the first embodiment is as follows:
the embodiment of the invention discloses a vehicle positioning method based on monocular visual feature fusion.
As shown in fig. 1, the vehicle positioning method based on monocular visual feature fusion includes the following steps:
S1: acquiring odometer data of a target vehicle and the front-view and rear-view visual images;
S2: recovering the scale of the rear-view (or front-view) visual image from the odometer data and mapping to generate the corresponding rear-view (front-view) vehicle running track and rear-view (front-view) visual feature map;
S3: recovering the scale of the front-view (or rear-view) visual image from the rear-view (front-view) vehicle running track and mapping to generate the corresponding front-view (rear-view) visual feature map;
S4: fusing the rear-view visual feature map and the front-view visual feature map to generate a fused visual feature map;
S5: matching feature points between the visual images and the fused visual feature map to realize relocation of the target vehicle in the fused visual feature map. In this embodiment, the rear-view visual image is preferentially used for positioning, and the front-view visual image is used for positioning when the rear view fails.
It should be noted that the vehicle positioning method based on monocular visual feature fusion of the present invention can be implemented as software code or a software service and run on a server or computer.
The invention can relocate the vehicle in the map through both the front-view and rear-view visual images, thereby improving the robustness of vehicle positioning. Secondly, the scale of the rear-view visual image is recovered from the odometer data, and the scale of the front-view visual image is then recovered from the rear-view vehicle running track, so the scales of the visual feature maps built from the front and rear views stay consistent; fusing the rear-view and front-view visual feature maps then resolves the non-coincidence of the front and rear vehicle motion trajectories. Scale recovery for both views and fusion of the vehicle motion trajectories are thus achieved, improving the accuracy of monocular visual mapping and positioning. Finally, the invention needs neither expensive on-board sensors such as lidar or binocular cameras, nor special modification on the factory side.
The invention preferentially uses the rear view for monocular visual positioning, which, combined with the vehicle running track, greatly reduces the computational demand on the processor and makes the positioning result more robust. In addition, the invention can switch to the front-view camera for visual positioning when the rear view fails (for example, when the rear-view camera is occluded or damaged).
In a specific implementation process, step S1 includes the following steps:
S101: obtaining displacement data and angle data of the target vehicle through a wheel speed sensor and an inertial measurement unit (IMU), and obtaining front-view and rear-view vision sensing images through the front-view and rear-view vision sensors;
S102: eliminating abnormal data from the displacement data, the angle data and the front-view and rear-view vision sensing images. In this embodiment, abnormal data caused by unstable sensor operation, communication network delays and congestion, and the like are eliminated.
S103: carrying out time synchronization on the displacement data, the angle data and the front-view and rear-view visual sensing images to generate corresponding image time stamps; in this embodiment, all sensor data need to be time synchronized, and the data range statistics in a period of time is determined according to the time sequence of the two frames before and after, and the data is filtered according to the set threshold, so as to obtain the sensor data with normal time sequence.
S104: carrying out dead reckoning through the displacement data and the angle data to generate corresponding odometer data; in this embodiment, the corresponding dead reckoning model and the specific dead reckoning model are configured to implement dead reckoning. The displacement obtained according to the wheel speed encoder in a short time is relatively accurate, the angle obtained according to IMU measurement value integration is relatively accurate, and then the advantages of the two sensors are integrated to carry out dead reckoning.
S105: and carrying out distortion removal processing on the front eye visual sensing image and the rear eye visual sensing image to generate a front eye visual image and a rear eye visual image. In this embodiment, the visual sensing image obtained by the monocular sensor is generally a fisheye image, the fisheye image is subjected to distortion removal processing in a multi-surface unfolding mode, and after the distortion removal processing, the feature point information, particularly the spatial information, corresponding to the pixels is more accurate than that before the distortion removal processing.
In a specific implementation process, step S2 includes the following steps:
S201: performing pure-vision initialization on the visual image. In this embodiment, pure-vision initialization is achieved by existing means.
S202: obtaining the pure visual displacement between corresponding image frames through the epipolar constraint. In this embodiment, the rotation and translation between two frames are recovered from the epipolar constraint, and the initial frame is set as the reference frame, with its position a three-dimensional zero vector and its attitude a 3×3 identity matrix. The epipolar constraint is an existing mature means whose purpose is to estimate the camera motion from several groups of 2D pixel-point pairs between two images, given the 2D pixel coordinates.
S203: performing displacement interpolation on the mileage counting data based on the image time stamp to obtain the real displacement between corresponding image frames; in this embodiment, the actual moving distance (real displacement) of the vehicle can be obtained by performing displacement interpolation on the odometry data, and then the scale of the visual image is determined by the actual moving distance.
S204: and calculating the mapping relation between the pure visual displacement and the real displacement so as to recover the scale of the visual image.
The scale of the visual image is restored through the mileage count data, and the scale restoration of the visual image can be realized on the premise of not increasing the cost.
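A sketch of S203-S204 under the assumption that the odometer track is available as timestamp and planar-position arrays (odo_ts, odo_xy are illustrative names): the track is interpolated at the two image timestamps, and the scale is the ratio of the real to the pure visual displacement:

```python
import numpy as np

def interpolate_position(odo_ts, odo_xy, t):
    """Linearly interpolate the odometer position at time t."""
    return np.array([np.interp(t, odo_ts, odo_xy[:, 0]),
                     np.interp(t, odo_ts, odo_xy[:, 1])])

def recover_scale(odo_ts, odo_xy, t0, t1, t_visual):
    """Scale factor mapping the unit-scale visual translation to metric units."""
    p0 = interpolate_position(odo_ts, odo_xy, t0)
    p1 = interpolate_position(odo_ts, odo_xy, t1)
    real_disp = np.linalg.norm(p1 - p0)     # true metric displacement
    visual_disp = np.linalg.norm(t_visual)  # unit-scale visual translation
    return real_disp / visual_disp          # scale applied to the visual map
```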
In a specific implementation process, in step S2, the mapping process includes the following steps:
S211: mapping based on the scale-recovered visual image to obtain the corresponding three-dimensional feature points and visual poses. In this embodiment, the map is built with an existing visual odometer.
S212: adding the odometer data as a constraint to the optimization of the visual poses. In this embodiment, the pose change between corresponding image frames is obtained by the visual odometer, and the displacement between corresponding image frames is constrained by the three-dimensional feature points, the image-frame reprojection, and the interpolated displacement of the odometer data.
S213: performing loop detection and global optimization on the visual poses to generate the corresponding first vehicle running track and first visual feature map. In this embodiment, loop detection and global optimization are accomplished by existing mature means; the invention merely adds the odometer-data constraint.
By adding the odometer data as a constraint to the optimization of the visual poses, the invention reduces the accumulated error between image frames, i.e. the drift, and thereby improves the accuracy of monocular visual mapping and positioning.
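A hedged sketch of the constraint structure in S212 (the patent does not name an optimization back-end, so the residual layout below is an assumption): the residual vector stacks the usual reprojection terms with a displacement prior from the interpolated odometry, which any least-squares solver can then minimize:

```python
import numpy as np

def project(K, R, C, X):
    """Pinhole projection; (R, C) are camera rotation and center in the world frame."""
    x_cam = R @ (X - C)
    x_img = K @ (x_cam / x_cam[2])
    return x_img[:2]

def frame_residual(K, R1, C1, C0, points3d, obs1, odo_disp, w_odo=1.0):
    """Reprojection residuals for frame 1 plus an odometry displacement prior."""
    res = []
    for X, uv in zip(points3d, obs1):
        res.extend(project(K, R1, C1, X) - uv)                # reprojection terms
    res.append(w_odo * (np.linalg.norm(C1 - C0) - odo_disp))  # odometry prior
    return np.asarray(res)
```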
In a specific implementation process, step S3 includes the following steps:
S301: performing pure-vision initialization on the visual image. In this embodiment, pure-vision initialization is achieved by existing means.
S302: obtaining the pure visual displacement between corresponding image frames through the epipolar constraint. In this embodiment, the rotation and translation between two frames are recovered from the epipolar constraint, and the initial frame is set as the reference frame, with its position a three-dimensional zero vector and its attitude a 3×3 identity matrix. The epipolar constraint is an existing mature means whose purpose is to estimate the camera motion from several groups of 2D pixel-point pairs between two images, given the 2D pixel coordinates.
S303: searching the corresponding first vehicle running track based on the image timestamps and performing displacement interpolation to obtain the real displacement between corresponding image frames. In this embodiment, displacement interpolation on the first vehicle running track yields the actual moving distance (real displacement) of the vehicle, from which the scale of the visual image is determined.
S304: calculating the mapping relation between the pure visual displacement and the real displacement so as to recover the scale of the visual image.
Recovering the scale of this visual image from the first vehicle running track achieves scale recovery without added cost and resolves the non-coincidence of the front and rear vehicle motion trajectories, thereby improving the accuracy of monocular visual mapping and positioning.
In a specific implementation process, in step S3, the mapping process includes the following steps:
S311: mapping based on the scale-recovered visual image to obtain the corresponding three-dimensional feature points and visual poses. In this embodiment, the map is built with an existing visual odometer.
S312: adding the first vehicle running track as a constraint to the optimization of the visual poses. In this embodiment, the pose change between corresponding image frames is obtained by the visual odometer, and the displacement between corresponding image frames is constrained by the three-dimensional feature points, the image-frame reprojection, and the interpolated displacement of the first vehicle running track.
S313: performing loop detection and global optimization on the visual poses to generate the corresponding second visual feature map. In this embodiment, loop detection and global optimization are accomplished by existing mature means; the invention merely adds the first-vehicle-running-track constraint.
By adding the first vehicle running track as a constraint to the optimization of the visual poses, the invention reduces the accumulated error between image frames, i.e. the drift, and resolves the non-coincidence of the front and rear vehicle motion trajectories, thereby improving the accuracy of monocular visual mapping and positioning.
In the specific implementation process, in step S5, the corresponding odometer data is incorporated to implement relocation and position updating of the target vehicle in the fused visual feature map. In this embodiment, the fusion positioning module receives ROS messages carrying the relocation data and the odometer data, performs fused positioning, and delivers the result to the DSPACE planning and control module via UDP.
The invention realizes vehicle positioning by fusing the relocation (pose) and odometer data (vehicle running track), which further improves the accuracy of monocular visual mapping and positioning.
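A minimal sketch of this fusion loop (the ROS/UDP plumbing and the actual relocalization matcher are omitted; class and method names are illustrative): odometry propagates the pose between map relocalizations, and a successful relocalization resets the estimate:

```python
import math

class FusedLocalizer:
    def __init__(self, pose):
        self.pose = pose  # (x, y, theta) in the fused-map frame

    def on_odometry(self, delta_s, delta_theta):
        # Dead-reckon between relocalizations (cf. the sketch in S104).
        x, y, theta = self.pose
        mid = theta + 0.5 * delta_theta
        self.pose = (x + delta_s * math.cos(mid),
                     y + delta_s * math.sin(mid),
                     theta + delta_theta)

    def on_relocalization(self, map_pose):
        self.pose = map_pose  # trust a successful feature-map match
```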
Embodiment two:
This embodiment discloses a readable storage medium.
A readable storage medium has stored thereon a computer management program which, when executed by a processor, implements the steps of the monocular visual feature fusion-based vehicle positioning method of the present invention. The readable storage medium can be a device with a readable storage function, such as a USB flash drive or a computer.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention and not to limit them; those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from their spirit and scope, all of which should be covered by the claims of the present invention.

Claims (10)

1. A vehicle positioning method based on monocular visual feature fusion, characterized by comprising the following steps:
S1: acquiring odometer data of a target vehicle and the front-view and rear-view visual images;
S2: recovering the scale of either the front-view or rear-view visual image from the odometer data, and mapping to generate a corresponding first vehicle running track and first visual feature map;
S3: recovering the scale of the other of the front-view and rear-view visual images from the first vehicle running track, and creating a corresponding second visual feature map;
S4: fusing the first visual feature map and the second visual feature map to generate a fused visual feature map;
S5: matching feature points between the front-view and/or rear-view visual images and the fused visual feature map to realize relocation of the target vehicle in the fused visual feature map.
2. The monocular visual feature fusion-based vehicle positioning method of claim 1, wherein step S1 specifically includes the following steps:
S101: obtaining displacement data and angle data of the target vehicle through a wheel speed sensor and an inertial sensor, and obtaining front-view and rear-view vision sensing images through the front-view and rear-view vision sensors;
S102: eliminating abnormal data from the displacement data, the angle data and the front-view and rear-view vision sensing images;
S103: time-synchronizing the displacement data, the angle data and the front-view and rear-view vision sensing images to generate corresponding image timestamps;
S104: performing dead reckoning on the displacement data and the angle data to generate corresponding odometer data;
S105: de-distorting the front-view and rear-view vision sensing images to generate the front-view and rear-view visual images.
3. The monocular visual feature fusion-based vehicle positioning method of claim 2, wherein step S2 specifically includes the following steps:
S201: performing pure-vision initialization on the visual image;
S202: obtaining the pure visual displacement between corresponding image frames through the epipolar constraint;
S203: performing displacement interpolation on the odometer data based on the image timestamps to obtain the real displacement between corresponding image frames;
S204: calculating the mapping relation between the pure visual displacement and the real displacement so as to recover the scale of the visual image.
4. The monocular visual feature fusion-based vehicle positioning method of claim 3, wherein in step S2 the mapping process includes the following steps:
S211: mapping based on the scale-recovered visual image to obtain the corresponding three-dimensional feature points and visual poses;
S212: adding the odometer data as a constraint to the optimization of the visual poses;
S213: performing loop detection and global optimization on the visual poses to generate the corresponding first vehicle running track and first visual feature map.
5. The monocular visual feature fusion-based vehicle positioning method of claim 4, wherein in step S212 the pose change between corresponding image frames is obtained through a visual odometer, and the displacement between corresponding image frames is constrained by the three-dimensional feature points, the image-frame reprojection and the interpolated displacement of the odometer data.
6. The monocular visual feature fusion-based vehicle positioning method of claim 2, wherein step S3 specifically includes the following steps:
S301: performing pure-vision initialization on the visual image;
S302: obtaining the pure visual displacement between corresponding image frames through the epipolar constraint;
S303: searching the corresponding first vehicle running track based on the image timestamps and performing displacement interpolation to obtain the real displacement between corresponding image frames;
S304: calculating the mapping relation between the pure visual displacement and the real displacement so as to recover the scale of the visual image.
7. The monocular visual feature fusion-based vehicle positioning method of claim 6, wherein in step S3 the mapping process includes the following steps:
S311: mapping based on the scale-recovered visual image to obtain the corresponding three-dimensional feature points and visual poses;
S312: adding the first vehicle running track as a constraint to the optimization of the visual poses;
S313: performing loop detection and global optimization on the visual poses to generate the corresponding second visual feature map.
8. The monocular visual feature fusion-based vehicle positioning method of claim 1, wherein in step S5 the rear-view visual image is preferentially used for positioning, and the front-view visual image is used for positioning when the rear view fails.
9. The monocular visual feature fusion-based vehicle positioning method of claim 1, wherein in step S5 the corresponding odometer data is incorporated to implement relocation and position updating of the target vehicle in the fused visual feature map.
10. A readable storage medium, characterized in that a computer management program is stored thereon which, when executed by a processor, carries out the steps of the monocular visual feature fusion-based vehicle positioning method of any one of claims 1-9.
CN202210428550.8A 2022-04-22 2022-04-22 Monocular visual feature fusion-based vehicle positioning method and readable storage medium Pending CN114812573A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210428550.8A CN114812573A (en) 2022-04-22 2022-04-22 Monocular visual feature fusion-based vehicle positioning method and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210428550.8A CN114812573A (en) 2022-04-22 2022-04-22 Monocular visual feature fusion-based vehicle positioning method and readable storage medium

Publications (1)

Publication Number Publication Date
CN114812573A 2022-07-29

Family

ID=82505845

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210428550.8A Pending CN114812573A (en) 2022-04-22 2022-04-22 Monocular visual feature fusion-based vehicle positioning method and readable storage medium

Country Status (1)

Country Link
CN (1) CN114812573A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115388880A (en) * 2022-10-27 2022-11-25 联友智连科技有限公司 Low-cost memory parking map building and positioning method and device and electronic equipment


Similar Documents

Publication Title
CN110044354B (en) Binocular vision indoor positioning and mapping method and device
CN111024066B (en) Unmanned aerial vehicle vision-inertia fusion indoor positioning method
CN109506642B (en) Robot multi-camera visual inertia real-time positioning method and device
CN110068335B (en) Unmanned aerial vehicle cluster real-time positioning method and system under GPS rejection environment
CN110207714B (en) Method for determining vehicle pose, vehicle-mounted system and vehicle
CN107167826B (en) Vehicle longitudinal positioning system and method based on variable grid image feature detection in automatic driving
CN111983639A (en) Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU
CN112197770B (en) Robot positioning method and positioning device thereof
CN109443348B (en) Underground garage position tracking method based on fusion of look-around vision and inertial navigation
CN111263960B (en) Apparatus and method for updating high definition map
CN113678079A (en) Generating structured map data from vehicle sensors and camera arrays
CN208323361U (en) A kind of positioning device and robot based on deep vision
EP3852065A1 (en) Data processing method and apparatus
KR102219843B1 (en) Estimating location method and apparatus for autonomous driving
CN111077907A (en) Autonomous positioning method of outdoor unmanned aerial vehicle
CN113740871B (en) Laser SLAM method, system equipment and storage medium under high dynamic environment
CN116222543B (en) Multi-sensor fusion map construction method and system for robot environment perception
CN114623817B (en) Self-calibration-contained visual inertial odometer method based on key frame sliding window filtering
CN111609868A (en) Visual inertial odometer method based on improved optical flow method
CN114638897B (en) Multi-camera system initialization method, system and device based on non-overlapping views
CN113052897A (en) Positioning initialization method and related device, equipment and storage medium
CN116295412A (en) Depth camera-based indoor mobile robot dense map building and autonomous navigation integrated method
CN113503873A (en) Multi-sensor fusion visual positioning method
CN114812573A (en) Monocular visual feature fusion-based vehicle positioning method and readable storage medium
CN115015956A (en) Laser and vision SLAM system of indoor unmanned vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination