CN117522969A - Combined ground library positioning method, system, vehicle, storage medium and equipment


Info

Publication number
CN117522969A
CN117522969A
Authority: CN (China)
Prior art keywords: information, ground, vehicle, dimensional, map
Legal status: Pending (assumed by Google; not a legal conclusion)
Application number: CN202210905711.8A
Other languages: Chinese (zh)
Inventor: Zhu Minfeng (朱敏峰)
Current Assignee: Momenta Suzhou Technology Co Ltd (listed assignee may be inaccurate)
Original Assignee: Momenta Suzhou Technology Co Ltd
Application filed by Momenta Suzhou Technology Co Ltd
Priority to CN202210905711.8A
Publication of CN117522969A

Classifications

    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T7/10: Segmentation; Edge detection
    • G06V10/761: Proximity, similarity or dissimilarity measures
    • G06T2207/30241: Trajectory (indexing scheme for image analysis)
    • G06T2207/30252: Vehicle exterior; Vicinity of vehicle (indexing scheme for image analysis)
    • Y02T10/40: Engine management systems (climate change mitigation tagging)

Landscapes

  • Engineering & Computer Science
  • Theoretical Computer Science
  • Computer Vision & Pattern Recognition
  • Physics & Mathematics
  • General Physics & Mathematics
  • Artificial Intelligence
  • Health & Medical Sciences
  • Computing Systems
  • Databases & Information Systems
  • Evolutionary Computation
  • General Health & Medical Sciences
  • Medical Informatics
  • Software Systems
  • Multimedia
  • Navigation

Abstract

The application discloses a joint ground library (underground garage) positioning method, system, vehicle, storage medium and device, belonging to the technical field of data processing. The method mainly comprises the following steps: acquiring an initial pose of a vehicle from a panoramic mosaic stitched from pictures shot by cameras; extracting from the panoramic mosaic the characteristic line segments of the ground identifications and the description information and pixel information of the feature points; matching the ground identifications against a 3-dimensional ground identification map according to the characteristic line segments to obtain first error information; matching the feature points against a 3-dimensional ground library feature point map according to their description information and pixel information to obtain second error information; and optimizing the initial pose according to the first error information and the second error information to obtain the pose information of the vehicle. By matching the fixed identifications and feature points in the ground library against the 3-dimensional map, the vehicle is positioned within the map and positioning accuracy is improved.

Description

Combined ground library positioning method, system, vehicle, storage medium and equipment
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular to a joint ground library positioning method, system, vehicle, storage medium and device.
Background
Repositioning an object in space is an important measure of how well an automatic driving vehicle corresponds to its 3-dimensional map: accurate positioning information enables the vehicle to perceive its environment more precisely while driving. For example, if an upright column is positioned 5 meters ahead of the vehicle, the automatic driving vehicle may decide to drive 4 meters forward and then stop; if the column is actually only 3 meters ahead, that is, a positioning error has occurred, the vehicle will collide with the wall after driving 3 meters forward and cannot advance the planned 4 meters.
In the prior art, positioning generally relies on GPS (Global Positioning System), but GPS often fails when applied inside an underground garage or among high-rise buildings, so GPS positioning has certain limitations. To compensate for this limitation, another approach attaches strongly reflective objects, such as reflective strips, to fixed objects and exploits their reflection of lidar or millimeter-wave radar signals. However, this approach requires GPS and radar to be installed at the same time, and radar is expensive and costly.
Disclosure of Invention
Aiming at the problems in the prior art that GPS and radar must be installed at the same time and that radar is expensive and costly, the application mainly provides a joint ground library positioning method, system, vehicle, storage medium and equipment.
In a first aspect, embodiments of the present application provide a joint ground library positioning method, including: performing image stitching in real time on pictures respectively shot by a plurality of cameras to obtain a panoramic mosaic corresponding to the pictures, wherein the image information in the pictures is environmental information of the same floor at different angles and the cameras are mounted around the vehicle; performing global initialization on the panoramic mosaic to obtain the real-time initial pose of the vehicle; extracting from the panoramic mosaic the characteristic line segments respectively corresponding to the ground identifications around the vehicle and the description information and pixel information respectively corresponding to the feature points; matching the ground identifications with a pre-established 3-dimensional ground identification map according to the characteristic line segments respectively corresponding to the ground identifications, to obtain first error information between the initial pose and the pose information of the 3-dimensional ground identification map; matching the feature points with a pre-established 3-dimensional ground library feature point map according to the description information and pixel information corresponding to the feature points, to obtain second error information between the initial pose and the pose information of the pre-established 3-dimensional ground library feature point map; and optimizing the initial pose according to the first error information and the second error information to obtain the pose information of the vehicle.
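The six steps above can be sketched as a minimal Python pipeline. Every function body here is a placeholder standing in for a real perception module; all names, return values, and the (x, y, heading) pose convention are illustrative assumptions, not the patent's implementation:

```python
def stitch_panorama(frames):
    # Placeholder: a real system warps and blends the camera frames.
    return {"frames": frames}

def global_initialize(panorama):
    # Placeholder initial pose (x, y, heading) from scene retrieval.
    return (0.0, 0.0, 0.0)

def extract_features(panorama):
    # Placeholder ground-mark line segments and feature points.
    return {"segments": [], "points": []}

def match_ground_marks(segments, initial_pose):
    return 0.0  # stands in for the first error information

def match_feature_points(points, initial_pose):
    return 0.0  # stands in for the second error information

def optimize(initial_pose, err1, err2):
    # With zero errors, the initial pose is already the final pose.
    return initial_pose

def locate(frames):
    pano = stitch_panorama(frames)
    pose0 = global_initialize(pano)
    feats = extract_features(pano)
    e1 = match_ground_marks(feats["segments"], pose0)
    e2 = match_feature_points(feats["points"], pose0)
    return optimize(pose0, e1, e2)
```

The point of the sketch is only the data flow: one panorama feeds both matching branches, and both error terms feed a single pose optimization.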
According to the above technical scheme, the method is based on a purely visual approach: since fixed objects such as ground identifications and upright columns do not change with the environment, they are highly robust landmarks. By matching the fixed characteristic objects in the ground library and the identifications corresponding to the feature points against the 3-dimensional map, the vehicle is positioned within the 3-dimensional map and positioning accuracy is improved.
Optionally, matching the ground identification information with a pre-established 3-dimensional ground identification map according to the characteristic line segments respectively corresponding to the ground identifications, to obtain first error information between the initial pose and the pose information in the 3-dimensional ground identification map, includes: establishing a 2-dimensional feature map corresponding to the ground identifications by using the characteristic line segments respectively corresponding to the ground identifications; segmenting the 3-dimensional ground identification map to obtain a per-frame local 3-dimensional ground identification map, wherein each frame's local map contains the pose information of the vehicle at that moment; and projecting the local 3-dimensional ground identification map into the 2-dimensional feature map and calculating the degree of coincidence between the pose information and the initial pose to obtain the first error information. Projecting each local ground identification map into the current 2-dimensional feature map and matching it with the ground identifications there improves the accuracy of subsequent matching, avoiding the confusion and mismatches that too many feature elements in the full 3-dimensional map would cause.
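A minimal sketch of this projection step, under assumptions the text does not fix: the 3-dimensional ground-mark points are dropped to the ground plane, transformed by a candidate SE(2) pose, and compared with the observed 2-dimensional segment endpoints, with the mean point distance standing in for the coincidence measure:

```python
import math

def se2_transform(pose, pt):
    """Apply a 2-D rigid pose (x, y, theta) to a point in the map frame."""
    x, y, th = pose
    px, py = pt
    return (x + px * math.cos(th) - py * math.sin(th),
            y + px * math.sin(th) + py * math.cos(th))

def first_error(map_pose, marks_3d, observed_2d):
    """Drop the z coordinate of each 3-D ground-mark point, transform it
    by the candidate map pose, and take the mean distance to the
    corresponding observed 2-D endpoint; this mismatch plays the role
    of the first error information (mean distance is an assumption)."""
    total = 0.0
    for (mx, my, _mz), obs in zip(marks_3d, observed_2d):
        total += math.dist(se2_transform(map_pose, (mx, my)), obs)
    return total / len(marks_3d)
```

A perfect pose yields zero error; any residual distance quantifies how far the projected map marks land from where the vehicle currently observes them.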
Optionally, matching the feature points with a pre-established 3-dimensional ground library feature point map according to the description information and pixel information corresponding to the feature points, to obtain second error information between the initial pose and the pose information in the pre-established 3-dimensional ground library feature point map, includes: respectively calculating the similarity between the description information in the 3-dimensional ground library feature point map and the description information corresponding to the feature points; taking as 3-dimensional map points those feature points in the 3-dimensional ground library feature point map whose description information has a similarity greater than or equal to a preset similarity threshold; and calculating the pixel errors between the pixel information corresponding to the 3-dimensional map points and the pixel information of the feature points to obtain the second error information.
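The text does not specify the similarity measure; cosine similarity over descriptor vectors with a fixed threshold is a common choice and is used here purely as an assumption:

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_descriptors(map_points, feature_descs, thresh=0.9):
    """Return (map_index, feature_index) pairs whose descriptors score
    at or above the threshold. map_points is a list of
    (descriptor, xyz) entries; matched entries become the 3-D map
    points used for the later reprojection error."""
    matches = []
    for i, (desc_m, _xyz) in enumerate(map_points):
        for j, desc_f in enumerate(feature_descs):
            if cosine_sim(desc_m, desc_f) >= thresh:
                matches.append((i, j))
    return matches
```

The brute-force double loop is for clarity only; a real system would use an approximate nearest-neighbour index over the descriptors.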
Optionally, calculating the pixel errors between the pixel information corresponding to the 3-dimensional map points and the pixel information of the feature points to obtain the second error information includes: projecting the 3-dimensional map points into a camera coordinate system established with the vehicle as its center, and acquiring the pixel information corresponding to the 3-dimensional map points; and calculating the pixel errors between the pixel information corresponding to the 3-dimensional map points and the pixel information of the feature points to obtain the second error information.
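Assuming a standard pinhole projection for the vehicle-centred camera frame (the intrinsics below are illustrative, not the patent's values), the reprojection error of a matched map point can be sketched as:

```python
def project_to_pixel(point_cam, fx=400.0, fy=400.0, cx=320.0, cy=240.0):
    """Pinhole projection of a 3-D map point already expressed in the
    camera coordinate system centred on the vehicle."""
    X, Y, Z = point_cam
    return (fx * X / Z + cx, fy * Y / Z + cy)

def pixel_error(point_cam, observed_px):
    """Euclidean distance between the projected map point and the
    observed feature-point pixel; summed over all matches this plays
    the role of the second error information."""
    u, v = project_to_pixel(point_cam)
    ou, ov = observed_px
    return ((u - ou) ** 2 + (v - ov) ** 2) ** 0.5
```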
Optionally, optimizing the initial pose according to the first error information and the second error information to obtain the pose information of the vehicle includes: judging the first error information and the second error information respectively, wherein if the first error information is smaller than or equal to a first error threshold and the second error information is smaller than or equal to a second error threshold, the initial pose is taken as the pose information of the vehicle; and if the first error information is larger than the first error threshold and/or the second error information is larger than the second error threshold, the initial pose is corrected and the corrected pose is taken as the pose information of the vehicle.
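The accept-or-correct decision above is simple enough to state directly in code. The thresholds and the additive correction term here are illustrative assumptions; the patent does not give concrete values or a correction formula:

```python
def final_pose(initial_pose, err1, err2, correction,
               thresh1=0.05, thresh2=2.0):
    """Keep the initial pose when both error terms are within their
    thresholds; otherwise apply the optimisation correction (modelled
    here as a simple per-component additive update)."""
    if err1 <= thresh1 and err2 <= thresh2:
        return initial_pose
    return tuple(p + c for p, c in zip(initial_pose, correction))
```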
Optionally, wheel-speed trajectory information of the vehicle is calculated from the IMU information and wheel-speed information acquired by the position sensor system mounted on the vehicle; visual trajectory information of the vehicle is acquired from the multi-frame pictures; and the wheel-speed trajectory information and the visual trajectory information are fused to obtain the pose information of the vehicle. Because the visual odometry is driven by the images acquired by the camera, its frequency is lower than that of the wheel-speed odometry generated from the IMU and wheel-speed information. If the time at which a position is needed is not a camera shooting time, the visual odometry alone cannot provide positioning information, whereas the high-frequency wheel-speed odometry stores positioning information for that time; fusing the two therefore yields pose information at the required time and improves the precision of the pose information.
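One simple way to realise this fusion of a low-rate visual odometry with a high-rate wheel-speed odometry is to interpolate the wheel-speed track at the visual frame's timestamp. Linear interpolation over (timestamp, x, y) samples is an assumption for illustration; real systems typically interpolate full SE(3) poses:

```python
def interpolate_pose(t, odom):
    """Linearly interpolate the high-rate wheel-speed odometry, given
    as a time-sorted list of (timestamp, x, y) samples, at the query
    time t of a visual-odometry frame."""
    for (t0, x0, y0), (t1, x1, y1) in zip(odom, odom[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return (x0 + a * (x1 - x0), y0 + a * (y1 - y0))
    raise ValueError("query time outside odometry window")
```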
In a second aspect, embodiments of the present application provide a joint ground library positioning system, comprising: an image stitching module, configured to perform image stitching in real time on pictures respectively shot by a plurality of cameras to obtain a panoramic mosaic corresponding to the pictures, wherein the image information in the pictures is environmental information of the same floor at different angles and the cameras are mounted around the vehicle; an initialization module, configured to globally initialize the panoramic mosaic to acquire the real-time initial pose of the vehicle; an information extraction module, configured to extract from the panoramic mosaic the characteristic line segments respectively corresponding to the ground identifications around the vehicle and the description information and pixel information respectively corresponding to the feature points; a ground identification matching module, configured to match the ground identifications with a pre-established 3-dimensional ground identification map according to the characteristic line segments respectively corresponding to the ground identifications, to obtain first error information between the initial pose and the pose information of the 3-dimensional ground identification map; a feature point matching module, configured to match the feature points with a pre-established 3-dimensional ground library feature point map according to the description information and pixel information corresponding to the feature points, to obtain second error information between the initial pose and the pose information in the pre-established 3-dimensional ground library feature point map; and a positioning module, configured to optimize the initial pose according to the first error information and the second error information to obtain the pose information of the vehicle.
In a third aspect, an embodiment of the present application provides a vehicle that includes the joint ground library positioning system of the second aspect, that is, the image stitching module, the initialization module, the information extraction module, the ground identification matching module, the feature point matching module and the positioning module described above.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing computer instructions operable to perform the joint ground library positioning method of the above scheme.
In a fifth aspect, embodiments of the present application provide a computer device, comprising: at least one processor coupled with a memory, the memory storing computer instructions configured to perform the joint ground library positioning method of the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product containing computer instructions operable to perform the joint ground library positioning method of the above scheme.
For the advantageous effects of the second to sixth aspects, reference is made to the advantageous effects of the first aspect.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort to a person skilled in the art.
FIG. 1 is a schematic diagram of an alternative embodiment of the joint ground library positioning method of the present application;
FIG. 2 is a schematic diagram of an alternative embodiment of the joint ground library positioning system of the present application;
FIG. 3 is a schematic diagram of one embodiment of a computer device of the present application.
Specific embodiments thereof have been shown by way of example in the drawings and will herein be described in more detail. These drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but to illustrate the concepts of the present application to those skilled in the art by reference to specific embodiments.
Detailed Description
The preferred embodiments of the present application will be described in detail below with reference to the drawings so that the advantages and features of the present application can be more easily understood by those skilled in the art, thereby making a clearer and more definite definition of the protection scope of the present application.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
Repositioning an object in space is an important measure of how well an automatic driving vehicle corresponds to its 3-dimensional map: accurate positioning information enables the vehicle to perceive its environment more precisely while driving. For example, if an upright column is positioned 5 meters ahead of the vehicle, the automatic driving vehicle may decide to drive 4 meters forward and then stop; if the column is actually only 3 meters ahead, that is, a positioning error has occurred, the vehicle will collide with the wall after driving 3 meters forward and cannot advance the planned 4 meters.
In the prior art, positioning generally relies on GPS, but GPS often fails when applied inside an underground garage or among high-rise buildings, so GPS positioning has certain limitations. To compensate for this limitation, another approach attaches strongly reflective objects, such as reflective strips, to fixed objects and exploits their reflection of lidar or millimeter-wave radar signals. However, this approach requires GPS and radar to be installed at the same time, and radar is expensive and costly.
Aiming at the problems existing in the prior art, the application mainly provides a joint ground library positioning method, system, vehicle, storage medium and equipment. The method mainly comprises the following steps: performing image stitching in real time on pictures respectively shot by a plurality of cameras to obtain a panoramic mosaic corresponding to the pictures, wherein the image information in the pictures is environmental information of the same floor at different angles and the cameras are mounted around the vehicle; performing global initialization on the panoramic mosaic to obtain the real-time initial pose of the vehicle; extracting from the panoramic mosaic the characteristic line segments respectively corresponding to the ground identifications around the vehicle and the description information and pixel information respectively corresponding to the feature points; matching the ground identifications with a pre-established 3-dimensional ground identification map according to the characteristic line segments respectively corresponding to the ground identifications, to obtain first error information between the initial pose and the pose information of the 3-dimensional ground identification map; matching the feature points with a pre-established 3-dimensional ground library feature point map according to the description information and pixel information corresponding to the feature points, to obtain second error information between the initial pose and the pose information of the pre-established 3-dimensional ground library feature point map; and optimizing the initial pose according to the first error information and the second error information to obtain the pose information of the vehicle.
According to the method, which is based on a purely visual approach, fixed objects such as ground identifications and upright columns do not change with the environment and are therefore highly robust landmarks. By matching the fixed characteristic objects in the ground library and the identifications corresponding to the feature points against the 3-dimensional map, the vehicle is positioned within the 3-dimensional map and positioning accuracy is improved.
The following describes the technical solution of the present application and how the technical solution of the present application solves the above technical problems in detail with specific embodiments. The specific embodiments described below may be combined with one another to form new embodiments. The same or similar ideas or processes described in one embodiment may not be repeated in certain other embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
FIG. 1 illustrates an alternative embodiment of the joint ground library positioning method of the present application.
In the alternative embodiment shown in FIG. 1, the joint ground library positioning method mainly includes step S101: performing image stitching in real time on pictures respectively shot by a plurality of cameras to obtain a panoramic mosaic corresponding to the pictures, wherein the image information in the pictures is environmental information of the same floor at different angles and the cameras are mounted around the vehicle.
In this alternative embodiment, visual pictures are not limited by external factors such as environment and scene, whereas GPS is blocked by the structures above; when building a map of a single floor, accurate positioning therefore cannot be obtained with GPS, and so the floor map is built with cameras using visual perception in order to perform positioning. For example, to position a vehicle in real time in an underground garage, cameras are mounted around the vehicle, and each camera takes pictures in real time when the vehicle enters the garage; each picture is encoded and the pose information corresponding to it is stored in a database, a fisheye camera being the preferred camera. This provides a basis for the subsequent retrieval step. Therefore, during global initialization, a panoramic mosaic is first obtained from the pictures shot in real time by the plurality of cameras mounted around the vehicle.
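A common way to build such a mosaic is to map each camera's pixels onto a shared ground plane through a per-camera homography. The sketch below assumes pre-computed 3x3 homographies and ignores fisheye undistortion and blending, which a real stitcher would also need:

```python
def apply_homography(H, pt):
    """Map a pixel through a 3x3 homography (row-major nested lists)."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def stitch(cameras):
    """Project every camera's pixels into the common ground plane.
    cameras: list of (homography, [(pixel, value), ...]) per camera.
    Overlapping pixels are simply overwritten; a real system would
    blend them and undistort fisheye images first."""
    mosaic = {}
    for H, pixels in cameras:
        for pt, val in pixels:
            gx, gy = apply_homography(H, pt)
            mosaic[(round(gx), round(gy))] = val
    return mosaic
```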
In an alternative embodiment shown in fig. 1, the joint ground library positioning method further includes step S102, performing global initialization on the panoramic stitching graph, and obtaining a real-time initial pose of the vehicle.
In this alternative embodiment, the panoramic mosaic is input into a preset deep-learning model, which outputs description information characterizing the azimuth relations, sizes, shapes and other features of each ground identification in the panoramic mosaic. For example, the scene of the panoramic mosaic may be an intersection in the underground garage, with a speed bump, lane lines, left-turn and right-turn arrows, a zebra crossing and guide lines on the road, and parking space lines on both sides of the road. The initial pose of the vehicle is acquired from the description information of the panoramic mosaic, providing a basis for the subsequent repositioning step.
In an optional example of the application, according to the description information of the panoramic mosaic, description information matching the panoramic mosaic is retrieved from a pre-established global initialization model, and the pose information of the class to which that description information belongs is used as the initial pose of the vehicle. Alternatively, the pose information retrieved from the pre-established global initialization model is used as a prior pose; the pre-established 3-dimensional feature point map is matched with the feature points in the panoramic mosaic, and the prior pose is corrected according to the pose information corresponding to the successfully matched local feature point map, yielding the initial pose of the automatic driving vehicle for the current frame. When the camera is a fisheye or pinhole camera, the panoramic mosaic is first converted into a cylindrical panorama after the prior pose is acquired; the feature points in the cylindrical panorama are then matched against the pre-established 3-dimensional feature point map, and the prior pose is corrected according to the pose information of the successfully matched local feature point map to acquire the initial pose of the vehicle.
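The retrieval step can be sketched as a nearest-neighbour lookup over stored (scene descriptor, pose) entries. The dot-product scoring and the flat-list database are illustrative assumptions; the patent's global initialization model is not specified at this level of detail:

```python
def dot(a, b):
    """Dot product used as a stand-in similarity score."""
    return sum(x * y for x, y in zip(a, b))

def retrieve_prior_pose(query_desc, database):
    """Return the stored pose whose scene descriptor scores highest
    against the query descriptor; this pose serves as the prior pose
    that the feature-point matching then corrects."""
    best_desc, best_pose = max(database, key=lambda e: dot(query_desc, e[0]))
    return best_pose
```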
In the alternative embodiment shown in FIG. 1, the joint ground library positioning method further includes step S103: extracting from the panoramic mosaic the characteristic line segments respectively corresponding to the ground identifications around the vehicle and the description information and pixel information respectively corresponding to the feature points.
In this alternative embodiment, the panoramic mosaic is input into a preset deep-learning model, which outputs description information for each feature point in the panoramic mosaic. For example, if a feature point in the panoramic mosaic is the corner point at the upper right of a roadside light board, its description information records the height and azimuth of the corner point and its relationship to other feature points. The pixel information corresponding to the pixel position of the feature point in the panoramic mosaic is extracted, and at the same time the characteristic line segments corresponding to the ground identifications in the panoramic mosaic are extracted. The generated description information and pixel information and the extracted characteristic line segments provide the necessary feature basis for the subsequent matching and correction.
In an alternative embodiment shown in fig. 1, the joint ground library positioning method further includes step S104: matching the ground identifiers with a pre-established 3-dimensional ground identification map according to the feature line segments respectively corresponding to the ground identifiers, to obtain first error information between the initial pose and the pose information in the 3-dimensional ground identification map.
In this alternative implementation, a current 2-dimensional feature map containing the feature line segments is built from the feature line segments corresponding to the ground identifiers. The 3-dimensional ground identification map is projected into this current 2-dimensional feature map and matched against the ground identifiers in it; the error between the initial pose and the pose information of the vehicle corresponding to the successfully matched local ground identification map is then calculated and taken as the first error information, providing an important basis for the subsequent correction of the initial pose.
In an optional embodiment of the present application, matching the ground identifiers with a pre-established 3-dimensional ground identification map according to the feature line segments respectively corresponding to the ground identifiers, to obtain first error information between the initial pose and the pose information in the 3-dimensional ground identification map, includes: establishing a 2-dimensional feature map corresponding to the ground identifiers from the feature line segments respectively corresponding to the ground identifiers; performing image segmentation on the 3-dimensional ground identification map to obtain a 3-dimensional ground identification map of each frame, where each frame contains the pose information of the vehicle at that moment; and projecting the 3-dimensional ground identification map into the 2-dimensional feature map, calculating the degree of coincidence between the pose information and the initial pose, and obtaining the first error information.
In this alternative embodiment, when matching with the 3-dimensional ground identification map, the map is first segmented and the unneeded feature elements are filtered out, yielding the local ground identification maps corresponding to each moment. Each local ground identification map is projected into the current 2-dimensional feature map and matched against the ground identifiers there; this improves the accuracy of the subsequent matching and avoids mismatches caused by an excessive number of feature elements in the 3-dimensional map. The pose information of the vehicle corresponding to the successfully matched local ground identification map is compared with the initial pose, their degree of coincidence is calculated, and the first error information is obtained, providing an important basis for correcting the initial pose.
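A minimal sketch of the projection-and-matching step described above, assuming ground markings are planar, poses are (x, y, yaw) tuples, and mean endpoint distance stands in for the unspecified coincidence-degree measure:

```python
import numpy as np

def project_to_ground_plane(points_3d, pose):
    """Project 3-D ground-marking points into the vehicle's 2-D feature map.

    pose = (x, y, yaw): vehicle pose hypothesis in the map frame.
    Ground markings are assumed flat, so the projection drops z and
    transforms the remaining coordinates into the vehicle-centred frame.
    """
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, s], [-s, c]])  # world -> vehicle rotation
    return (points_3d[:, :2] - np.array([x, y])) @ R.T

def overlap_error(segments_map, segments_obs, pose):
    """First-error sketch: mean endpoint distance between the projected
    map segment endpoints and the observed 2-D feature line segments."""
    proj = project_to_ground_plane(segments_map, pose)
    return float(np.mean(np.linalg.norm(proj - segments_obs, axis=1)))
```

A pose hypothesis matching the observation well gives an error near zero; a real implementation would also handle data association between segments, which is omitted here.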
In an alternative embodiment shown in fig. 1, the joint ground library positioning method further includes step S105, matching the feature points with a pre-established 3-dimensional ground library feature point map according to the description information and the pixel information corresponding to the feature points, and obtaining second error information between the initial pose and the pose information in the pre-established 3-dimensional ground library feature point map.
In this alternative implementation, the description information of the feature points in the panoramic mosaic is matched against the description information of the feature points in the 3-dimensional ground library feature point map, yielding the 3-dimensional map points that correspond to the feature points in the panoramic mosaic. The pixel information of each 3-dimensional map point is determined from its pixel position in the camera coordinate system; the pixel error between this pixel information and the pixel information of the corresponding feature point in the panoramic mosaic is calculated and taken as the second error information. This provides a basis for subsequently determining the pose information of the vehicle and helps ensure its precision.
In an optional embodiment of the present application, matching the feature points with a pre-established 3-dimensional ground library feature point map according to the description information and the pixel information corresponding to the feature points, to obtain second error information between the initial pose and the pose information in the pre-established 3-dimensional ground library feature point map, includes: calculating the similarity between each piece of description information in the 3-dimensional ground library feature point map and the description information corresponding to the feature points; taking the feature points in the 3-dimensional ground library feature point map whose description information has a similarity greater than or equal to a preset similarity threshold as 3-dimensional map points; and calculating the pixel error between the pixel information corresponding to the 3-dimensional map points and the pixel information of the feature points, thereby obtaining the second error information.
In this alternative embodiment, the similarity between the description information of the feature points in the 3-dimensional ground library feature point map and that of the feature points in the panoramic mosaic is compared, and the feature points in the 3-dimensional ground library feature point map whose description information has a similarity greater than or equal to the preset similarity threshold are determined to be 3-dimensional map points.
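The similarity thresholding described above might look like this; cosine similarity and the 0.8 default threshold are placeholder choices, since the text does not fix a similarity measure:

```python
import numpy as np

def match_map_points(obs_descs, map_descs, map_points, sim_threshold=0.8):
    """Select 3-D map points whose descriptors are similar enough.

    obs_descs: descriptors of feature points in the panoramic mosaic.
    map_descs: array of descriptors stored in the 3-D feature point map.
    map_points: the 3-D map points aligned with map_descs.
    Returns, per observed feature, the matched 3-D map point or None
    when no map descriptor passes the threshold.
    """
    matches = []
    for d in obs_descs:
        # Cosine similarity against every map descriptor.
        sims = map_descs @ d / (
            np.linalg.norm(map_descs, axis=1) * np.linalg.norm(d))
        best = int(np.argmax(sims))
        matches.append(map_points[best] if sims[best] >= sim_threshold else None)
    return matches
```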
In an optional example of the application, the similarity between the description information in the 3-dimensional ground library feature point map and the description information corresponding to the feature points is calculated; the feature points in the map whose description information has a similarity greater than or equal to the preset similarity threshold are taken as 3-dimensional map points; the 3-dimensional map points are projected into a camera coordinate system centred on the vehicle to obtain their pixel information; and the pixel errors between this pixel information and the pixel information of the feature points are calculated.
In an optional embodiment of the present application, calculating the pixel error between the pixel information corresponding to the 3-dimensional map points and the pixel information of the feature points, and obtaining the second error information, includes: projecting the 3-dimensional map points into a camera coordinate system established with the vehicle as the centre, and obtaining the pixel information corresponding to the 3-dimensional map points; and calculating the pixel error between that pixel information and the pixel information of the feature points, and obtaining the second error information.
In this optional embodiment, a camera coordinate system is established with the vehicle as the centre, the 3-dimensional map points are projected into it, and their pixel information is obtained; the pixel error between the pixel information of the feature points in the panoramic mosaic and that of the 3-dimensional map points is then calculated, providing a basis for subsequently determining the pose information of the vehicle and ensuring its precision.
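The pixel-error computation can be illustrated with a standard pinhole projection; the intrinsics below are placeholder values, as the text does not specify a camera model for this step:

```python
import numpy as np

def reproject(point_cam, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Pinhole projection of a 3-D point given in the vehicle-centred
    camera frame; fx, fy, cx, cy are illustrative intrinsics."""
    X, Y, Z = point_cam
    return np.array([fx * X / Z + cx, fy * Y / Z + cy])

def pixel_error(point_cam, observed_px):
    """Second-error sketch: Euclidean pixel distance between the
    projected 3-D map point and the observed feature point."""
    return float(np.linalg.norm(reproject(point_cam) - observed_px))
```

Summing such reprojection residuals over all matched map points gives the quantity the positioning module later minimizes when correcting the initial pose.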
In an alternative embodiment shown in fig. 1, the joint ground library positioning method further includes step S106, optimizing the initial pose according to the first error information and the second error information, and obtaining pose information of the vehicle.
In this alternative embodiment, the initial pose is corrected according to the first error information and the second error information, so as to obtain accurate pose information and improve the accuracy of vehicle positioning.
In an optional embodiment of the present application, optimizing the initial pose according to the first error information and the second error information, obtaining pose information of the vehicle includes: respectively judging the first error information and the second error information, wherein if the first error information is smaller than or equal to a first error threshold value and the second error information is smaller than or equal to a second error threshold value, the initial pose is taken as pose information of the vehicle; and if the first error information is larger than the first error threshold value and/or the second error information is larger than the second error threshold value, correcting the initial pose, and taking the corrected initial pose as pose information of the vehicle.
In this alternative embodiment, error thresholds are preset, each being the upper limit of the allowable error range. When the first error information and the second error information are both less than or equal to their respective thresholds, the initial pose needs no correction and is used directly as the pose information of the vehicle in the current frame; if the first error information and/or the second error information exceeds its threshold, the initial pose is corrected, and the corrected initial pose is used as the pose information of the vehicle.
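The threshold logic in this paragraph reduces to a small decision function; `correct_fn` is an abstract stand-in for the unspecified correction step:

```python
def decide_pose(initial_pose, err1, err2, thr1, thr2, correct_fn):
    """Acceptance logic from the text: keep the initial pose when both
    error terms are within their thresholds, otherwise apply the
    (abstract) correction step before using the pose."""
    if err1 <= thr1 and err2 <= thr2:
        return initial_pose
    return correct_fn(initial_pose)
```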
In an optional embodiment of the present application, the joint ground library positioning method further includes: calculating the wheel-speed track information of the vehicle from the IMU information and wheel speed information acquired by the position sensor system mounted on the vehicle; acquiring the visual track information of the vehicle from multiple frames of ground library fisheye pictures; and fusing the wheel-speed track information with the visual track information to obtain the pose information of the vehicle.
In this alternative embodiment, the images obtained by the camera form a visual odometer whose frequency is lower than that of the wheel-speed odometer computed from the IMU information and wheel speed information. If the current moment is not a capture time of the fisheye camera, no positioning information can be obtained from the visual odometer, but the high-frequency wheel-speed odometer holds positioning information for that moment; fusing the two therefore yields pose information corresponding to the visual odometer at the current moment and improves the precision of the pose information.
In an optional example of the present application, at a historical frame before the current frame, the ground library fisheye image obtained by the fisheye camera corresponds to a visual odometer pose, and a wheel odometer pose is generated from the IMU information and wheel speed information of that frame. The conversion relationship between the two is computed, and this relationship is then used to convert the wheel odometer pose at the current moment into the pose information corresponding to the visual odometer at the current frame.
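The conversion described here can be sketched with SE(2) transforms; treating odometer poses as 3×3 homogeneous matrices in the plane is an illustrative simplification of the full fusion:

```python
import numpy as np

def se2(x, y, yaw):
    """Homogeneous 3x3 matrix for a planar pose (x, y, yaw)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def fuse_odometry(T_wheel_hist, T_vis_hist, T_wheel_now):
    """Transfer the high-rate wheel-odometry pose into the visual-
    odometry frame using the relative transform observed at the last
    frame where both odometers had a pose."""
    # Conversion relationship visual <- wheel at the historical frame.
    T_rel = T_vis_hist @ np.linalg.inv(T_wheel_hist)
    # Apply it to the current wheel pose to predict the visual pose.
    return T_rel @ T_wheel_now
```

This assumes the relative transform stays constant between the historical and current frames, which holds over the short gap between camera captures.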
FIG. 2 illustrates an alternative embodiment of a joint ground library positioning system of the present application.
In the particular embodiment shown in FIG. 2, the joint ground library positioning system essentially comprises: a picture stitching module 201, which stitches in real time the pictures respectively taken by a plurality of cameras mounted around the vehicle to obtain a panoramic mosaic corresponding to the pictures, where the image information in the pictures is environmental information of the same floor at different angles; an initialization module 202, which globally initializes the panoramic mosaic to obtain the real-time initial pose of the vehicle; an information extraction module 203, which extracts from the panoramic mosaic the feature line segments corresponding to the ground identifiers around the vehicle and the description information and pixel information corresponding to the feature points; a ground identification matching module 204, which matches the ground identifiers with a pre-established 3-dimensional ground identification map according to the feature line segments and obtains first error information between the initial pose and the pose information in the 3-dimensional ground identification map; a feature point matching module 205, which matches the feature points with a pre-established 3-dimensional ground library feature point map according to the description information and pixel information and obtains second error information between the initial pose and the pose information in that map; and a positioning module 206, which optimizes the initial pose according to the first error information and the second error information and obtains the pose information of the vehicle.
In an alternative embodiment of the present application, the functional modules of the joint ground library positioning system may be implemented directly in hardware, in software modules executed by a processor, or in a combination of the two.
A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium.
The processor may be a central processing unit (CPU), another general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
The joint ground library positioning system provided by the application can be used to execute the joint ground library positioning method described in any of the above embodiments; its implementation principle and technical effect are similar and are not repeated here.
In an alternative embodiment adopted by the application, a vehicle comprises the joint ground library positioning system of the above scheme, i.e., the picture stitching module, the initialization module, the information extraction module, the ground identification matching module, the feature point matching module, and the positioning module described above.
The vehicle provided by the application can be used for executing the joint ground library positioning method described in any embodiment, and the implementation principle and the technical effect are similar, and are not repeated here.
In another alternative embodiment of the present application, a computer readable storage medium stores computer instructions operative to perform the joint ground library positioning method described in the above embodiments.
In an alternative embodiment of the present application, as shown in fig. 3, a computer device includes: at least one processor coupled to a memory, the memory storing computer instructions that, when executed by the processor, implement the joint ground library positioning method of any of the above aspects.
In another embodiment of the present application, a computer program product comprises computer instructions operative to perform the joint ground library positioning method of any of the above aspects.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The foregoing description covers only exemplary embodiments of the present application and is not intended to limit its scope; all equivalent structural changes made using the specification and the accompanying drawings, whether applied directly or indirectly in other related technical fields, are likewise included within the scope of protection of the present application.

Claims (10)

1. A joint ground library positioning method, comprising:
stitching, in real time, pictures respectively taken by a plurality of cameras to obtain a panoramic mosaic corresponding to the pictures, wherein the image information in the pictures is environmental information of the same floor at different angles, and the cameras are mounted around a vehicle;
globally initializing the panoramic mosaic to obtain the real-time initial pose of the vehicle;
extracting from the panoramic mosaic feature line segments respectively corresponding to ground identifiers around the vehicle, and description information and pixel information respectively corresponding to feature points;
matching the ground identifiers with a pre-established 3-dimensional ground identification map according to the feature line segments respectively corresponding to the ground identifiers, and obtaining first error information between the initial pose and the pose information in the 3-dimensional ground identification map;
matching the feature points with a pre-established 3-dimensional ground library feature point map according to the description information and the pixel information corresponding to the feature points, and obtaining second error information between the initial pose and the pose information in the pre-established 3-dimensional ground library feature point map; and
optimizing the initial pose according to the first error information and the second error information, and obtaining the pose information of the vehicle.
2. The joint ground library positioning method according to claim 1, wherein matching the ground identifiers with a pre-established 3-dimensional ground identification map according to the feature line segments respectively corresponding to the ground identifiers, to obtain first error information between the initial pose and the pose information in the 3-dimensional ground identification map, comprises:
establishing a 2-dimensional feature map corresponding to the ground identifiers by utilizing the feature line segments respectively corresponding to the ground identifiers;
performing image segmentation on the 3-dimensional ground identification map to obtain a 3-dimensional ground identification map of each frame, wherein the 3-dimensional ground identification map of each frame contains the pose information corresponding to the vehicle at that moment; and
projecting the 3-dimensional ground identification map into the 2-dimensional feature map, calculating the degree of coincidence between the pose information and the initial pose, and obtaining the first error information.
3. The joint ground library positioning method according to claim 1, wherein matching the feature points with a pre-established 3-dimensional ground library feature point map according to the description information and the pixel information corresponding to the feature points, to obtain second error information between the initial pose and the pose information of the pre-established 3-dimensional ground library feature point map, comprises:
calculating, respectively, the similarity between the description information in the 3-dimensional ground library feature point map and the description information corresponding to the feature points;
taking the feature points in the 3-dimensional ground library feature point map whose description information has a similarity greater than or equal to a preset similarity threshold as 3-dimensional map points; and
calculating the pixel error between the pixel information corresponding to the 3-dimensional map points and the pixel information of the feature points, and obtaining the second error information.
4. The joint ground library positioning method according to claim 3, wherein calculating the pixel error between the pixel information corresponding to the 3-dimensional map points and the pixel information of the feature points, and obtaining the second error information, comprises:
projecting the 3-dimensional map points into a camera coordinate system established with the vehicle as the centre, and obtaining the pixel information corresponding to the 3-dimensional map points; and
calculating the pixel error between the pixel information corresponding to the 3-dimensional map points and the pixel information of the feature points, and obtaining the second error information.
5. The joint ground library positioning method according to any one of claims 1 to 4, wherein optimizing the initial pose according to the first error information and the second error information to obtain the pose information of the vehicle comprises:
respectively judging the first error information and the second error information, wherein:
if the first error information is less than or equal to a first error threshold and the second error information is less than or equal to a second error threshold, the initial pose is used as the pose information of the vehicle; and
if the first error information is greater than the first error threshold and/or the second error information is greater than the second error threshold, the initial pose is corrected and the corrected initial pose is used as the pose information of the vehicle.
6. The joint ground library positioning method of claim 1, further comprising:
calculating the wheel-speed track information of the vehicle from the IMU information and wheel speed information acquired by the position sensor system mounted on the vehicle;
acquiring the visual track information of the vehicle from multiple frames of the pictures; and
fusing the wheel-speed track information with the visual track information to obtain the pose information of the vehicle.
7. A joint ground library positioning system, comprising:
a picture stitching module, configured to stitch in real time the pictures respectively taken by a plurality of cameras to obtain a panoramic mosaic corresponding to the pictures, wherein the image information in the pictures is environmental information of the same floor at different angles, and the cameras are mounted around a vehicle;
an initialization module, configured to globally initialize the panoramic mosaic to obtain the real-time initial pose of the vehicle;
an information extraction module, configured to extract from the panoramic mosaic feature line segments respectively corresponding to ground identifiers around the vehicle, and description information and pixel information respectively corresponding to feature points;
a ground identification matching module, configured to match the ground identifiers with a pre-established 3-dimensional ground identification map according to the feature line segments respectively corresponding to the ground identifiers, and to obtain first error information between the initial pose and the pose information in the 3-dimensional ground identification map;
a feature point matching module, configured to match the feature points with a pre-established 3-dimensional ground library feature point map according to the description information and the pixel information corresponding to the feature points, and to obtain second error information between the initial pose and the pose information in the pre-established 3-dimensional ground library feature point map; and
a positioning module, configured to optimize the initial pose according to the first error information and the second error information and to obtain the pose information of the vehicle.
8. A vehicle comprising the joint ground library positioning system of claim 7.
9. A computer readable storage medium storing computer instructions operable to perform the joint ground library positioning method of any one of claims 1-6.
10. A computer device, comprising:
at least one processor coupled to a memory, the memory storing computer instructions that, when executed by the processor, implement the joint ground library positioning method of any one of claims 1-6.
CN202210905711.8A 2022-07-29 2022-07-29 Combined ground library positioning method, system, vehicle, storage medium and equipment Pending CN117522969A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210905711.8A CN117522969A (en) 2022-07-29 2022-07-29 Combined ground library positioning method, system, vehicle, storage medium and equipment

Publications (1)

Publication Number Publication Date
CN117522969A true CN117522969A (en) 2024-02-06

Family

ID=89748272


CN117541464A (en) Global initialization method, system, vehicle, storage medium and equipment
CN113822932B (en) Device positioning method, device, nonvolatile storage medium and processor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination