CN111765892B - Positioning method, positioning device, electronic equipment and computer readable storage medium - Google Patents

Positioning method, positioning device, electronic equipment and computer readable storage medium

Info

Publication number
CN111765892B
CN111765892B (application number CN202010397293.7A)
Authority
CN
China
Prior art keywords
low-level visual features
positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010397293.7A
Other languages
Chinese (zh)
Other versions
CN111765892A (en)
Inventor
何潇
常满禹
张丹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Uisee Technologies Beijing Co Ltd
Original Assignee
Uisee Technologies Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Uisee Technologies Beijing Co Ltd filed Critical Uisee Technologies Beijing Co Ltd
Priority to CN202010397293.7A priority Critical patent/CN111765892B/en
Publication of CN111765892A publication Critical patent/CN111765892A/en
Application granted granted Critical
Publication of CN111765892B publication Critical patent/CN111765892B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

The application discloses a positioning method, comprising: determining a fusion map based on low-level visual features and high-level visual features; extracting low-level visual features of an input image, performing positioning optimization based on the low-level visual features and the fusion map, and determining the number of inliers and the inlier high-depth ratio; determining a positioning strategy based on the number of inliers and the inlier high-depth ratio; and determining the vehicle position based on the determined positioning strategy. The inlier high-depth ratio is the ratio of the number of inliers whose depth is greater than a depth threshold to the total number of inliers.

Description

Positioning method, positioning device, electronic equipment and computer readable storage medium
Technical Field
The application relates to the field of unmanned driving, in particular to a positioning method, a positioning device, electronic equipment and a storage medium.
Background
The vision-based positioning technology is one of the important components of an intelligent driving system and has a wide range of application scenarios. Currently, mainstream visual positioning algorithms include the feature point method, the direct method, and the optical flow method, all of which rely mainly on low-level visual features such as feature points or even pixel-level information. Low-level visual features are abundant in number and can provide highly accurate positioning results; however, they are sensitive to environmental changes such as variations in illumination intensity, they are difficult for humans to interpret, and, because the feature points are so numerous, low-level visual feature maps generally require a large amount of map storage.
For these reasons, the industry is increasingly exploring the application of high-level visual features to positioning technology. High-level semantic features are highly interpretable and insensitive to environmental changes, but they are sparse and can therefore provide fewer positioning constraints than low-level features.
Disclosure of Invention
The embodiments of the present application provide a positioning method, a positioning apparatus, an electronic device, and a computer-readable storage medium, addressing the problems of low positioning accuracy and high failure rate in the prior art.
A first aspect of the embodiments of the present application provides a positioning method, including: determining a fusion map based on low-level visual features and high-level visual features; extracting low-level visual features of the input image, performing positioning optimization based on the low-level visual features and the fusion map, and determining the number of inliers and the inlier high-depth ratio; determining a positioning strategy based on the number of inliers and the inlier high-depth ratio; and determining the vehicle position based on the determined positioning strategy; wherein the inlier high-depth ratio is the ratio of the number of inliers whose depth is greater than a depth threshold to the total number of inliers.
In some embodiments, the determining a fusion map based on the low-level visual features and the high-level visual features includes: constructing a low-level visual feature map based on the low-level visual features, and constructing a high-level visual feature map based on the high-level visual features; and transforming the map constructed from the low-level visual features and the map constructed from the high-level visual features into the same coordinate system and directly superimposing them to determine the fusion map.
In some embodiments, the determining a fusion map further includes: pruning redundant low-level features, the redundant low-level features being low-level features located within a predetermined area around the high-level features.
In some embodiments, the determining a positioning strategy based on the number of inliers and the inlier high-depth ratio includes: when the number of inliers is greater than th_i_high and the inlier high-depth ratio is less than th_d_low, adopting a low-level visual feature positioning strategy; when the number of inliers is less than th_i_low and the inlier high-depth ratio is greater than th_d_high, adopting a high-level visual feature positioning strategy; and when the number of inliers and the inlier high-depth ratio satisfy neither of the above two conditions, adopting a fusion positioning strategy; where th_i_high and th_i_low are thresholds on the number of inliers with th_i_high greater than th_i_low, and th_d_low and th_d_high are thresholds on the inlier high-depth ratio with th_d_low less than th_d_high.
In some embodiments, the determining a vehicle position based on the determined positioning strategy includes: when the low-level visual feature positioning strategy is adopted, using the result of the low-level visual feature positioning optimization as the vehicle position.
In some embodiments, the determining a vehicle position based on the determined positioning strategy includes: when the high-level visual feature positioning strategy is adopted, extracting high-level visual features from the input image, and performing positioning optimization based on the high-level visual features and the fusion map to determine the vehicle position.
In some embodiments, the determining a vehicle position based on the determined positioning strategy includes: when the fusion positioning strategy is adopted, extracting high-level visual features from the input image, and performing joint optimization based on the low-level visual features, the high-level visual features, and the fusion map to determine the vehicle position.
In some embodiments, the performing positioning optimization based on the low-level visual features and the fusion map includes: determining the vehicle position based on T* = argmin Σ ||p_i - T·P_i||, where T is the vehicle pose, and p_i and P_i are, respectively, a low-level visual feature detected in real time and the low-level visual feature in the fusion map to which it is matched.
In some embodiments, the performing positioning optimization based on the high-level visual features and the fusion map to determine the vehicle position includes: determining the vehicle position based on T* = argmin Σ ||m_i - T·M_i||, where T is the vehicle pose, and m_i and M_i are, respectively, a high-level visual feature detected in real time and the high-level visual feature in the fusion map to which it is matched.
In some embodiments, the performing joint optimization based on the low-level visual features, the high-level visual features, and the fusion map to determine the vehicle position includes: determining the vehicle position based on T* = argmin (Σ ||p_i - T·P_i|| + Σ ||m_i - T·M_i||), where T is the vehicle pose, p_i and P_i are the matched real-time and fusion-map low-level visual features, and m_i and M_i are the matched real-time and fusion-map high-level visual features.
A second aspect of the embodiments of the present application provides a positioning apparatus, including: a fusion map determination unit, configured to determine a fusion map based on low-level visual features and high-level visual features; a low-level visual feature positioning unit, configured to extract low-level visual features of the input image, perform positioning optimization based on the low-level visual features and the fusion map, and determine the number of inliers and the inlier high-depth ratio; a positioning strategy determination unit, configured to determine a positioning strategy based on the number of inliers and the inlier high-depth ratio; and a positioning determination unit, configured to determine the vehicle position based on the determined positioning strategy; wherein the inlier high-depth ratio is the ratio of the number of inliers whose depth is greater than a depth threshold to the total number of inliers.
In some embodiments, the fusion map determination unit is specifically configured to: construct a low-level visual feature map based on the low-level visual features, and construct a high-level visual feature map based on the high-level visual features; and transform the two maps into the same coordinate system and directly superimpose them to determine the fusion map.
In some embodiments, the fusion map determination unit is further configured to: prune redundant low-level features, the redundant low-level features being low-level features located within a predetermined area around the high-level features.
In some embodiments, the positioning strategy determination unit is specifically configured to: adopt a low-level visual feature positioning strategy when the number of inliers is greater than th_i_high and the inlier high-depth ratio is less than th_d_low; adopt a high-level visual feature positioning strategy when the number of inliers is less than th_i_low and the inlier high-depth ratio is greater than th_d_high; and adopt a fusion positioning strategy when the number of inliers is not greater than th_i_high and the inlier high-depth ratio is not greater than th_d_high, or when the number of inliers is not less than th_i_low and the inlier high-depth ratio is not less than th_d_low; where th_i_high and th_i_low are thresholds on the number of inliers with th_i_high greater than th_i_low, and th_d_low and th_d_high are thresholds on the inlier high-depth ratio with th_d_low less than th_d_high.
In some embodiments, the positioning determination unit is specifically configured to: when the low-level visual feature positioning strategy is adopted, use the result of the low-level visual feature positioning optimization as the vehicle position.
In some embodiments, the positioning determination unit is specifically configured to: when the high-level visual feature positioning strategy is adopted, extract high-level visual features from the input image, and perform positioning optimization based on the high-level visual features and the fusion map to determine the vehicle position.
In some embodiments, the positioning determination unit is specifically configured to: when the fusion positioning strategy is adopted, extract high-level visual features from the input image, and perform joint optimization based on the low-level visual features, the high-level visual features, and the fusion map to determine the vehicle position.
In some embodiments, the low-level visual feature positioning unit is specifically configured to: determine the vehicle position based on T* = argmin Σ ||p_i - T·P_i||, where T is the vehicle pose, and p_i and P_i are, respectively, a low-level visual feature detected in real time and the low-level visual feature in the fusion map to which it is matched.
In some embodiments, the positioning determination unit performing positioning optimization based on the high-level visual features and the fusion map to determine the vehicle position specifically includes: determining the vehicle position based on T* = argmin Σ ||m_i - T·M_i||, where T is the vehicle pose, and m_i and M_i are, respectively, a high-level visual feature detected in real time and the high-level visual feature in the fusion map to which it is matched.
In some embodiments, the positioning determination unit performing joint optimization based on the low-level visual features, the high-level visual features, and the fusion map to determine the vehicle position includes: determining the vehicle position based on T* = argmin (Σ ||p_i - T·P_i|| + Σ ||m_i - T·M_i||), where T is the vehicle pose, p_i and P_i are the matched real-time and fusion-map low-level visual features, and m_i and M_i are the matched real-time and fusion-map high-level visual features.
A third aspect of an embodiment of the present application provides an electronic device, including: a memory and one or more processors; wherein the memory is communicatively connected to the one or more processors, and the memory stores instructions executable by the one or more processors, and when the instructions are executed by the one or more processors, the electronic device is configured to implement the positioning method according to the foregoing embodiments.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium, on which computer-executable instructions are stored, and when the computer-executable instructions are executed by a computing device, the computer-executable instructions can be used to implement the positioning method according to the foregoing embodiments.
The beneficial effects of this application are as follows:
The application combines the advantages of low-level and high-level visual features and compensates for their respective shortcomings by providing a fusion positioning method. In practical use, an intelligent driving vehicle can still obtain its position smoothly in scenes where either positioning method alone performs poorly, which significantly improves the robustness of the overall intelligent driving system;
The application also exploits the sparsity of the high-level visual features to optimize the size of the low-level visual feature map, so that the data storage footprint of the fusion map is significantly reduced while the original positioning accuracy is preserved, lowering the storage requirements on the vehicle-mounted computing platform.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the description of the embodiments will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the application, and that it is also possible for a person skilled in the art to apply the application to other similar scenarios without inventive effort on the basis of these drawings. Unless otherwise apparent from the context of language or otherwise indicated, like reference numerals in the figures refer to like structures and operations.
FIG. 1 is a schematic diagram of a positioning method according to some embodiments of the present application;
FIG. 2 is a schematic illustration of a fusion map determination method according to some embodiments of the present application;
FIG. 3 is a schematic illustration of a location policy determination according to some embodiments of the present application;
FIG. 4 is a schematic view of a positioning device according to some embodiments of the present application; and
FIG. 5 is a schematic diagram of an electronic device shown in accordance with some embodiments of the present application.
Detailed Description
In the following detailed description, numerous specific details of the present application are set forth by way of examples in order to provide a thorough understanding of the relevant disclosure. It will be apparent, however, to one skilled in the art that the present application may be practiced without these specific details. It should be understood that the use of the terms "system," "apparatus," "unit" and/or "module" herein is a method for distinguishing between different components, elements, portions or assemblies at different levels of sequential arrangement. However, these terms may be replaced by other expressions if they can achieve the same purpose.
It will be understood that when a device, unit or module is referred to as being "on" … … "," connected to "or" coupled to "another device, unit or module, it can be directly on, connected or coupled to or in communication with the other device, unit or module, or intervening devices, units or modules may be present, unless the context clearly dictates otherwise. For example, as used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present application. As used in the specification and claims of this application, the singular forms "a", "an", and/or "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. In general, the terms "comprises" and "comprising" merely indicate the inclusion of the explicitly identified features, integers, steps, operations, elements, and/or components, and do not exclude the presence of other features, integers, steps, operations, elements, and/or components.
These and other features and characteristics of the present application, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, will be better understood upon consideration of the following description and the accompanying drawings, which form a part of this specification. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the application. It will be understood that the figures are not drawn to scale.
Various block diagrams are used in this application to illustrate various variations of embodiments according to the application. It should be understood that the foregoing and following structures are not intended to limit the present application. The protection scope of this application is subject to the claims.
In view of the above advantages and disadvantages of positioning methods that use low-level visual features or high-level visual features alone, the present invention provides a positioning method that fuses low-level and high-level visual features, combining their respective advantages and expanding the range of application scenarios beyond what a single method can handle.
In addition, because of its abundant feature points, a low-level visual feature map requires a large amount of map storage. When building the low-level and high-level visual feature fusion map on which fusion positioning depends, the invention also exploits the advantages of the high-level features to prune the number of low-level features, effectively reducing the size of the map.
Fig. 1 illustrates a schematic diagram of a positioning method according to some embodiments of the present application.
At 102, a fusion map is determined based on the low-level visual features and the high-level visual features. In general, the content of an image includes not only low-level visual features such as color, texture, shape, and structure, but also mid-level features such as objects and high-level semantic information such as scenes and emotions. In general-domain retrieval, because domain knowledge cannot be established manually for the general domain, there is a "semantic gap" between low-level features and high-level semantic features, and the latter are difficult to infer directly from the former. Here, high-level visual features refer to interpretable visual features that carry semantics. The process of determining a fusion map based on the low-level visual features and the high-level visual features is shown in FIG. 2 and described below.
At 202, a low-level visual feature map is constructed based on the low-level visual features, and a high-level visual feature map is constructed based on the high-level visual features. To enhance the flexibility and practicability of map building, the fused map is built as a post-processing step: the original map building processes are left untouched, and their independence and modularity are preserved.
At 204, the map constructed from the low-level visual features and the map constructed from the high-level visual features are transformed into the same coordinate system and directly superimposed to determine the fusion map, as illustrated by the sketch below.
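To make the superposition step concrete, the following minimal sketch (an illustration, not part of the original disclosure) assumes that both maps are represented as N×3 arrays of 3-D points and that the rigid transform between the two map frames is already known, for example from a shared origin or an offline alignment; all function and variable names are hypothetical.

import numpy as np
from scipy.spatial.transform import Rotation


def superimpose_maps(low_pts, high_pts, R_low_from_high, t_low_from_high):
    """Express the high-level feature points in the low-level map frame and
    stack both point sets into one fused set, with a boolean label array
    marking which rows are high-level features."""
    # transform high-level points into the low-level (fusion) frame
    high_in_low = Rotation.from_matrix(R_low_from_high).apply(high_pts) + t_low_from_high
    fused = np.vstack([np.asarray(low_pts), high_in_low])
    is_high = np.concatenate([np.zeros(len(low_pts), dtype=bool),
                              np.ones(len(high_in_low), dtype=bool)])
    return fused, is_high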
In some embodiments, after the fusion map is determined, the method further includes: pruning redundant low-level features, the redundant low-level features being low-level features located within a predetermined area around the high-level features. This is possible because the high-level features already describe the information in those areas, so the low-level features within a certain area around the high-level features can be deleted after the two maps have been superimposed in the same coordinate system, reducing the overall data volume of the map. In some embodiments, the predetermined area is a predefined area of arbitrary shape around the high-level semantic features, such as a circular or rectangular area. A minimal sketch of this pruning step is given below.
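The following sketch assumes both feature sets are N×3 point arrays already expressed in the unified fusion-map frame; the spherical neighbourhood, the radius value, and the function name are illustrative assumptions rather than requirements of the method.

import numpy as np
from scipy.spatial import cKDTree


def prune_redundant_low_level(low_pts, high_pts, radius=0.5):
    """Drop low-level map points that lie within `radius` of any high-level
    feature point; the predetermined area is modelled here as a sphere."""
    low_pts = np.asarray(low_pts)
    high_pts = np.asarray(high_pts)
    if len(high_pts) == 0:
        return low_pts
    tree = cKDTree(high_pts)
    # distance from each low-level point to its nearest high-level feature
    dist, _ = tree.query(low_pts, k=1)
    return low_pts[dist > radius]

For example, pruned = prune_redundant_low_level(low_level_map_pts, high_level_map_pts) would keep only the low-level points outside the assumed 0.5 m neighbourhood of every high-level feature.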
At 104, low-level visual features of the input image are extracted, positioning optimization is carried out based on the low-level visual features and the fusion map, and the number of inliers and the inlier high-depth ratio are determined.
That is, when a single frame is processed, the low-level and high-level visual features are not both extracted at once; instead, positioning optimization based on the low-level visual features is attempted first, and the high-level visual features are brought in only if that positioning result is unsatisfactory.
In some embodiments, the performing positioning optimization based on the low-level visual features and the fusion map includes: determining the vehicle position based on T* = argmin Σ ||p_i - T·P_i||, where T is the vehicle pose, and p_i and P_i are, respectively, a low-level visual feature detected in real time and the low-level visual feature in the fusion map to which it is matched. A sketch of this optimization is given below.
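The following minimal sketch illustrates one way to solve this optimization, under the simplifying assumptions that the matched features are 3-D points expressed in the vehicle frame (p) and the map frame (P), that the pose is parameterized as a rotation vector plus a translation, and that a least-squares objective approximates the sum of norms; outlier handling, which the description discusses next, is omitted from this sketch, and all names are illustrative.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def residuals(x, p, P):
    """x = [rx, ry, rz, tx, ty, tz]: rotation vector and translation of T.
    Returns the stacked components of p_i - T·P_i over all matches."""
    R = Rotation.from_rotvec(x[:3])
    t = x[3:]
    return (np.asarray(p) - (R.apply(P) + t)).ravel()


def estimate_pose_low_level(p, P, x0=None):
    """Approximate T* = argmin Σ ||p_i - T·P_i|| for matched 3-D points
    p (real-time detections) and P (fusion-map low-level features)."""
    if x0 is None:
        x0 = np.zeros(6)
    sol = least_squares(residuals, x0, args=(p, P))
    return sol.x  # pose as rotation vector + translation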
While computing the optimal solution T*, the RANSAC algorithm is used to iterate and count the number of inliers (i.e., the low-level visual features that satisfy the computed model). On the one hand, the more inliers there are, the more positioning constraints there are and the better the achievable positioning accuracy, so the number of inliers can be used directly to evaluate the quality of the positioning result. On the other hand, the larger the depth of a low-level visual feature extracted from the image, the larger the error it introduces; therefore, if many of the inliers have large depth, the positioning result can be considered poor. For these reasons, two parameters, the number of inliers and the inlier high-depth ratio, are used to select the positioning strategy. The inlier high-depth ratio is the ratio of the number of inliers whose depth is greater than a depth threshold to the total number of inliers. The depth threshold is a suitable preset value, for example an empirical value. A sketch of how these two statistics can be computed is given below.
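As a minimal sketch of these two statistics, assume the per-match residual norms from the optimization above and the depth of each matched feature are available as arrays; the threshold values and names are assumptions.

import numpy as np


def inlier_statistics(residual_norms, depths, inlier_thresh, depth_thresh):
    """Return (number of inliers, inlier high-depth ratio).

    A match counts as an inlier when its residual norm is below
    `inlier_thresh` (a RANSAC-style consistency test); the inlier
    high-depth ratio is the fraction of inliers whose depth exceeds
    `depth_thresh`."""
    inlier_mask = np.asarray(residual_norms) < inlier_thresh
    n_inliers = int(inlier_mask.sum())
    if n_inliers == 0:
        # no inliers at all: treat as the worst case (an assumption)
        return 0, 1.0
    high_depth = int((np.asarray(depths)[inlier_mask] > depth_thresh).sum())
    return n_inliers, high_depth / n_inliers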
At 106, a positioning strategy is determined based on the number of inliers and the inlier high-depth ratio. FIG. 3 is a schematic illustration of positioning strategy determination according to some embodiments of the present application.
At 302, when the number of inliers is greater than th_i_high and the inlier high-depth ratio is less than th_d_low, a low-level visual feature positioning strategy is adopted;
At 304, when the number of inliers is less than th_i_low and the inlier high-depth ratio is greater than th_d_high, a high-level visual feature positioning strategy is adopted;
At 306, when the number of inliers and the inlier high-depth ratio satisfy neither of the above two cases, a fusion positioning strategy is adopted;
where th_i_high and th_i_low are thresholds on the number of inliers with th_i_high greater than th_i_low, and th_d_low and th_d_high are thresholds on the inlier high-depth ratio with th_d_low less than th_d_high.
It should be noted that FIG. 3 illustrates three cases rather than three sequential steps. Although case 306 is worded as "when the number of inliers and the inlier high-depth ratio satisfy neither of the above two cases", this does not mean that 306 can only be evaluated after 302 and 304; the condition of 306 can equivalently be expressed by the logical relationship below.
The logic described in FIG. 3 may be represented by the following relationship (n_i is the number of inliers, r_d is the inlier high-depth ratio):
if n_i > th_i_high and r_d < th_d_low:
    choose the low-level feature based strategy
elif n_i < th_i_low and r_d > th_d_high:
    choose the high-level feature based strategy
else:
    choose the fusion strategy
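Expressed as a small Python helper (an illustrative sketch; the threshold names mirror the notation above, and their values would be tuned per deployment):

def choose_strategy(n_inliers, high_depth_ratio,
                    th_i_high, th_i_low, th_d_low, th_d_high):
    """Select the positioning strategy from the two inlier statistics,
    following the three cases of FIG. 3 (th_i_high > th_i_low,
    th_d_low < th_d_high)."""
    if n_inliers > th_i_high and high_depth_ratio < th_d_low:
        return "low_level"   # many reliable, mostly near-field inliers
    if n_inliers < th_i_low and high_depth_ratio > th_d_high:
        return "high_level"  # too few inliers and most of them far away
    return "fusion"          # every intermediate case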
In some embodiments, the positioning method further comprises: when the vehicle is at a particular location, adopting a particular positioning strategy. For example, if it has been determined that the low-level visual feature positioning strategy or the high-level visual feature positioning strategy works best in a particular area, the corresponding strategy may be adopted directly in that area. In some embodiments, the particular positioning strategy may be the low-level visual feature positioning strategy, or another positioning strategy other than the fusion positioning strategy.
At 108, a vehicle location is determined based on the determined location strategy.
In some embodiments, the determining a vehicle position based on the determined positioning strategy includes: when the low-level visual feature positioning strategy is adopted, using the result of the low-level visual feature positioning optimization as the vehicle position.
In some embodiments, the determining a vehicle position based on the determined positioning strategy includes: when the high-level visual feature positioning strategy is adopted, extracting high-level visual features from the input image, and performing positioning optimization based on the high-level visual features and the fusion map to determine the vehicle position. Specifically, this includes determining the vehicle position based on T* = argmin Σ ||m_i - T·M_i||, where T is the vehicle pose, and m_i and M_i are, respectively, a high-level visual feature detected in real time and the high-level visual feature in the fusion map to which it is matched.
In some embodiments, the determining a vehicle position based on the determined positioning strategy includes: when the fusion positioning strategy is adopted, extracting high-level visual features from the input image, and performing joint optimization based on the low-level visual features, the high-level visual features, and the fusion map to determine the vehicle position. Specifically, this includes determining the vehicle position based on T* = argmin (Σ ||p_i - T·P_i|| + Σ ||m_i - T·M_i||), where T is the vehicle pose, p_i and P_i are the matched real-time and fusion-map low-level visual features, and m_i and M_i are the matched real-time and fusion-map high-level visual features. A sketch of this joint optimization is given below.
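Under the same simplifying assumptions as the earlier pose sketch (3-D point matches, rotation-vector pose, least squares in place of a sum of norms), the joint cost can be illustrated as follows; names are hypothetical.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def joint_residuals(x, p, P, m, M):
    """Stack the low-level residuals (p_i - T·P_i) and the high-level
    residuals (m_i - T·M_i) for the joint cost
    T* = argmin (Σ ||p_i - T·P_i|| + Σ ||m_i - T·M_i||)."""
    R, t = Rotation.from_rotvec(x[:3]), x[3:]
    r_low = (np.asarray(p) - (R.apply(P) + t)).ravel()
    r_high = (np.asarray(m) - (R.apply(M) + t)).ravel()
    return np.concatenate([r_low, r_high])


def estimate_pose_fused(p, P, m, M, x0=None):
    """Jointly optimize the pose over both feature types."""
    if x0 is None:
        x0 = np.zeros(6)
    return least_squares(joint_residuals, x0, args=(p, P, m, M)).x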
FIG. 4 is a schematic view of a positioning device according to some embodiments of the present application. The positioning device shown in fig. 4 is used to perform the method as described in fig. 1-3.
As shown in FIG. 4, the positioning apparatus includes a fusion map determination unit 410, a low-level visual feature positioning unit 420, a positioning strategy determination unit 430, and a positioning determination unit 440. Wherein:
the fusion map determination unit 410 is configured to determine a fusion map based on the low-level visual features and the high-level visual features;
the low-level visual feature positioning unit 420 is configured to extract low-level visual features of the input image, perform positioning optimization based on the low-level visual features and the fusion map, and determine the number of inliers and the inlier high-depth ratio, where the inlier high-depth ratio is the ratio of the number of inliers whose depth is greater than a depth threshold to the total number of inliers;
the positioning strategy determination unit 430 is configured to determine a positioning strategy based on the number of inliers and the inlier high-depth ratio;
the positioning determination unit 440 is configured to determine the vehicle position based on the determined positioning strategy.
Specifically, the fusion map determination unit 410 is configured to: construct a low-level visual feature map based on the low-level visual features, and construct a high-level visual feature map based on the high-level visual features; and transform the map constructed from the low-level visual features and the map constructed from the high-level visual features into the same coordinate system and directly superimpose them to determine the fusion map. Further, the fusion map determination unit is also configured to: prune redundant low-level features, the redundant low-level features being low-level features located within a predetermined area around the high-level features.
Specifically, the positioning strategy determination unit 430 is configured to: adopt a low-level visual feature positioning strategy when the number of inliers is greater than th_i_high and the inlier high-depth ratio is less than th_d_low; adopt a high-level visual feature positioning strategy when the number of inliers is less than th_i_low and the inlier high-depth ratio is greater than th_d_high; and adopt a fusion positioning strategy when the number of inliers is not greater than th_i_high and the inlier high-depth ratio is not greater than th_d_high, or when the number of inliers is not less than th_i_low and the inlier high-depth ratio is not less than th_d_low; where th_i_high and th_i_low are thresholds on the number of inliers with th_i_high greater than th_i_low, and th_d_low and th_d_high are thresholds on the inlier high-depth ratio with th_d_low less than th_d_high.
Specifically, the positioning determination unit 440 is configured to: when the low-level visual feature positioning strategy is adopted, use the result of the low-level visual feature positioning optimization as the vehicle position;
when the high-level visual feature positioning strategy is adopted, extract high-level visual features from the input image, and perform positioning optimization based on the high-level visual features and the fusion map to determine the vehicle position;
when the fusion positioning strategy is adopted, extract high-level visual features from the input image, and perform joint optimization based on the low-level visual features, the high-level visual features, and the fusion map to determine the vehicle position.
Specifically, the low-level visual feature positioning unit 420 is configured to: determine the vehicle position based on T* = argmin Σ ||p_i - T·P_i||, where T is the vehicle pose, and p_i and P_i are, respectively, a low-level visual feature detected in real time and the low-level visual feature in the fusion map to which it is matched.
Specifically, the positioning determination unit 440 performing positioning optimization based on the high-level visual features and the fusion map to determine the vehicle position includes: determining the vehicle position based on T* = argmin Σ ||m_i - T·M_i||, where T is the vehicle pose, and m_i and M_i are, respectively, a high-level visual feature detected in real time and the high-level visual feature in the fusion map to which it is matched.
Specifically, the positioning determination unit 440 performing joint optimization based on the low-level visual features, the high-level visual features, and the fusion map to determine the vehicle position includes: determining the vehicle position based on T* = argmin (Σ ||p_i - T·P_i|| + Σ ||m_i - T·M_i||), where T is the vehicle pose, p_i and P_i are the matched real-time and fusion-map low-level visual features, and m_i and M_i are the matched real-time and fusion-map high-level visual features.
FIG. 5 is a schematic structural diagram of an electronic device suitable for implementing an embodiment of the present application.
As shown in FIG. 5, the electronic apparatus 500 includes a Central Processing Unit (CPU) 501 that can execute the various processes of the embodiments shown in FIGS. 1 to 3 in accordance with a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage section 508 into a Random Access Memory (RAM) 503. The RAM 503 also stores the various programs and data necessary for the operation of the electronic apparatus 500. The CPU 501, ROM 502, and RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
The following components are connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse, and the like; an output portion 507 including a display such as a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD), and a speaker; a storage portion 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card or a modem. The communication section 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as necessary. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 510 as necessary, so that a computer program read out from it can be installed into the storage section 508 as needed.
In particular, according to embodiments of the present application, the methods described above with reference to FIGS. 1-3 may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program containing program code for performing the methods of FIGS. 1-3. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 509 and/or installed from the removable medium 511.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present application may be implemented by software or hardware. The units or modules described may also be provided in a processor, and the names of the units or modules do not in some cases constitute a limitation of the units or modules themselves.
As another aspect, the present application also provides a computer-readable storage medium, which may be the computer-readable storage medium included in the apparatus in the above-described embodiment; or it may be a separate computer readable storage medium not incorporated into the device. The computer readable storage medium stores one or more programs for use by one or more processors in performing the methods described herein.
In summary, the present application provides a positioning method, an apparatus, an electronic device and a computer-readable storage medium thereof. By performing fusion positioning based on the low-level visual features and the high-level visual features, the data storage capacity is reduced, and the robustness, stability and success rate of the system are improved.
It is to be understood that the above-described embodiments are merely illustrative of the principles of the present application and are not to be construed as limiting it. Any modification, equivalent replacement, improvement, or the like made without departing from the spirit and scope of the present application shall fall within its protection scope. Further, the appended claims are intended to cover all such changes and modifications that fall within the scope and range of equivalents of the appended claims.

Claims (18)

1. A method of positioning, comprising:
determining a fusion map based on the low-level visual features and the high-level visual features;
extracting low-level visual features of the input image, performing positioning optimization based on the low-level visual features and the fusion map, and determining the number of inliers and the inlier high-depth ratio;
determining a positioning strategy based on the number of inliers and the inlier high-depth ratio;
determining a vehicle position based on the determined positioning strategy;
wherein the inlier high-depth ratio is the ratio of the number of inliers whose depth is greater than a depth threshold to the total number of inliers;
wherein the determining a fusion map based on the low-level visual features and the high-level visual features comprises:
constructing a low-level visual feature map based on the low-level visual features, and constructing a high-level visual feature map based on the high-level visual features;
transforming the map constructed from the low-level visual features and the map constructed from the high-level visual features into the same coordinate system and directly superimposing them to determine the fusion map;
wherein the determining a positioning strategy based on the number of inliers and the inlier high-depth ratio comprises:
when the number of inliers is greater than th_i_high and the inlier high-depth ratio is less than th_d_low, adopting a low-level visual feature positioning strategy;
when the number of inliers is less than th_i_low and the inlier high-depth ratio is greater than th_d_high, adopting a high-level visual feature positioning strategy;
when the number of inliers and the inlier high-depth ratio satisfy neither of the above two conditions, adopting a fusion positioning strategy;
wherein th_i_high and th_i_low are thresholds on the number of inliers, th_i_high being greater than th_i_low, and th_d_low and th_d_high are thresholds on the inlier high-depth ratio, th_d_low being less than th_d_high.
2. The method of claim 1, wherein the determining a fusion map further comprises:
pruning redundant low-level features, the redundant low-level features being low-level features located within a predetermined area around the high-level features.
3. The method of claim 1, wherein the determining a vehicle position based on the determined positioning strategy comprises:
when the low-level visual feature positioning strategy is adopted, using the result of the low-level visual feature positioning optimization as the vehicle position.
4. The method of claim 1, wherein the determining a vehicle position based on the determined positioning strategy comprises:
when the high-level visual feature positioning strategy is adopted, extracting high-level visual features from the input image;
and performing positioning optimization based on the high-level visual features and the fusion map to determine the vehicle position.
5. The method of claim 1, wherein the determining a vehicle position based on the determined positioning strategy comprises:
when the fusion positioning strategy is adopted, extracting high-level visual features from the input image;
and performing joint optimization based on the low-level visual features, the high-level visual features, and the fusion map to determine the vehicle position.
6. The method of claim 1, wherein the performing positioning optimization based on the low-level visual features and the fusion map comprises:
determining the vehicle position based on T* = argmin Σ ||p_i - T·P_i||, where T is the vehicle pose, and p_i and P_i are, respectively, a low-level visual feature detected in real time and the low-level visual feature in the fusion map to which it is matched.
7. The method of claim 4, wherein the performing positioning optimization based on the high-level visual features and the fusion map to determine the vehicle position comprises:
determining the vehicle position based on T* = argmin Σ ||m_i - T·M_i||, where T is the vehicle pose, and m_i and M_i are, respectively, a high-level visual feature detected in real time and the high-level visual feature in the fusion map to which it is matched.
8. The method of claim 5, wherein the performing joint optimization based on the low-level visual features, the high-level visual features, and the fusion map to determine the vehicle position comprises:
determining the vehicle position based on T* = argmin (Σ ||p_i - T·P_i|| + Σ ||m_i - T·M_i||), where T is the vehicle pose, p_i and P_i are the matched real-time and fusion-map low-level visual features, and m_i and M_i are the matched real-time and fusion-map high-level visual features.
9. A positioning device, comprising:
a fusion map determination unit, for determining a fusion map based on the low-level visual features and the high-level visual features;
a low-level visual feature positioning unit, for extracting low-level visual features of the input image, performing positioning optimization based on the low-level visual features and the fusion map, and determining the number of inliers and the inlier high-depth ratio;
a positioning strategy determination unit, for determining a positioning strategy based on the number of inliers and the inlier high-depth ratio;
a positioning determination unit, for determining a vehicle position based on the determined positioning strategy;
wherein the inlier high-depth ratio is the ratio of the number of inliers whose depth is greater than a depth threshold to the total number of inliers;
wherein the fusion map determination unit is specifically configured to:
construct a low-level visual feature map based on the low-level visual features, and construct a high-level visual feature map based on the high-level visual features;
transform the map constructed from the low-level visual features and the map constructed from the high-level visual features into the same coordinate system and directly superimpose them to determine the fusion map;
the positioning strategy determination unit is specifically configured to:
when the number of inliers is greater than th_i_high and the inlier high-depth ratio is less than th_d_low, adopt a low-level visual feature positioning strategy;
when the number of inliers is less than th_i_low and the inlier high-depth ratio is greater than th_d_high, adopt a high-level visual feature positioning strategy;
when the number of inliers is not greater than th_i_high and the inlier high-depth ratio is not greater than th_d_high, or when the number of inliers is not less than th_i_low and the inlier high-depth ratio is not less than th_d_low, adopt a fusion positioning strategy;
wherein th_i_high and th_i_low are thresholds on the number of inliers, th_i_high being greater than th_i_low, and th_d_low and th_d_high are thresholds on the inlier high-depth ratio, th_d_low being less than th_d_high.
10. The positioning apparatus according to claim 9, wherein the fused map determining unit is further specifically configured to:
prune redundant low-level features, the redundant low-level features being low-level features located within a predetermined area around the high-level features.
11. The positioning apparatus of claim 9, wherein the positioning determination unit is specifically configured to:
when the low-level visual feature positioning strategy is adopted, use the result of the low-level visual feature positioning optimization as the vehicle position.
12. The positioning apparatus of claim 9, wherein the positioning determination unit is specifically configured to:
when the high-level visual feature positioning strategy is adopted, extract high-level visual features from the input image;
and perform positioning optimization based on the high-level visual features and the fusion map to determine the vehicle position.
13. The positioning apparatus of claim 9, wherein the positioning determination unit is specifically configured to:
when the fusion positioning strategy is adopted, extract high-level visual features from the input image;
and perform joint optimization based on the low-level visual features, the high-level visual features, and the fusion map to determine the vehicle position.
14. The positioning apparatus of claim 9, wherein the low-level visual feature positioning unit is specifically configured to:
determine the vehicle position based on T* = argmin Σ ||p_i - T·P_i||, where T is the vehicle pose, and p_i and P_i are, respectively, a low-level visual feature detected in real time and the low-level visual feature in the fusion map to which it is matched.
15. The positioning apparatus of claim 12, wherein the positioning determination unit performing positioning optimization based on the high-level visual features and the fusion map to determine the vehicle position specifically comprises:
determining the vehicle position based on T* = argmin Σ ||m_i - T·M_i||, where T is the vehicle pose, and m_i and M_i are, respectively, a high-level visual feature detected in real time and the high-level visual feature in the fusion map to which it is matched.
16. The positioning apparatus of claim 13, wherein the positioning determination unit performing joint optimization based on the low-level visual features, the high-level visual features, and the fusion map to determine the vehicle position comprises:
determining the vehicle position based on T* = argmin (Σ ||p_i - T·P_i|| + Σ ||m_i - T·M_i||), where T is the vehicle pose, p_i and P_i are the matched real-time and fusion-map low-level visual features, and m_i and M_i are the matched real-time and fusion-map high-level visual features.
17. An electronic device, comprising:
a memory and one or more processors;
wherein the memory is communicatively connected to the one or more processors and stores instructions executable by the one or more processors, and when the instructions are executed by the one or more processors, the electronic device implements the positioning method of any one of claims 1-8.
18. A computer-readable storage medium having stored thereon computer-executable instructions operable to implement the positioning method of any one of claims 1-8 when executed by a computing device.
CN202010397293.7A 2020-05-12 2020-05-12 Positioning method, positioning device, electronic equipment and computer readable storage medium Active CN111765892B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010397293.7A CN111765892B (en) 2020-05-12 2020-05-12 Positioning method, positioning device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010397293.7A CN111765892B (en) 2020-05-12 2020-05-12 Positioning method, positioning device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111765892A CN111765892A (en) 2020-10-13
CN111765892B true CN111765892B (en) 2022-04-29

Family

ID=72719028

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010397293.7A Active CN111765892B (en) 2020-05-12 2020-05-12 Positioning method, positioning device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111765892B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111060113B (en) * 2019-12-31 2022-04-08 歌尔股份有限公司 Map updating method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127739A (en) * 2016-06-16 2016-11-16 华东交通大学 A kind of RGB D SLAM method of combination monocular vision
CN108920584A (en) * 2018-06-25 2018-11-30 广州视源电子科技股份有限公司 A kind of semanteme grating map generation method and its device
CN109556617A (en) * 2018-11-09 2019-04-02 同济大学 A kind of map elements extracting method of automatic Jian Tu robot
CN110415297A (en) * 2019-07-12 2019-11-05 北京三快在线科技有限公司 Localization method, device and unmanned equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7668797B2 (en) * 2006-04-07 2010-02-23 Gary Kuvich Active semiotic system for image and video understanding by robots and unmanned vehicles, methods and apparatus

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127739A (en) * 2016-06-16 2016-11-16 华东交通大学 A kind of RGB D SLAM method of combination monocular vision
CN108920584A (en) * 2018-06-25 2018-11-30 广州视源电子科技股份有限公司 A kind of semanteme grating map generation method and its device
CN109556617A (en) * 2018-11-09 2019-04-02 同济大学 A kind of map elements extracting method of automatic Jian Tu robot
CN110415297A (en) * 2019-07-12 2019-11-05 北京三快在线科技有限公司 Localization method, device and unmanned equipment

Also Published As

Publication number Publication date
CN111765892A (en) 2020-10-13

Similar Documents

Publication Publication Date Title
KR102447352B1 (en) Method and device for traffic light detection and intelligent driving, vehicle, and electronic device
US20190156144A1 (en) Method and apparatus for detecting object, method and apparatus for training neural network, and electronic device
WO2020103893A1 (en) Lane line property detection method, device, electronic apparatus, and readable storage medium
CN114782499A (en) Image static area extraction method and device based on optical flow and view geometric constraint
CN110853085B (en) Semantic SLAM-based mapping method and device and electronic equipment
CN114463603B (en) Training method and device for image detection model, electronic equipment and storage medium
CN114820679B (en) Image labeling method and device electronic device and storage medium
CN111765892B (en) Positioning method, positioning device, electronic equipment and computer readable storage medium
CN113076889B (en) Container lead seal identification method, device, electronic equipment and storage medium
CN112784675B (en) Target detection method and device, storage medium and terminal
CN113409340A (en) Semantic segmentation model training method, semantic segmentation device and electronic equipment
CN116189150B (en) Monocular 3D target detection method, device, equipment and medium based on fusion output
CN114429631B (en) Three-dimensional object detection method, device, equipment and storage medium
CN115565072A (en) Road garbage recognition and positioning method and device, electronic equipment and medium
CN113516013B (en) Target detection method, target detection device, electronic equipment, road side equipment and cloud control platform
CN113762027B (en) Abnormal behavior identification method, device, equipment and storage medium
CN112634294A (en) Method for measuring boundary performance of semantic segmentation network
CN114581890B (en) Method and device for determining lane line, electronic equipment and storage medium
CN113963322B (en) Detection model training method and device and electronic equipment
CN113807293B (en) Deceleration strip detection method, deceleration strip detection system, deceleration strip detection equipment and computer readable storage medium
CN117649530B (en) Point cloud feature extraction method, system and equipment based on semantic level topological structure
CN117516561A (en) Map construction method, apparatus, device, storage medium, and program product
CN117407473A (en) Method and device for generating evaluation data, electronic equipment and storage medium
CN117078997A (en) Image processing or training method, device, equipment and medium of image processing model
CN117671474A (en) Remote sensing image building contour extraction method, device, server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant