CN111765892A - Positioning method, positioning device, electronic equipment and computer readable storage medium - Google Patents
- Publication number
- CN111765892A CN111765892A CN202010397293.7A CN202010397293A CN111765892A CN 111765892 A CN111765892 A CN 111765892A CN 202010397293 A CN202010397293 A CN 202010397293A CN 111765892 A CN111765892 A CN 111765892A
- Authority
- CN
- China
- Prior art keywords
- low-level visual features
- positioning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
- G01C21/30—Map- or contour-matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
Landscapes
- Engineering & Computer Science (AREA)
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- General Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Automation & Control Theory (AREA)
- Traffic Control Systems (AREA)
- Navigation (AREA)
Abstract
The application discloses a positioning method, comprising: determining a fusion map based on low-level visual features and high-level visual features; extracting low-level visual features of an input image, performing positioning optimization based on the low-level visual features and the fusion map, and determining the number of inliers and the inlier high-depth ratio; determining a positioning strategy based on the number of inliers and the inlier high-depth ratio; and determining a vehicle position based on the determined positioning strategy. The inlier high-depth ratio is the ratio of the number of inliers whose depth exceeds a depth threshold to the total number of inliers.
Description
Technical Field
The application relates to the field of unmanned driving, and in particular to a positioning method, a positioning device, electronic equipment, and a storage medium.
Background
Vision-based positioning technology is an important component of intelligent driving systems and has wide application scenarios. Current mainstream visual positioning algorithms include the feature-point method, the direct method, and the optical-flow method, all of which rely mainly on low-level visual features such as feature points or even pixel-level information. Low-level visual features have the advantage of being abundant and can provide highly accurate positioning results; however, they are sensitive to environmental changes such as variations in illumination intensity, they are poorly interpretable to humans, and, because of the abundance of feature points, low-level visual feature maps generally require large storage.
For these reasons, the industry is increasingly exploring the application of high-level visual features to positioning technology. High-level semantic features are strongly interpretable and insensitive to environmental changes; at the same time, however, they are sparse and provide fewer positioning constraints than low-level features.
Disclosure of Invention
The embodiments of the present application provide a positioning method, a positioning device, an electronic device, and a computer-readable storage medium, addressing the problems of low positioning accuracy and high failure rate in the prior art.
A first aspect of the embodiments of the present application provides a positioning method, including: determining a fusion map based on low-level visual features and high-level visual features; extracting low-level visual features of an input image, performing positioning optimization based on the low-level visual features and the fusion map, and determining the number of inliers and the inlier high-depth ratio; determining a positioning strategy based on the number of inliers and the inlier high-depth ratio; and determining a vehicle position based on the determined positioning strategy; wherein the inlier high-depth ratio is the ratio of the number of inliers whose depth exceeds a depth threshold to the total number of inliers.
In some embodiments, determining a fusion map based on the low-level visual features and the high-level visual features comprises: constructing a low-level visual feature map based on the low-level visual features, and constructing a high-level visual feature map based on the high-level visual features; and unifying the two maps into the same coordinate system and directly superposing them to determine the fusion map.
In some embodiments, determining a fusion map further comprises: pruning redundant low-level features, where redundant low-level features are low-level features within a predetermined area near the high-level features.
In some embodiments, determining a positioning strategy based on the number of inliers and the inlier high-depth ratio includes: when the number of inliers is greater than th_i_high and the inlier high-depth ratio is less than th_d_low, adopting the low-level visual feature positioning strategy; when the number of inliers is less than th_i_low and the inlier high-depth ratio is greater than th_d_high, adopting the high-level visual feature positioning strategy; and when the number of inliers and the inlier high-depth ratio satisfy neither of these two conditions, adopting the fusion positioning strategy. Here, th_i_high and th_i_low are thresholds on the number of inliers, with th_i_high greater than th_i_low; th_d_low and th_d_high are thresholds on the inlier high-depth ratio, with th_d_low less than th_d_high.
In some embodiments, determining a vehicle position based on the determined positioning strategy comprises: when the low-level visual feature positioning strategy is adopted, using the result of the low-level visual feature positioning optimization as the vehicle position.
In some embodiments, determining a vehicle position based on the determined positioning strategy comprises: when the high-level visual feature positioning strategy is adopted, extracting high-level visual features from the input image; and performing positioning optimization based on the high-level visual features and the fusion map to determine the vehicle position.
In some embodiments, determining a vehicle position based on the determined positioning strategy comprises: when the fusion positioning strategy is adopted, extracting high-level visual features from the input image; and performing joint optimization based on the low-level visual features, the high-level visual features, and the fusion map to determine the vehicle position.
In some embodiments, performing positioning optimization based on the low-level visual features and the fusion map comprises: determining the vehicle position based on T* = argmin Σ‖p_i − T·P_i‖, where T is the vehicle pose, and p_i and P_i are, respectively, the matched low-level visual features detected in real time and the corresponding low-level visual features in the fusion map.
In some embodiments, performing positioning optimization based on the high-level visual features and the fusion map to determine the vehicle position comprises: determining the vehicle position based on T* = argmin Σ‖m_i − T·M_i‖, where T is the vehicle pose, and m_i and M_i are, respectively, the matched high-level visual features detected in real time and the corresponding high-level visual features in the fusion map.
In some embodiments, performing joint optimization based on the low-level visual features, the high-level visual features, and the fusion map to determine the vehicle position comprises: determining the vehicle position based on T* = argmin(Σ‖p_i − T·P_i‖ + Σ‖m_i − T·M_i‖), where T is the vehicle pose, p_i and P_i are, respectively, the matched low-level visual features detected in real time and the corresponding low-level visual features in the fusion map, and m_i and M_i are, respectively, the matched high-level visual features detected in real time and the corresponding high-level visual features in the fusion map.
A second aspect of the embodiments of the present application provides a positioning apparatus, including: a fusion map determination unit for determining a fusion map based on low-level visual features and high-level visual features; a low-level visual feature positioning unit for extracting low-level visual features of an input image, performing positioning optimization based on the low-level visual features and the fusion map, and determining the number of inliers and the inlier high-depth ratio; a positioning strategy determination unit for determining a positioning strategy based on the number of inliers and the inlier high-depth ratio; and a positioning determination unit for determining a vehicle position based on the determined positioning strategy; wherein the inlier high-depth ratio is the ratio of the number of inliers whose depth exceeds a depth threshold to the total number of inliers.
In some embodiments, the fusion map determination unit is specifically configured to: construct a low-level visual feature map based on the low-level visual features, construct a high-level visual feature map based on the high-level visual features, and unify the two maps into the same coordinate system and directly superpose them to determine the fusion map.
In some embodiments, the fusion map determination unit is further configured to: prune redundant low-level features, where redundant low-level features are low-level features within a predetermined area near the high-level features.
In some embodiments, the positioning strategy determination unit is specifically configured to: when the number of inliers is greater than th_i_high and the inlier high-depth ratio is less than th_d_low, adopt the low-level visual feature positioning strategy; when the number of inliers is less than th_i_low and the inlier high-depth ratio is greater than th_d_high, adopt the high-level visual feature positioning strategy; and otherwise, that is, when neither of these two conditions is satisfied, adopt the fusion positioning strategy. Here, th_i_high and th_i_low are thresholds on the number of inliers, with th_i_high greater than th_i_low; th_d_low and th_d_high are thresholds on the inlier high-depth ratio, with th_d_low less than th_d_high.
In some embodiments, the positioning determination unit is specifically configured to: when the low-level visual feature positioning strategy is adopted, use the result of the low-level visual feature positioning optimization as the vehicle position.
In some embodiments, the positioning determination unit is specifically configured to: when the high-level visual feature positioning strategy is adopted, extract high-level visual features from the input image; and perform positioning optimization based on the high-level visual features and the fusion map to determine the vehicle position.
In some embodiments, the positioning determination unit is specifically configured to: when the fusion positioning strategy is adopted, extract high-level visual features from the input image; and perform joint optimization based on the low-level visual features, the high-level visual features, and the fusion map to determine the vehicle position.
In some embodiments, the low-level visual feature positioning unit is specifically configured to: determine the vehicle position based on T* = argmin Σ‖p_i − T·P_i‖, where T is the vehicle pose, and p_i and P_i are, respectively, the matched low-level visual features detected in real time and the corresponding low-level visual features in the fusion map.
In some embodiments, the positioning determination unit performs positioning optimization based on the high-level visual features and the fusion map to determine the vehicle position, specifically including: determining the vehicle position based on T* = argmin Σ‖m_i − T·M_i‖, where m_i and M_i are, respectively, the matched high-level visual features detected in real time and the corresponding high-level visual features in the fusion map.
In some embodiments, the positioning determination unit performs joint optimization based on the low-level visual features, the high-level visual features, and the fusion map to determine the vehicle position, including: determining the vehicle position based on T* = argmin(Σ‖p_i − T·P_i‖ + Σ‖m_i − T·M_i‖), where T is the vehicle pose, p_i and P_i are, respectively, the matched low-level visual features detected in real time and the corresponding low-level visual features in the fusion map, and m_i and M_i are, respectively, the matched high-level visual features detected in real time and the corresponding high-level visual features in the fusion map.
A third aspect of an embodiment of the present application provides an electronic device, including: a memory and one or more processors; wherein the memory is communicatively connected to the one or more processors, and the memory stores instructions executable by the one or more processors, and when the instructions are executed by the one or more processors, the electronic device is configured to implement the positioning method according to the foregoing embodiments.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium, on which computer-executable instructions are stored, and when the computer-executable instructions are executed by a computing device, the computer-executable instructions can be used to implement the positioning method according to the foregoing embodiments.
The beneficial effects of this application are as follows:
The application combines the advantages of low-level and high-level visual features and compensates for their respective shortcomings, providing a fusion positioning method. In practical application, the intelligent driving vehicle can smoothly obtain its position even in scenes where either single positioning method performs poorly, which significantly improves the robustness of the overall intelligent driving system;
The application also exploits the sparsity of high-level visual features to optimize the size of the low-level visual feature map, so that the data storage of the fusion map is significantly reduced while the original positioning accuracy is maintained, lowering the storage requirements on the vehicle-mounted computing platform.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the application, and a person skilled in the art could, without inventive effort, apply the application to other similar scenarios on the basis of these drawings. Unless otherwise apparent from the context or otherwise indicated, like reference numerals in the figures refer to like structures and operations.
FIG. 1 is a schematic diagram of a positioning method according to some embodiments of the present application;
FIG. 2 is a schematic illustration of a fusion map determination method according to some embodiments of the present application;
FIG. 3 is a schematic illustration of a location policy determination according to some embodiments of the present application;
FIG. 4 is a schematic view of a positioning device according to some embodiments of the present application; and
FIG. 5 is a schematic diagram of an electronic device shown in accordance with some embodiments of the present application.
Detailed Description
In the following detailed description, numerous specific details of the present application are set forth by way of examples in order to provide a thorough understanding of the relevant disclosure. It will be apparent, however, to one skilled in the art that the present application may be practiced without these specific details. It should be understood that the terms "system," "apparatus," "unit," and/or "module" used herein are a way of distinguishing different components, elements, parts, or assemblies at different levels. However, these terms may be replaced by other expressions that achieve the same purpose.
It will be understood that when a device, unit or module is referred to as being "on" … … "," connected to "or" coupled to "another device, unit or module, it can be directly on, connected or coupled to or in communication with the other device, unit or module, or intervening devices, units or modules may be present, unless the context clearly dictates otherwise. For example, as used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present application. As used in the specification and claims of this application, the singular forms "a," "an," and/or "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. In general, the terms "comprises" and "comprising" merely indicate the inclusion of the explicitly identified features, integers, steps, operations, elements, and/or components, and do not exclude the presence of other features, integers, steps, operations, elements, and/or components.
These and other features and characteristics of the present application, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, will be better understood upon consideration of the following description and the accompanying drawings, which form a part of this specification. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the application. It will be understood that the figures are not drawn to scale.
Various block diagrams are used in this application to illustrate various variations of embodiments according to the application. It should be understood that the foregoing and following structures are not intended to limit the present application. The protection scope of this application is subject to the claims.
In view of the above advantages and disadvantages of positioning methods that use low-level or high-level visual features alone, the present invention provides a positioning method that fuses low-level and high-level visual features, so as to combine their respective advantages and expand the application scenarios of either single method.
In addition, low-level visual feature maps require large storage because of their rich feature points. In the process of building the fused low-level and high-level visual feature map on which fusion positioning depends, the present invention also exploits the advantages of high-level features to optimize and prune the number of low-level features, effectively reducing the size of the map.
Fig. 1 illustrates a schematic diagram of a positioning method according to some embodiments of the present application.
At 102, a fusion map is determined based on the low-level visual features and the high-level visual features. Generally, the content of an image includes not only low-level visual features such as color, texture, shape, and structure, but also middle-level features such as objects, and high-level semantic feature information such as scenes and emotions. In general-domain retrieval, high-level semantic features are difficult to infer directly from low-level features because domain knowledge of the general domain cannot be established manually; this is known as the "semantic gap." High-level visual features here refer to interpretable visual features carrying semantics. The process of determining a fusion map based on the low-level visual features and the high-level visual features is shown in FIG. 2 and described below.
At 202, a low-level visual feature map is constructed based on the low-level visual features, and a high-level visual feature map is constructed based on the high-level visual features. To enhance the flexibility and practicability of the map-building method, the fused map is built by post-processing, which does not disturb the original map-building processes and preserves their independence and modularity.
At 204, the map constructed from the low-level visual features and the map constructed from the high-level visual features are unified into the same coordinate system and directly superposed to determine the fusion map, as sketched below.
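As an illustration of step 204, the following is a minimal Python sketch of the direct superposition, assuming each feature map is an N x 3 point array with a known 4 x 4 homogeneous transform into a shared world frame; the names superpose_maps, T_low, and T_high are illustrative and not taken from the patent:

import numpy as np

def to_world(points, T):
    # points: (N, 3) array of map features; T: 4x4 homogeneous transform
    # from the map's local frame into the shared world frame.
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (homogeneous @ T.T)[:, :3]

def superpose_maps(low_pts, T_low, high_pts, T_high):
    # Express both feature maps in the same coordinate system and
    # superpose them directly into one fusion map.
    return {"low": to_world(low_pts, T_low), "high": to_world(high_pts, T_high)}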
In some embodiments, after determining the fusion map, the method further includes: pruning redundant low-level features, where redundant low-level features are low-level features within a predetermined area near the high-level features. The reason is that the high-level features already describe the information of those areas, so after the two maps are directly superposed in the same coordinate system, the low-level features within a certain area near the high-level features can be deleted, reducing the overall data volume of the map, as sketched below. In some embodiments, the preset area is a predefined area of arbitrary shape around the high-level semantic features, such as a circular or rectangular area.
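A minimal sketch of this pruning step, assuming both feature sets are already in the fusion map's coordinate system and the preset area is a sphere of a given radius around each high-level feature; the k-d tree and the function name are illustrative choices, not prescribed by the patent:

import numpy as np
from scipy.spatial import cKDTree

def prune_redundant_low_level(low_pts, high_pts, radius):
    # A high-level feature already describes its surrounding region, so
    # any low-level feature within `radius` of some high-level feature
    # is redundant and dropped, shrinking the fusion map.
    distances, _ = cKDTree(high_pts).query(low_pts, k=1)
    return low_pts[distances > radius]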
At 104, low-level visual features of the input image are extracted, positioning optimization is performed based on the low-level visual features and the fusion map, and the number of inliers and the inlier high-depth ratio are determined.
Thus, when a single frame is first processed, the low-level and high-level visual features are not both extracted at once. Instead, positioning optimization based on the low-level visual features is attempted first, and the high-level visual features are applied only if the positioning result is unsatisfactory.
In some embodiments, performing positioning optimization based on the low-level visual features and the fusion map comprises: determining the vehicle position based on T* = argmin Σ‖p_i − T·P_i‖, where T is the vehicle pose, and p_i and P_i are, respectively, the matched low-level visual features detected in real time and the corresponding low-level visual features in the fusion map.
While computing the optimal solution T*, the RANSAC algorithm is used to iterate and count the number of inliers (i.e., the low-level visual features that satisfy the computed model). On the one hand, the more inliers there are, i.e., the more positioning constraints, the better the achievable positioning accuracy, so the number of inliers can be used directly to evaluate the quality of the positioning result. On the other hand, the larger the depth of a low-level visual feature extracted from the image, the larger the error it produces; therefore, if many of the inliers have large depth, the positioning result can be considered poor. Accordingly, two parameters, the number of inliers and the inlier high-depth ratio, are used to select the positioning strategy. The inlier high-depth ratio is the ratio of the number of inliers whose depth exceeds a depth threshold to the total number of inliers. The depth threshold is an appropriate preset value, for example an empirical value.
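A minimal sketch of how the two selection parameters could be computed from a RANSAC run; the names inlier_mask (the boolean mask of features consistent with the model) and depths (each matched feature's depth) are illustrative, not from the patent:

import numpy as np

def inlier_statistics(depths, inlier_mask, depth_threshold):
    # depths: (N,) depths of all matched low-level features;
    # inlier_mask: (N,) boolean mask produced by the RANSAC iteration.
    inlier_depths = depths[inlier_mask]
    n_inliers = inlier_depths.size
    # Inlier high-depth ratio: fraction of inliers deeper than the threshold.
    high_depth_ratio = np.count_nonzero(inlier_depths > depth_threshold) / max(n_inliers, 1)
    return n_inliers, high_depth_ratio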
At 106, a positioning strategy is determined based on the number of inliers and the inlier high-depth ratio. FIG. 3 is a schematic illustration of positioning strategy determination according to some embodiments of the present application.
At 302, when the number of inliers is greater than th_i_high and the inlier high-depth ratio is less than th_d_low, the low-level visual feature positioning strategy is adopted;
At 304, when the number of inliers is less than th_i_low and the inlier high-depth ratio is greater than th_d_high, the high-level visual feature positioning strategy is adopted;
At 306, when the number of inliers and the inlier high-depth ratio satisfy neither of the two cases above, the fusion positioning strategy is adopted;
Here, th_i_high and th_i_low are thresholds on the number of inliers, with th_i_high greater than th_i_low; th_d_low and th_d_high are thresholds on the inlier high-depth ratio, with th_d_low less than th_d_high.
It should be noted that FIG. 3 illustrates three cases rather than three sequential steps; the cases have no execution order. Although the wording of 306, "when the number of inliers and the inlier high-depth ratio satisfy neither of the two cases above," might suggest that 306 depends on 302 and 304 (i.e., executes after them), the condition of 306 can in fact be expressed directly as a logical relationship.
The logic described in FIG. 3 may be expressed as follows (n_i is the number of inliers, r_d is the inlier high-depth ratio):

if n_i > th_i_high and r_d < th_d_low:
    choose the low-level-feature-based strategy
else if n_i < th_i_low and r_d > th_d_high:
    choose the high-level-feature-based strategy
else:
    choose the fusion strategy
In some embodiments, the positioning method further comprises: when the vehicle is at a particular location, adopting a particular positioning strategy. For example, if it has already been determined that the low-level or the high-level visual feature positioning strategy works best in a particular area, the corresponding strategy may be adopted there directly. In some embodiments, the particular positioning strategy may be the low-level visual feature positioning strategy, the high-level visual feature positioning strategy, or another positioning strategy other than the fusion positioning strategy.
At 108, the vehicle position is determined based on the determined positioning strategy.
In some embodiments, determining a vehicle position based on the determined positioning strategy comprises: when the low-level visual feature positioning strategy is adopted, using the result of the low-level visual feature positioning optimization as the vehicle position.
In some embodiments, determining a vehicle position based on the determined positioning strategy comprises: when the high-level visual feature positioning strategy is adopted, extracting high-level visual features from the input image; and performing positioning optimization based on the high-level visual features and the fusion map to determine the vehicle position. Specifically, this comprises: determining the vehicle position based on T* = argmin Σ‖m_i − T·M_i‖, where T is the vehicle pose, and m_i and M_i are, respectively, the matched high-level visual features detected in real time and the corresponding high-level visual features in the fusion map.
In some embodiments, determining a vehicle position based on the determined positioning strategy comprises: when the fusion positioning strategy is adopted, extracting high-level visual features from the input image; and performing joint optimization based on the low-level visual features, the high-level visual features, and the fusion map to determine the vehicle position. Specifically, this comprises: determining the vehicle position based on T* = argmin(Σ‖p_i − T·P_i‖ + Σ‖m_i − T·M_i‖), where T is the vehicle pose, p_i and P_i are, respectively, the matched low-level visual features detected in real time and the corresponding low-level visual features in the fusion map, and m_i and M_i are, respectively, the matched high-level visual features detected in real time and the corresponding high-level visual features in the fusion map.
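A minimal sketch of the joint optimization, assuming matched 3-D point pairs and an SE(3) pose parameterized as a rotation vector plus a translation. Using scipy's least_squares (which minimizes a sum of squared residuals rather than a sum of norms) is an illustrative simplification, not the patent's prescribed solver; dropping either residual term recovers the corresponding single-feature strategy:

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(x, p_obs, P_map, m_obs, M_map):
    # x[:3]: rotation vector, x[3:]: translation; together they form T,
    # which maps fusion-map points into the frame of the live detections.
    R = Rotation.from_rotvec(x[:3]).as_matrix()
    t = x[3:]
    r_low = p_obs - (P_map @ R.T + t)   # low-level term:  p_i - T*P_i
    r_high = m_obs - (M_map @ R.T + t)  # high-level term: m_i - T*M_i
    return np.concatenate([r_low.ravel(), r_high.ravel()])

def fuse_localize(p_obs, P_map, m_obs, M_map):
    # p_obs/P_map: matched low-level features (live / map), (N, 3) arrays;
    # m_obs/M_map: matched high-level features (live / map), (K, 3) arrays.
    sol = least_squares(residuals, np.zeros(6), args=(p_obs, P_map, m_obs, M_map))
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]  # (R, t)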
FIG. 4 is a schematic view of a positioning device according to some embodiments of the present application. The positioning device shown in fig. 4 is used to perform the method as described in fig. 1-3.
As shown in FIG. 4, the positioning apparatus includes a fusion map determination unit 410, a low-level visual feature positioning unit 420, a positioning strategy determination unit 430, and a positioning determination unit 440. Wherein:
the fusion map determination unit 410 is configured to determine a fusion map based on the low-level visual features and the high-level visual features;
the low-level visual feature positioning unit 420 is configured to extract low-level visual features of the input image, perform positioning optimization based on the low-level visual features and the fusion map, and determine the number of inliers and the inlier high-depth ratio, where the inlier high-depth ratio is the ratio of the number of inliers whose depth exceeds a depth threshold to the total number of inliers;
the positioning strategy determination unit 430 is configured to determine a positioning strategy based on the number of inliers and the inlier high-depth ratio;
the positioning determination unit 440 is configured to determine a vehicle position based on the determined positioning strategy.
Specifically, the fusion map determination unit 410 is configured to: construct a low-level visual feature map based on the low-level visual features, construct a high-level visual feature map based on the high-level visual features, and unify the two maps into the same coordinate system and directly superpose them to determine the fusion map. Further, the fusion map determination unit is also configured to: prune redundant low-level features, where redundant low-level features are low-level features within a predetermined area near the high-level features.
Specifically, the positioning strategy determination unit 430 is configured to: when the number of inliers is greater than th_i_high and the inlier high-depth ratio is less than th_d_low, adopt the low-level visual feature positioning strategy; when the number of inliers is less than th_i_low and the inlier high-depth ratio is greater than th_d_high, adopt the high-level visual feature positioning strategy; and otherwise, that is, when neither of these two conditions is satisfied, adopt the fusion positioning strategy. Here, th_i_high and th_i_low are thresholds on the number of inliers, with th_i_high greater than th_i_low; th_d_low and th_d_high are thresholds on the inlier high-depth ratio, with th_d_low less than th_d_high.
Specifically, the positioning determination unit 440 is configured to: when the low-level visual feature positioning strategy is adopted, use the result of the low-level visual feature positioning optimization as the vehicle position;
when the high-level visual feature positioning strategy is adopted, extract high-level visual features from the input image, and perform positioning optimization based on the high-level visual features and the fusion map to determine the vehicle position;
when the fusion positioning strategy is adopted, extract high-level visual features from the input image, and perform joint optimization based on the low-level visual features, the high-level visual features, and the fusion map to determine the vehicle position.
In particular, the low-level visual feature positioning unit 420 is configured to: determine the vehicle position based on T* = argmin Σ‖p_i − T·P_i‖, where T is the vehicle pose, and p_i and P_i are, respectively, the matched low-level visual features detected in real time and the corresponding low-level visual features in the fusion map.
Specifically, the positioning determination unit 440 performs positioning optimization based on the high-level visual features and the fusion map to determine the vehicle position, which specifically includes: determining the vehicle position based on T* = argmin Σ‖m_i − T·M_i‖, where m_i and M_i are, respectively, the matched high-level visual features detected in real time and the corresponding high-level visual features in the fusion map.
Specifically, the positioning determination unit 440 performs joint optimization based on the low-level visual features, the high-level visual features, and the fusion map to determine the vehicle position, including: determining the vehicle position based on T* = argmin(Σ‖p_i − T·P_i‖ + Σ‖m_i − T·M_i‖), where T is the vehicle pose, p_i and P_i are, respectively, the matched low-level visual features detected in real time and the corresponding low-level visual features in the fusion map, and m_i and M_i are, respectively, the matched high-level visual features detected in real time and the corresponding high-level visual features in the fusion map.
Fig. 5 is a schematic structural diagram suitable for implementing an electronic device according to an embodiment of the present application.
As shown in FIG. 5, the electronic apparatus 500 includes a central processing unit (CPU) 501 that can execute the various processes of the embodiments shown in FIGS. 1 to 3 according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage section 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data necessary for the operation of the electronic apparatus 500. The CPU 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
The following components are connected to the I/O interface 505: an input section 506 including a keyboard, a mouse, and the like; an output section 507 including a display such as a cathode ray tube (CRT) or liquid crystal display (LCD), and a speaker; a storage section 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card or a modem. The communication section 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as necessary. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 510 as necessary, so that a computer program read therefrom is installed into the storage section 508 as needed.
In particular, according to embodiments of the present application, the methods described above with reference to FIGS. 1-3 may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program containing program code for performing the methods of FIGS. 1-3. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 509, and/or installed from the removable medium 511.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present application may be implemented by software or hardware. The units or modules described may also be provided in a processor, and the names of the units or modules do not in some cases constitute a limitation of the units or modules themselves.
As another aspect, the present application also provides a computer-readable storage medium, which may be the computer-readable storage medium included in the apparatus in the above-described embodiment; or it may be a separate computer readable storage medium not incorporated into the device. The computer readable storage medium stores one or more programs for use by one or more processors in performing the methods described herein.
In summary, the present application provides a positioning method, a positioning device, an electronic device, and a computer-readable storage medium. By performing fusion positioning based on low-level and high-level visual features, the data storage requirement is reduced, and the robustness, stability, and success rate of the system are improved.
It is to be understood that the above-described embodiments of the present application are merely illustrative of its principles and are not to be construed as limiting it. Any modification, equivalent replacement, improvement, and the like made without departing from the spirit and scope of the present application shall fall within its protection scope. Further, the appended claims are intended to cover all such changes and modifications as fall within the scope and range of equivalents of the appended claims.
Claims (10)
1. A method of positioning, comprising:
determining a fusion map based on the low-level visual features and the high-level visual features;
extracting low-level visual features of an input image, performing positioning optimization based on the low-level visual features and the fusion map, and determining the number of inliers and the inlier high-depth ratio;
determining a positioning strategy based on the number of inliers and the inlier high-depth ratio;
determining a vehicle position based on the determined positioning strategy;
wherein the inlier high-depth ratio is the ratio of the number of inliers whose depth exceeds a depth threshold to the total number of inliers.
2. The method of claim 1, wherein determining a fusion map based on the low-level visual features and the high-level visual features comprises:
constructing a low-level visual feature map based on the low-level visual features, and constructing a high-level visual feature map based on the high-level visual features;
unifying the map constructed from the low-level visual features and the map constructed from the high-level visual features into the same coordinate system and directly superposing them to determine the fusion map.
3. The method of claim 2, wherein the determining a fusion map further comprises:
pruning redundant low-level features, wherein redundant low-level features are low-level features within a predetermined area near the high-level features.
4. The method of claim 1, wherein determining a positioning strategy based on the number of inliers and the inlier high-depth ratio comprises:
when the number of inliers is greater than th_i_high and the inlier high-depth ratio is less than th_d_low, adopting the low-level visual feature positioning strategy;
when the number of inliers is less than th_i_low and the inlier high-depth ratio is greater than th_d_high, adopting the high-level visual feature positioning strategy;
when the number of inliers and the inlier high-depth ratio satisfy neither of the two conditions above, adopting the fusion positioning strategy;
wherein th_i_high and th_i_low are thresholds on the number of inliers, with th_i_high greater than th_i_low, and th_d_low and th_d_high are thresholds on the inlier high-depth ratio, with th_d_low less than th_d_high.
5. The method of claim 4, wherein determining the vehicle location based on the determined location strategy comprises:
when the low-level visual feature positioning strategy is adopted, using the result of the low-level visual feature positioning optimization as the vehicle position.
6. The method of claim 4, wherein determining the vehicle location based on the determined location strategy comprises:
when the high-level visual feature positioning strategy is adopted, extracting high-level visual features from the input image;
and performing positioning optimization based on the high-level visual features and the fusion map to determine the vehicle position.
7. The method of claim 4, wherein determining the vehicle location based on the determined location strategy comprises:
when a fusion positioning strategy is adopted, extracting high-level visual features in an input image;
and performing joint optimization based on the low-level visual features, the high-level visual features and the fusion map to determine vehicle positioning.
8. The method of claim 1, wherein the performing positioning optimization based on the low-level visual features and the fusion map comprises:
determining the vehicle position based on T* = argmin Σ||p_i − T·P_i||, wherein T is the vehicle pose, and p_i and P_i are, respectively, the matched low-level visual features detected in real time and the corresponding low-level visual features in the fusion map.
9. The method of claim 6, wherein the performing positioning optimization based on the high-level visual features and the fusion map to determine the vehicle position comprises:
determining the vehicle position based on T* = argmin Σ||m_i − T·M_i||, wherein T is the vehicle pose, and m_i and M_i are, respectively, the matched high-level visual features detected in real time and the corresponding high-level visual features in the fusion map.
10. The method of claim 7, wherein the jointly optimizing based on the low-level visual features, the high-level visual features, and the fusion map to determine the vehicle position comprises:
determining the vehicle position based on T* = argmin(Σ||p_i − T·P_i|| + Σ||m_i − T·M_i||), wherein T is the vehicle pose, p_i and P_i are, respectively, the matched low-level visual features detected in real time and the corresponding low-level visual features in the fusion map, and m_i and M_i are, respectively, the matched high-level visual features detected in real time and the corresponding high-level visual features in the fusion map.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010397293.7A CN111765892B (en) | 2020-05-12 | 2020-05-12 | Positioning method, positioning device, electronic equipment and computer readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010397293.7A CN111765892B (en) | 2020-05-12 | 2020-05-12 | Positioning method, positioning device, electronic equipment and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111765892A true CN111765892A (en) | 2020-10-13 |
CN111765892B CN111765892B (en) | 2022-04-29 |
Family
ID=72719028
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010397293.7A Active CN111765892B (en) | 2020-05-12 | 2020-05-12 | Positioning method, positioning device, electronic equipment and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111765892B (en) |
- 2020-05-12: CN application CN202010397293.7A filed (granted as patent CN111765892B, active)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070239314A1 (en) * | 2006-04-07 | 2007-10-11 | Gary Kuvich | Active semiotic system for image and video understanding by robots and unmanned vehicles, methods and apparatus |
CN106127739A (en) * | 2016-06-16 | 2016-11-16 | 华东交通大学 | A kind of RGB D SLAM method of combination monocular vision |
CN108920584A (en) * | 2018-06-25 | 2018-11-30 | 广州视源电子科技股份有限公司 | Semantic grid map generation method and device |
CN109556617A (en) * | 2018-11-09 | 2019-04-02 | 同济大学 | A kind of map elements extracting method of automatic Jian Tu robot |
CN110415297A (en) * | 2019-07-12 | 2019-11-05 | 北京三快在线科技有限公司 | Localization method, device and unmanned equipment |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220307859A1 (en) * | 2019-12-31 | 2022-09-29 | Goertek Inc. | Method and device for updating map |
US12031837B2 (en) * | 2019-12-31 | 2024-07-09 | Goertek Inc. | Method and device for updating map |
Also Published As
Publication number | Publication date |
---|---|
CN111765892B (en) | 2022-04-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111666921A (en) | Vehicle control method, apparatus, computer device, and computer-readable storage medium | |
CN108734058B (en) | Obstacle type identification method, device, equipment and storage medium | |
Ebrahimpour et al. | Vanishing point detection in corridors: using Hough transform and K-means clustering | |
Mei et al. | Scene-adaptive off-road detection using a monocular camera | |
US20220075994A1 (en) | Real-time facial landmark detection | |
CN108305260A (en) | Detection method, device and the equipment of angle point in a kind of image | |
CN110853085A (en) | Semantic SLAM-based mapping method and device and electronic equipment | |
CN113781493A (en) | Image processing method, image processing apparatus, electronic device, medium, and computer program product | |
CN114091515A (en) | Obstacle detection method, obstacle detection device, electronic apparatus, and storage medium | |
CN115424245A (en) | Parking space identification method, electronic device and storage medium | |
CN111765892B (en) | Positioning method, positioning device, electronic equipment and computer readable storage medium | |
CN113409340A (en) | Semantic segmentation model training method, semantic segmentation device and electronic equipment | |
CN112446385B (en) | Scene semantic segmentation method and device and electronic equipment | |
CN116189150B (en) | Monocular 3D target detection method, device, equipment and medium based on fusion output | |
Seo et al. | An efficient detection of vanishing points using inverted coordinates image space | |
CN114429631B (en) | Three-dimensional object detection method, device, equipment and storage medium | |
CN116403062A (en) | Point cloud target detection method, system, equipment and medium | |
Li et al. | Study on semantic image segmentation based on convolutional neural network | |
CN113516013B (en) | Target detection method, target detection device, electronic equipment, road side equipment and cloud control platform | |
CN115565072A (en) | Road garbage recognition and positioning method and device, electronic equipment and medium | |
CN116434181A (en) | Ground point detection method, device, electronic equipment and medium | |
CN113762027B (en) | Abnormal behavior identification method, device, equipment and storage medium | |
CN112651986B (en) | Environment recognition method, recognition device, recognition system, electronic equipment and medium | |
Zhang et al. | Texture feature-based local adaptive Otsu segmentation and Hough transform for sea-sky line detection | |
CN114612544A (en) | Image processing method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |