CN113031041B - Urban canyon integrated navigation and positioning method based on skyline matching - Google Patents

Urban canyon integrated navigation and positioning method based on skyline matching

Info

Publication number
CN113031041B
CN113031041B (application CN202110265342.6A)
Authority
CN
China
Prior art keywords
skyline
information
building
matching
navigation
Prior art date
Legal status
Active
Application number
CN202110265342.6A
Other languages
Chinese (zh)
Other versions
CN113031041A (en)
Inventor
孙蕊
蒋磊
孔洁
傅麟霞
王均晖
Current Assignee
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN202110265342.6A priority Critical patent/CN113031041B/en
Publication of CN113031041A publication Critical patent/CN113031041A/en
Application granted granted Critical
Publication of CN113031041B publication Critical patent/CN113031041B/en

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 19/00: Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S 19/38: Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S 19/39: the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S 19/42: Determining position
    • G01S 19/48: Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
    • G01S 19/49: whereby the further system is an inertial position system, e.g. loosely-coupled
    • G01S 19/01: Satellite radio beacon positioning systems transmitting time-stamped messages, e.g. GPS, GLONASS or GALILEO
    • G01S 19/13: Receivers
    • G01S 19/24: Acquisition or tracking or demodulation of signals transmitted by the system
    • G01S 19/28: Satellite selection
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/10: by using measurements of speed or acceleration
    • G01C 21/12: executed aboard the object being navigated; Dead reckoning
    • G01C 21/16: by integrating acceleration or speed, i.e. inertial navigation
    • G01C 21/165: combined with non-inertial navigation instruments
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: of structured data, e.g. relational data
    • G06F 16/29: Geographical information databases
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/30: Noise filtering
    • G06V 10/40: Extraction of image or video features
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443: by matching or filtering
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/10: Terrestrial scenes
    • G06V 20/176: Urban or other man-made structures


Abstract

The invention discloses an urban canyon integrated navigation and positioning method based on skyline matching, belonging to the technical field of satellite positioning and navigation. A short-range building image captured by an ordinary camera is compared with building data in a 3D city model to roughly determine candidate positions and generate a skyline database. An infrared sky image captured by an infrared camera is smoothed, subjected to edge detection and feature extraction, and matched against the skyline database to obtain a regional position point. The building elevation angle at that point is then calculated, satellites are screened according to the building elevation angle, the carrier-to-noise ratio, and the pseudorange residual, and a navigation solution is computed from the screened satellite signals to obtain GNSS position solution information. Finally, the GNSS position solution information and the IMU navigation parameters are fed into a Kalman filter, and the navigation parameters are corrected with the resulting error correction information to obtain the final position information. The method improves the performance of fused GNSS/IMU/vision navigation, with clear gains in computational efficiency and accuracy.

Description

Urban canyon integrated navigation and positioning method based on skyline matching
Technical Field
The invention belongs to the technical field of satellite positioning and navigation, and particularly relates to a GNSS/IMU/vision combined navigation and positioning method based on skyline matching in an urban canyon environment.
Background
With social modernization, cities have become the most important places for human activity. Intelligent transportation is among the most important applications in the urban environment and has a huge market. In recent years, urban intelligent transportation has been developing toward greater novelty and intelligence, placing more demanding requirements on navigation and positioning services. GNSS (Global Navigation Satellite System), the basic sensor for navigation and positioning, can provide high-precision spatio-temporal information for a carrier in an open environment; in an urban canyon, however, GNSS signals are blocked and reflected by various obstacles, the quality of the signals used in the positioning solution often cannot meet requirements, the GNSS output becomes biased, and the positioning result of the whole system fails to meet practical needs.
At present, many scholars have carried out extensive research on navigation and positioning technology for urban environments, mainly focusing on multi-sensor or multi-source information fusion to improve navigation performance. A conventional GNSS/IMU (Inertial Measurement Unit) integrated navigation system can improve navigation performance in an urban canyon, but prolonged low-quality GNSS observations still seriously degrade the integrated performance. The visual sensor, a low-cost autonomous sensor, can acquire information about the surrounding scene and can effectively improve the reliability of GNSS/IMU integrated navigation in an urban environment. Current research on fused GNSS/IMU/vision navigation and positioning mainly includes the following. Su proposed a method for positioning and attitude determination combining GNSS/IMU with multiple sensors such as vision. The method has a clear structure, is easy to extend, and improves the matching accuracy of feature points between two adjacent images and the accuracy of the position and attitude solution; however, it is susceptible to the surrounding environment, such as illumination, and its feature point extraction and matching efficiency is low. Liu et al. proposed an optimization-based visual-inertial SLAM (Simultaneous Localization And Mapping) method that tightly couples SLAM and raw GNSS measurements and jointly optimizes the reprojection error, the IMU pre-integration error, and the raw GNSS measurement error using a sliding-window bundle adjustment.
Fu proposed a GNSS/INS integrated navigation and positioning scheme based on a visual-inertial odometer; the scheme mitigates partial blocking of GNSS signals by constructing the visual-inertial odometer and addresses the delay inherent in integrated navigation with a delay estimation and compensation algorithm. The scheme nevertheless risks failure under poor lighting or excessive occlusion, and the bias cannot be completely eliminated because of the slowly accumulating error of the visual odometer itself. Li et al. proposed a tightly coupled multi-GNSS RTK (Real-Time Kinematic)/INS/vision integration scheme. Experiments show that, in an urban canyon, the method greatly improves the running speed and attitude accuracy of the system; however, the algorithm is complex, and the vision sensor used is easily affected by changes in ambient light, which in turn affects the running speed of the whole system. Rahman et al. proposed an improvement to existing multi-sensor integrated systems that uses an INS (Inertial Navigation System) to assist abnormal-feature suppression in a Visual Odometer (VO). The method reduces computational complexity and improves state covariance convergence and navigation accuracy, but it requires several IMUs, so the overall system is structurally complex and bulky, limiting its range of application. Urban canyon GNSS signal quality is low, and existing GNSS/IMU/vision fusion does not fully exploit the environmental characteristics of the urban canyon, so positioning accuracy and reliability are insufficient. Accurate acquisition and processing of effective visual information is the key to improving navigation performance in the urban environment.
To further exploit the advantages of visual navigation in urban environments, Petovello et al. proposed a concept of positioning at specific points from the visible skyline. Skylines are extracted from upward-facing sky pictures taken with a fisheye camera and matched to ideal (error-free) skylines computed from a 3D (Dimensional) city model to determine the location of the points; the results show a positioning accuracy between 5 and 18 meters in an urban environment. Ramalingam et al. proposed a geo-location method using skylines extracted from panoramic images taken with omnidirectional cameras and a texture-free 3D city model; experimental results show the method can outperform GNSS measurements in urban environments. Petovello et al. further proposed the use of a low-cost infrared camera for urban positioning, separating the skyline from the captured infrared picture and matching it against skylines generated from the 3D city model; owing to the accuracy of the 3D city model used, the experimental median positioning error is about 10 meters. These methods, however, remain susceptible to the environment during skyline matching, and their matching precision and efficiency are insufficient.
Disclosure of Invention
The purpose of the invention is as follows: the invention aims to solve the technical problem of providing an urban canyon integrated navigation and positioning method based on skyline matching aiming at the defects of the prior art.
In order to solve the technical problem, the invention discloses an urban canyon integrated navigation and positioning method based on skyline matching, which comprises the following steps:
Step 1: candidate position determination and construction of the city skyline database of the area. A short-range building image is captured by an ordinary high-definition camera tilted upward on the roof of an automobile or on an unmanned aerial vehicle, and compared with building feature data in a 3D city model to roughly determine the candidate position, i.e., the candidate search area; the data for this area in the 3D city model are then retrieved to generate the skyline database.
Step 2: skyline matching. A sky image is captured with an infrared camera; the resulting infrared sky image is denoised by Gaussian-filter smoothing, then subjected to edge detection and feature extraction, and matched against the data in the pre-generated skyline database to obtain a regional position point. Using an infrared camera to capture the skyline avoids the difficulty of determining the boundary between buildings and sky and the susceptibility to environmental changes; smoothing the image noise with a Gaussian filter before image processing yields clear gains in computational efficiency and accuracy over traditional skyline matching.
Step 3: satellite screening. The building elevation angle is calculated from the position information of the regional position point combined with the boundary information of nearby buildings in the 3D city model; satellites are screened according to the building elevation angle, the carrier-to-noise ratio (C/N0, Carrier-to-Noise Ratio), and the pseudorange residual; and a navigation solution is computed from the screened satellite signals to obtain GNSS position solution information.
Step 4: integrated navigation position calculation. The GNSS position solution information and the navigation parameters of the IMU are fed into a Kalman filter, which outputs corresponding error correction information; this is input into the IMU mechanization process to correct the IMU navigation parameters, and the corrected navigation parameters are output as the final position information.
In traditional GNSS/IMU integrated navigation, the GNSS output is used to correct the IMU navigation parameters, so the whole system places a high accuracy requirement on the GNSS output. Since the signal quality of the GNSS positioning solution in an urban canyon often cannot meet the actual accuracy requirement, the integrated system still exhibits a large error there. The skyline of a city has been regarded by scholars as uniquely representing a location, like a human fingerprint, and has several special properties. First, a city skyline is compact and unique, carrying very rich information about a location. Second, it is little affected by occlusions from obstacles such as trees, lamp posts, automobiles, and pedestrians. Moreover, skyline matching can realize positioning through the vision module alone, without depending on information from other external sensors; combining the resulting position with the 3D city model and the positions of the GNSS satellites enables quality control of the GNSS and further improves the positioning accuracy of the entire integrated navigation system. In addition, by combining skyline matching with GNSS/IMU, the accuracy limitations of skyline matching alone are compensated by the GNSS/IMU module of the integrated system. Designing an efficient skyline matching algorithm and applying it to GNSS/IMU/vision integrated navigation therefore holds great promise for improving urban canyon navigation and positioning performance.
Further, the candidate position determination in step 1 proceeds as follows: building images captured by the ordinary high-definition camera are compared with building data present in the 3D city model using the SURF (Speeded-Up Robust Features) algorithm, preliminarily determining the approximate position within the 3D city model. Determining the candidate position narrows the search range of skyline matching and avoids the efficiency loss of performing a global search on every match. Current skyline matching algorithms realize positioning by matching the image against an existing database via feature point detection. Applied to indoor-like scenes, where the whole scene and the database to be searched are relatively small, such feature point detection and matching can meet real-time, dynamic positioning requirements. For city skyline detection and matching, however, the candidate database covers the skyline of the entire city, the stored content is huge, and global search matching takes a long time. Moreover, for real-time dynamic positioning the system would have to run a global search for every frame, so the time consumed is long and real-time dynamic requirements cannot be met. Considering that, in an urban canyon environment, the spatial position between two adjacent image frames generally does not change greatly, real-time performance is improved by narrowing the matching range of each image search.
In addition, the process of narrowing the search range runs in parallel with the processing of the infrared image data, occupying no extra time.
Further, the data in the candidate location area in the 3D city model in step 1 and the data in the skyline database each include: building boundary point coordinate information, building boundary length information, building identification information and road boundary point information in the candidate position area.
Furthermore, in step 2, the Prewitt operator is used to perform edge detection and contour feature extraction on the smoothed infrared sky image to obtain the skyline feature information, namely the skyline boundary shape information, comprising boundary point information and boundary length information.
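As a concrete illustration of this smoothing-and-edge-detection step, the following is a minimal NumPy sketch. It is not the patented implementation: the kernel size, the Gaussian σ, the relative threshold, and all function names are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel for noise smoothing."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def convolve2d(img, kernel):
    """Same-size 2-D correlation with edge padding, for small kernels."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# Prewitt operators for horizontal and vertical gradients
PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)
PREWITT_Y = PREWITT_X.T

def skyline_edges(infrared_img, threshold=0.5):
    """Smooth with a Gaussian, then mark pixels whose Prewitt gradient
    magnitude exceeds a fraction of the maximum as skyline edge pixels."""
    smoothed = convolve2d(infrared_img, gaussian_kernel())
    gx = convolve2d(smoothed, PREWITT_X)
    gy = convolve2d(smoothed, PREWITT_Y)
    mag = np.hypot(gx, gy)
    return mag > threshold * mag.max()
```

On a synthetic image with a dark sky above a bright building mass, the detected edge pixels cluster along the sky/building boundary, which is the skyline contour passed on to feature extraction.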
Further, when the skyline feature information is matched against the data in the generated skyline database in step 2, the boundary points and edge lengths of the skyline in the infrared sky image are chosen as matching indices: the boundary points are matched first, then the edge lengths, and the point with the highest matching degree is selected as the position of the target in the 3D city model. This step comprises:
step 2.1: judging the characteristic angular points, namely judging whether the positions of the corner points of the skyline are consistent, and if so, judging in the step 2.2; if not, judging that the two are not matched, and continuing to execute the step;
step 2.2: matching the boundary length of the skyline, matching the boundary length of the skyline in the skyline characteristic information with data in a skyline database, if not, continuing to execute the step for matching, and finding a position point with the highest matching degree as a target point; if so, executing the step 2.3;
step 2.3: and locking the position of the position point in the 3D city model by combining the 3D city model, wherein the locked position information is the regional position point of the target in the actual city.
Further, the satellite screening in step 3 comprises: calculating the maximum elevation angle value α of the buildings on both sides of the regional position point. The satellite elevation angles are judged first against the computed maximum building elevation α: satellites with elevation angles larger than the threshold α are retained and the others rejected. Next, the C/N0 is judged: satellites with C/N0 larger than a threshold β are retained and the rest rejected. Finally, screening by the pseudorange residual retains the satellites with residuals smaller than a threshold γ, and the signals of these satellites are used for the final position calculation.
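The three-stage screening can be sketched as a simple cascade of filters. The dict keys ('elev', 'cn0', 'resid') and the threshold values are illustrative assumptions:

```python
def screen_satellites(sats, alpha, beta, gamma):
    """Three-stage satellite screening sketch (step 3): keep a satellite
    only if its elevation exceeds the building elevation mask alpha,
    its C/N0 exceeds beta, and its pseudorange residual is below gamma.
    `sats` is a list of dicts with keys 'elev' (deg), 'cn0' (dB-Hz),
    'resid' (m); the keys are illustrative, not from the patent."""
    kept = [s for s in sats if s["elev"] > alpha]   # building elevation mask
    kept = [s for s in kept if s["cn0"] > beta]     # carrier-to-noise ratio
    kept = [s for s in kept if s["resid"] < gamma]  # pseudorange residual
    return kept
```

Only satellites surviving all three checks feed the navigation solution, which is how low-quality signals blocked or reflected by the canyon walls are excluded.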
Further, the maximum elevation angle value α of the buildings on both sides of the regional position point in step 3 is computed from the position information of the regional position point together with the building height information contained in the boundary length information, the road width information, and the building boundary point information of the buildings on both sides in the 3D city model where the regional position point is located.
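For the simple geometry suggested by Fig. 3, a building of height h at horizontal distance d from the receiver subtends an elevation angle α = arctan(h / d). A small sketch with illustrative names (the patent does not spell out this exact formula, so it is stated here as an assumption):

```python
import math

def building_elevation_mask(height_m, horizontal_dist_m):
    """Maximum elevation angle (degrees) subtended by a building of the
    given height at the given horizontal distance from the receiver:
    alpha = arctan(h / d). Names and units are illustrative."""
    return math.degrees(math.atan2(height_m, horizontal_dist_m))
```

For example, a 30 m building wall 10 m from the receiver masks satellites below roughly 71.6 degrees on that side of the street.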
Further, the GNSS/IMU combination in step 4 uses a distributed open-loop combination scheme: a Kalman filter connects the GNSS output with the IMU mechanization, and the error correction information output by the Kalman filter is used to correct the IMU navigation parameters.
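A heavily simplified sketch of one cycle of such an open-loop, loosely coupled filter follows. It assumes identity error dynamics and direct observation of the position error; the state model, noise matrices, and names are all illustrative assumptions, not the patented filter design:

```python
import numpy as np

def loose_kf_update(x, P, ins_pos, gnss_pos, Q, R):
    """One predict/update cycle of a loosely coupled, open-loop
    position-error Kalman filter (step 4 sketch). The state x is the
    INS position error; the measurement is the difference between the
    INS and GNSS position solutions (so H = I here)."""
    # Predict: error state assumed constant between updates
    x_pred = x
    P_pred = P + Q
    # Update with measurement z = INS position minus GNSS position
    z = ins_pos - gnss_pos
    K = P_pred @ np.linalg.inv(P_pred + R)   # Kalman gain with H = I
    x_new = x_pred + K @ (z - x_pred)
    P_new = (np.eye(len(x)) - K) @ P_pred
    # Open-loop correction: subtract the estimated error from the INS output
    corrected_pos = ins_pos - x_new
    return x_new, P_new, corrected_pos
```

The corrected position falls between the INS and GNSS solutions, weighted by their covariances; in the open-loop arrangement the correction is applied to the output only, without feeding it back into the mechanization state.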
Further, the ordinary high-definition camera in step 1 is mounted on the roof of the automobile or on the unmanned aerial vehicle, tilted upward at 45 degrees. Too small an angle may leave the target building blocked by large vehicles; too large an angle may capture only the sky or only part of the building.
Beneficial effects:
1. An ordinary high-definition camera captures images of nearby buildings, and a building recognition algorithm matches them with the corresponding building data in the 3D city model, preliminarily determining the search area; this narrows the search range of skyline matching and avoids the efficiency loss of performing a global search on every match. In addition, capturing the skyline with an infrared camera avoids the difficulty of determining the boundary between buildings and sky and the susceptibility to environmental changes, and smoothing the image noise with a Gaussian filter before image processing yields clear gains in computational efficiency and accuracy over traditional skyline matching.
2. A position point is obtained by vision-sensor-based skyline matching; the building elevation angle is computed from the position information of that point combined with the boundary information of the nearby buildings in the 3D city model; all potentially received satellites are screened according to the building elevation angle, the C/N0, and the pseudorange residual; and a navigation solution is computed from the screened satellite signals to obtain GNSS position solution information. The GNSS position solution information and the IMU navigation parameters are fed into a Kalman filter, which outputs the corresponding error correction information; this is input into the IMU mechanization process to correct the IMU navigation parameters, and the corrected parameters are output as the final position information. In the urban canyon environment, the satellite screening algorithm screens the satellite signals at the position point determined by skyline matching, realizing quality control of the GNSS observations and improving the navigation performance of the fused GNSS/IMU/vision system.
Drawings
The foregoing and/or other advantages of the invention will become further apparent from the following detailed description of the invention when taken in conjunction with the accompanying drawings.
FIG. 1 is a flow chart of a combined navigation and positioning method in an urban canyon environment based on skyline matching;
FIG. 2 is a diagram illustrating extreme point detection.
Fig. 3 is a schematic diagram illustrating calculation of the maximum elevation angle α.
FIG. 4 is a diagram of experimental error results for a single GNSS measurement method, a conventional GNSS/IMU combination method, and an embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described below with reference to the accompanying drawings.
A flowchart of an urban canyon integrated navigation positioning method based on skyline matching is shown in fig. 1, and includes the following four stages.
Candidate position determination and construction of city skyline database of the area
A building in a nearby block is photographed with an ordinary high-definition camera placed on the roof of an automobile or on an unmanned aerial vehicle and tilted upward at 45 degrees; the building image captured by the camera is roughly matched with the existing building information in the 3D city model using the SURF algorithm, preliminarily determining the position information. The method comprises the following steps:
First, a Hessian matrix is constructed in a scale space built with box filters, and all interest points are generated for feature point extraction. A Hessian matrix can be constructed for each pixel of the building image, with the formula:
H(x, y) = [[∂²f/∂x², ∂²f/∂x∂y], [∂²f/∂x∂y, ∂²f/∂y²]]
where (x, y) are the pixel coordinates of the building image and f(x, y) is the gray value at (x, y). The discriminant of the Hessian matrix is:
det(H) = (∂²f/∂x²)(∂²f/∂y²) − (∂²f/∂x∂y)²
When the discriminant of the Hessian matrix attains a local maximum, the current point is judged to be brighter or darker than the other points in its neighborhood, thereby locating a feature point, as shown in fig. 2. In the algorithm, the Gaussian second-order partial derivatives at scale σ = 1.2 are approximated by a 9 × 9 initial box filter. Thus, when computing the matrix, the convolutions D_xx, D_yy, D_xy of the box filter templates with the original input building image may be used in place of the Gaussian second-order derivatives L_xx, L_yy, L_xy to reduce the amount of computation. Accordingly, the discriminant of the matrix can be expressed approximately as:
det(H) = D_xx · D_yy − (0.9 × D_xy)²
in the above equation 0.9 represents a weighting factor, which acts to balance the error due to the approximation using the box filter.
Secondly, a scale space is constructed. The scale space consists of O octaves of L layers each; the building image size is the same across octaves, but the box filter template sizes used increase progressively between octaves. The filter size of each layer is computed as:
(2^octave × interval + 1) × 3
where octave is the octave index and interval is the layer index within the octave. In the SURF algorithm, octave 1 layer 1 has size 9 × 9 and octave 1 layer 2 is 15 × 15; octave 2 layer 1 is 15 × 15 and octave 2 layer 2 is 27 × 27, and so on, i.e., the size interval between layers doubles from octave to octave. The same filter size may recur in different octaves, but the scale coefficient of the filter grows with its size: the 9 × 9 size corresponds to scale 1.2, and larger sizes scale in proportion (σ = 1.2 × size / 9, so 15 × 15 corresponds to about 2.0 and 27 × 27 to 3.6).
Then, the feature points are located. Each pixel processed by the Hessian matrix is compared with the 26 points in its neighborhood across the two-dimensional building-image space and the scale space to preliminarily locate key points; key points with weak energy or incorrect localization are then filtered out, leaving the final stable feature points.
This is followed by the assignment of the feature point's main direction. Within a circular neighborhood of the feature point, the sums of the horizontal and vertical Haar wavelet responses of all points inside a 60° sector are counted; the sector is then rotated by a fixed angular interval and the Haar wavelet responses in the region are counted again after each rotation. Finally, the direction of the sector with the maximum total response is taken as the main direction of the feature point.
Next comes the feature point description. A rectangular region is taken around the feature point, oriented along the feature point's main direction. The region comprises 4 × 4 sub-regions, each containing 5 × 5 pixels. For each sub-region, the Haar wavelet responses of its 25 pixels are accumulated in the horizontal and vertical directions, both taken relative to the main direction, yielding four values: the sum of horizontal responses, the sum of vertical responses, the sum of absolute horizontal responses, and the sum of absolute vertical responses. These 4 values form the feature vector of each sub-region, giving a descriptor of 4 × 4 × 4 = 64 dimensions in total.
Finally, feature point matching is performed. The matching degree is determined by computing the Euclidean distance between corresponding feature points in the captured building image and in the 3D city model; the shorter the Euclidean distance, the better the match. In addition, a Hessian-trace check is applied: if the signs of the Hessian traces of two feature points are the same, the two points exhibit contrast changes in the same direction; if the signs differ, their contrast changes are opposite and the pair is excluded outright, even if the Euclidean distance is 0.
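The matching rule above can be sketched as follows. This is a minimal nearest-neighbour matcher, not the patent's implementation; the distance threshold `max_dist` and the function and field names are illustrative assumptions.

```python
import numpy as np

def match_features(desc_a, trace_a, desc_b, trace_b, max_dist=0.3):
    """Match 64-D descriptors between two images.

    A candidate pair is rejected outright when the signs of the Hessian
    traces differ (opposite contrast direction), regardless of descriptor
    distance; otherwise the nearest neighbour by Euclidean distance is
    accepted if it falls under max_dist (illustrative threshold).
    """
    matches = []
    for i, (da, ta) in enumerate(zip(desc_a, trace_a)):
        best_j, best_d = -1, float("inf")
        for j, (db, tb) in enumerate(zip(desc_b, trace_b)):
            if np.sign(ta) != np.sign(tb):  # opposite contrast change: skip
                continue
            d = np.linalg.norm(np.asarray(da) - np.asarray(db))
            if d < best_d:
                best_j, best_d = j, d
        if best_j >= 0 and best_d <= max_dist:
            matches.append((i, best_j, best_d))
    return matches
```

Checking the trace sign first is cheap and prunes many candidates before the 64-dimensional distance is ever computed, which is the efficiency argument made in the text.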
After matching is completed, the approximate area of the vehicle within the 3D city model can be determined; this area is the candidate search region for the subsequent skyline matching. The skyline data around this region is then retrieved from the 3D city model to generate the corresponding skyline database. The database mainly contains: city-building boundary length information, building boundary corner-point coordinate information, building boundary identification information, and road boundary-point coordinate information.
(II) skyline matching
The real-time infrared sky image captured by the infrared camera is smoothed by Gaussian filtering, whose formula is:

G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))

where σ is the standard deviation of the Gaussian distribution, calculated as:
σ=0.3×((ksize-1)×0.5-1)+0.8
where ksize is the size of the Gaussian kernel used; the larger σ is, the smoother the resulting image. After the infrared sky image has been smoothed, edge contour detection is carried out with the Prewitt operator to extract the skyline contour. The Prewitt operator is a first-order differential edge detector: it exploits the fact that the gray-level differences between a pixel and its upper, lower, left, and right neighbors reach an extremum at an edge, which also suppresses some false edges. In principle, two directional templates are convolved with the neighborhood of every pixel of the smoothed infrared sky image: one template detects horizontal edges and the other detects vertical edges.
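The smoothing step can be sketched as below, deriving σ from the kernel size with the formula in the text (the same rule OpenCV uses for an unspecified sigma). The function name and the use of `scipy.ndimage` for the blur are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

def smooth_sky_image(img, ksize=5):
    """Gaussian-smooth the infrared sky image.

    Sigma is derived from the kernel size via
    sigma = 0.3 * ((ksize - 1) * 0.5 - 1) + 0.8, as in the text.
    """
    sigma = 0.3 * ((ksize - 1) * 0.5 - 1) + 0.8
    return ndimage.gaussian_filter(img.astype(np.float64), sigma), sigma
```

For ksize = 5 this gives σ = 1.1; larger kernels yield larger σ and hence a smoother image, as the text notes.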
For the smoothed infrared sky image I(i, j), the Prewitt operator is defined as follows:
G(i) = |[I(i−1, j−1) + I(i−1, j) + I(i−1, j+1)] − [I(i+1, j−1) + I(i+1, j) + I(i+1, j+1)]|

G(j) = |[I(i−1, j+1) + I(i, j+1) + I(i+1, j+1)] − [I(i−1, j−1) + I(i, j−1) + I(i+1, j−1)]|

then P(i, j) = max[G(i), G(j)] or P(i, j) = G(i) + G(j)
The Prewitt operator regards every pixel whose new gray value is greater than or equal to the threshold K as an edge point. That is, an appropriate threshold K is chosen; if P(i, j) ≥ K, the point (i, j) is taken as an edge point, and P(i, j) forms the edge image. The Prewitt gradient-operator method averages first and then differentiates to obtain the gradient. The horizontal and vertical gradient templates are, respectively:
Horizontal template (detects horizontal edges):

 [  1   1   1 ]
 [  0   0   0 ]
 [ −1  −1  −1 ]

Vertical template (detects vertical edges):

 [ −1   0   1 ]
 [ −1   0   1 ]
 [ −1   0   1 ]
The gradient value of each pixel in the smoothed infrared sky image is compared with the threshold K to determine the positions of the boundary pixels. In this embodiment, the comparison threshold K is taken as the gain-weighted mean gradient over the image:

K = (k / (M × N)) × Σᵢ Σⱼ P(i, j)

where M and N are the numbers of rows and columns of the target image, respectively, and k is a gain coefficient with value 4. The length information is then calculated from the boundary pixel positions using the planar distance formula, and the boundary shape information of the skyline in the smoothed infrared sky image (including boundary-point information and boundary-length information) is taken as the skyline feature information extracted from the infrared sky image.
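The Prewitt edge extraction with the adaptive threshold described above can be sketched as follows. The kernels match the G(i) and G(j) definitions given earlier; the threshold is assumed to be k times the mean of P(i, j) (a reconstruction of the embodiment's rule), and the function name is illustrative.

```python
import numpy as np
from scipy import ndimage

def prewitt_edges(img, k=4.0):
    """Prewitt edge map with the adaptive threshold K = k * mean(P).

    P(i, j) = max(G(i), G(j)), where G(i) and G(j) are the absolute
    horizontal- and vertical-template responses from the text.
    """
    img = img.astype(np.float64)
    kx = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]], float)   # horizontal edges
    ky = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], float)   # vertical edges
    gi = np.abs(ndimage.convolve(img, kx))
    gj = np.abs(ndimage.convolve(img, ky))
    p = np.maximum(gi, gj)
    K = k * p.mean()           # assumed form of the embodiment's threshold
    return p >= K, p
```

Applied to the smoothed infrared sky image, the `True` pixels of the returned mask trace the skyline contour, from which the boundary points and lengths are measured.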
The skyline matching process is then performed. The skyline feature information extracted from the smoothed infrared sky image is matched against the data in the skyline database previously generated for the candidate region. From the urban-building boundary length information, boundary-point coordinate information, and road boundary-point coordinate information in the 3D city model database, the skyline length information and boundary-point information of the search-area skyline database are generated via the distance formula, and the feature information extracted from the smoothed infrared sky image is matched against the feature information generated from the search-area database. The matching process is as follows. First, the characteristic corner points are judged: it is checked whether the positions of the skyline corner points are consistent; if they are, the next step proceeds, and if not, the pair is judged as not matching. Next, the skyline boundary lengths are matched: the boundary lengths of the skyline computed from the smoothed infrared sky image are matched against the data computed from the search-area database; if they do not agree, matching continues until the position point with the highest matching degree is found and taken as the target point. Finally, the position of this point is locked in the 3D city model; the locked point's position information is the target's position in the actual city.
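The two-stage matching above can be sketched as a small search over candidate position points. This is a schematic of the described corner-then-length procedure; the tolerances, data layout, and function name are assumptions, not values from the patent.

```python
def match_skyline(obs_corners, obs_lengths, db_entries,
                  tol_corner=1.0, tol_len=1.0):
    """Two-stage skyline match: corner positions first, then boundary lengths.

    db_entries: list of (point_id, corners, lengths) tuples generated from
    the search-area database. Returns the id of the best-matching point,
    or None when no candidate passes both stages.
    """
    best_id, best_err = None, float("inf")
    for point_id, corners, lengths in db_entries:
        if len(corners) != len(obs_corners):
            continue
        # Stage 1: corner positions must agree within tolerance.
        if any(abs(a - b) > tol_corner for a, b in zip(obs_corners, corners)):
            continue
        # Stage 2: compare boundary lengths, keep the closest candidate.
        err = sum(abs(a - b) for a, b in zip(obs_lengths, lengths))
        if err <= tol_len * len(obs_lengths) and err < best_err:
            best_id, best_err = point_id, err
    return best_id
```

The returned point id is then looked up in the 3D city model to lock the target's position, as the text describes.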
(III) satellite screening
From the position information of the determined point, together with the building boundary length information, the road width information, and the building height information contained in the building boundary-point information of the 3D city model database, the maximum elevation angle α of the buildings on both sides of the point can be calculated, as illustrated in FIG. 3: the distances from the target to the building corner points (corner points A, B) are computed from the heights and positions of the target and of the corner points, and combining these with the building height information yields the elevation angles from the target to the nearby buildings, shown in the figure as angles a, b, c, d and e, of which the largest corresponds to the maximum elevation angle α. The third step, satellite screening, then proceeds as follows:
First, a first round of filtering is performed on all satellites received by the receiver. The elevation angles of all received satellites are calculated with the aid of the 3D city model, and the criterion is whether a satellite's elevation angle exceeds α. Satellites with elevation angle greater than α, together with their signals, are stored in satellite set A1; signals from satellites with elevation angle less than or equal to α are judged to be NLOS (Non-Line-Of-Sight) signals, those satellites are removed, and their signals are not used in the navigation solution. The satellites in set A1 that passed the first round are then subjected to a second round of screening based on the carrier-to-noise ratio C/N0: the C/N0 of every satellite stored in set A1 is calculated and compared with a preset threshold β (β = 40). All signals whose C/N0 exceeds β are regarded as passing the screening and are stored in satellite set A2; all satellites whose C/N0 is less than or equal to the threshold, together with their signals, are eliminated.
Then the final round of screening is performed: the pseudorange residuals of all satellite signals in set A2 are calculated and compared with a set threshold γ (γ = 346.731). Satellites whose pseudorange residual exceeds γ, together with their signals, are removed; the remaining satellites are regarded as passing the screening and are stored in the final satellite set. All satellite signals in the final set are treated as LOS (Line-Of-Sight) signals and are used directly in the GNSS positioning solution.
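The three-round screening above reduces to successive threshold filters, sketched below. The dictionary field names are illustrative assumptions; the thresholds default to the embodiment's values (β = 40, γ = 346.731).

```python
def screen_satellites(sats, alpha, beta=40.0, gamma=346.731):
    """Three-round satellite screening from the text.

    Each satellite is a dict with 'elev' (degrees), 'cn0' (dB-Hz), and
    'residual' (pseudorange residual) keys -- names are illustrative.
    Returns the final set of satellites treated as LOS.
    """
    a1 = [s for s in sats if s["elev"] > alpha]       # round 1: elevation > alpha
    a2 = [s for s in a1 if s["cn0"] > beta]           # round 2: C/N0 > beta
    return [s for s in a2 if s["residual"] <= gamma]  # round 3: residual within gamma
```

Only the signals of the satellites returned by the final round feed the GNSS position solution; everything screened out earlier never enters the solver.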
(IV) Integrated navigation position resolution
The GNSS position information obtained from the solution and the uncorrected IMU navigation parameters are fed together into a Kalman filter, which outputs the corresponding error-correction information. This error-correction information is input into the IMU mechanization to correct the navigation parameters, and the corrected navigation parameters are output as the final positioning result.
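One correction step of such a loosely coupled filter can be sketched in miniature as below. This uses a 1-D position/velocity state purely for illustration; the patent's filter operates on full navigation error states, and the noise levels q and r, the model, and the function name are assumptions.

```python
import numpy as np

def loosely_coupled_update(x_ins, P, z_gnss, dt, q=0.1, r=4.0):
    """One open-loop Kalman correction of an INS-propagated state by a
    GNSS position fix (minimal 1-D position/velocity illustration)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
    H = np.array([[1.0, 0.0]])              # GNSS observes position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    # Predict with the INS-propagated state, then correct with GNSS position.
    x_pred = F @ x_ins
    P_pred = F @ P @ F.T + Q
    y = np.array([z_gnss]) - H @ x_pred     # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new
```

Because the scheme is open-loop, the correction is applied to the output navigation parameters rather than fed back into the IMU itself, matching the distributed combination described in claim 8.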
Fig. 4 shows the experimental error curves of a GNSS-only method, a conventional GNSS/IMU combination method, and the method of the embodiment of the present invention; the figure shows that the results of the embodiment are superior to those of the other methods.
The following table compares the average time consumed by the conventional skyline matching algorithm and by the method proposed in the embodiment of the present invention when processing the same 100 pictures under the same conditions; it shows that the proposed method runs more efficiently than the conventional skyline matching algorithm.

Algorithm name                     Average elapsed time (seconds/picture)
Conventional skyline matching      0.89
Embodiment of the invention        0.56
The present invention provides an urban canyon integrated navigation and positioning method based on skyline matching. There are many ways to implement this technical solution, and the above description covers only specific embodiments of the invention. It should be noted that those skilled in the art may make several improvements and refinements without departing from the principle of the invention, and such improvements and refinements should also be regarded as falling within the protection scope of the invention. All components not specified in this embodiment can be realized with the prior art.

Claims (9)

1. An urban canyon integrated navigation and positioning method based on skyline matching is characterized by comprising the following steps:
step 1: determining candidate positions and constructing a city skyline database of the area, shooting a building image in a short distance by using a common high-definition camera obliquely arranged upwards at the top of an automobile or an unmanned aerial vehicle, comparing the building image with building characteristic data in a 3D city model, roughly determining the candidate positions, and calling data in the candidate position area in the 3D city model to generate the skyline database;
step 2: skyline matching, namely shooting a sky image by using an infrared camera, smoothing the noise of the infrared sky image by using Gaussian filtering, performing edge detection and feature extraction on the smoothed infrared sky image to obtain skyline feature information, and performing feature matching on the skyline feature information and data in a pre-generated skyline database to obtain a region position point;
step 3: satellite screening, namely calculating the elevation angle of a building by combining the position information of the regional position point with the boundary information of the nearby building in the 3D city model, screening all possibly received satellites according to the building elevation angle, the carrier-to-noise ratio C/N0 and the pseudorange residual, and performing navigation solution with the screened satellite signals to obtain GNSS position solution information;
step 4: integrated navigation position solution, namely introducing the GNSS position solution information and the navigation parameters of the IMU into a Kalman filter, outputting corresponding error-correction information from the Kalman filter, inputting the error-correction information into the mechanization process of the IMU to correct the navigation parameters of the IMU, and outputting the corrected navigation parameters to obtain the final position information.
2. The integrated navigation and positioning method for urban canyons based on skyline matching according to claim 1, wherein the candidate position determination step in step 1 is: building images shot by a common high-definition camera are compared with building data existing in the 3D city model by using an SURF algorithm, so that the approximate position of a building in the 3D city model in the building images is preliminarily determined.
3. The integrated navigation and positioning method for urban canyons based on skyline matching according to claim 2, wherein the data in the candidate location area in the 3D urban model in step 1 and the data in the skyline database each comprise coordinate information of building boundary points, building boundary length information, building identification information and road boundary point information in the candidate location area.
4. The method according to claim 1, wherein the Prewitt operator is used in step 2 to perform edge detection and contour feature extraction on the smoothed infrared sky image to obtain skyline feature information, and the skyline feature information is skyline boundary shape information and includes boundary point information and boundary length information.
5. The integrated navigation and positioning method for urban canyons based on skyline matching according to claim 4, wherein the step of matching the skyline feature information with the data in the skyline database generated in step 1 in step 2 comprises:
step 2.1: judging the characteristic angular points, namely judging whether the positions of the corner points of the skyline are consistent, and if so, judging in the step 2.2; if not, judging that the two are not matched, and continuing to execute the step;
step 2.2: matching the boundary length of the skyline, matching the boundary length of the skyline in the skyline characteristic information with data in a skyline database, if not, continuing to execute the step for matching, and finding a position point with the highest matching degree as a target point; if so, executing the step 2.3;
step 2.3: and locking the position of the position point in the 3D city model by combining the 3D city model, wherein the locked position information is the regional position point of the target in the actual city.
6. The integrated navigation and positioning method for urban canyons based on skyline matching as claimed in claim 1, wherein step 3 comprises calculating the maximum elevation value α of the buildings on both sides of said area location point; according to the calculated maximum elevation value α, the satellite elevation angle is judged first, satellites with elevation angle greater than the threshold α are retained, and other satellites are rejected; then carrier-to-noise ratio C/N0 screening is performed, satellites with C/N0 greater than the threshold β are retained, and other satellites are rejected; finally, screening by pseudorange residual is performed, satellites with pseudorange residual smaller than the threshold γ are retained, and their signals are used for the final position solution.
7. The method according to claim 6, wherein the maximum elevation value α of the buildings on both sides of the area position point in step 3 is calculated from the position information of the area position point together with the boundary length information of the buildings on both sides in the 3D city model where the area position point is located, the road width information, and the building height information contained in the building boundary-point information.
8. The integrated navigation and positioning method for urban canyons based on skyline matching as claimed in claim 1, wherein the method for combining GNSS and IMU in step 4 is a distributed combined open-loop scheme, wherein a kalman filter is used to connect the output of GNSS and the mechanical arrangement of IMU, and the error correction information output by the kalman filter is used to correct the navigation parameters of IMU.
9. The integrated navigation and positioning method for urban canyons based on skyline matching according to claim 1, wherein in step 1, the ordinary high-definition camera is placed on the top of a car or an unmanned aerial vehicle at an angle of 45 ° upward.
CN202110265342.6A 2021-03-11 2021-03-11 Urban canyon integrated navigation and positioning method based on skyline matching Active CN113031041B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110265342.6A CN113031041B (en) 2021-03-11 2021-03-11 Urban canyon integrated navigation and positioning method based on skyline matching


Publications (2)

Publication Number Publication Date
CN113031041A CN113031041A (en) 2021-06-25
CN113031041B true CN113031041B (en) 2022-06-10

Family

ID=76469596

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110265342.6A Active CN113031041B (en) 2021-03-11 2021-03-11 Urban canyon integrated navigation and positioning method based on skyline matching

Country Status (1)

Country Link
CN (1) CN113031041B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113504553B (en) * 2021-06-29 2024-03-29 南京航空航天大学 GNSS positioning method based on accurate 3D city model in urban canyon
CN113820697B (en) * 2021-09-09 2024-03-26 中国电子科技集团公司第五十四研究所 Visual positioning method based on city building features and three-dimensional map
CN113834486A (en) * 2021-09-22 2021-12-24 江苏泰扬金属制品有限公司 Distributed detection system based on navigation positioning
CN114359476A (en) * 2021-12-10 2022-04-15 浙江建德通用航空研究院 Dynamic 3D urban model construction method for urban canyon environment navigation
CN115291261A (en) * 2022-08-10 2022-11-04 湖南北云科技有限公司 Satellite positioning method, device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102084398A (en) * 2008-06-25 2011-06-01 微软公司 Registration of street-level imagery to 3D building models
CN107966724A (en) * 2017-11-27 2018-04-27 南京航空航天大学 Satellite positioning method in a kind of urban canyons based on 3D city models auxiliary
CN109285177A (en) * 2018-08-24 2019-01-29 西安建筑科技大学 A kind of digital city skyline extracting method
CN110926474A (en) * 2019-11-28 2020-03-27 南京航空航天大学 Satellite/vision/laser combined urban canyon environment UAV positioning and navigation method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IL164650A0 (en) * 2004-10-18 2005-12-18 Odf Optronics Ltd An application for the extraction and modeling of skyline for and stabilization the purpese of orientation


Also Published As

Publication number Publication date
CN113031041A (en) 2021-06-25

Similar Documents

Publication Publication Date Title
CN113031041B (en) Urban canyon integrated navigation and positioning method based on skyline matching
CN110926474B (en) Satellite/vision/laser combined urban canyon environment UAV positioning and navigation method
CN111862672B (en) Parking lot vehicle self-positioning and map construction method based on top view
CN109596121B (en) Automatic target detection and space positioning method for mobile station
CN108534782B (en) Binocular vision system-based landmark map vehicle instant positioning method
CN115439424B (en) Intelligent detection method for aerial video images of unmanned aerial vehicle
KR101105795B1 (en) Automatic processing of aerial images
WO2020000137A1 (en) Integrated sensor calibration in natural scenes
CN109871739B (en) Automatic target detection and space positioning method for mobile station based on YOLO-SIOCTL
CN109816708B (en) Building texture extraction method based on oblique aerial image
CN114199259A (en) Multi-source fusion navigation positioning method based on motion state and environment perception
CN114973028B (en) Aerial video image real-time change detection method and system
CN105466399B (en) Quickly half global dense Stereo Matching method and apparatus
CN111536970B (en) Infrared inertial integrated navigation method for low-visibility large-scale scene
CN110569797A (en) earth stationary orbit satellite image forest fire detection method, system and storage medium thereof
CN114693754A (en) Unmanned aerial vehicle autonomous positioning method and system based on monocular vision inertial navigation fusion
CN113610905A (en) Deep learning remote sensing image registration method based on subimage matching and application
CN117218201A (en) Unmanned aerial vehicle image positioning precision improving method and system under GNSS refusing condition
CN116824079A (en) Three-dimensional entity model construction method and device based on full-information photogrammetry
CN116485894A (en) Video scene mapping and positioning method and device, electronic equipment and storage medium
CN115164900A (en) Omnidirectional camera based visual aided navigation method and system in urban environment
CN114494039A (en) Underwater hyperspectral push-broom image geometric correction method
CN114199250A (en) Scene matching navigation method and device based on convolutional neural network
CN111833384A (en) Method and device for quickly registering visible light and infrared images
Gakne et al. Skyline-based positioning in urban canyons using a narrow fov upward-facing camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant