CN107451593B - High-precision GPS positioning method based on image feature points


Info

Publication number
CN107451593B
CN107451593B (application CN201710552065.0A)
Authority
CN
China
Prior art date
Legal status
Active
Application number
CN201710552065.0A
Other languages
Chinese (zh)
Other versions
CN107451593A (en)
Inventor
卫军胡
陈俊希
Current Assignee
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN201710552065.0A
Publication of CN107451593A
Application granted
Publication of CN107451593B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462: Salient features, e.g. scale invariant feature transforms [SIFT]
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 19/00: Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S 19/38: Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S 19/39: Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system, the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S 19/42: Determining position
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/757: Matching configurations of points or features

Abstract

A high-precision GPS positioning method based on image feature points comprises two stages: calibration of reference-position positioning information and real-time judgment of position offset. In the calibration stage, GPS data are first collected to obtain the longitude and latitude coordinates of each reference position; at the same time, an environment image of the reference position is collected and feature points are extracted with the SIFT algorithm; the longitude and latitude coordinates and the feature points together serve as the positioning reference information of the reference position. In the offset-judgment stage, the GPS data and the forward environment image of the current position are first collected; the reference position closest to the current position is found with a longitude-latitude-to-distance formula; the reference image of that position is matched against the real-time environment image of the current position by feature points; the position of the central point of the real-time image within the reference image is located; and the offset of the actual position relative to the preset route is judged from the offset of this central point.

Description

High-precision GPS positioning method based on image feature points
Technical Field
The invention belongs to the field of digital image processing and the field of geodetic measurement subjects, and particularly relates to a high-precision GPS positioning method based on image feature points.
Background
The Global Positioning System (GPS) performs real-time positioning and navigation anywhere on the globe using GPS positioning satellites. The system consists of three parts: a ground control segment, composed of a master control station, ground antennas, monitoring stations, and a communication support system; a space constellation segment, composed of 24 satellites distributed over 6 orbital planes; and a user equipment segment, composed of a GPS receiver and a satellite antenna. GPS technology can provide accurate three-dimensional coordinates and related information to users on land, at sea, and in the air, all-weather, quickly, and efficiently, so it is widely applied in military and civil navigation, geodetic surveying, target tracking, and other fields involving ships, airplanes, automobiles, and the like. At present, the precision error of general civil GPS positioning equipment can be reduced to about 3 meters, which is adequate for most application environments.
The Scale-Invariant Feature Transform (SIFT) algorithm is a classic algorithm in the field of image feature extraction. SIFT features are based on the local appearance of interest points on an object and are independent of image size and rotation, as shown in fig. 1 and fig. 2. Their tolerance to changes in illumination, noise, and small viewing-angle changes is also quite high. These features are highly distinctive and relatively easy to extract. Even under partial occlusion of the object, the detection rate with SIFT descriptors remains high, and as few as 3 SIFT features are enough to compute position and orientation. With present computer hardware and a small feature database, recognition speed can approach real time. SIFT features carry a large amount of information and are suitable for fast, accurate matching in large databases. In the invention, this algorithm serves as the method for extracting features of the environment image.
Human beings can distinguish direction and path from familiar scenes held in memory together with common knowledge of daily life; intelligent trolleys and robots, by contrast, usually rely on GPS positioning technology. In practice, GPS signals are highly susceptible to varying degrees of interference from external factors such as weather, radio, and obstructions. In addition, the precision and stability of civil GPS positioning devices rarely meet expectations. Therefore, GPS equipment alone cannot provide accurate positioning for a robot.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide a high-precision GPS positioning method based on image feature points that imitates the way human beings recognize directions. It first uses GPS data to achieve rough global positioning, then uses the SIFT algorithm to obtain image features, matches the environment-image features acquired in real time against the reference-image features of known positions to obtain the offset of the image position, and uses that offset as a compensation for GPS positioning to achieve precise local positioning. This effectively improves the positioning precision of intelligent trolleys and robots and provides a more precise reference for their motion control.
In order to achieve the purpose, the invention adopts the technical scheme that:
a high-precision GPS positioning method based on image feature points comprises the following steps:
step 1: location information calibration of reference location
1.1) acquiring longitude and latitude coordinates of a reference position by utilizing GPS equipment;
1.2) acquiring an environment image right in front observed at a reference position by using a camera, converting the environment image into a gray image, and extracting characteristic pixel points of the image by using an SIFT algorithm;
1.3) the positioning information of each reference position is composed of longitude and latitude coordinates of the position and feature pixel points of an environment image, each position has only one longitude and latitude coordinate, a plurality of feature pixel points can be extracted from the environment image, each feature pixel point mainly comprises two types of information of a position and a feature descriptor, wherein the feature descriptor is a feature vector of 1 x 128, and the position is a two-dimensional coordinate of the feature pixel point in the image;
step 2: offset determination of current position
2.1) acquiring GPS data in real time to acquire longitude and latitude coordinates of the current position;
2.2) acquiring an environmental image of the current position in real time and extracting feature points;
2.3) searching for a reference position closest to the current position, the main steps are as follows:
firstly, the reference position corresponding to the last moment is selected as the candidate point of the current position; then, the distances from all the reference positions (candidate points) within the neighborhood centered on that candidate point to the current position are calculated; finally, the candidate point with the minimum distance is selected as the reference position of the current moment;
2.4) calculating the offset direction and deducing the direction-adjustment scheme.
Firstly, the feature pixel points of the current position are matched against those of the reference position with the KNN matching algorithm, taking the Euclidean distance between descriptors (feature vectors) as the measure of similarity; then, the three groups of matching points with the highest similarity are selected, and the corresponding position of the central point of the current environment image in the reference image is found by the three-point location method; finally, the offset state of the current position is deduced from the offset of this corresponding central point relative to the central point of the reference image. The direction is deduced and adjusted according to the following principle:
2.4.1) if the corresponding point of the central point of the current environment image in the reference image is positioned near the central area, the current position is considered to have no obvious offset relative to the reference position;
2.4.2) if the central point of the current environment image deviates to the left relative to the central point of the reference image, considering that the current position deviates to the left relative to the reference position or the movement direction deviates to the left, and the direction adjusting scheme at the moment is to be adjusted to the right;
2.4.3) if the central point of the current environment image deviates to the right relative to the central point of the reference image, the current position is considered to deviate to the right relative to the reference position or the moving direction deviates to the right, and the direction adjusting scheme at this moment is to be adjusted to the left.
And when the reference position closest to the current position is searched, the adopted distance calculation formula is a calculation formula for converting the longitude and latitude into the distance.
In calculating the offset, the KNN matching algorithm must be applied twice (once in each direction) for each group of matching points. Only when the two feature pixel points are detected to be matching points of each other is the match considered successful.
Compared with the prior art, the invention has the beneficial effects that:
1. the invention supplements the positioning information by using the characteristic points of the environment image, innovatively utilizes the position deviation of the image characteristic points to correct the actual position and direction deviation, and can correct the positioning deviation by using the environment image information in time when the GPS equipment has a positioning error due to the precision problem or external influence, thereby improving the positioning precision.
2. The environmental scenery is not required to be identified, and only the real-time image and the reference image are required to be subjected to feature matching, so that the complexity is reduced, and the positioning speed is increased.
3. The SIFT algorithm has high tolerance to illumination, noise, and small viewing-angle changes during image acquisition. In particular, for the image-scale differences caused by non-uniform shooting distances, the algorithm can still detect the feature points well.
4. No separate modeling of an environment map is required in advance, and no road signs need to be erected; manually guiding the robot once along the planned route completes the positioning-information calibration of the path network, which improves adaptability to various working environments.
Drawings
Fig. 1 shows the result of extracting image feature points by using the SIFT algorithm.
Fig. 2 is a result of SIFT algorithm extracting features and matching for rotated and scaled images.
Fig. 3 is a structural diagram of navigation information, where the positioning information and the motion strategy together form the navigation information of the robot, the positioning information includes longitude and latitude and environmental image characteristics, and the motion strategy includes the advancing speed and steering angle of the robot.
Fig. 4 is a schematic diagram of the robot finding a reference position closest to an actual position according to latitude and longitude information.
FIG. 5 is a diagram illustrating an offset determination.
Fig. 6 shows the detection result of the leftward deflection of the viewing angle.
Fig. 7 shows the detection result of the rightward deflection of the viewing angle.
Fig. 8 is a schematic view of a situation when the robot deviates from the reference route in an actual scene.
Fig. 9 shows the feature detection results of the real-time image and the reference image when the robot is at position No. 1. The detection result shows that the deviation is small and can be ignored.
Fig. 10 shows the feature detection results of the real-time image and the reference image when the robot is at position No. 2. The detected image deflection is to the right, the actual position of the robot is offset to the left, and the adjustment strategy is to adjust to the right.
Fig. 11 shows the feature detection results of the real-time image and the reference image when the robot is at position No. 3. The detected image deflection is to the right, the actual position of the robot is offset to the left, and the adjustment strategy is to adjust to the right.
Fig. 12 shows the feature detection results of the real-time image and the reference image when the robot is at position No. 4. The detected image deflection is to the left, the actual position of the robot is offset to the right, and the adjustment strategy is to adjust to the left.
Fig. 13 shows the feature detection results of the real-time image and the reference image when the robot is at position No. 5. The detection result is that the deflection angle is small and can be ignored.
Detailed Description
The embodiments of the present invention will be described in detail below with reference to the drawings and examples.
The invention is mainly divided into two parts: (1) setting the positioning information of the reference positions; (2) judging the offset of the position in real time. The main steps of setting the reference positioning information are GPS data acquisition, image acquisition, feature point extraction, and setting of the positioning reference information; the main steps of real-time offset judgment are real-time acquisition of positioning information, searching for the nearest reference position, calculating the offset direction, and deriving the direction-adjustment scheme.
The following steps are specifically introduced:
step one, setting reference position positioning information
Longitude and latitude are widely used as positioning information and can accurately and uniquely locate any position on the earth's surface; the environment image can also provide position information about the observation point, and its extracted feature pixel points, combined with the longitude and latitude of the observation point, can serve as more accurate positioning information. First, the longitude and latitude coordinates of the reference position are measured with a high-precision GPS instrument; then, the environment image directly in front of the reference position is collected, the feature pixel points of the image are extracted with the SIFT algorithm, and a 1 × 128 feature descriptor (feature vector) is generated for each feature pixel point; the longitude and latitude of the reference position are combined with all the feature pixel points acquired there as the accurate positioning information of the reference position, as shown in fig. 3.
1) GPS data acquisition
Longitude and latitude, also known as the geographic coordinate system, is a spherical coordinate system defined on the three-dimensional sphere of the earth that can mark any position on its surface. Longitude and latitude are obtained mainly by measuring, with GPS equipment, the distances from satellites of known positions to the user's receiver; integrating the data fed back by at least 4 satellites yields the specific position of the receiver. GPS data come in several formats for different purposes; the GPGGA format is adopted here.
The main steps are as follows: first, parse the received GPS data and extract four fields: latitude, latitude hemisphere, longitude, and longitude hemisphere; then unify the latitude and longitude formats and convert the units into degrees.
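As an illustration, the parsing step above can be sketched as follows. This is a minimal sketch assuming a standard NMEA 0183 GPGGA sentence (fields in order: time, latitude as ddmm.mmmm, N/S, longitude as dddmm.mmmm, E/W, ...); checksum validation is omitted and the function name is ours, not from the patent.

```python
def parse_gpgga(sentence):
    """Extract latitude/longitude from a $GPGGA sentence and convert
    them to signed decimal degrees (N/E positive, S/W negative)."""
    f = sentence.split(',')
    # latitude field is ddmm.mmmm, longitude field is dddmm.mmmm
    lat = float(f[2][:2]) + float(f[2][2:]) / 60.0
    if f[3] == 'S':
        lat = -lat
    lon = float(f[4][:3]) + float(f[4][3:]) / 60.0
    if f[5] == 'W':
        lon = -lon
    return lat, lon
```

For example, the textbook sentence "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47" parses to roughly (48.1173, 11.5167) degrees.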
2) Image acquisition and feature point extraction
The camera is used for collecting an environment image in front of the motion point, the environment image is converted into a gray image, then a SIFT algorithm is used for extracting feature pixel points of the image, and the feature pixel points meet the following three conditions:
A) it is a maximum or minimum among the 26 neighboring points in its own layer and the adjacent upper and lower layers of the difference-of-Gaussian scale space (DoG scale space);
B) cannot be an extreme point of low contrast;
C) and cannot be an extreme point where the edge response is strong.
The gradient of each pixel in the 16 × 16 neighborhood around the feature pixel point is calculated, and the contributions are weighted with a Gaussian function that decreases away from the center. In each of the sixteen 4 × 4 subregions, a gradient-orientation histogram is computed by adding the weighted gradient magnitudes to one of the histogram's 8 orientation bins. A 1 × 128 descriptor is thus formed for each feature pixel point, and normalizing this vector further removes the influence of illumination.
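The 4 × 4 × 8 histogram construction described above can be sketched numerically as follows. This is a simplified illustration, not the full SIFT implementation: orientation normalization and trilinear interpolation are omitted, and the Gaussian width σ = 8 pixels is an assumed value.

```python
import numpy as np

def sift_like_descriptor(patch):
    """Build a 128-dim descriptor from a 16x16 grayscale patch around a
    keypoint: 4x4 cells x 8 orientation bins, Gaussian-weighted gradient
    magnitudes, followed by normalization."""
    assert patch.shape == (16, 16)
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)      # orientation in [0, 2pi)
    # Gaussian weighting that down-weights gradients far from the centre
    yy, xx = np.mgrid[0:16, 0:16] - 7.5
    weight = np.exp(-(xx ** 2 + yy ** 2) / (2 * 8.0 ** 2))
    wmag = mag * weight
    bins = np.minimum((ang / (2 * np.pi) * 8).astype(int), 7)  # 8 bins
    desc = np.zeros(128)
    for cy in range(4):                               # 4x4 grid of cells
        for cx in range(4):
            base = (cy * 4 + cx) * 8
            for y in range(cy * 4, cy * 4 + 4):
                for x in range(cx * 4, cx * 4 + 4):
                    desc[base + bins[y, x]] += wmag[y, x]
    n = np.linalg.norm(desc)
    return desc / n if n > 0 else desc
```

The returned vector has exactly 4 × 4 × 8 = 128 entries, matching the 1 × 128 descriptor dimension stated in the text.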
3) Positioning information setting
Each reference location has one and only one latitude and longitude coordinate. And a plurality of characteristic pixel points can be extracted from the environment image, and each characteristic pixel point mainly has two types of information of a position and a descriptor. The descriptor is a vector of 1 × 128, and the position information is two-dimensional coordinate information of the point in the image.
In summary, the new positioning information includes the following contents:
longitude and latitude: one coordinate pair per reference position;
feature pixel points: several per reference position, each comprising a position (the two-dimensional coordinate of the point in the image) and a descriptor (a 1 × 128 feature vector).
step two offset determination of real-time position
1) GPS data real-time acquisition
Taking an intelligent trolley as an example, the trolley keeps the GPS equipment in a working state all the time in the movement process, and can acquire GPS data in real time so as to obtain longitude and latitude information of the current position.
2) Image acquisition and feature point extraction
The intelligent trolley keeps the camera in a working state all the time in the movement process, ensures that one frame of environment image at the corresponding moment can be acquired every time one piece of GPS data is received, then converts the image into a gray image, and extracts characteristic pixel points.
3) Nearest reference location search
During the movement of the intelligent trolley, its actual position may deviate to some extent from the reference position. To reduce this offset, it must first be clear which reference position the offset is measured against; usually, the reference position closest to the actual position is chosen, as shown in fig. 4. The main steps are as follows:
firstly, a plurality of reference points near the reference position at the previous moment are taken as candidate points, then the distance between each candidate point and the actual position is respectively calculated, and finally, the candidate point with the minimum distance is taken as the reference position at the moment.
The distance between a candidate point and the actual position is calculated as follows. Let the longitude and latitude of the candidate point be (LonA, LatA) and those of the actual position be (LonB, LatB). Taking 0° longitude as the reference, east longitudes take positive values (Φ = Lon) and west longitudes take negative values (Φ = −Lon); north latitudes are converted as 90 − Lat (Ψ = 90 − Lat) and south latitudes as 90 + Lat (Ψ = 90 + Lat). The two points after this treatment are written as (ΦA, ΨA) and (ΦB, ΨB). Then, from spherical trigonometry, the following formula for the distance D between the two points can be derived:
D = R × arccos[sinΨA × sinΨB × cos(ΦA − ΦB) + cosΨA × cosΨB] × π ÷ 180
in the formula, arccos is evaluated in degrees (hence the factor π ÷ 180), the distance D is in kilometers, and the earth radius R = 6371.004 kilometers.
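The distance formula and the nearest-reference search above can be sketched together as follows. The formula is evaluated in radians here, which is equivalent to the degree form with its π ÷ 180 factor; the route-ordered index window used to define the "neighborhood" of candidate points is our assumption, since the patent does not specify how the neighborhood is delimited.

```python
from math import radians, sin, cos, acos

EARTH_R = 6371.004  # earth radius in km, as given in the text

def latlon_distance_km(lat_a, lon_a, lat_b, lon_b):
    """Spherical law of cosines on colatitudes psi = 90 - latitude:
    D = R * arccos[sin(psiA)sin(psiB)cos(phiA - phiB) + cos(psiA)cos(psiB)]."""
    psi_a, psi_b = radians(90.0 - lat_a), radians(90.0 - lat_b)
    c = (sin(psi_a) * sin(psi_b) * cos(radians(lon_a - lon_b))
         + cos(psi_a) * cos(psi_b))
    return EARTH_R * acos(max(-1.0, min(1.0, c)))  # clamp rounding noise

def nearest_reference(current, prev_idx, refs, window=2):
    """Among the reference points within `window` index steps of the
    previous reference, return the index closest to `current` (lat, lon)."""
    lo, hi = max(0, prev_idx - window), min(len(refs), prev_idx + window + 1)
    return min(range(lo, hi),
               key=lambda i: latlon_distance_km(*current, *refs[i]))
```

One degree of latitude comes out to roughly R × π/180 ≈ 111.2 km, a quick sanity check on the formula.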
4) Calculating offset direction and deriving direction adjustment method
After the reference position is determined, firstly, the characteristic pixel point of the current position is matched with the characteristic pixel point of the reference position, and the offset state of the matching point is calculated and counted.
The method mainly comprises the following steps:
A) identifying matching points
For each feature pixel point Pi of the actual position: first, solve the Euclidean distances between its descriptor and the descriptors of all feature points Qj of the reference position; then find the two reference feature points Qm and Qn whose descriptors are closest to Pi in Euclidean distance; finally, divide the nearest Euclidean distance |PiQm| by the next-nearest Euclidean distance |PiQn|. If the result is less than a certain proportional threshold (between 0 and 1), the feature pixel point Qm is considered a candidate matching point of Pi; the same method is then used to judge whether Pi is a candidate matching point of Qm. Two points are considered a group of corresponding matching points only if they are candidate matching points of each other. Typically, the proportional threshold is 0.5.
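The mutual ratio-test matching described above can be sketched as follows (a brute-force linear scan over descriptor arrays; in practice a KD-tree or FLANN-style index would replace the scan, and the function names are ours):

```python
import numpy as np

def ratio_test_candidates(descs_a, descs_b, ratio=0.5):
    """For each descriptor in descs_a, return the index of its nearest
    neighbour in descs_b if it passes the ratio test, else -1."""
    out = []
    for d in descs_a:
        dists = np.linalg.norm(descs_b - d, axis=1)  # Euclidean distances
        order = np.argsort(dists)
        m, n = order[0], order[1]                    # nearest, next-nearest
        out.append(m if dists[m] < ratio * dists[n] else -1)
    return out

def mutual_matches(descs_a, descs_b, ratio=0.5):
    """Run the ratio test in both directions (the twice-applied KNN check
    in the text) and keep only mutually consistent pairs (i, j)."""
    ab = ratio_test_candidates(descs_a, descs_b, ratio)
    ba = ratio_test_candidates(descs_b, descs_a, ratio)
    return [(i, j) for i, j in enumerate(ab) if j >= 0 and ba[j] == i]
```

The bidirectional check discards one-sided matches, which is exactly the "matching points of each other" condition the method requires.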
B) Computing and counting offset states
First, the three groups of matching points with the shortest Euclidean distances are selected from the matching points. As shown in fig. 5, the three points P1, P2, P3 in the real-time image and the three points Q1, Q2, Q3 in the reference image are mutually matched points.
Then, the scaling ratio ρ of the real-time image relative to the reference image is calculated from the pairwise distances between the matched points:

ρ = (|Q1Q2| + |Q2Q3| + |Q1Q3|) / (|P1P2| + |P2P3| + |P1P3|)
Since the position coordinates of the three groups of matching points in their respective images are known, and the central point P0 of the real-time image is also known, the corresponding position R(x, y) of P0 in the reference image can be found by the three-point location method, as shown in fig. 5. The abscissa and ordinate of the R point satisfy:

(x − xQi)² + (y − yQi)² = (ρ × |P0Pi|)²,  i = 1, 2, 3
in the formula, the abscissa and ordinate of the image central point P0 are half the image width and half the image height, respectively.
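One way to realize the three-point location step is least-squares trilateration; the sketch below is an assumed realization (the patent does not spell out the solving step). Given the anchors Qi and target distances di = ρ × |P0Pi|, subtracting the first circle equation from the other two yields a linear system in (x, y):

```python
import numpy as np

def locate_center(q_pts, dists):
    """Find R = (x, y) such that |R - Q_i| ~= d_i for three anchors Q_i.
    Subtracting circle equation 1 from equations 2 and 3 linearizes the
    problem, which is then solved in the least-squares sense."""
    (x1, y1), (x2, y2), (x3, y3) = q_pts
    d1, d2, d3 = dists
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([d1 ** 2 - d2 ** 2 + x2 ** 2 - x1 ** 2 + y2 ** 2 - y1 ** 2,
                  d1 ** 2 - d3 ** 2 + x3 ** 2 - x1 ** 2 + y3 ** 2 - y1 ** 2])
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

The least-squares form also tolerates slightly inconsistent distances, which is useful since matched feature positions carry pixel noise.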
C) Determining a directional adjustment scheme
Finally, the change of visual angle between the two images is judged from the position of the R point relative to the central point Q0 of the reference image, and the offset of the robot is deduced from it. Since the intelligent vehicle moves on the ground and mainly adjusts in the left-right direction, the abscissas of the two points are compared.
When xQ0 − ε < x < xQ0 + ε, the visual angle of the camera did not change significantly between the two acquisitions, i.e. the position of the intelligent robot at the current moment is considered not to have shifted, and no direction adjustment is required. Here, ε denotes the radius of the tolerance region around the central point, and its value is related to the size of the image.
When x < xQ0 − ε, i.e. the R point lies to the left of Q0, the visual angle of the camera shifted to the left when the real-time image was acquired, so the intelligent trolley is considered to have deflected leftward or shifted leftward in position, as shown in fig. 6. At this point, the trolley's direction should be adjusted to the right.
When x > xQ0 + ε, i.e. the R point lies to the right of Q0, the visual angle of the camera shifted to the right when the real-time image was acquired, so the intelligent trolley is considered to have deflected rightward or shifted rightward in position, as shown in fig. 7. At this point, the trolley's direction should be adjusted to the left.
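The three cases above can be condensed into a small decision function (a sketch; the function name and the example dead-band value are ours, not from the patent):

```python
def steering_decision(x_r, x_q0, eps):
    """Map the matched centre's abscissa x_r against the reference
    centre's abscissa x_q0, with dead-band eps, to a steering correction."""
    if x_r < x_q0 - eps:
        return "adjust right"   # view shifted left -> steer right
    if x_r > x_q0 + eps:
        return "adjust left"    # view shifted right -> steer left
    return "no adjustment"      # R inside the central tolerance region
```

For a 640-pixel-wide reference image (x_q0 = 320) with, say, eps = 40, an R point at x = 100 yields "adjust right" and one at x = 500 yields "adjust left".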
After the intelligent trolley obtains the offset direction and the adjustment scheme, it can adjust its movement direction and speed accordingly, so as to correct the offset as soon as possible and return to the preset route, as shown in figs. 5 to 13.

Claims (6)

1. A high-precision GPS positioning method based on image feature points is characterized by comprising the following steps:
step 1: location information calibration of reference location
1.1) acquiring longitude and latitude coordinates of a reference position by utilizing GPS equipment;
1.2) acquiring an environment image right in front observed at a reference position by using a camera, converting the environment image into a gray image, and extracting characteristic pixel points of the image by using an SIFT algorithm;
1.3) the positioning information of each reference position is composed of longitude and latitude coordinates of the position and feature pixel points of an environment image, each position has only one longitude and latitude coordinate, a plurality of feature pixel points can be extracted from the environment image, each feature pixel point mainly comprises two types of information of a position and a feature descriptor, wherein the feature descriptor is a feature vector of 1 x 128, and the position is a two-dimensional coordinate of the feature pixel point in the image;
step 2: offset determination of current position
2.1) acquiring GPS data in real time to acquire longitude and latitude coordinates of the current position;
2.2) acquiring an environmental image of the current position in real time and extracting feature points;
2.3) searching for a reference position closest to the current position, the main steps are as follows:
firstly, selecting a corresponding reference position at the last moment as a candidate point of a current position; then, respectively calculating the distances from all reference positions in a neighborhood range taking the candidate point as the center to the current position; finally, selecting the candidate point with the minimum distance as the reference position of the current moment;
2.4) calculating the offset direction and deducing a direction adjusting method, which mainly comprises the following steps:
firstly, matching the feature pixel point of the current position with the feature pixel point of the reference position by using a KNN matching algorithm by taking the Euclidean distance of a descriptor as a judgment basis of similarity; then, three groups of matching points with the highest similarity are calculated, and the corresponding position of the central point of the current environment image in the reference image is found out by using a three-point positioning method; and finally, deducing the offset state of the current position according to the offset state of the corresponding central point relative to the central point of the reference image.
2. The image feature point-based high-precision GPS positioning method according to claim 1, wherein in the step 2.3), the distance calculation formula used in searching for the reference position closest to the current position is a longitude-latitude conversion distance calculation formula.
3. The image feature point-based high-precision GPS positioning method according to claim 1, characterized in that the offset direction is derived and the movement direction adjusted as follows:
2.4.1) if the point in the reference image corresponding to the center point of the current environment image lies near the central area, the current position is considered to have no obvious offset relative to the reference position;
2.4.2) if the center point of the current environment image is offset to the left relative to the center point of the reference image, the current position or the movement direction is considered to have drifted to the left relative to the reference position, and the direction adjustment at this moment is to steer to the right;
2.4.3) if the center point of the current environment image is offset to the right relative to the center point of the reference image, the current position or the movement direction is considered to have drifted to the right relative to the reference position, and the direction adjustment at this moment is to steer to the left.
4. The image feature point-based high-precision GPS positioning method according to claim 1, wherein in calculating the offset distance, the KNN matching algorithm is applied twice to each group of matching points, once in each direction, and a group of feature pixel points is considered successfully matched only when the two feature pixel points are each detected as the other's matching point.
5. The image feature point-based high-precision GPS positioning method according to claim 1, characterized in that the matching points are confirmed by the following method:
for each feature pixel point Pi of the actual position, first solve the Euclidean distance between its descriptor and the descriptors of all feature points Qj of the reference position; then find the two reference-position feature points Qm and Qn whose descriptors have the smallest Euclidean distances to Pi; finally, divide the nearest Euclidean distance PiQm by the second-nearest Euclidean distance PiQn; if the result is less than a preset ratio threshold, the feature pixel point Qm is considered a candidate matching point of Pi; the same method is then used to judge whether Pi is a candidate matching point of Qm, and the two points are considered a group of corresponding matching points only when they are candidate matching points of each other.
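The procedure in claim 5 is essentially Lowe's ratio test applied in both directions with a mutual cross-check. A sketch assuming descriptors are stored as rows of NumPy arrays (the 0.7 threshold is an assumed value; the patent leaves the threshold as a preset parameter):

```python
import numpy as np

def candidate_match(desc_p, ref_descs, ratio_thresh=0.7):
    """Return the index of the nearest reference descriptor if it passes the
    ratio test (nearest / second-nearest Euclidean distance < threshold),
    otherwise None."""
    d = np.linalg.norm(ref_descs - desc_p, axis=1)  # distances to all Q_j
    m, n = np.argsort(d)[:2]                        # nearest Q_m, second-nearest Q_n
    return int(m) if d[m] / d[n] < ratio_thresh else None

def mutual_matches(cur_descs, ref_descs, ratio_thresh=0.7):
    """Keep a pair (i, j) only when P_i's candidate is Q_j AND Q_j's
    candidate is P_i, i.e. the cross-check required by claim 5."""
    pairs = []
    for i, p in enumerate(cur_descs):
        j = candidate_match(p, ref_descs, ratio_thresh)
        if j is not None and candidate_match(ref_descs[j], cur_descs, ratio_thresh) == i:
            pairs.append((i, j))
    return pairs
```

The ratio test rejects ambiguous matches whose nearest and second-nearest descriptors are nearly equidistant; the cross-check then removes one-sided matches.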
6. The image feature point-based high-precision GPS positioning method according to claim 5, characterized in that the offset state is calculated by the following method:
firstly, the three groups of matching points with the shortest Euclidean distances are selected from the matching points, so that points P1, P2, P3 in the real-time image are matched with points Q1, Q2, Q3 in the reference image, respectively;
then, a scaling ratio p of the real-time image relative to the reference image is calculated,
[scaling-ratio formula published as image FDA0002320190350000031 in the original document]
the corresponding position R(x, y) of P0 in the reference image is then found by the three-point positioning method, wherein the abscissa and ordinate of point R are determined by the following system of equations:
[system of equations published as image FDA0002320190350000032 in the original document]
in the formulas, the abscissa and ordinate of the image center point P0 are half the image width and height, respectively.
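The exact formulas of claim 6 are published only as images, but the surrounding text fixes the structure: a scaling ratio p estimated from the three matched pairs, then three circle equations |R - Qk| = |P0 - Pk| / p solved for R. The sketch below follows that structure under stated assumptions: the perimeter-ratio estimate of p is one plausible reading of the unpublished formula, and rotation between the two images is assumed negligible:

```python
import numpy as np

def scale_ratio(P, Q):
    """Estimate the scaling ratio p of the real-time image relative to the
    reference image as the ratio of the summed pairwise distances of the
    three matched points (an assumed reading of the image-only formula)."""
    def perim(T):
        return sum(np.linalg.norm(T[i] - T[j]) for i, j in [(0, 1), (0, 2), (1, 2)])
    return perim(P) / perim(Q)

def locate_center(P0, P, Q):
    """Three-point positioning: find R in the reference image such that
    |R - Q_k| = |P0 - P_k| / p for each matched pair. The three circle
    equations are linearized (subtracting the first from the others) into
    a 2x2 least-squares system in (x, y)."""
    p = scale_ratio(P, Q)
    d = [np.linalg.norm(P0 - P[k]) / p for k in range(3)]  # distances at reference scale
    A, b = [], []
    for k in (1, 2):
        A.append([2 * (Q[k][0] - Q[0][0]), 2 * (Q[k][1] - Q[0][1])])
        b.append(Q[k][0] ** 2 - Q[0][0] ** 2
                 + Q[k][1] ** 2 - Q[0][1] ** 2
                 - d[k] ** 2 + d[0] ** 2)
    x, y = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)[0]
    return x, y
```

With exact distances the linearized system is exact; with noisy matches the least-squares solve gives the best-fit intersection of the three circles.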
CN201710552065.0A 2017-07-07 2017-07-07 High-precision GPS positioning method based on image feature points Active CN107451593B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710552065.0A CN107451593B (en) 2017-07-07 2017-07-07 High-precision GPS positioning method based on image feature points

Publications (2)

Publication Number Publication Date
CN107451593A CN107451593A (en) 2017-12-08
CN107451593B true CN107451593B (en) 2020-05-15

Family

ID=60487783

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710552065.0A Active CN107451593B (en) 2017-07-07 2017-07-07 High-precision GPS positioning method based on image feature points

Country Status (1)

Country Link
CN (1) CN107451593B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108253975B (en) * 2017-12-29 2022-01-14 驭势(上海)汽车科技有限公司 Method and equipment for establishing map information and positioning vehicle
CN108519772B (en) * 2018-03-01 2022-06-03 Ai机器人株式会社 Positioning method and device for conveying equipment, conveying equipment and storage medium
CN108981698B (en) * 2018-05-29 2020-07-14 杭州视氪科技有限公司 Visual positioning method based on multi-mode data
CN109241979A (en) * 2018-08-24 2019-01-18 武汉光庭信息技术股份有限公司 A kind of vehicle relative position estimation method based on SPEED VISION Feature Points Matching
CN116972880A (en) * 2019-03-18 2023-10-31 深圳市速腾聚创科技有限公司 Precision detection device of positioning algorithm
CN110188777B (en) * 2019-05-31 2023-08-25 东莞先知大数据有限公司 Multi-period steel rail damage data alignment method based on data mining
CN110361748A (en) * 2019-07-18 2019-10-22 广东电网有限责任公司 A kind of mobile device air navigation aid, relevant device and product based on laser ranging
CN110907955B (en) * 2019-12-02 2021-02-09 荣讯塑胶电子制品(深圳)有限公司 Positioning instrument fault identification system
CN111062875B (en) * 2019-12-19 2021-11-12 广州启量信息科技有限公司 Coordinate conversion method and device for air panoramic roaming data
CN111679303B (en) * 2019-12-30 2023-07-28 全球能源互联网研究院有限公司 Comprehensive positioning method and device for multi-source positioning information fusion
CN111380535A (en) * 2020-05-13 2020-07-07 广东星舆科技有限公司 Navigation method and device based on visual label, mobile machine and readable medium
CN113706592A (en) * 2021-08-24 2021-11-26 北京百度网讯科技有限公司 Method and device for correcting positioning information, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101872475A (en) * 2009-04-22 2010-10-27 中国科学院自动化研究所 Method for automatically registering scanned document images
JP2012068736A (en) * 2010-09-21 2012-04-05 Yaskawa Electric Corp Mobile body
CN103473774A (en) * 2013-09-09 2013-12-25 长安大学 Vehicle locating method based on matching of road surface image characteristics
CN105700532A (en) * 2016-04-19 2016-06-22 长沙理工大学 Vision-based navigation and positioning control method for transformer substation inspection robot
CN106127180A (en) * 2016-06-30 2016-11-16 广东电网有限责任公司电力科学研究院 A kind of robot assisted localization method and device
CN106407315A (en) * 2016-08-30 2017-02-15 长安大学 Vehicle self-positioning method based on street view image database

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4798450B2 (en) * 2006-12-07 2011-10-19 株式会社Ihi Navigation device and control method thereof
US8738179B2 (en) * 2008-12-01 2014-05-27 Kabushiki Kaisha Yaskawa Denki Robot system
CN105246039B (en) * 2015-10-20 2018-05-29 深圳大学 A kind of indoor orientation method and system based on image procossing

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Chen K. C. et al., "Vision-Based Autonomous Vehicle Guidance for Indoor Security Patrolling by a SIFT-Based Vehicle-Localization Technique", IEEE Transactions on Vehicular Technology, vol. 59, no. 7, pp. 3261-3271, 7 June 2010 *
Li Yicheng et al., "High-Precision Positioning Algorithm for Intelligent Vehicles Based on GPS and Image Fusion", Journal of Transportation Systems Engineering and Information Technology (交通运输系统工程与信息), vol. 17, no. 3, pp. 112-119, 30 June 2017 *

Also Published As

Publication number Publication date
CN107451593A (en) 2017-12-08

Similar Documents

Publication Publication Date Title
CN107451593B (en) High-precision GPS positioning method based on image feature points
CN107505644B (en) Three-dimensional high-precision map generation system and method based on vehicle-mounted multi-sensor fusion
CN109631887B (en) Inertial navigation high-precision positioning method based on binocular, acceleration and gyroscope
CN108362281B (en) Long-baseline underwater submarine matching navigation method and system
CN108868268B (en) Unmanned parking space posture estimation method based on point-to-surface distance and cross-correlation entropy registration
Agarwal et al. Metric localization using google street view
CN102353377B (en) High altitude long endurance unmanned aerial vehicle integrated navigation system and navigating and positioning method thereof
Wu et al. Vehicle localization using road markings
CN111426320B (en) Vehicle autonomous navigation method based on image matching/inertial navigation/milemeter
CN112129281B (en) High-precision image navigation positioning method based on local neighborhood map
JP5286653B2 (en) Stationary object map generator
CN111383205B (en) Image fusion positioning method based on feature points and three-dimensional model
CN104281148A (en) Mobile robot autonomous navigation method based on binocular stereoscopic vision
CN105352509A (en) Unmanned aerial vehicle motion target tracking and positioning method under geographic information space-time constraint
Dumble et al. Airborne vision-aided navigation using road intersection features
CN111238488A (en) Aircraft accurate positioning method based on heterogeneous image matching
CN110160503B (en) Unmanned aerial vehicle landscape matching positioning method considering elevation
CN108921896B (en) Downward vision compass integrating dotted line characteristics
Chellappa et al. On the positioning of multisensor imagery for exploitation and target recognition
Xian et al. Fusing stereo camera and low-cost inertial measurement unit for autonomous navigation in a tightly-coupled approach
CN110927765B (en) Laser radar and satellite navigation fused target online positioning method
Gakne et al. Tackling the scale factor issue in a monocular visual odometry using a 3D city model
Kim Aerial map-based navigation using semantic segmentation and pattern matching
Aggarwal Machine vision based self-position estimation of mobile robots
CN114234967B (en) Six-foot robot positioning method based on multi-sensor fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant