CN106793086B - Indoor positioning method - Google Patents

Indoor positioning method

Info

Publication number
CN106793086B
CN106793086B
Authority
CN
China
Prior art keywords
positioning
wifi
image
fingerprint
point
Prior art date
Legal status
Expired - Fee Related
Application number
CN201710152882.7A
Other languages
Chinese (zh)
Other versions
CN106793086A (en)
Inventor
胡钊政 (Hu Zhaozheng)
谢静茹 (Xie Jingru)
Current Assignee
Hebei University of Technology
Original Assignee
Hebei University of Technology
Priority date
Filing date
Publication date
Application filed by Hebei University of Technology filed Critical Hebei University of Technology
Priority to CN201710152882.7A priority Critical patent/CN106793086B/en
Publication of CN106793086A publication Critical patent/CN106793086A/en
Application granted granted Critical
Publication of CN106793086B publication Critical patent/CN106793086B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 64/00 Locating users or terminals or network equipment for network management purposes, e.g. mobility management
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/20 Instruments for performing navigational calculations
    • G01C 21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/35 Categorising the entire scene, e.g. birthday party or wedding scene
    • G06V 20/36 Indoor scenes

Abstract

The invention relates to an indoor positioning method in the field of wireless communication network technology for network management. The method combines WiFi-fingerprint-based positioning with mark-based visual positioning, and overcomes the defects that existing WiFi fingerprint positioning technology has low precision and that existing single visual positioning methods are not suitable for indoor positioning.

Description

Indoor positioning method
Technical Field
The technical scheme of the invention relates to wireless communication network technology for the purpose of network management, and in particular to an indoor positioning method.
Background
Indoor positioning means determining position within an indoor environment. An indoor positioning system is formed mainly by integrating multiple technologies, such as wireless communication, base-station positioning, and inertial navigation positioning, to monitor the positions of people, objects, and the like in indoor space. Because the indoor environment is complex and changeable and cannot receive Global Positioning System (GPS) signals, indoor positioning is currently difficult. When satellite positioning cannot be used in an indoor environment, indoor positioning technology serves as an auxiliary to satellite positioning, overcoming the problems that satellite signals are weak when they reach the ground and cannot penetrate buildings, and finally locating the current position of an object.
From the currently published literature and technical means, several indoor positioning technologies exist. Wi-Fi technology, i.e., indoor positioning based on WLAN (Wireless Local Area Network), requires wireless access points (APs) to be arranged in advance and wastes resources when there is no positioning demand. Ultra-wideband technology, i.e., indoor positioning based on UWB (Ultra Wide Band), currently requires at least three signal receivers and no obstacle between receiver and transmitter. Inertial navigation positioning, i.e., indoor positioning based on inertial sensors, has its accuracy inevitably affected by the noise of the micro-electromechanical systems in those sensors.
CN103402256B discloses an indoor positioning method based on WiFi fingerprints; CN106304331A discloses a WiFi fingerprint indoor positioning method; CN103582119B discloses a fingerprint database construction method for a WiFi indoor positioning system. In the published literature on WiFi positioning technology, positioning based on WiFi fingerprints invariably constructs the fingerprint from the MAC address together with the signal strength (RSSI) value; the fingerprint database constructed this way is complex, and positioning accuracy is easily affected by changes in the indoor environment.
CN106295512A discloses a method for constructing a multi-correction-line indoor visual database based on identifiers, together with an indoor positioning method; the method is camera-based and implements positioning and navigation by retrieving images with specific identifiers, which is difficult to realize in a real indoor environment, since a series of image sets with the same identifier must be arranged indoors and the indoor environment must be altered. CN106228538A discloses a binocular vision indoor positioning method based on logos, in which two cameras acquire images during positioning and a camera is used to position the target point; this involves calibrating the camera's internal and external parameters and converting between the camera, image, and world coordinate systems, is generally applied in the field of three-dimensional positioning of mobile robots, and is difficult to popularize among the general public. In summary, existing single visual positioning methods are not suitable for indoor positioning.
In summary, there is currently no economical and mature indoor positioning technology. With the rapid development of key technologies such as the internet, wireless communication, computer technology, mapping technology, and equipment manufacturing, indoor positioning will develop toward the complementary combination of different indoor positioning technologies, in which the disadvantage of one indoor positioning method is compensated by another; how to combine various indoor positioning technologies organically is a research hotspot in this technical field.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: to provide an indoor positioning method that combines WiFi-fingerprint-based positioning with mark-based visual positioning to realize high-precision indoor positioning, overcoming the defects that existing WiFi fingerprint positioning technology has low precision and that existing single visual positioning methods are not suitable for indoor positioning.
The technical scheme adopted by the invention to solve this technical problem is as follows: an indoor positioning method that combines WiFi-fingerprint-based positioning with mark-based visual positioning. First, a WiFi positioning range and WiFi positioning coordinates are obtained with a WiFi position fingerprint positioning algorithm; then, visual positioning coordinates are obtained through feature matching and visual positioning of the tested image; finally, WiFi position fingerprint positioning and visual positioning are combined. The specific steps are as follows:
first, generating a WiFi fingerprint:
an Android App developed in Java obtains the MAC addresses of WiFi signals, generates a txt file, and stores it on the smartphone, thereby generating a WiFi fingerprint;
secondly, constructing a WiFi position fingerprint database:
a corridor area in an indoor environment is selected as the positioning area, and 30-60 WiFi sampling points are selected in the corridor area, the coordinates of each sampling point being known; at each sampling point the installed App detects the detectable WiFi signals, and the MAC address sequence of the WiFi sampling point is obtained and stored as a fingerprint in the WiFi position fingerprint database; the stored MAC address sequences together form the WiFi position fingerprint database, in which each fingerprint corresponds to unique position information; with 60 WiFi sampling points set in the selected positioning area and the MAC addresses acquired at each sampling point taken as that point's fingerprint, traversing all sampling points yields 60 fingerprints, which are stored in the WiFi position fingerprint database, completing its construction;
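As an illustration of the data this step produces, the following is a minimal C++ sketch of a fingerprint record and the database (the patent specifies only that the algorithms run in a C++ environment; all names are illustrative assumptions):

```cpp
#include <string>
#include <vector>

// One fingerprint: the known coordinates of a WiFi sampling point together
// with the MAC address sequence detected there by the App.
struct WifiFingerprint {
    double x = 0.0, y = 0.0;        // sampling-point coordinates, in m
    std::vector<std::string> macs;  // detected MAC address sequence
};

// The WiFi position fingerprint database is the collection of all stored
// fingerprints; with 60 sampling points it holds 60 entries.
using FingerprintDatabase = std::vector<WifiFingerprint>;
```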
thirdly, WiFi fingerprint positioning:
in the positioning stage, let the position coordinate of the point x to be positioned in the positioning area selected in the second step be (x, y), where N_x WiFi signals can be received; using the App of the first step for detection, the measured fingerprint of positioning point x is obtained:

xf = {MAC_1, MAC_2, ..., MAC_Nx}
matching the measured fingerprint xf against the fingerprints in the WiFi position fingerprint database of the second step with a fingerprint matching algorithm, according to the rule of the second step that each fingerprint corresponds to unique position information, yields the three fingerprints with the highest matching degree and thus the WiFi positioning range (x_0~x_1, y_0~y_1) and the WiFi positioning coordinates (x_w, y_w), realizing WiFi fingerprint positioning; here x_0 and x_1 are abscissae of the point to be positioned and x_0~x_1 is the abscissa range to be positioned, in m; y_0 and y_1 are ordinates of the point to be positioned and y_0~y_1 is the ordinate range to be positioned, in m;
fourthly, generating a training image set:
in the positioning area of the second step, the coordinates of all doorplates are known; the doorplates are the mark sampling points, i.e., the points to be positioned; the doorplates are photographed with a smartphone and all mark sampling points are traversed, generating a training image set; since the mark sampling points are part of the WiFi sampling points, their coordinates are known, and the coordinates of a training image are the coordinates of its mark sampling point;
fifthly, calculating SURF global feature descriptors:
first, the training images obtained in the fourth step are preprocessed, the preprocessing comprising normalization and graying; the SURF global feature descriptor is then calculated in two parts, the first part being feature point localization and the second part being feature descriptor calculation: the central point of the normalized image is taken as the feature point, the whole image is taken as the single neighborhood of that feature point, and the calculated feature descriptor serves as the SURF global feature descriptor of the whole image;
sixthly, calculating an ORB global feature descriptor:
(6.1) determining the main direction of the characteristic points:
taking the central point of the normalized image as the feature point, the main direction of the feature point is calculated using image moments; the image moment of any feature point is

m_pq = Σ_(x,y) x^p y^q I(x, y)
where I(x, y) is the gray value at point (x, y); the centroid of the neighborhood image of the feature point is

C = ( m_10 / m_00 , m_01 / m_00 )
the angle between the centroid of the neighborhood image and the feature point is θ, and θ = arctan2(m_01, m_10) is the main direction of the feature point;
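A minimal C++ sketch of this moment computation, under the assumption of a simple 8-bit grayscale container (the `Image` type is hypothetical; only the formulas above are implemented):

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

struct Image {
    int width = 0, height = 0;
    std::vector<std::uint8_t> pixels;  // row-major gray values I(x, y)
    int at(int x, int y) const { return pixels[y * width + x]; }
};

// Main direction of the central feature point: theta = arctan2(m01, m10),
// with m10 = sum of x*I(x,y) and m01 = sum of y*I(x,y) over the neighborhood.
double mainDirection(const Image& img) {
    double m10 = 0.0, m01 = 0.0;
    for (int y = 0; y < img.height; ++y)
        for (int x = 0; x < img.width; ++x) {
            const double I = img.at(x, y);
            m10 += x * I;
            m01 += y * I;
        }
    return std::atan2(m01, m10);
}
```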
(6.2) generating BRIEF feature descriptors:
the BRIEF feature descriptor is generated as follows: p1 denotes a smoothed image neighborhood, and the binary test at any pair of location points x and y is the logical result of two intensity tests:

τ(p1; x, y) = 1 if p1(x) < p1(y), and 0 otherwise
where p1(x) denotes the intensity at point x on the image neighborhood p1 and p1(y) the intensity at point y; n binary tests yield an n-dimensional vector, the BRIEF feature descriptor

f_n(p1) = Σ_(1≤i≤n) 2^(i−1) τ(p1; x_i, y_i)
here n = 256 and a 256-bit binary string is obtained;
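A C++ sketch of the n = 256 binary tests, reusing the hypothetical `Image` type above; in practice the test-point pairs are a fixed pseudorandom pattern, here simply passed in as an assumption:

```cpp
#include <bitset>
#include <utility>
#include <vector>

struct TestPoint { int x, y; };

// f_n(p1): bit i is tau(p1; x_i, y_i) = 1 if p1(x_i) < p1(y_i), else 0.
// Assumes tests holds exactly 256 point pairs.
std::bitset<256> briefDescriptor(
        const Image& p1,  // smoothed image neighborhood
        const std::vector<std::pair<TestPoint, TestPoint>>& tests) {
    std::bitset<256> f;
    for (int i = 0; i < 256; ++i) {
        const TestPoint a = tests[i].first, b = tests[i].second;
        f[i] = p1.at(a.x, a.y) < p1.at(b.x, b.y);
    }
    return f;
}
```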
(6.3) computing ORB global feature descriptors:
in order to give the BRIEF feature descriptor rotation invariance, its direction is set according to the direction of the feature point determined in step (6.1); for the image pixel points (x_i, y_i) of the binary tests, the 2 × n matrix of the feature set obtained from the n-point binary test set is defined as

S = [ x_1 x_2 ... x_n ; y_1 y_2 ... y_n ]
the rotation (affine transformation) matrix is calculated from the main direction θ of the feature point determined in step (6.1):

R_θ = [ cos θ  −sin θ ; sin θ  cos θ ]
thereby obtaining S_θ = R_θ S, where S_θ is the binary test set of the BRIEF feature descriptor with rotation invariance; finally, the ORB global feature descriptor with rotation invariance is calculated: g_n(p1, θ) := f_n(p1) | (x_i, y_i) ∈ S_θ, where n = 256;
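A C++ sketch of the steering step, under the assumption that the test points rotate about the central feature point; it reuses the types from the previous sketches:

```cpp
#include <algorithm>
#include <bitset>
#include <cmath>
#include <utility>
#include <vector>

// g_n(p1, theta): BRIEF tests evaluated on the rotated test set S_theta.
std::bitset<256> orbGlobalDescriptor(
        const Image& p1, double theta,
        const std::vector<std::pair<TestPoint, TestPoint>>& tests) {
    const double c = std::cos(theta), s = std::sin(theta);
    const double cx = p1.width / 2.0, cy = p1.height / 2.0;
    // Rotate a test point about the image center by theta (R_theta),
    // clamping to the image bounds so the sketch stays well defined.
    auto rotate = [&](TestPoint p) {
        const double dx = p.x - cx, dy = p.y - cy;
        const int rx = static_cast<int>(std::lround(cx + c * dx - s * dy));
        const int ry = static_cast<int>(std::lround(cy + s * dx + c * dy));
        return TestPoint{std::clamp(rx, 0, p1.width - 1),
                         std::clamp(ry, 0, p1.height - 1)};
    };
    std::bitset<256> g;
    for (int i = 0; i < 256; ++i) {
        const TestPoint a = rotate(tests[i].first);
        const TestPoint b = rotate(tests[i].second);
        g[i] = p1.at(a.x, a.y) < p1.at(b.x, b.y);
    }
    return g;
}
```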
and seventhly, collecting a tested image:
in the positioning area of the second step, the doorplate closest to the point to be positioned is photographed there with a smartphone, acquiring the tested image;
eighth step, feature matching and visual positioning of the tested image:
The feature matching method for the tested image is as follows. ① Calculate the Euclidean distance between two SURF global feature descriptors: for two SURF global feature descriptors L1 and L2, the Euclidean distance between them is

d(L1, L2) = sqrt( Σ_(i=1..64) (L1_i − L2_i)^2 )
where i indexes the i-th dimension of the 64-dimensional feature vector. ② Calculate the Hamming distance between two ORB global feature descriptors R1 and R2: the Hamming distance is obtained from T, the result of the bitwise exclusive-or of the two binary strings,

D(R1, R2) = Σ_(i=1..256) T_i, with T = R1 ⊕ R2
where i indexes the i-th bit of the 256-bit string; the smaller the distance between two feature descriptors, the higher the image matching degree;
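Both distances are straightforward; a C++ sketch (the descriptor types follow the 64-dimensional and 256-bit sizes stated above):

```cpp
#include <array>
#include <bitset>
#include <cmath>

// Euclidean distance between two 64-dimensional SURF global descriptors.
double euclidean(const std::array<float, 64>& L1,
                 const std::array<float, 64>& L2) {
    double sum = 0.0;
    for (int i = 0; i < 64; ++i) {
        const double d = static_cast<double>(L1[i]) - L2[i];
        sum += d * d;
    }
    return std::sqrt(sum);
}

// Hamming distance between two 256-bit ORB global descriptors:
// XOR the strings, then count the set bits.
int hamming(const std::bitset<256>& R1, const std::bitset<256>& R2) {
    return static_cast<int>((R1 ^ R2).count());
}
```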
The visual positioning method is as follows: first, the fifth and sixth steps are applied to calculate the SURF global feature descriptor and the ORB global feature descriptor of the tested image acquired in the seventh step; then, with the feature matching method for the tested image, the distances between these descriptors and the SURF and ORB global feature descriptors of the training images obtained in the fourth step are calculated; next, the KNN algorithm finds three neighbors in the SURF matching space, i.e., the three fourth-step training images with the smallest Euclidean distance to the SURF global feature descriptor of the tested image, and two neighbors in the ORB matching space, i.e., the two fourth-step training images with the smallest Hamming distance to the ORB global feature descriptor of the tested image; finally, the intersection of the three neighbors and the two neighbors is taken as the training image of the fourth-step training image set closest to the tested image, called the matching image; the position coordinate corresponding to the matching image is the visual positioning coordinate (x_v, y_v), thereby completing the visual positioning;
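A C++ sketch of this neighbor-intersection logic, reusing the distance helpers above (the `TrainImage` type and the intersection by index are illustrative assumptions):

```cpp
#include <algorithm>
#include <array>
#include <bitset>
#include <optional>
#include <vector>

struct TrainImage {
    int id = 0;                   // index of the mark sampling point
    std::array<float, 64> surf{}; // SURF global feature descriptor
    std::bitset<256> orb;         // ORB global feature descriptor
    double x = 0.0, y = 0.0;      // coordinates of the mark sampling point (m)
};

// 3-NN in SURF (Euclidean) space intersected with 2-NN in ORB (Hamming)
// space; the surviving training image is the matching image.
// Assumes at least three training images.
std::optional<TrainImage> findMatchingImage(
        const std::array<float, 64>& querySurf,
        const std::bitset<256>& queryOrb,
        const std::vector<TrainImage>& train) {
    std::vector<TrainImage> bySurf = train, byOrb = train;
    std::partial_sort(bySurf.begin(), bySurf.begin() + 3, bySurf.end(),
        [&](const TrainImage& a, const TrainImage& b) {
            return euclidean(querySurf, a.surf) < euclidean(querySurf, b.surf);
        });
    std::partial_sort(byOrb.begin(), byOrb.begin() + 2, byOrb.end(),
        [&](const TrainImage& a, const TrainImage& b) {
            return hamming(queryOrb, a.orb) < hamming(queryOrb, b.orb);
        });
    for (int i = 0; i < 3; ++i)        // intersection of the two neighbor sets
        for (int j = 0; j < 2; ++j)
            if (bySurf[i].id == byOrb[j].id) return bySurf[i];
    return std::nullopt;               // empty intersection: no matching image
}
```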
Ninth step, positioning combining WiFi fingerprint positioning and visual positioning:
after the WiFi positioning range is obtained in the third step, the training images of the fourth-step training image set whose coordinates lie within the WiFi positioning range form the matching image set; when the matching image of the eighth step lies in this matching image set, the visual positioning coordinates (x_v, y_v) obtained in the eighth step are taken as the final indoor position coordinates; otherwise, the WiFi positioning coordinates (x_w, y_w) obtained in the third step are taken as the final position coordinates, completing the positioning that combines WiFi fingerprint positioning and visual positioning.
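A C++ sketch of this decision rule, reusing the types from the previous sketches (`Range` and `Coord` are illustrative):

```cpp
#include <optional>

struct Range { double x0, x1, y0, y1; };  // WiFi positioning range, in m
struct Coord { double x, y; };

// Keep the visual coordinates (xv, yv) when the matching image lies inside
// the WiFi positioning range; otherwise fall back to the WiFi coordinates
// (xw, yw) from the third step.
Coord fusePosition(const std::optional<TrainImage>& match,
                   const Range& r, const Coord& wifi) {
    if (match &&
        match->x >= r.x0 && match->x <= r.x1 &&
        match->y >= r.y0 && match->y <= r.y1)
        return {match->x, match->y};
    return wifi;
}
```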
In the above indoor positioning method, the fingerprint matching algorithm of the third step compares the MAC address sequence of the measured fingerprint one by one with the MAC address sequences of all fingerprints in the WiFi position fingerprint database; a MAC address matches when the two addresses are identical, and the matching degree of the measured fingerprint with the l-th database fingerprint is determined by the number of successfully matched MAC addresses, Num[xf(MAC) = lf(MAC)], where xf(MAC) denotes the MAC address sequence of the measured fingerprint, lf(MAC) denotes the MAC address sequence of the l-th fingerprint in the WiFi position fingerprint database, and l = (1, 2, ..., m); finally, the positions corresponding to the three fingerprints ranked by matching degree from high to low are taken as the rough positioning range (x_0~x_1, y_0~y_1), and on this basis the WiFi positioning coordinates (x_w, y_w) are obtained from these three positions.
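A C++ sketch of this matching algorithm, reusing the `WifiFingerprint` type from the earlier sketch; since the patent publishes the matching-degree and coordinate formulas only as images, the normalization by the measured fingerprint's size and the unweighted average of the three best positions are assumptions:

```cpp
#include <algorithm>
#include <string>
#include <utility>
#include <vector>

// Matching degree of measured fingerprint xf against database fingerprint lf:
// the count Num[xf(MAC) = lf(MAC)], normalized here by |xf| (an assumption).
double matchingDegree(const WifiFingerprint& xf, const WifiFingerprint& lf) {
    int num = 0;
    for (const std::string& mac : xf.macs)
        if (std::find(lf.macs.begin(), lf.macs.end(), mac) != lf.macs.end())
            ++num;
    return xf.macs.empty() ? 0.0 : static_cast<double>(num) / xf.macs.size();
}

// WiFi positioning coordinates (xw, yw) from the three best-matching
// fingerprints; averaging their positions is an assumption.
// Assumes the database holds at least three fingerprints.
std::pair<double, double> wifiLocate(const WifiFingerprint& xf,
                                     FingerprintDatabase db) {
    std::partial_sort(db.begin(), db.begin() + 3, db.end(),
        [&](const WifiFingerprint& a, const WifiFingerprint& b) {
            return matchingDegree(xf, a) > matchingDegree(xf, b);
        });
    return {(db[0].x + db[1].x + db[2].x) / 3.0,
            (db[0].y + db[1].y + db[2].y) / 3.0};
}
```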
In the above indoor positioning method, the method for calculating the SURF global feature descriptor in the fifth step is as follows:
(1) calculating the main direction of the feature point: centered on the feature point, in a circular neighborhood of radius 6s, the sum m_w of the Haar wavelet responses in the x and y directions of all points within a 60° sector is calculated, the response values being Gaussian-weighted when summed: m_w = (Σ_w dx, Σ_w dy), where dx and dy are the Haar wavelet responses in the x and y directions; a 60° sector sliding window rotates in steps of 5°, and the angle θ_w = arctan( Σ_w dy / Σ_w dx ) of the resultant vector is calculated; then the maximum of the resultant vector's modulus over the direction sectors is obtained:

max_w ||m_w|| = max_w sqrt( (Σ_w dx)^2 + (Σ_w dy)^2 )

the angle corresponding to the maximum of the vector modulus is the main direction of the feature point;
(2) calculating the SURF global feature descriptor: after the main direction of the feature point is obtained in step (1), a square frame of side length 20s is taken around the feature point and divided into 4 × 4 sub-regions; for each sub-region, the Haar wavelet responses are calculated at 5 × 5 regularly spaced sample points, and the feature descriptor v of each sub-region is v = (Σdx, Σdy, Σ|dx|, Σ|dy|); the descriptors of all 16 sub-regions are concatenated to form the SURF feature descriptor of the feature point, and the final SURF global feature descriptor is the 64-dimensional vector V = (v_1, v_2, ..., v_16), where v_i (i = 1, 2, ..., 16) is the feature descriptor of the i-th sub-region;
s is a scale factor.
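A sketch of this computation using OpenCV, which is an assumption — the patent states only that the descriptors are computed in a C++ environment; the 256 × 256 normalization size and the keypoint scale are also assumed:

```cpp
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/xfeatures2d.hpp>

// Global SURF descriptor of a training image: normalize, gray, then compute
// one 64-dimensional SURF descriptor for a single keypoint at the center,
// whose scale spans the whole image (its single neighborhood).
cv::Mat surfGlobalDescriptor(const cv::Mat& bgr) {
    cv::Mat norm, gray;
    cv::resize(bgr, norm, cv::Size(256, 256));     // normalization (size assumed)
    cv::cvtColor(norm, gray, cv::COLOR_BGR2GRAY);  // graying

    std::vector<cv::KeyPoint> center{
        cv::KeyPoint(gray.cols / 2.0f, gray.rows / 2.0f,
                     static_cast<float>(gray.cols))};  // size = image width

    cv::Mat descriptor;                            // 1 x 64 row on success
    cv::xfeatures2d::SURF::create()->compute(gray, center, descriptor);
    return descriptor;
}
```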
The invention has the following beneficial effects. Compared with the prior art, its prominent substantive features are:
(1) The indoor positioning method of the invention uses the WiFi fingerprint positioning principle and the visual positioning principle, combining WiFi-fingerprint-based positioning with mark-based visual positioning: first a WiFi positioning range and WiFi positioning coordinates are obtained with a WiFi position fingerprint positioning algorithm, and then visual positioning coordinates are obtained through feature matching and visual positioning of the tested image, creating a high-precision indoor positioning method based on the fusion of WiFi fingerprints and vision. In principle and in practice, WiFi fingerprint positioning and visual positioning are difficult to carry out simultaneously on the same platform: WiFi fingerprints must be collected with a smartphone and the fingerprint matching algorithm implemented in a C++ environment; in visual positioning, the training images and tested images must be photographed with a mobile phone, and the calculation of the SURF and ORB global feature descriptors and the feature matching algorithm implemented in a C++ environment. Through long-term and arduous research and development, the inventors arrived at the method of obtaining the WiFi fingerprint positioning result and the visual positioning result separately and combining them to correct the positioning result. Experimental results show that the positioning method combining the two technologies has high accuracy and smaller error.
(2) In the method, WiFi fingerprint positioning and visual positioning complement each other as follows: the result obtained by WiFi positioning alone has large error and low accuracy and can be corrected by visual positioning; conversely, when visual positioning suffers an occasional large error, the relatively accurate WiFi fingerprint positioning coordinates replace the visual positioning coordinates, reducing the positioning error and realizing high-precision indoor positioning.
Compared with the prior art, the invention has the following remarkable improvements:
(1) The invention uses only the MAC address when constructing the WiFi fingerprint, which simplifies the fingerprint database; as long as the wireless access points (APs) are not removed, the fingerprint is unchanged, and positioning accuracy is unrelated to changes in the indoor environment.
(2) The invention creatively combines WiFi fingerprint positioning and visual positioning, makes good use of advantages and avoids disadvantages, and can conveniently realize high-precision indoor positioning.
(3) The method effectively improves indoor positioning precision: the precision of the traditional indoor positioning method based on WiFi fingerprints is generally about 10 m, while with WiFi fingerprint positioning and visual positioning combined, the positioning precision reaches 6 m and the positioning accuracy reaches 80%.
(4) The method first uses WiFi fingerprint positioning to obtain a rough positioning range and an exact position coordinate, and then focuses on using visual positioning to correct the exact position coordinate obtained by WiFi fingerprint positioning, so the method can also be used in environments with few WiFi signals.
(5) The method of the invention does not need to arrange a fixed WiFi access point, has simple operation and low cost, does not need additional devices and does not need to change the indoor environment.
(6) The method uses indoor doorplates for visual positioning and is therefore suitable for any indoor place with doorplates, such as meeting rooms, indoor activity centers, and office buildings.
Drawings
The invention is further illustrated with reference to the following figures and examples.
FIG. 1 is a schematic block diagram of the process flow of the method of the present invention.
FIG. 2 is a schematic diagram of the Android App developed by the method of the present invention.
FIG. 3 is a schematic diagram of the positioning area and the distribution of sampling points in the method of the present invention.
In the figure, 1 to 60 are WiFi sampling points in the positioning area, and 1, 4, 7, 11, 14, 17, 20, 23, 38, 41, 45, 47, 51, 54, 57, and 59 indicated by a triangular arrow are mark sampling points of the training image, that is, to-be-positioned points.
Detailed Description
The embodiment shown in fig. 1 shows that the process of the present invention comprises the following steps: generating WiFi fingerprints → constructing a WiFi position fingerprint database → WiFi fingerprint positioning → generating a training image set → calculating SURF global feature descriptors → calculating ORB global feature descriptors → acquiring tested images → feature matching and visual positioning of the tested images → positioning combining WiFi fingerprint positioning and visual positioning.
The embodiment shown in fig. 2 shows that the Android App developed for the method of the present invention obtains the MAC address of each WiFi signal (MAC ADDRESS), generates a txt file, and stores it on the smartphone, thereby generating a WiFi fingerprint. The figure shows that the fingerprint at a WiFi sampling point in the WiFi position fingerprint database consists of a series of MAC addresses: the App receives 17 WiFi signals at this sampling point, giving 17 corresponding MAC addresses, and the fingerprint can be saved as a txt file by pressing the "save" button on the smartphone.
Fig. 3 shows an embodiment of the positioning area and the distribution of sampling points. The positioning area is a section of corridor 60 m long and 3 m wide; the WiFi sampling points are distributed in a mesh over the positioning area, 60 in total, spaced 2 m apart; there are 16 mark sampling points, namely points 1, 4, 7, 11, 14, 17, 20, 23, 38, 41, 45, 47, 51, 54, 57 and 59 indicated by triangular arrows in the figure, which are the mark sampling points of the training images and also the points to be positioned.
Examples
First, generating a WiFi fingerprint:
an Android App developed in Java obtains the MAC addresses of WiFi signals, generates a txt file, and stores it on the smartphone, thereby generating a WiFi fingerprint; as can be seen from the embodiment shown in fig. 2, 17 WiFi signals can be received by the App at the selected sampling point, with 17 corresponding MAC addresses, and this fingerprint can be saved as a txt file by pressing the "save" button on the smartphone;
secondly, constructing a WiFi position fingerprint database:
a corridor area in an indoor environment is selected as the positioning area, and 60 WiFi sampling points are selected in the corridor area, the coordinates of each sampling point being known; at each sampling point the installed App detects the detectable WiFi signals, and the MAC address sequence of the WiFi sampling point is obtained and stored as a fingerprint in the WiFi position fingerprint database; the stored MAC address sequences together form the WiFi position fingerprint database, in which each fingerprint corresponds to unique position information; with 60 WiFi sampling points set in the selected positioning area and the MAC addresses acquired at each sampling point taken as that point's fingerprint, traversing all sampling points yields 60 fingerprints, which are stored in the WiFi position fingerprint database, completing its construction; as in the embodiment shown in fig. 3, in this embodiment 1 the positioning area is a section of corridor 60 m long and 3 m wide, and the WiFi sampling points are distributed in a mesh over the positioning area, 60 in total, spaced 2 m apart;
thirdly, WiFi fingerprint positioning:
in the positioning stage, let the position coordinate of the point x to be positioned in the positioning area selected in the second step be (x, y), where N_x WiFi signals can be received; using the App of the first step for detection, the measured fingerprint of positioning point x is obtained:

xf = {MAC_1, MAC_2, ..., MAC_Nx}
matching the measured fingerprint xf against the fingerprints in the WiFi position fingerprint database of the second step with a fingerprint matching algorithm, according to the rule of the second step that each fingerprint corresponds to unique position information, yields the three fingerprints with the highest matching degree and thus the WiFi positioning range (x_0~x_1, y_0~y_1) and the WiFi positioning coordinates (x_w, y_w), realizing WiFi fingerprint positioning; here x_0 and x_1 are abscissae of the point to be positioned and x_0~x_1 is the abscissa range to be positioned, in m; y_0 and y_1 are ordinates of the point to be positioned and y_0~y_1 is the ordinate range to be positioned, in m;
the fingerprint matching algorithm compares the MAC address sequence of the measured fingerprint one by one with the MAC address sequences of all fingerprints in the WiFi position fingerprint database; a MAC address matches when the two addresses are identical, and the matching degree of the measured fingerprint with the l-th database fingerprint is determined by the number of successfully matched MAC addresses, Num[xf(MAC) = lf(MAC)], where xf(MAC) denotes the MAC address sequence of the measured fingerprint, lf(MAC) denotes the MAC address sequence of the l-th fingerprint in the WiFi position fingerprint database, and l = (1, 2, ..., m); finally, the positions corresponding to the three fingerprints ranked by matching degree from high to low are taken as the rough positioning range (x_0~x_1, y_0~y_1), and on this basis the WiFi positioning coordinates (x_w, y_w) are obtained from these three positions;
Fourthly, generating a training image set:
in the positioning area of the second step, the coordinates of all doorplates are known; the doorplates are the mark sampling points, i.e., the points to be positioned; the doorplates are photographed with a smartphone and all mark sampling points are traversed, generating a training image set; since the mark sampling points are part of the WiFi sampling points, their coordinates are known, and the coordinates of a training image are the coordinates of its mark sampling point; as in the embodiment shown in fig. 3, this embodiment 1 has 16 mark sampling points in total, namely points 1, 4, 7, 11, 14, 17, 20, 23, 38, 41, 45, 47, 51, 54, 57 and 59 indicated by triangular arrows in the figure, which are the mark sampling points of the training images and also the points to be positioned.
Fifthly, calculating SURF global feature descriptors:
first, the training images obtained in the fourth step are preprocessed, the preprocessing comprising normalization and graying; the SURF global feature descriptor is then calculated in two parts, the first part being feature point localization and the second part being feature descriptor calculation: the central point of the normalized image is taken as the feature point, the whole image is taken as the single neighborhood of that feature point, and the calculated feature descriptor serves as the SURF global feature descriptor of the whole image;
the method for calculating the SURF global feature descriptor is as follows:
(1) calculating the main direction of the feature point: centered on the feature point, in a circular neighborhood of radius 6s, the sum m_w of the Haar wavelet responses in the x and y directions of all points within a 60° sector is calculated, the response values being Gaussian-weighted when summed:

m_w = (Σ_w dx, Σ_w dy)

dx and dy are the Haar wavelet responses in the x and y directions; a 60° sector sliding window rotates in steps of 5°, and the angle θ_w of the resultant vector is calculated:

θ_w = arctan( Σ_w dy / Σ_w dx )

then the maximum of the resultant vector's modulus over the direction sectors is obtained:

max_w ||m_w|| = max_w sqrt( (Σ_w dx)^2 + (Σ_w dy)^2 )
the angle corresponding to the maximum of the vector modulus is the main direction of the feature point;
(2) calculating the SURF global feature descriptor: after the main direction of the feature point is obtained in step (1), a square frame of side length 20s is taken around the feature point and divided into 4 × 4 sub-regions; for each sub-region, the Haar wavelet responses are calculated at 5 × 5 regularly spaced sample points, and the feature descriptor v of each sub-region is v = (Σdx, Σdy, Σ|dx|, Σ|dy|); the descriptors of all 16 sub-regions are concatenated to form the SURF feature descriptor of the feature point, and the final SURF global feature descriptor is the 64-dimensional vector V = (v_1, v_2, ..., v_16), where v_i (i = 1, 2, ..., 16) is the feature descriptor of the i-th sub-region;
s is a scale factor;
sixthly, calculating an ORB global feature descriptor:
(6.1) determining the main direction of the characteristic points:
taking the central point of the normalized image of the fifth step as the feature point, the main direction of the feature point is calculated using image moments; the image moment of any feature point is

m_pq = Σ_(x,y) x^p y^q I(x, y)
where I(x, y) is the gray value at point (x, y); the centroid of the neighborhood image of the feature point is

C = ( m_10 / m_00 , m_01 / m_00 )
the angle between the centroid of the neighborhood image and the feature point is θ, and θ = arctan2(m_01, m_10) is the main direction of the feature point;
(6.2) generating BRIEF feature descriptors:
the BRIEF feature descriptor is generated as follows: p1 denotes a smoothed image neighborhood, and the binary test at any pair of location points x and y is the logical result of two intensity tests:

τ(p1; x, y) = 1 if p1(x) < p1(y), and 0 otherwise
where p1(x) denotes the intensity at point x on the image neighborhood p1 and p1(y) the intensity at point y; n binary tests yield an n-dimensional vector, the BRIEF feature descriptor

f_n(p1) = Σ_(1≤i≤n) 2^(i−1) τ(p1; x_i, y_i)
here n = 256 and a 256-bit binary string is obtained;
(6.3) computing ORB global feature descriptors:
in order to give the BRIEF feature descriptor rotation invariance, its direction is set according to the direction of the feature point determined in step (6.1); for the image pixel points (x_i, y_i) of the binary tests, the 2 × n matrix of the feature set obtained from the n-point binary test set is defined as

S = [ x_1 x_2 ... x_n ; y_1 y_2 ... y_n ]
the rotation (affine transformation) matrix R_θ = [ cos θ  −sin θ ; sin θ  cos θ ] is calculated from the main direction of the feature point determined in step (6.1) above, thereby obtaining S_θ = R_θ S, where S_θ is the binary test set of the BRIEF feature descriptor with rotation invariance; finally, the ORB global feature descriptor with rotation invariance is calculated: g_n(p1, θ) := f_n(p1) | (x_i, y_i) ∈ S_θ, where n = 256;
and seventhly, collecting a tested image:
in the positioning area of the second step, the doorplate closest to the point to be positioned is photographed there with a smartphone, acquiring the tested image;
eighth step, feature matching and visual positioning of the tested image:
The feature matching method for the tested image is as follows. ① Calculate the Euclidean distance between two SURF global feature descriptors: for two SURF global feature descriptors L1 and L2, the Euclidean distance between them is d(L1, L2) = sqrt( Σ_(i=1..64) (L1_i − L2_i)^2 ), where i indexes the i-th dimension of the 64-dimensional feature vector. ② Calculate the Hamming distance between two ORB global feature descriptors R1 and R2: the Hamming distance is obtained from T, the result of the bitwise exclusive-or of the two binary strings,

D(R1, R2) = Σ_(i=1..256) T_i, with T = R1 ⊕ R2
where i indexes the i-th bit of the 256-bit string; the smaller the distance between two feature descriptors, the higher the image matching degree;
The visual positioning method is as follows: first, the fifth and sixth steps are applied to calculate the SURF global feature descriptor and the ORB global feature descriptor of the tested image acquired in the seventh step; then, with the feature matching method for the tested image, the distances between these descriptors and the SURF and ORB global feature descriptors of the training images obtained in the fourth step are calculated; next, the KNN algorithm finds three neighbors in the SURF matching space, i.e., the three fourth-step training images with the smallest Euclidean distance to the SURF global feature descriptor of the tested image, and two neighbors in the ORB matching space, i.e., the two fourth-step training images with the smallest Hamming distance to the ORB global feature descriptor of the tested image; finally, the intersection of the three neighbors and the two neighbors is taken as the training image of the fourth-step training image set closest to the tested image, called the matching image; the position coordinate corresponding to the matching image is the visual positioning coordinate (x_v, y_v), thereby completing the visual positioning;
Ninth step, positioning combining WiFi fingerprint positioning and visual positioning:
after the WiFi positioning range is obtained in the third step, the training images of the fourth-step training image set whose coordinates lie within the WiFi positioning range form the matching image set; when the matching image of the eighth step lies in this matching image set, the visual positioning coordinates (x_v, y_v) obtained in the eighth step are taken as the final indoor position coordinates; otherwise, the WiFi positioning coordinates (x_w, y_w) obtained in the third step are taken as the final position coordinates, completing the positioning that combines WiFi fingerprint positioning and visual positioning.
In this embodiment, the first floor of an indoor pedestrian street served as the test site; all pictures were taken with a mobile phone at 4160 × 3120 pixels, and the positioning results are shown in Table 1.
TABLE 1. Positioning test results for the first floor of the indoor pedestrian street
[table image not reproduced]
Comparison between the true coordinates of the points to be positioned and the positioning coordinates obtained in this embodiment shows that the indoor positioning method of the invention, which combines WiFi-fingerprint-based positioning with mark-based visual positioning, achieves high-precision indoor positioning.
Example 2
This example is the same as Example 1, except that 30 WiFi sampling points are selected in the corridor positioning area: with 30 WiFi sampling points set in the selected positioning area and the MAC address acquired at each sampling point taken as that point's fingerprint, traversing all sampling points yields 30 fingerprints, which are stored in the WiFi position fingerprint database, completing its construction.
Example 3
This example is the same as Example 1, except that 45 WiFi sampling points are selected in the corridor positioning area: with 45 WiFi sampling points set in the selected positioning area and the MAC address acquired at each sampling point taken as that point's fingerprint, traversing all sampling points yields 45 fingerprints, which are stored in the WiFi position fingerprint database, completing its construction.

Claims (3)

1. An indoor positioning method, characterized in that: the method combines WiFi-fingerprint-based positioning with mark-based visual positioning; first, a WiFi positioning range and WiFi positioning coordinates are obtained using a WiFi position fingerprint positioning algorithm; then, visual positioning coordinates are obtained through feature matching and visual positioning of the tested image; finally, WiFi position fingerprint positioning and visual positioning are combined; the specific steps are as follows:
first, generating a WiFi fingerprint:
an Android App developed in Java obtains the MAC addresses of WiFi signals, generates a txt file, and stores it on the smartphone, thereby generating a WiFi fingerprint;
secondly, constructing a WiFi position fingerprint database:
a corridor area in an indoor environment is selected as the positioning area, and 30-60 WiFi sampling points are selected in the corridor area, the coordinates of each sampling point being known; at each sampling point the installed App detects the detectable WiFi signals, and the MAC address sequence of the WiFi sampling point is obtained and stored as a fingerprint in the WiFi position fingerprint database; the stored MAC address sequences together form the WiFi position fingerprint database, in which each fingerprint corresponds to unique position information; with 60 WiFi sampling points set in the selected positioning area and the MAC addresses acquired at each sampling point taken as that point's fingerprint, traversing all sampling points yields 60 fingerprints, which are stored in the WiFi position fingerprint database, completing its construction;
thirdly, WiFi fingerprint positioning:
in the positioning stage, let the position coordinate of the point x to be positioned in the positioning area selected in the second step be (x, y), where N_x WiFi signals can be received; using the App of the first step for detection, the measured fingerprint of positioning point x is obtained:

xf = {MAC_1, MAC_2, ..., MAC_Nx}
matching the measured fingerprint xf against the fingerprints in the WiFi position fingerprint database of the second step with a fingerprint matching algorithm, according to the rule of the second step that each fingerprint corresponds to unique position information, yields the three fingerprints with the highest matching degree and thus the WiFi positioning range (x_0~x_1, y_0~y_1) and the WiFi positioning coordinates (x_w, y_w), realizing WiFi fingerprint positioning; here x_0 and x_1 are abscissae of the point to be positioned and x_0~x_1 is the abscissa range to be positioned, in m; y_0 and y_1 are ordinates of the point to be positioned and y_0~y_1 is the ordinate range to be positioned, in m;
fourthly, generating a training image set:
in the positioning area of the second step, the coordinates of all doorplates are known; the doorplates are the mark sampling points, i.e., the points to be positioned; the doorplates are photographed with a smartphone and all mark sampling points are traversed, generating a training image set; since the mark sampling points are part of the WiFi sampling points, their coordinates are known, and the coordinates of a training image are the coordinates of its mark sampling point;
fifthly, calculating SURF global feature descriptors:
first, the training images obtained in the fourth step are preprocessed, the preprocessing comprising normalization and graying; the SURF global feature descriptor is then calculated in two parts, the first part being feature point localization and the second part being feature descriptor calculation: the central point of the normalized image is taken as the feature point, the whole image is taken as the single neighborhood of that feature point, and the calculated feature descriptor serves as the SURF global feature descriptor of the whole image;
sixthly, calculating an ORB global feature descriptor:
(6.1) determining the main direction of the characteristic points:
taking the central point of the normalized image as the feature point, the main direction of the feature point is calculated using image moments; the image moment of any feature point is

m_pq = Σ_(x,y) x^p y^q I(x, y)
where I(x, y) is the gray value at point (x, y); the centroid of the neighborhood image of the feature point is

C = ( m_10 / m_00 , m_01 / m_00 )
the angle between the centroid of the neighborhood image and the feature point is θ, and θ = arctan2(m_01, m_10) is the main direction of the feature point;
(6.2) generating BRIEF feature descriptors:
the BRIEF feature descriptor is generated as follows: p1 denotes a smoothed image neighborhood, and the binary test at any pair of location points x and y is the logical result of two intensity tests: τ(p1; x, y) = 1 if p1(x) < p1(y), and 0 otherwise, where p1(x) denotes the intensity at point x on the image neighborhood p1 and p1(y) the intensity at point y; n binary tests yield an n-dimensional vector, the BRIEF feature descriptor f_n(p1) = Σ_(1≤i≤n) 2^(i−1) τ(p1; x_i, y_i); here n = 256 and a 256-bit binary string is obtained;
(6.3) computing ORB global feature descriptors:
in order to give the BRIEF feature descriptor rotation invariance, its direction is set according to the direction of the feature point determined in step (6.1); for the image pixel points (x_i, y_i) of the binary tests, the 2 × n matrix of the feature set obtained from the n-point binary test set is defined as

S = [ x_1 x_2 ... x_n ; y_1 y_2 ... y_n ]
the rotation (affine transformation) matrix is calculated from the main direction θ of the feature point determined in step (6.1) above:

R_θ = [ cos θ  −sin θ ; sin θ  cos θ ]
thereby obtaining S_θ = R_θ S, where S_θ is the binary test set of the BRIEF feature descriptor with rotation invariance; finally, the ORB global feature descriptor with rotation invariance is calculated: g_n(p1, θ) := f_n(p1) | (x_i, y_i) ∈ S_θ, where n = 256;
and seventhly, collecting a tested image:
in the positioning area of the second step, the doorplate closest to the point to be positioned is photographed there with a smartphone, acquiring the tested image;
eighth step, feature matching and visual positioning of the tested image:
The feature matching method for the tested image is as follows. ① Calculate the Euclidean distance between two SURF global feature descriptors: for two SURF global feature descriptors L1 and L2, the Euclidean distance between them is

d(L1, L2) = sqrt( Σ_(i=1..64) (L1_i − L2_i)^2 )
where i indexes the i-th dimension of the 64-dimensional feature vector. ② Calculate the Hamming distance between two ORB global feature descriptors R1 and R2: the Hamming distance is obtained from T, the result of the bitwise exclusive-or of the two binary strings,

D(R1, R2) = Σ_(i=1..256) T_i, with T = R1 ⊕ R2
where i indexes the i-th bit of the 256-bit string; the smaller the distance between two feature descriptors, the higher the image matching degree;
The visual positioning method is as follows: first, the fifth and sixth steps are applied to calculate the SURF global feature descriptor and the ORB global feature descriptor of the tested image acquired in the seventh step; then, with the feature matching method for the tested image, the distances between these descriptors and the SURF and ORB global feature descriptors of the training images obtained in the fourth step are calculated; next, the KNN algorithm finds three neighbors in the SURF matching space, i.e., the three fourth-step training images with the smallest Euclidean distance to the SURF global feature descriptor of the tested image, and two neighbors in the ORB matching space, i.e., the two fourth-step training images with the smallest Hamming distance to the ORB global feature descriptor of the tested image; finally, the intersection of the three neighbors and the two neighbors is taken as the training image of the fourth-step training image set closest to the tested image, called the matching image; the position coordinate corresponding to the matching image is the visual positioning coordinate (x_v, y_v), thereby completing the visual positioning;
Ninth step, positioning combining WiFi fingerprint positioning and visual positioning:
after the WiFi positioning range is obtained in the third step, the training images of the fourth-step training image set whose coordinates lie within the WiFi positioning range form the matching image set; when the matching image of the eighth step lies in this matching image set, the visual positioning coordinates (x_v, y_v) obtained in the eighth step are taken as the final indoor position coordinates; otherwise, the WiFi positioning coordinates (x_w, y_w) obtained in the third step are taken as the final position coordinates, completing the positioning that combines WiFi fingerprint positioning and visual positioning.
2. The indoor positioning method according to claim 1, wherein: the fingerprint matching algorithm in the third step compares the MAC address sequence of the measured fingerprint one by one with the MAC address sequences of all fingerprints in the WiFi position fingerprint database; a MAC address matches when the two addresses are identical, and the matching degree of the measured fingerprint with the l-th database fingerprint is determined by the number of successfully matched MAC addresses, Num[xf(MAC) = lf(MAC)], where xf(MAC) denotes the MAC address sequence of the measured fingerprint, lf(MAC) denotes the MAC address sequence of the l-th fingerprint in the WiFi position fingerprint database, and l = (1, 2, ..., m); finally, the positions corresponding to the three fingerprints ranked by matching degree from high to low are taken as the rough positioning range (x_0~x_1, y_0~y_1), and on this basis the WiFi positioning coordinates (x_w, y_w) are obtained from these three positions.
3. The indoor positioning method according to claim 1, wherein: the method of calculating the SURF global feature descriptor in the fifth step is as follows:
(1) calculating the main direction of the feature point: centered on the feature point, in a circular neighborhood of radius 6s, the sum m_w of the Haar wavelet responses in the x and y directions of all points within a 60° sector is calculated, the response values being Gaussian-weighted when summed:

m_w = (Σ_w dx, Σ_w dy)

dx and dy are the Haar wavelet responses in the x and y directions; a 60° sector sliding window rotates in steps of 5°, and the angle θ_w of the resultant vector is calculated:

θ_w = arctan( Σ_w dy / Σ_w dx )

then the maximum of the resultant vector's modulus over the direction sectors is obtained:

max_w ||m_w|| = max_w sqrt( (Σ_w dx)^2 + (Σ_w dy)^2 )
the angle corresponding to the maximum of the vector modulus is the main direction of the feature point;
(2) calculating the SURF global feature descriptor: after the main direction of the feature point is obtained in step (1), a square frame of side length 20s is taken around the feature point and divided into 4 × 4 sub-regions; for each sub-region, the Haar wavelet responses are calculated at 5 × 5 regularly spaced sample points, and the feature descriptor v of each sub-region is v = (Σdx, Σdy, Σ|dx|, Σ|dy|); the descriptors of all 16 sub-regions are concatenated to form the SURF feature descriptor of the feature point, and the final SURF global feature descriptor is the 64-dimensional vector V = (v_1, v_2, ..., v_16), where v_i (i = 1, 2, ..., 16) is the feature descriptor of the i-th sub-region;
s is a scale factor.
CN201710152882.7A 2017-03-15 2017-03-15 Indoor positioning method Expired - Fee Related CN106793086B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710152882.7A CN106793086B (en) 2017-03-15 2017-03-15 Indoor positioning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710152882.7A CN106793086B (en) 2017-03-15 2017-03-15 Indoor positioning method

Publications (2)

Publication Number Publication Date
CN106793086A CN106793086A (en) 2017-05-31
CN106793086B true CN106793086B (en) 2020-01-14

Family

ID=58961001

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710152882.7A Expired - Fee Related CN106793086B (en) 2017-03-15 2017-03-15 Indoor positioning method

Country Status (1)

Country Link
CN (1) CN106793086B (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107036602B (en) * 2017-06-15 2020-04-03 北京大学 Indoor autonomous navigation system and method of hybrid unmanned aerial vehicle based on environment information code
CN107886752B (en) * 2017-11-08 2019-11-26 武汉理工大学 A kind of high-precision vehicle positioning system and method based on transformation lane line
WO2019104665A1 (en) * 2017-11-30 2019-06-06 深圳市沃特沃德股份有限公司 Robot cleaner and repositioning method therefor
CN110360999B (en) * 2018-03-26 2021-08-27 京东方科技集团股份有限公司 Indoor positioning method, indoor positioning system, and computer readable medium
CN108692720B (en) * 2018-04-09 2021-01-22 京东方科技集团股份有限公司 Positioning method, positioning server and positioning system
CN109540144A (en) * 2018-11-29 2019-03-29 北京久其软件股份有限公司 A kind of indoor orientation method and device
CN109612455A (en) * 2018-12-04 2019-04-12 天津职业技术师范大学 A kind of indoor orientation method and system
US10660062B1 (en) 2019-03-14 2020-05-19 International Business Machines Corporation Indoor positioning
CN111225440A (en) * 2019-11-22 2020-06-02 三一重工股份有限公司 Cooperative positioning method and device and electronic equipment
CN110940316B (en) * 2019-12-09 2022-03-18 国网智能科技股份有限公司 Navigation method and system for fire-fighting robot of transformer substation in complex environment
CN111076733B (en) * 2019-12-10 2022-06-14 亿嘉和科技股份有限公司 Robot indoor map building method and system based on vision and laser slam
CN111132013B (en) * 2019-12-30 2020-12-11 广东博智林机器人有限公司 Indoor positioning method and device, storage medium and computer equipment
CN111323024B (en) * 2020-02-10 2022-11-15 Oppo广东移动通信有限公司 Positioning method and device, equipment and storage medium
CN111511017B (en) * 2020-04-09 2022-08-16 Oppo广东移动通信有限公司 Positioning method and device, equipment and storage medium
CN111457925B (en) * 2020-04-15 2022-03-22 湖南赛吉智慧城市建设管理有限公司 Community path navigation method and device, computer equipment and storage medium
CN111521971B (en) * 2020-05-13 2021-04-09 北京洛必德科技有限公司 Robot positioning method and system
CN111664848B (en) * 2020-06-01 2022-02-11 上海大学 Multi-mode indoor positioning navigation method and system
CN111928852B (en) * 2020-07-23 2022-08-23 武汉理工大学 Indoor robot positioning method and system based on LED position coding
CN112165684B (en) * 2020-09-28 2021-09-14 上海大学 High-precision indoor positioning method based on joint vision and wireless signal characteristics
CN112560818B (en) * 2021-02-22 2021-07-27 深圳阜时科技有限公司 Fingerprint identification method applied to narrow-strip fingerprint sensor and storage medium
CN113316080B (en) * 2021-04-19 2023-04-07 北京工业大学 Indoor positioning method based on Wi-Fi and image fusion fingerprint
CN113382376B (en) * 2021-05-08 2022-05-10 湖南大学 Indoor positioning method based on WIFI and visual integration
US11698467B2 (en) * 2021-08-30 2023-07-11 Nanning Fulian Fugui Precision Industrial Co., Ltd. Indoor positioning method based on image visual features and electronic device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104484887A (en) * 2015-01-19 2015-04-01 河北工业大学 External parameter calibration method used when camera and two-dimensional laser range finder are used in combined mode
CN105137389A (en) * 2015-09-02 2015-12-09 安宁 Video-assisted radiofrequency positioning method and apparatus
CN105718549A (en) * 2016-01-16 2016-06-29 深圳先进技术研究院 Airship based three-dimensional WiFi (Wireless Fidelity) fingerprint drawing system and method
CN105828296A (en) * 2016-05-25 2016-08-03 武汉域讯科技有限公司 Indoor positioning method based on convergence of image matching and WI-FI

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8996302B2 (en) * 2012-11-30 2015-03-31 Apple Inc. Reduction of the impact of hard limit constraints in state space models

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104484887A (en) * 2015-01-19 2015-04-01 河北工业大学 External parameter calibration method used when camera and two-dimensional laser range finder are used in combined mode
CN105137389A (en) * 2015-09-02 2015-12-09 安宁 Video-assisted radiofrequency positioning method and apparatus
CN105718549A (en) * 2016-01-16 2016-06-29 深圳先进技术研究院 Airship based three-dimensional WiFi (Wireless Fidelity) fingerprint drawing system and method
CN105828296A (en) * 2016-05-25 2016-08-03 武汉域讯科技有限公司 Indoor positioning method based on convergence of image matching and WI-FI

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Mei Zhang; Wenbo Shen; Jinhui Zhu. WIFI and magnetic fingerprint positioning algorithm based on KDA-KNN. IEEE. 2016. *
Fast traffic sign recognition algorithm based on ORB global features and nearest neighbor; Hu Yuezhi, Li Na, Hu Zhaozheng, Li Yicheng; Journal of Transport Information and Safety; 2016-01-31; full text *

Also Published As

Publication number Publication date
CN106793086A (en) 2017-05-31

Similar Documents

Publication Publication Date Title
CN106793086B (en) Indoor positioning method
CN110856112B (en) Crowd-sourcing perception multi-source information fusion indoor positioning method and system
Huang et al. WiFi and vision-integrated fingerprint for smartphone-based self-localization in public indoor scenes
CN106767810B (en) Indoor positioning method and system based on WIFI and visual information of mobile terminal
CN105813194B (en) Indoor orientation method based on fingerprint database secondary correction
KR102116824B1 (en) Positioning system based on deep learnin and construction method thereof
CN103905992B (en) Indoor positioning method based on wireless sensor networks of fingerprint data
CN105792353B (en) Crowd sensing type WiFi signal fingerprint assisted image matching indoor positioning method
CN112325883B (en) Indoor positioning method for mobile robot with WiFi and visual multi-source integration
CN105717483B (en) A kind of location determining method and device based on multi-source positioning method
CN104866873B (en) One kind is based on the matched indoor orientation method of handset image
CN104394588B (en) Indoor orientation method based on Wi Fi fingerprints and Multidimensional Scaling
CN110360999A (en) Indoor orientation method, indoor locating system and computer-readable medium
CN110536257B (en) Indoor positioning method based on depth adaptive network
Du et al. CRCLoc: A crowdsourcing-based radio map construction method for WiFi fingerprinting localization
CN105044659B (en) Indoor positioning device and method based on ambient light spectrum fingerprint
CN111901749A (en) High-precision three-dimensional indoor positioning method based on multi-source fusion
Li et al. Location estimation in large indoor multi-floor buildings using hybrid networks
KR20180055158A (en) Method and server for Correcting GPS Position in downtown environment using street view service
CN106197418B (en) A kind of indoor orientation method merged based on the fingerprint technique of sliding window with sensor
CN103196440B (en) M sequence discrete-type artificial signpost arrangement method and related mobile robot positioning method
Zhang et al. Dual-band wi-fi based indoor localization via stacked denosing autoencoder
CN109116298A (en) A kind of localization method, storage medium and positioning system
CN108512888A (en) A kind of information labeling method, cloud server, system, electronic equipment and computer program product
CN109640253B (en) Mobile robot positioning method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200114