CN112887911A - Indoor positioning method based on WIFI and vision fusion - Google Patents

Indoor positioning method based on WIFI and vision fusion Download PDF

Info

Publication number
CN112887911A
CN112887911A CN202110066411.0A CN202110066411A CN112887911A CN 112887911 A CN112887911 A CN 112887911A CN 202110066411 A CN202110066411 A CN 202110066411A CN 112887911 A CN112887911 A CN 112887911A
Authority
CN
China
Prior art keywords
wifi
value
fingerprint
point
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110066411.0A
Other languages
Chinese (zh)
Other versions
CN112887911B (en
Inventor
孙炜
唐晨俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan University
Original Assignee
Hunan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan University filed Critical Hunan University
Priority to CN202110066411.0A priority Critical patent/CN112887911B/en
Publication of CN112887911A publication Critical patent/CN112887911A/en
Application granted granted Critical
Publication of CN112887911B publication Critical patent/CN112887911B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02Services making use of location information
    • H04W4/021Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations
    • G01C21/206Instruments for performing navigational calculations specially adapted for indoor navigation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30Services specially adapted for particular environments, situations or purposes
    • H04W4/33Services specially adapted for particular environments, situations or purposes for indoor environments, e.g. buildings
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W64/00Locating users or terminals or network equipment for network management purposes, e.g. mobility management
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W84/00Network topologies
    • H04W84/02Hierarchically pre-organised networks, e.g. paging networks, cellular networks, WLAN [Wireless Local Area Network] or WLL [Wireless Local Loop]
    • H04W84/10Small scale networks; Flat hierarchical networks
    • H04W84/12WLAN [Wireless Local Area Networks]

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides an indoor positioning method based on WIFI and vision fusion, which comprises the following steps: collecting WIFI data of the coarse fingerprint points; calculating the average absolute error of the WIFI data of the rough fingerprint points, establishing an extended error matrix by adopting an interpolation method, calculating to obtain a gradient error matrix, establishing self-adaptive fingerprint grid points according to the gradient error matrix, and acquiring the WIFI data and the image data of the self-adaptive fingerprint grid points to establish an off-line database; placing the mobile robot at any position in an indoor positioning area, collecting WIFI data and image data of a current position point, and matching with an offline database by adopting a WIFI positioning algorithm and an image matching positioning algorithm to obtain the estimation probability of the position of the mobile robot; and screening out a plurality of estimation positions with high possibility by a slope method, establishing an unsupervised fusion system, obtaining the weight value of each estimation position, and realizing the final positioning of the mobile robot position. The positioning accuracy and the positioning stability of the positioning method provided by the invention are obviously improved.

Description

Indoor positioning method based on WIFI and vision fusion
Technical Field
The invention relates to the technical field of indoor positioning, in particular to an indoor positioning method based on WIFI and vision fusion.
Background
In recent years, indoor positioning technology of mobile robots has played a vital role in various fields, such as smart cities, medical tracking, security monitoring, and the like. The working efficiency and the service quality in such fields are closely related to the stability and the accuracy of indoor positioning, and in order to improve the efficiency and the quality, the indoor positioning technology has been rapidly developed, resulting in that the number of sensors based on indoor positioning is infinite, such as bluetooth, geomagnetism, cameras, and the like. The indoor positioning technology based on the WIFI is one of the most attractive technologies recently, but because the indoor environment is complex and changeable, the WIFI technology often causes signal changes due to interference of multiple factors such as temperature, obstacles, artificial movement and the like in the environment, and finally, the positioning stability and accuracy need to be improved.
Therefore, it is necessary to provide an indoor positioning method based on WIFI and visual fusion to solve the above problems.
Disclosure of Invention
Aiming at the technical problems to be solved, the invention provides the WIFI and vision integrated indoor positioning method which is strong in stability and high in positioning precision.
The invention provides an indoor positioning method based on WIFI and vision fusion, which comprises the following steps:
s1: establishing coarse fingerprint points in an indoor positioning area, and collecting WIFI data of the coarse fingerprint points;
s2: calculating the average absolute error of the WIFI data acquired by each coarse fingerprint point, establishing an extended error matrix by adopting an interpolation method, calculating according to the extended error matrix to obtain a gradient error matrix, establishing an adaptive fingerprint grid point of an indoor positioning area according to the gradient error matrix, and acquiring the WIFI data and the image data of the adaptive fingerprint grid point to establish an offline database;
s3: placing the mobile robot at any position in an indoor positioning area, collecting WIFI data and image data of a current position point, and matching the WIFI data and the image data with the offline database by adopting a WIFI positioning algorithm and an image matching positioning algorithm to obtain an estimated probability of the position of the mobile robot;
s4: and screening out a plurality of estimation positions with high possibility by a slope method, establishing an unsupervised fusion system, obtaining the weight value of each estimation position, and realizing the final positioning of the mobile robot position.
Preferably, the step S1 includes the following steps:
s11: setting N grid fingerprints of 4M by 4M in an indoor positioning area as coarse fingerprint points, collecting WIFI signals sent by M WIFI access points at the positions of the coarse fingerprint points, collecting L sample data at each coarse fingerprint point, and representing initial WIFI data of each coarse fingerprint point as Loci={RSSIi,xi,yi1,2, N, wherein,
Figure BDA0002904176590000022
the strength vectors of the M WIFI access point signals collected on the ith coarse fingerprint point are represented,
Figure BDA0002904176590000023
the signal intensity x of the Mth WIFI access point is measured on the ith coarse fingerprint pointiAnd yiCoordinates of the ith coarse fingerprint point on an x axis and a y axis respectively;
s12: preprocessing the initial WIFI data of each coarse fingerprint point, eliminating missing values, screening and discarding bad values by adopting a Dixon criterion, and obtaining the WIFI data of each coarse fingerprint point after preprocessing.
Preferably, the step S12 includes the following steps:
s121, selecting an obvious level alpha through L sample data, and searching a Dixon table to obtain a critical value D (alpha, L);
s122: calculating sample data X received by each WIFI access point of each coarse fingerprint pointi1,2, L, the average value X of LavgAnd absolute error e, and sorting the absolute error e, wherein the average value XavgAnd the absolute error e is calculated as follows:
Figure BDA0002904176590000021
ei=Xi-Xavg
e1≤e2≤...≤eL
s123: constructing statistics, if the value to be measured is the maximum value
Figure BDA0002904176590000031
If the value to be measured is the minimum value
Figure BDA0002904176590000032
S124: when r is the maximum value for the value to be measuredmaxIf the value is more than D (alpha, L), the value to be measured is an abnormal value and should be removed, otherwise, the value is reserved; when r is the minimum value for the value to be measuredminIf the measured value is more than D (alpha, L), the measured value is an abnormal value and should be removed, otherwise, the measured value is reserved.
Preferably, the step S2 specifically includes:
s21: after the missing value and the bad value are removed, the average absolute error of the sample data of each coarse fingerprint point is calculated
Figure BDA0002904176590000033
Obtaining an error matrix E, expanding the error matrix E by adopting an interpolation method, and fitting an expanded error matrix EE of the 1 m-1 m grid fingerprint points;
s22: the error gradient matrix EF is calculated by extending the average absolute error in the error matrix EE, the calculation formula is as follows:
Figure BDA0002904176590000034
s23: setting two thresholds delta 1 and delta 2, wherein delta 1 is smaller than delta 2, dividing indoor positioning areas into three types, and adopting 4 m-4 m grids, wherein the area is smaller than the threshold delta 1; in the region between the threshold δ 1 and the threshold δ 2, a grid of 2m × 2m is used; areas larger than the threshold value delta 2 adopt 1 m-1 m grids;
s24: and acquiring WIFI data and image data of the self-adaptive grid fingerprint points, and establishing an offline database.
Preferably, the step S3 specifically includes:
the mobile robot is placed at any position in an indoor positioning area, WIFI data and image data of a current position point are collected, the WIFI data of the current position point are matched with WIFI in an off-line database through a WIFI positioning algorithm, and therefore the estimated probability P of the WIFI position is obtainedi 1,i=1,2,...,Nad,NadIs the total number of the self-adaptive fingerprint grid points and satisfies
Figure BDA0002904176590000035
Matching the image data of the current position point with the image data in the off-line database by adopting an image characteristic matching algorithm to obtain the estimated probability P of the image positioni 2,i=1,2,...,NadAnd satisfy
Figure BDA0002904176590000036
Preferably, the WIFI positioning algorithm comprises a K neighbor algorithm, a random forest algorithm and a support vector machine; the image feature matching algorithm comprises a feature extraction algorithm, a scale invariant feature transformation algorithm and an accelerated robust feature algorithm.
Preferably, the step S4 includes the following steps:
s41: self-adaptive screening is carried out on the estimated probability of the WIFI position and the image position obtained by calculation by adopting a slope method, and the point with the maximum slope is found out, so that the estimated probability P of the screened WIFI position and the estimated probability P of the image position are respectively obtainedi 1,
Figure BDA0002904176590000041
And Pi 2,
Figure BDA0002904176590000042
Figure BDA0002904176590000043
And
Figure BDA0002904176590000044
the number is the number after screening by a slope method;
s42: establishing unsupervised fusion positioning system, objective function
Figure BDA0002904176590000045
At the same time need to satisfy
Figure BDA0002904176590000046
And
Figure BDA0002904176590000047
wherein loci,i=1,2,...,NLIs the estimated probability P of the filtered WIFI position and image positioni 1And Pi 2The position of the corresponding self-adaptive fingerprint grid point; w is ai,i=1,2,...,NLThe normalized weighted values are respectively adapted to the positions of the fingerprint grid points, the Loc represents the estimated position, and the Lagrange multiplier method is adopted to obtain wiAnd Loc's update formula:
Figure BDA0002904176590000048
Figure BDA0002904176590000049
s43: setting up initial values
Figure BDA00029041765900000410
Iteration is carried out, the weight value and the estimated position are continuously updated, and the threshold lambda is set at the same time until the threshold lambda is met
Figure BDA00029041765900000411
And then, finishing iteration and realizing the final positioning result of the position of the mobile robot.
Compared with the related technology, the indoor positioning method based on WIFI and visual fusion provided by the invention establishes self-adaptive grid fingerprint points by using the error gradient matrix of the WIFI data, effectively reduces the number of fingerprint point acquisition, and simultaneously establishes an unsupervised fusion positioning system by adopting the WIFI and visual image fusion positioning, thereby realizing the final positioning result of the position of the mobile robot. The method effectively fuses WIFI and visual images, and obviously improves the positioning accuracy and the positioning stability.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention provides an indoor positioning method based on WIFI and vision fusion, which comprises the following steps:
s1: and establishing coarse fingerprint points in an indoor positioning area, and acquiring WIFI data of the coarse fingerprint points.
Establishing coarse fingerprint points in an indoor positioning area, and acquiring WIFI data of the coarse fingerprint points, wherein the step S1 specifically comprises the following steps:
s11: setting N grid fingerprints of 4M by 4M in an indoor positioning area as coarse fingerprint points, collecting WIFI signals sent by M WIFI access points at the positions of the coarse fingerprint points, collecting L sample data at each coarse fingerprint point, and representing initial WIFI data of each coarse fingerprint point as Loci={RSSIi,xi,yi1,2, N, wherein,
Figure BDA0002904176590000051
the strength vectors of the M WIFI access point signals collected on the ith coarse fingerprint point are represented,
Figure BDA0002904176590000052
the signal intensity x of the Mth WIFI access point is measured on the ith coarse fingerprint pointiAnd yiThe seating of the ith coarse fingerprint point on the x-axis and the y-axis, respectivelyMarking;
s12: preprocessing the initial WIFI data of each coarse fingerprint point, eliminating missing values, screening and discarding bad values by adopting a Dixon criterion, and obtaining the WIFI data of each coarse fingerprint point after preprocessing.
The step S12 includes the following steps:
s121, selecting an obvious level alpha through L sample data, and searching a Dixon table to obtain a critical value D (alpha, L);
s122: calculating sample data X received by each WIFI access point of each coarse fingerprint pointi1,2, L, the average value X of LavgAnd absolute error e, and sorting the absolute error e, wherein the average value XavgAnd the absolute error e is calculated as follows:
Figure BDA0002904176590000053
ei=Xi-Xavg
e1≤e2≤...≤eL
s123: constructing statistics, if the value to be measured is the maximum value
Figure BDA0002904176590000061
If the value to be measured is the minimum value
Figure BDA0002904176590000062
S124: when r is the maximum value for the value to be measuredmaxIf the value is more than D (alpha, L), the value to be measured is an abnormal value and should be removed, otherwise, the value is reserved; when r is the minimum value for the value to be measuredminIf the measured value is more than D (alpha, L), the measured value is an abnormal value and should be removed, otherwise, the measured value is reserved.
S2: calculating the average absolute error of the WIFI data acquired by each coarse fingerprint point, establishing an extended error matrix by adopting an interpolation method, calculating according to the extended error matrix to obtain a gradient error matrix, establishing an adaptive fingerprint grid point of an indoor positioning area according to the gradient error matrix, and acquiring the WIFI data and the image data of the adaptive fingerprint grid point to establish an offline database.
In the embodiment, 4m × 4m coarse positioning grid points are set in an indoor positioning environment, forty seconds are spent on acquiring WIFI signals at each grid point, a gradient error matrix is obtained by adopting an interpolation method and related calculation according to acquired data, and adaptive grid points are established according to gradient errors, so that the acquisition of fingerprint points is reduced to the greatest extent.
The step S2 specifically includes:
s21: after the missing value and the bad value are removed, the average absolute error of the sample data of each coarse fingerprint point is calculated
Figure BDA0002904176590000063
Obtaining an error matrix E, expanding the error matrix E by adopting an interpolation method, and fitting an expanded error matrix EE of the 1 m-1 m grid fingerprint points;
s22: the error gradient matrix EF is calculated by extending the average absolute error in the error matrix EE, the calculation formula is as follows:
Figure BDA0002904176590000064
s23: setting two thresholds delta 1 and delta 2, wherein delta 1 is smaller than delta 2, dividing indoor positioning areas into three types, and adopting 4 m-4 m grids, wherein the area is smaller than the threshold delta 1; in the region between the threshold δ 1 and the threshold δ 2, a grid of 2m × 2m is used; areas larger than the threshold value delta 2 adopt 1 m-1 m grids;
a region smaller than the threshold value delta 1 is high in positioning stability and high in precision, so that a 4 m-4 m grid is adopted; in the region between the threshold value δ 1 and the threshold value δ 2, the stability is better in all aspects, so that a grid of 2m × 2m is adopted; the area larger than the threshold value delta 2 has low positioning precision and poor stability, so a 1 m-1 m grid is adopted.
S24: and acquiring WIFI data and image data of the self-adaptive grid fingerprint points, and establishing an offline database.
S3: and placing the mobile robot at any position in a positioning area, acquiring WIFI data and image data of a current position point, and matching with the offline database by adopting a WIFI positioning algorithm and an image matching positioning algorithm to obtain the estimation probability of the position of the mobile robot.
WIFI signal and image signal gather simultaneously, because WIFI data acquisition time is longer, consequently carry out 360 degrees rotation collection in fingerprint point position when gathering image signal to acquire more environmental information.
The step S3 specifically includes:
place mobile robot in the optional position of location intra-area, gather the WIFI data and the image data of current position point, in this application embodiment, WIFI signal and image signal gather simultaneously, because WIFI data acquisition time is longer, carry out 360 degrees rotation collection at the fingerprint point position when consequently gathering image signal to acquire more environmental information.
Matching the WIFI data of the current position point with a WIFI offline database through a WIFI positioning algorithm so as to obtain the estimated probability P of the WIFI positioni 1,i=1,2,...,Nad,NadIs the total number of the self-adaptive fingerprint grid points and satisfies
Figure BDA0002904176590000071
Matching the image data with an image off-line database by adopting an image feature matching algorithm to obtain the estimation probability P of the image positioni 2,i=1,2,...,NadAnd satisfy
Figure BDA0002904176590000072
The WIFI positioning algorithm comprises a K nearest neighbor algorithm, a random forest algorithm and a support vector machine; the image feature matching algorithm comprises a feature extraction algorithm (ORB), a scale invariant feature transform algorithm (SIFT) and an accelerated robust feature algorithm (SURF).
S4: and screening out a plurality of estimation positions with high possibility by a slope method, establishing an unsupervised fusion system, obtaining the weight value of each estimation position, and realizing the final positioning of the mobile robot position.
And forming self-adaption by adopting a slope method to obtain a plurality of estimated positions, and performing unsupervised model training through the screened plurality of estimated position points to obtain corresponding self-adaption weight values so as to realize final positioning.
The step S4 includes the following steps:
s41: self-adaptive screening is carried out on the estimated probability of the WIFI position and the image position obtained by calculation by adopting a slope method, and the point with the maximum slope is found out, so that the estimated probability P of the screened WIFI position and the estimated probability P of the image position are respectively obtainedi 1,
Figure BDA0002904176590000081
And Pi 2,
Figure BDA0002904176590000082
Figure BDA0002904176590000083
And
Figure BDA0002904176590000084
the number is the number after screening by a slope method;
firstly, calculating the slope of the estimated probability of the WIFI position and the image position, wherein the formula is
Figure BDA0002904176590000085
And
Figure BDA0002904176590000086
i=1,2,...,Nad-1, then finding the position with the maximum value of each slope, and obtaining the estimated probability P of each position after screeningi 1,
Figure BDA0002904176590000087
And Pi 2,
Figure BDA0002904176590000088
S42: building (2)Vertical unsupervised fusion positioning system, objective function
Figure BDA0002904176590000089
At the same time need to satisfy
Figure BDA00029041765900000810
And
Figure BDA00029041765900000811
wherein loci,i=1,2,...,NLIs the estimated probability P of the filtered WIFI position and image positioni 1And Pi 2The position of the corresponding self-adaptive fingerprint grid point; w is ai,i=1,2,...,NLThe normalized weighted values are respectively adapted to the positions of the fingerprint grid points, the Loc represents the estimated position, and the Lagrange multiplier method is adopted to obtain wiAnd Loc's update formula:
Figure BDA00029041765900000812
Figure BDA00029041765900000813
s43: setting up initial values
Figure BDA00029041765900000814
Iteration is carried out, the weight value and the estimated position are continuously updated, and the threshold lambda is set at the same time until the threshold lambda is met
Figure BDA00029041765900000815
And then, finishing iteration and realizing the final positioning result of the position of the mobile robot.
Compared with the related technology, the method and the device have the advantages that the error gradient matrix of the WIFI data is used for establishing the self-adaptive grid fingerprint points, the number of the collected fingerprint points is effectively reduced, meanwhile, the WIFI and the visual image are used for fusion positioning, and an unsupervised fusion positioning system is established, so that the final positioning result of the position of the mobile robot is realized. The method effectively fuses WIFI and visual images, and obviously improves the positioning accuracy and the positioning stability.
While the foregoing is directed to embodiments of the present invention, it will be understood by those skilled in the art that various changes may be made without departing from the spirit and scope of the invention.

Claims (7)

1. An indoor positioning method based on WIFI and visual fusion is characterized by comprising the following steps:
s1: establishing coarse fingerprint points in an indoor positioning area, and collecting WIFI data of the coarse fingerprint points;
s2: calculating the average absolute error of the WIFI data acquired by each coarse fingerprint point, establishing an extended error matrix by adopting an interpolation method, calculating according to the extended error matrix to obtain a gradient error matrix, establishing an adaptive fingerprint grid point of an indoor positioning area according to the gradient error matrix, and acquiring the WIFI data and the image data of the adaptive fingerprint grid point to establish an offline database;
s3: placing the mobile robot at any position in an indoor positioning area, collecting WIFI data and image data of a current position point, and matching the WIFI data and the image data with the offline database by adopting a WIFI positioning algorithm and an image matching positioning algorithm to obtain an estimated probability of the position of the mobile robot;
s4: and screening out a plurality of estimation positions with high possibility by a slope method, establishing an unsupervised fusion system, obtaining the weight value of each estimation position, and realizing the final positioning of the mobile robot position.
2. The WIFI and visual fusion based indoor positioning method according to claim 1, wherein the step S1 comprises the steps of:
s11: setting N grid fingerprints of 4M by 4M in an indoor positioning area as coarse fingerprint points, collecting WIFI signals sent by M WIFI access points at the positions of the coarse fingerprint points, and collecting WIFI signals at each coarse fingerprint pointL sample data, wherein the initial WIFI data of each coarse fingerprint point is represented as Loci={RSSIi,xi,yi1,2, N, wherein,
Figure FDA0002904176580000011
the strength vectors of the M WIFI access point signals collected on the ith coarse fingerprint point are represented,
Figure FDA0002904176580000012
the signal intensity x of the Mth WIFI access point is measured on the ith coarse fingerprint pointiAnd yiCoordinates of the ith coarse fingerprint point on an x axis and a y axis respectively;
s12: preprocessing the initial WIFI data of each coarse fingerprint point, eliminating missing values, screening and discarding bad values by adopting a Dixon criterion, and obtaining the WIFI data of each coarse fingerprint point after preprocessing.
3. The WIFI and visual fusion based indoor positioning method according to claim 2, wherein the step S12 comprises the steps of:
s121, selecting an obvious level alpha through L sample data, and searching a Dixon table to obtain a critical value D (alpha, L);
s122: calculating sample data X received by each WIFI access point of each coarse fingerprint pointi1,2, L, the average value X of LavgAnd absolute error e, and sorting the absolute error e, wherein the average value XavgAnd the absolute error e is calculated as follows:
Figure FDA0002904176580000021
ei=Xi-Xavg
e1≤e2≤...≤eL
s123: constructing statistics, if the value to be measured is the maximum value
Figure FDA0002904176580000022
If the value to be measured is the minimum value
Figure FDA0002904176580000023
S124: when r is the maximum value for the value to be measuredmaxIf the value is more than D (alpha, L), the value to be measured is an abnormal value and should be removed, otherwise, the value is reserved; when r is the minimum value for the value to be measuredminIf the measured value is more than D (alpha, L), the measured value is an abnormal value and should be removed, otherwise, the measured value is reserved.
4. The indoor positioning method based on WIFI and visual fusion of claim 2, wherein the step S2 specifically is:
s21: after the missing value and the bad value are removed, the average absolute error of the sample data of each coarse fingerprint point is calculated
Figure FDA0002904176580000024
Obtaining an error matrix E, expanding the error matrix E by adopting an interpolation method, and fitting an expanded error matrix EE of the 1 m-1 m grid fingerprint points;
s22: the error gradient matrix EF is calculated by extending the average absolute error in the error matrix EE, the calculation formula is as follows:
Figure FDA0002904176580000025
s23: setting two thresholds delta 1 and delta 2, wherein delta 1 is smaller than delta 2, dividing indoor positioning areas into three types, and adopting 4 m-4 m grids, wherein the area is smaller than the threshold delta 1; in the region between the threshold δ 1 and the threshold δ 2, a grid of 2m × 2m is used; areas larger than the threshold value delta 2 adopt 1 m-1 m grids;
s24: and acquiring WIFI data and image data of the self-adaptive grid fingerprint points, and establishing an offline database.
5. The indoor positioning method based on WIFI and visual fusion of claim 4, wherein the step S3 specifically is:
the mobile robot is placed at any position in an indoor positioning area, WIFI data and image data of a current position point are collected, the WIFI data of the current position point are matched with WIFI in an off-line database through a WIFI positioning algorithm, and therefore the estimated probability P of the WIFI position is obtainedi 1,i=1,2,...,Nad,NadIs the total number of the self-adaptive fingerprint grid points and satisfies
Figure FDA0002904176580000031
Matching the image data of the current position point with the image data in the off-line database by adopting an image characteristic matching algorithm to obtain the estimated probability P of the image positioni 2,i=1,2,...,NadAnd satisfy
Figure FDA0002904176580000032
6. The indoor positioning method based on WIFI and visual fusion of claim 5, wherein the WIFI positioning algorithm comprises a K nearest neighbor algorithm, a random forest algorithm and a support vector machine; the image feature matching algorithm comprises a feature extraction algorithm, a scale invariant feature transformation algorithm and an accelerated robust feature algorithm.
7. The WIFI and visual fusion based indoor positioning method according to claim 5, wherein the step S4 comprises the steps of:
s41: self-adaptive screening is carried out on the estimated probabilities of the WIFI position and the image position obtained through calculation by adopting a slope method, and the point with the maximum slope is found out, so that the estimated probabilities of the screened WIFI position and the image position are obtained respectively
Figure FDA0002904176580000033
And
Figure FDA0002904176580000034
Figure FDA0002904176580000035
and
Figure FDA0002904176580000036
the number is the number after screening by a slope method;
s42: establishing unsupervised fusion positioning system, objective function
Figure FDA0002904176580000037
At the same time need to satisfy
Figure FDA0002904176580000038
And
Figure FDA0002904176580000039
wherein loci,i=1,2,...,NLIs the estimated probability P of the filtered WIFI position and image positioni 1And Pi 2The position of the corresponding self-adaptive fingerprint grid point; w is ai,i=1,2,...,NLThe normalized weighted values are respectively adapted to the positions of the fingerprint grid points, the Loc represents the estimated position, and the Lagrange multiplier method is adopted to obtain wiAnd Loc's update formula:
Figure FDA0002904176580000041
Figure FDA0002904176580000042
s43: setting up initial values
Figure FDA0002904176580000043
Carry out iteration, continuously make moreNew weight value and estimated position, while setting threshold lambda until satisfied
Figure FDA0002904176580000044
And then, finishing iteration and realizing the final positioning result of the position of the mobile robot.
CN202110066411.0A 2021-01-19 2021-01-19 Indoor positioning method based on WIFI and vision fusion Active CN112887911B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110066411.0A CN112887911B (en) 2021-01-19 2021-01-19 Indoor positioning method based on WIFI and vision fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110066411.0A CN112887911B (en) 2021-01-19 2021-01-19 Indoor positioning method based on WIFI and vision fusion

Publications (2)

Publication Number Publication Date
CN112887911A true CN112887911A (en) 2021-06-01
CN112887911B CN112887911B (en) 2022-03-15

Family

ID=76049334

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110066411.0A Active CN112887911B (en) 2021-01-19 2021-01-19 Indoor positioning method based on WIFI and vision fusion

Country Status (1)

Country Link
CN (1) CN112887911B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104484654A (en) * 2014-12-19 2015-04-01 苏州大学 Drowning prevention identification method based on mode identification and fingerprint matching and positioning algorithm
CN106792559A (en) * 2016-12-28 2017-05-31 北京航空航天大学 The automatic update method of fingerprint base in a kind of WiFi indoor locating systems
CN109803234A (en) * 2019-03-27 2019-05-24 成都电科慧安科技有限公司 Unsupervised fusion and positioning method based on the constraint of weight different degree
US10542517B1 (en) * 2019-01-17 2020-01-21 Cisco Technology, Inc. Characterizing movement behaviors of wireless nodes in a network
CN111639968A (en) * 2020-05-25 2020-09-08 腾讯科技(深圳)有限公司 Trajectory data processing method and device, computer equipment and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104484654A (en) * 2014-12-19 2015-04-01 苏州大学 Drowning prevention identification method based on mode identification and fingerprint matching and positioning algorithm
CN106792559A (en) * 2016-12-28 2017-05-31 北京航空航天大学 The automatic update method of fingerprint base in a kind of WiFi indoor locating systems
US10542517B1 (en) * 2019-01-17 2020-01-21 Cisco Technology, Inc. Characterizing movement behaviors of wireless nodes in a network
CN109803234A (en) * 2019-03-27 2019-05-24 成都电科慧安科技有限公司 Unsupervised fusion and positioning method based on the constraint of weight different degree
CN111639968A (en) * 2020-05-25 2020-09-08 腾讯科技(深圳)有限公司 Trajectory data processing method and device, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
朱博: "机器人室内语义建图中的场所感知方法综述", 《自动化学报》 *

Also Published As

Publication number Publication date
CN112887911B (en) 2022-03-15

Similar Documents

Publication Publication Date Title
CN107948930B (en) Indoor positioning optimization method based on position fingerprint algorithm
CN107292911B (en) Multi-target tracking method based on multi-model fusion and data association
CN110958575B (en) Positioning method and system based on WiFi fusion prediction
CN113269098A (en) Multi-target tracking positioning and motion state estimation method based on unmanned aerial vehicle
CN107182036A (en) The adaptive location fingerprint positioning method merged based on multidimensional characteristic
CN112325883B (en) Indoor positioning method for mobile robot with WiFi and visual multi-source integration
CN108038415B (en) Unmanned aerial vehicle automatic detection and tracking method based on machine vision
CN112052802B (en) Machine vision-based front vehicle behavior recognition method
CN111199556A (en) Indoor pedestrian detection and tracking method based on camera
CN114742820B (en) Bolt loosening detection method, system and storage medium based on deep learning
CN105120479A (en) Signal strength difference correction method of Wi-Fi signals between terminals
CN114189809A (en) Indoor positioning method based on convolutional neural network and high-dimensional 5G observation characteristics
JP3850602B2 (en) Moving body detection device and moving body detection method
CN111050282A (en) Multi-time fuzzy inference weighted KNN positioning method
JP2011113398A (en) Attitude estimation device
CN110967667A (en) Indoor track acquisition method based on crowdsourcing Wi-Fi fingerprint positioning
CN109448024B (en) Visual tracking method and system for constructing constraint correlation filter by using depth data
CN108845289B (en) Positioning method and system for shopping cart and shopping cart
CN112887911B (en) Indoor positioning method based on WIFI and vision fusion
CN110660084A (en) Multi-target tracking method and device
CN115144828B (en) Automatic online calibration method for intelligent automobile multi-sensor space-time fusion
CN109803234A (en) Unsupervised fusion and positioning method based on the constraint of weight different degree
CN115761693A (en) Method for detecting vehicle location mark points and tracking and positioning vehicles based on panoramic image
CN114511803A (en) Target occlusion detection method for visual tracking task
Zhang et al. Indoor localization algorithm based on combination of Kalman filter and clustering

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant