CN114513746A - Indoor positioning method integrating triple visual matching model and multi-base-station regression model - Google Patents
- Publication number
- CN114513746A (application CN202111550033.XA)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications (H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04W—WIRELESS COMMUNICATION NETWORKS; H04W4/00—Services specially adapted for wireless communication networks; H04W4/02—Services making use of location information)
- H04W4/021—Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
- H04W4/023—Services making use of location information using mutual or relative location information between multiple location based services [LBS] targets or of distance thresholds
- H04W4/025—Services making use of location information using location based information parameters
- H04W4/33—Services specially adapted for indoor environments, e.g. buildings (under H04W4/30—Services specially adapted for particular environments, situations or purposes)
Abstract
The invention provides an indoor positioning method integrating a triple visual matching model and a multi-base-station regression model. First, base stations acquire multi-angle images, received signal strength (RSS) and position information at different indoor reference points, and upload the acquired images and reference-point position data to the cloud. Second, after the RSS data are processed, each base station trains an RSS-distance regression model on its local server, while the cloud constructs an image-position fingerprint database from the multi-angle images and reference-point positions. Finally, the triple visual matching model and the multi-base-station regression model are fused to obtain the final positioning result. By adopting per-base-station RSS-distance models and multi-angle visual acquisition data, the invention characterizes the indoor environment more faithfully; fusing the multi-base-station RSS-distance regression models with the triple visual matching model improves on any single positioning method, remedies the insufficient precision of fingerprint matching alone, and suits a wide range of application scenarios.
Description
Technical Field
The invention belongs to the field of indoor positioning and the field of machine vision, and particularly relates to an indoor positioning method fusing a triple vision matching model and a multi-base-station regression model.
Background
In recent years, with the development of location-based service industries such as fleet navigation and disaster relief, high-precision positioning has become a popular research direction in wireless positioning technology. Initially, attention focused on outdoor scenes, realizing outdoor positioning dominated by the Global Navigation Satellite System (GNSS); however, because buildings block GNSS signals, GNSS cannot be applied to accurate indoor positioning. Recent surveys indicate that people spend 80%-90% of their time indoors, so research on indoor positioning receives growing attention. Moreover, as factories continue to grow in scale, worker safety is difficult to guarantee, so workers need to be accurately positioned indoors within specific areas. Accurate positioning in indoor environments has therefore become a research hotspot whose results not only bring substantial economic benefits but can also drive innovation in combination with other fields.
Existing indoor positioning methods include the proximity detection method, the centroid positioning method, multilateration, triangulation, the polar method, fingerprint positioning and dead reckoning, but each has drawbacks. For example, although proximity detection is simple and easy to implement, it provides only approximate position information; the centroid positioning method requires dense base-station deployment to achieve high precision, which incurs a huge economic cost. On balance, indoor positioning generally adopts multilateration and fingerprint positioning, both of which can achieve good accuracy. Multilateration computes distances from the received signal strength through a signal attenuation model, then solves for the position from a system of equations established from those distances. Fingerprint-based indoor positioning matches against, or maps from, an offline fingerprint library: for example, a classification algorithm matches the RSS against reference points in the offline fingerprint library and the matched reference-point positions are weighted to obtain the position, or a regression algorithm constructs a mapping from RSS to position. Multilateration is affected by complex indoor environments and by base-station deployment locations, and representing the RSS-distance relationship with a single type of attenuation model reduces positioning accuracy. Fingerprint positioning with RSS data likewise suffers from poor accuracy because RSS signals fluctuate.
Disclosure of Invention
The technical problem is as follows: the invention provides an indoor positioning method fusing a triple visual matching model and a multi-base station regression model from two aspects of multi-base station model training and triple visual matching, and on one hand, the multi-base station model training is used for obtaining an RSS-distance model of each base station and multi-angle visual acquisition data to better represent an indoor environment; on the other hand, a multi-base station RSS-distance model and a triple vision matching technique are used to achieve higher accuracy indoor positioning.
The technical scheme is as follows: the invention provides an indoor positioning method integrating a triple visual matching model and a multi-base-station regression model. First, base stations acquire multi-angle images, received signal strength (RSS) and position information at different indoor reference points, and upload the acquired images and reference-point position data to the cloud. Second, after the RSS data are processed, each base station trains an RSS-distance regression model on its local server, while the cloud constructs an image-position fingerprint database from the multi-angle images and reference-point positions. Finally, the triple visual matching model and the multi-base-station regression model are fused for positioning so that a more accurate position prediction is sent to the mobile device.
The specific content of the scheme is as follows:
the indoor positioning method fusing the triple visual matching model and the multi-base-station regression model comprises the following steps:
(1) in the offline stage, a mobile device communicates with the base stations; each base station acquires received signal strength (RSS) data (multiple RSS samples for the same reference point), pictures shot in 8 directions (east, west, south, north, southeast, northeast, southwest and northwest) at the position of the mobile device, and the position of the mobile device; the base station builds an RSS-distance fingerprint library on its local server from the RSS data, the mobile-device position and the base-station position, transmits the shot pictures and the mobile-device position to the cloud, and the cloud builds an image-position fingerprint library;
(2) each local server utilizes the RSS-distance fingerprint library to carry out RSS-distance regression model learning;
(3) the cloud end utilizes a triple visual matching model to perform fingerprint positioning according to the fingerprint database of the image-position;
(4) acquiring an RSS vector of the test point according to the base station, and performing multilateral positioning by combining the cloud with the RSS-distance regression model in the step (2);
(5) the cloud weights and fuses the multilateration result of step (4) with the fingerprint positioning result of step (3) to obtain the fusion positioning result of the test point.
Further, the step (2) is realized as follows: firstly, carrying out Gaussian filtering and threshold filtering on RSS data corresponding to each base station in an RSS-distance fingerprint database; then training a neural network based on the filtered fingerprint library to obtain an RSS-distance regression model; the neural network takes RSS data of a reference point as input, and takes the distance from the reference point to the base station as output.
The RSS vector collected by base station m for reference point r is:

RSS_{m,r} = [RSS_{m,r}^1, RSS_{m,r}^2, …, RSS_{m,r}^{N1}]

where N1 denotes the number of RSS samples the base station collects for a given reference point, and RSS_{m,r}^{N1} denotes the N1-th sample of RSS_{m,r}.

After Gaussian filtering, the RSS vector of reference point r collected by base station m becomes:

RSS_{m,r}^g = [RSS_{m,r}^{g,1}, …, RSS_{m,r}^{g,N2}]

where N2 denotes the number of RSS samples of RSS_{m,r} remaining after Gaussian filtering, and RSS_{m,r}^{g,N2} denotes the N2-th sample of RSS_{m,r}^g.

After threshold filtering, the RSS vector of reference point r collected by base station m becomes:

RSS_{m,r}^y = [RSS_{m,r}^{y,1}, …, RSS_{m,r}^{y,N3}]

where N3 denotes the number of RSS samples of RSS_{m,r} remaining after Gaussian filtering and threshold filtering, and RSS_{m,r}^{y,N3} denotes the N3-th sample of RSS_{m,r}^y.

After this filtering, base station m trains a neural network on its local server to obtain the corresponding RSS-distance regression model, expressed as:

distance = f_m(RSS_{m,r}^y)

where f_m(·) denotes the mapping of the RSS-distance regression model corresponding to base station m.
Further, the step (3) is realized as follows:
for the triple visual matching model, firstly, a YOLO target detection system is adopted to carry out image detection on a shot image of a t test point. the identification information of the image captured in the t test point θ direction is represented as follows:
where θ represents the angle at which t test images are taken, cqRepresents a Q-recognition product (Q-1, 2, …, Q1). In addition, the YOLO target detection system can also obtain the identification area where each identification object is located.
Combining with θ, similarity matching (the first, perception matching) of the same recognized objects between test point t and reference point r in different directions is performed:

A_{t,r}^{θ,θ1} = C_t^θ ∩ C_r^{θ1}

where A_{t,r}^{θ,θ1} denotes the recognized objects shared by the image shot in direction θ at test point t and the image shot in direction θ1 at reference point r, C_r^{θ1} denotes the recognized-object information of the image shot at reference point r in direction θ1, and θ1 ∈ {east, west, south, north, southeast, northeast, southwest, northwest}. If A_{t,r}^{θ,θ1} is not empty (|·| denotes the number of elements of a set), the subsequent process is carried out; otherwise the image shot in direction θ1 at reference point r is considered unable to localize test point t, and that image is discarded.
Similarity matching (the second visual matching) of the directional relationship between the recognized-object information of test point t and of reference point r in different directions is performed by accumulating, over pairs cc, ct of shared recognized objects, the agreement G(dir_t^θ(cc, ct), dir_r^{θ1}(cc, ct)), where dir_t^θ(cc, ct) denotes the relative pixel direction between the centers of the recognition regions of the two recognized objects in the image shot in direction θ at test point t, dir_r^{θ1}(cc, ct) denotes the relative pixel direction between the centers of the recognition regions of the two recognized objects in the image shot in direction θ1 at reference point r, and G(·) is a 0-1 transfer function, expressed as follows:

G(x1, x2) = 1 if x1 = x2, and 0 otherwise.
Similarity matching (the third visual matching) of the distance relationship between the recognized-object information of test point t and of reference point r in different directions is performed by comparing relative pixel distances, where dist_t^θ(cc, ct) denotes the relative pixel distance between the centers of the recognition regions of the two recognized objects in the image shot in direction θ at test point t, and dist_r^{θ1}(cc, ct) denotes the relative pixel distance between the centers of the recognition regions of the two recognized objects in the image shot in direction θ1 at reference point r.
The data of the triple visual matching are combined, and the similarity between the image shot in direction θ at test point t and the image shot in direction θ1 at reference point r is calculated as a combination of the three matching scores.
The obtained similarities are sorted from large to small, the reference points corresponding to the top K1 similarities are selected and weighted, and the fingerprint positioning result (x_t2, y_t2) of the triple visual matching for test point t is obtained, where (x_k, y_k) are the coordinates of reference point k and K is the set of reference points corresponding to the top K1 similarities.
Further, the step (4) is realized as follows:
The RSS vector of test point t, RSS_t = [RSS_{1,t}, …, RSS_{m,t}, …, RSS_{M,t}], is examined, and elements smaller than the threshold α1 are discarded and not used for positioning test point t.
The base stations corresponding to the 3 largest values among the remaining elements of the test point's RSS vector are selected (denoted m1, m2 and m3), and multilateration is performed from the distances between test point t and base stations m1, m2 and m3. The distances from test point t to the 3 base stations are:

d1 = f_{m1}(RSS_{m1,t}), d2 = f_{m2}(RSS_{m2,t}), d3 = f_{m3}(RSS_{m3,t})

where f_{m1}(·), f_{m2}(·) and f_{m3}(·) denote the mappings of the RSS-distance regression models corresponding to base stations m1, m2 and m3.
The specific formula of multilateration is the solution (x_t1, y_t1) of:

(x_t1 - x_{m1})^2 + (y_t1 - y_{m1})^2 = d1^2
(x_t1 - x_{m2})^2 + (y_t1 - y_{m2})^2 = d2^2
(x_t1 - x_{m3})^2 + (y_t1 - y_{m3})^2 = d3^2

where x_{m1}, x_{m2}, x_{m3} and x_t1 denote the abscissas of base stations m1, m2, m3 and of the multilateration result for test point t, and y_{m1}, y_{m2}, y_{m3} and y_t1 denote the corresponding ordinates.
Then, according to D1 = {m | RSS_{m,t} ≥ α1} and D2 = {m | RSS_{m,t} ≥ α2}, the multilateration result and the triple-visual-matching fingerprint positioning result are weighted and fused to obtain the final positioning result.
If |D2| ≥ 3, the weight of the multilateration result is assigned its largest value.

If |D1| + |D2| ≥ 3 and 0 < |D2| < 3, the weight of the multilateration result is assigned an intermediate value.

If |D1| + |D2| ≥ 3 and |D2| = 0, the weight of the multilateration result is assigned a small value.

If |D1| + |D2| < 3, the weight of the multilateration result is:

w = 0
The final positioning position is:

(x, y) = w·(x_t1, y_t1) + (1 - w)·(x_t2, y_t2).
where α1 < α2; α1 denotes the upper limit of a poor signal and α2 denotes the lower limit of a good signal. The two thresholds may vary across different devices. In the present invention, -100 dBm < α1 ≤ -80 dBm and -70 dBm < α2 ≤ -45 dBm.
Has the advantages that: by adopting per-base-station RSS-distance models and multi-angle visual acquisition data, the invention characterizes the indoor environment better, and realizes fusion positioning with the multi-base-station RSS-distance models and the triple visual matching model, thereby improving on any single positioning method, solving the problem of insufficient precision of single fingerprint matching, and suiting a wide range of application scenarios.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of multi-base station model training;
FIG. 3 is a flow chart of the triple visual matching.
Detailed Description
The technical scheme of the invention is explained in detail in the following with the accompanying drawings:
the invention provides an indoor positioning method fusing a triple visual matching model and a multi-base-station regression model, as shown in figure 1, the indoor positioning method fusing the triple visual matching model and the multi-base-station regression model is provided by the invention. And finally, positioning and fusing the triple visual matching model and the multi-base station regression model to send more accurate position prediction to the mobile equipment.
The invention mainly comprises three contents: firstly, training a regression model of multiple base stations, and respectively training RSS-distance models by different base stations according to the RSS and a reference point position; secondly, a triple visual matching positioning algorithm is proposed by using the image and the reference point position; and thirdly, the cloud realizes fusion positioning according to the triple visual matching model and the multi-base-station regression model so as to predict the position of the mobile equipment.
1. Regression model training for multiple base stations
A common RSS-distance model in indoor positioning is the logarithmic attenuation model, but the indoor environment is complex (e.g. blocking by walls) and the surroundings of each base-station location differ, so adopting one logarithmic attenuation model for all positioning base stations yields insufficient accuracy. A neural network is therefore used to train an RSS-distance model for each base station individually.
Gaussian filtering mainly addresses data anomalies caused by the complex environment. To ensure data accuracy, each base station collects multiple RSS samples for the same reference point and applies Gaussian filtering. The data collected by base station m for reference point r are:

RSS_{m,r} = [RSS_{m,r}^1, …, RSS_{m,r}^{N1}]

where N1 denotes the number of RSS samples collected for the same reference point. The mean μ and variance σ^2 of the collected RSS_{m,r} are expressed as:

μ = (1/N1) Σ_{n=1}^{N1} RSS_{m,r}^n,  σ^2 = (1/N1) Σ_{n=1}^{N1} (RSS_{m,r}^n - μ)^2

A sample whose deviation from the mean is too large (|RSS_{m,r}^n - μ| exceeds a multiple of σ) is filtered out. After Gaussian filtering, the RSS data of reference point r collected by base station m become:

RSS_{m,r}^g = [RSS_{m,r}^{g,1}, …, RSS_{m,r}^{g,N2}]

where N2 denotes the number of RSS samples of the same reference point remaining after Gaussian filtering.

Threshold filtering is used because the accuracy of multilateration realized through the RSS-distance model deteriorates when the RSS value is small; threshold filtering therefore keeps the RSS-distance model from being affected by small RSS values (although it narrows the RSS range of the model) and improves model accuracy. After Gaussian filtering and threshold filtering (RSS value greater than or equal to -90 dBm), the RSS data of reference point r collected by base station m become:

RSS_{m,r}^y = [RSS_{m,r}^{y,1}, …, RSS_{m,r}^{y,N3}]

where N3 denotes the number of RSS samples of the same reference point remaining after Gaussian filtering and threshold filtering. After this data filtering, the local server performs RSS-distance model training: a regression model is trained by a neural network taking the RSS of a reference point as input and the distance from the reference point to the base station as output, as shown in FIG. 2. The RSS-distance regression model trained by base station m is represented as:

distance = f_m(RSS_{m,r}^y)

where f_m(·) denotes the mapping of the model trained by base station m.
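A minimal sketch of this filtering pipeline, assuming a 2-sigma cutoff for the Gaussian filter (the embodiment does not fix the multiple) and the -90 dBm threshold stated above:

```python
# Sketch of per-base-station RSS preprocessing: Gaussian filtering
# (discard samples far from the mean) followed by threshold filtering
# (keep only RSS >= -90 dBm). The 2-sigma cutoff is an assumption.
from statistics import mean, pstdev

def gaussian_filter(rss, k=2.0):
    """Keep samples within k standard deviations of the mean."""
    mu = mean(rss)
    sigma = pstdev(rss)
    if sigma == 0:
        return list(rss)
    return [v for v in rss if abs(v - mu) <= k * sigma]

def threshold_filter(rss, floor_dbm=-90.0):
    """Keep samples at or above the RSS floor (here -90 dBm)."""
    return [v for v in rss if v >= floor_dbm]

# Example: samples collected by base station m at reference point r
samples = [-62, -61, -63, -60, -62, -61, -95]
filtered = threshold_filter(gaussian_filter(samples))
```

Each base station would run this over its collected samples before feeding the surviving RSS values and the known reference-point distances into the neural-network regression.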
2. Triple visual matching model:
before fingerprint positioning, an offline fingerprint library needs to be constructed, and the problem of positioning errors caused by fluctuation of RSS is considered. The invention relates to a fingerprint positioning method considering multi-angle images and reference point positions. For the reference point, the image information of 8 orientations and the position of the reference point create an image-position fingerprint library (the picture taken at each angle and the position of the reference point constitute a set of samples). In fingerprint localization, fingerprint matching is performed using a triple visual matching model, as shown in fig. 3.
First visual (perception) matching: a YOLO object detection system performs detection on the image shot at test point t, and image similarity is compared according to the number of identical recognized objects. Detection yields the recognized-object information of the image shot at test point t (the test point is the position of the mobile device), represented as:

C_t^θ = {c_1, c_2, …, c_{Q1}}

where θ denotes the direction in which the test image of t was shot and c_q denotes the q-th recognized object (q = 1, 2, …, Q1). In addition, the YOLO object detection system returns the recognition region where each recognized object is located.
Combining with θ, the similarity matching of the same recognized objects between the test point and reference point r in different directions is expressed as:

A_{t,r}^{θ,θ1} = C_t^θ ∩ C_r^{θ1}

where A_{t,r}^{θ,θ1} denotes the recognized objects shared by the image shot in direction θ at test point t and the image shot at reference point r, and C_r^{θ1} denotes the recognized-object information of the image shot at reference point r in direction θ1. If A_{t,r}^{θ,θ1} is not empty, subsequent matching is performed.
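As a sketch, this first matching reduces to a set intersection over recognized-object labels; the labels and the non-empty gate here are illustrative assumptions:

```python
# Sketch of the first visual matching: compare the sets of objects
# recognized (e.g. by YOLO) in the test image and a reference image.
# Object labels are hypothetical examples.
def first_matching(test_objects, ref_objects):
    """Return shared recognized objects, or None if matching cannot proceed."""
    shared = set(test_objects) & set(ref_objects)
    return shared if shared else None

test_img = {"door", "chair", "extinguisher"}
ref_img = {"door", "extinguisher", "window"}
shared = first_matching(test_img, ref_img)   # {"door", "extinguisher"}
```

When the intersection is empty, the reference image in that direction is dropped from further matching, exactly as the text above prescribes.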
Second visual matching: from the recognition regions returned by the YOLO object detection system, the relative orientation information between the recognized objects in the image shot at test point t can be obtained. By constraining the orientations of the recognized objects in the image, their relative direction information is mined to establish a directional relationship. The directional relationship of the recognized objects in the image shot at test point t is expressed as dir_t^θ(cc, ct), the relative pixel direction between the centers of the recognition regions of the two recognized objects cc and ct in the image shot in direction θ at test point t.

The similarity matching of the directional relationship between test point t and the recognized objects of reference point r in different directions compares dir_t^θ(cc, ct) with dir_r^{θ1}(cc, ct), where dir_r^{θ1}(cc, ct) denotes the relative pixel direction between the centers of the recognition regions of the two recognized objects in the image shot in direction θ1 at reference point r, and G(·) is a 0-1 conversion function expressed as:

G(x1, x2) = 1 if x1 = x2, and 0 otherwise.
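A sketch of this direction comparison, assuming the relative pixel direction is quantized into 8 sectors before the 0-1 conversion function is applied (the quantization is not specified in the source):

```python
# Sketch of the second visual matching: compare the relative direction
# between the centers of two recognized objects' bounding boxes in the
# test image against the same pair in a reference image, scored by a
# 0-1 transfer function. The 8-sector quantization is an assumption.
import math

def relative_direction(center_a, center_b, sectors=8):
    """Quantized direction of the pixel vector from center_a to center_b."""
    dx = center_b[0] - center_a[0]
    dy = center_b[1] - center_a[1]
    angle = math.atan2(dy, dx) % (2 * math.pi)
    return int(angle // (2 * math.pi / sectors))

def g_transfer(x1, x2):
    """0-1 conversion function: 1 when the two directions agree."""
    return 1 if x1 == x2 else 0

# The same object pair seen in the test image and a reference image
d_test = relative_direction((10, 40), (90, 42))
d_ref = relative_direction((5, 100), (120, 104))
score = g_transfer(d_test, d_ref)
```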
the third visual matching: and the relative distance information between the identification objects of the shot images of the t test points can be obtained by combining the identification area where the identification objects are located by the YOLO target detection system. By performing pixel analysis on the image, the distance relationship of the identification objects in the t test point shooting image is represented as follows:
wherein the content of the first and second substances,and the relative distance of the pixels of the centers of the identification areas where the two identification objects are located in the image shot in the direction of the t test point theta is represented. Similarity matching of the distance relationship between the t test point and the identification objects of the r reference points in different directions is represented as follows:
wherein, the first and the second end of the pipe are connected with each other,and the relative distance of pixels at the centers of the recognition areas where the two recognized objects are located in the image shot by the r reference point in the theta 1 direction is represented.
According to the triple visual matching, the similarity between test point t and the images of reference point r in different directions is obtained by combining the three matching scores. The obtained similarities are sorted from large to small, the reference points corresponding to the top K1 similarities are selected and weighted, and the fingerprint positioning result (x_t2, y_t2) of the triple visual matching for test point t is obtained.
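A sketch of the top-K1 selection and weighting, assuming similarity-proportional weights (the source states only that the top-K1 reference points are weighted):

```python
# Sketch of the fingerprint-positioning step: sort reference points by
# fused similarity, keep the top K1, and take a similarity-weighted
# average of their coordinates. Candidate values are illustrative.
def fingerprint_position(candidates, k1=3):
    """candidates: list of (similarity, (x, y)) for each reference image."""
    top = sorted(candidates, key=lambda c: c[0], reverse=True)[:k1]
    total = sum(s for s, _ in top)
    x = sum(s * p[0] for s, p in top) / total
    y = sum(s * p[1] for s, p in top) / total
    return x, y

cands = [(0.9, (1.0, 1.0)), (0.6, (2.0, 0.0)), (0.5, (0.0, 2.0)), (0.1, (9.0, 9.0))]
xt2, yt2 = fingerprint_position(cands, k1=3)
```

The lowest-similarity candidate is discarded by the top-K1 cut, so a single poorly matching reference image cannot drag the fingerprint result away.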
3. Fusion positioning:

Considering the advantages and disadvantages of the multilateration method and the fingerprint positioning method analyzed above, a fusion positioning scheme is adopted to make full use of the strengths of both. The base stations usable for positioning are selected according to the RSS value, since a larger RSS indicates a more reliable signal. The elements of RSS_t are examined: if RSS_{m,t} < -90 dBm, the RSS of base station m is not used for positioning test point t. The remaining elements of RSS_t are sorted by magnitude, and the base stations corresponding to the 3 largest RSS values (denoted m1, m2 and m3) are selected for multilateration.
The distances from test point t to the 3 base stations are:

d1 = f_{m1}(RSS_{m1,t}), d2 = f_{m2}(RSS_{m2,t}), d3 = f_{m3}(RSS_{m3,t})

where f_{m1}(·), f_{m2}(·) and f_{m3}(·) denote the mappings of the RSS-distance regression models corresponding to the three base stations m1, m2 and m3 respectively.
Then, the specific formula of multilateration is the solution (x_t1, y_t1) of:

(x_t1 - x_{m1})^2 + (y_t1 - y_{m1})^2 = d1^2
(x_t1 - x_{m2})^2 + (y_t1 - y_{m2})^2 = d2^2
(x_t1 - x_{m3})^2 + (y_t1 - y_{m3})^2 = d3^2

where x_{m1}, x_{m2}, x_{m3} and x_t1 denote the abscissas of base stations m1, m2, m3 and of the multilateration result, and y_{m1}, y_{m2}, y_{m3} and y_t1 denote the corresponding ordinates.
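The three circle equations can be solved by linearization, subtracting the first equation from the other two; a sketch with illustrative coordinates:

```python
# Sketch of the multilateration step: the circle equations are
# linearized into a 2x2 linear system in (x, y). Base-station
# positions and distances below are hypothetical examples.
import math

def multilateration(bs, d):
    """bs: [(x, y)] of 3 base stations; d: distances from the test point."""
    (x1, y1), (x2, y2), (x3, y3) = bs
    # Subtract circle 1 from circles 2 and 3: A @ [x, y] = b
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d[0] ** 2 - d[1] ** 2 + x2 ** 2 - x1 ** 2 + y2 ** 2 - y1 ** 2
    b2 = d[0] ** 2 - d[2] ** 2 + x3 ** 2 - x1 ** 2 + y3 ** 2 - y1 ** 2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Distances as the RSS-distance models f_m1..f_m3 would predict them
stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pt = (3.0, 4.0)
dists = [math.dist(true_pt, s) for s in stations]
xt1, yt1 = multilateration(stations, dists)
```

With noise-free distances the solver recovers the true point; with model-predicted distances it returns the least-conflict intersection of the three circles.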
The multilateration result and the triple-visual-matching fingerprint positioning result are then weighted and fused. According to D1 = {m | RSS_{m,t} ≥ -90 dBm} and D2 = {m | RSS_{m,t} ≥ -60 dBm}, the weighted fusion positioning is carried out according to the following cases.
If |D2| ≥ 3, the weight of the multilateration result is assigned its largest value.

If |D1| + |D2| ≥ 3 and 0 < |D2| < 3, the weight of the multilateration result is assigned an intermediate value.

If |D1| + |D2| ≥ 3 and |D2| = 0, the weight of the multilateration result is assigned a small value.

If |D1| + |D2| < 3, the weight of the multilateration result is:

w = 0
The final positioning position is obtained as:

(x, y) = w·(x_t1, y_t1) + (1 - w)·(x_t2, y_t2).
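A sketch of the case-based fusion, with assumed intermediate weight values (the source fixes only the case structure and w = 0):

```python
# Sketch of the fusion step: count base stations whose RSS clears the
# "poor" (alpha1 = -90 dBm) and "good" (alpha2 = -60 dBm) thresholds,
# derive the multilateration weight w, and blend the two results.
# The weight values 1.0 / 0.7 / 0.4 are assumptions for illustration.
def fuse(rss, multilat_xy, fingerprint_xy, alpha1=-90.0, alpha2=-60.0):
    d1 = sum(1 for v in rss if v >= alpha1)
    d2 = sum(1 for v in rss if v >= alpha2)
    if d2 >= 3:
        w = 1.0                      # all three ranging signals are good
    elif d1 + d2 >= 3 and 0 < d2 < 3:
        w = 0.7                      # partially good signals
    elif d1 + d2 >= 3 and d2 == 0:
        w = 0.4                      # usable but weak signals
    else:
        w = 0.0                      # too few signals: fingerprint only
    x = w * multilat_xy[0] + (1 - w) * fingerprint_xy[0]
    y = w * multilat_xy[1] + (1 - w) * fingerprint_xy[1]
    return w, (x, y)

w, pos = fuse([-55, -58, -59, -85], (3.0, 4.0), (3.4, 4.2))
```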
in the above description, the indoor positioning method fusing the triple vision matching model and the multi-base-station regression model provided in the embodiment of the present invention is described in detail, and a person having ordinary skill in the art may change the specific implementation and application scope according to the idea of the embodiment of the present invention.
Claims (10)
1. The indoor positioning method fusing the triple visual matching model and the multi-base-station regression model is characterized by comprising the following steps of:
(1) in an off-line stage, the mobile equipment communicates with the base station, and the base station acquires Received Signal Strength (RSS) data, the position of the mobile equipment and images shot in multiple directions at the position of the mobile equipment; the base station establishes an RSS-distance fingerprint database in a local server by using RSS data, the position of the mobile equipment and the position of the base station; the base station uploads the shot image and the position of the mobile device to the cloud, and the cloud establishes an image-position fingerprint database;
(2) each local server utilizes the RSS-distance fingerprint library to carry out RSS-distance regression model learning;
(3) the cloud end utilizes a triple visual matching model to perform fingerprint positioning according to the fingerprint database of the image-position;
(4) the base stations acquire the RSS vector of the test point, and the cloud performs multilateration by combining it with the RSS-distance regression models of step (2);
(5) the cloud weights and fuses the multilateration result of step (4) with the fingerprint positioning result of step (3) to obtain the fused positioning result of the test point.
2. The indoor positioning method based on fusion of the triple vision matching model and the multi-base-station regression model as claimed in claim 1, wherein in the step (1), the base station obtains images shot in eight directions, namely east, west, south, north, southeast, northeast, southwest and northwest, where the mobile device is located.
3. The indoor positioning method based on fusion of triple visual matching model and multi-base-station regression model as claimed in claim 1, wherein in step (1), the base station obtains RSS data of the same reference point for a plurality of times.
4. The indoor positioning method fusing triple visual matching model and multi-base station regression model according to claim 3, wherein the step (2) is realized by the following steps: firstly, carrying out Gaussian filtering and threshold filtering on RSS data corresponding to each base station in an RSS-distance fingerprint database; then training a neural network based on the filtered fingerprint library to obtain an RSS-distance regression model; the neural network takes RSS data of a reference point as input, and takes the distance from the reference point to a base station as output.
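As a rough illustration of the preprocessing in claim 4, the sketch below applies a Gaussian (mean ± k·σ) outlier filter followed by a threshold filter; the cut-off k = 2 and the -90 dBm floor are assumed values, not taken from the patent:

```python
from statistics import mean, stdev

def gaussian_filter(rss, k=2.0):
    """Keep samples within k standard deviations of the mean.
    (k = 2 is an assumed value; the patent does not fix it here.)"""
    if len(rss) < 2:
        return list(rss)
    mu, sigma = mean(rss), stdev(rss)
    return [v for v in rss if abs(v - mu) <= k * sigma]

def threshold_filter(rss, floor=-90.0):
    """Discard weak readings below an RSS floor (assumed -90 dBm)."""
    return [v for v in rss if v >= floor]

# The -120 dBm outlier is removed; the remaining samples pass the floor.
samples = [-60, -61, -62, -60, -61, -62, -60, -61, -62, -120]
print(threshold_filter(gaussian_filter(samples)))
# -> [-60, -61, -62, -60, -61, -62, -60, -61, -62]
```

The filtered vector is what claim 5 denotes RSS_{m,r}^y and feeds into the regression model.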
5. The indoor positioning method fusing the triple visual matching model and the multi-base-station regression model according to claim 4, wherein the RSS vector collected by base station m for reference point r is:
RSS_{m,r} = [RSS_{m,r}^1, ..., RSS_{m,r}^{N1}]
where N1 denotes the number of RSS samples collected by the base station for the reference point, and RSS_{m,r}^{n1} denotes the n1-th entry of RSS_{m,r};
after Gaussian filtering, the RSS vector of base station m for reference point r becomes:
RSS_{m,r}^g = [RSS_{m,r}^{g,1}, ..., RSS_{m,r}^{g,N2}]
where N2 denotes the number of RSS samples remaining after Gaussian filtering, and RSS_{m,r}^{g,n2} denotes the n2-th entry of RSS_{m,r}^g;
after threshold filtering, the RSS vector of base station m for reference point r becomes:
RSS_{m,r}^y = [RSS_{m,r}^{y,1}, ..., RSS_{m,r}^{y,N3}]
where N3 denotes the number of RSS samples remaining after threshold filtering, and RSS_{m,r}^{y,n3} denotes the n3-th entry of RSS_{m,r}^y;
based on the Gaussian- and threshold-filtered RSS vectors, base station m trains the neural network at its local server to obtain the corresponding RSS-distance regression model, expressed as:
distance = f_m(RSS_{m,r}^y)
where f_m(·) denotes the mapping of the RSS-distance regression model corresponding to base station m.
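The patent's f_m is a trained neural network. As a lighter-weight stand-in that exposes the same RSS-to-distance mapping, the sketch below fits the classical log-distance path-loss model RSS = A - 10·n·log10(d) by least squares and inverts it; this is a substitute technique for illustration, not the patent's regressor:

```python
import math

def fit_rss_distance(rss_list, dist_list):
    """Least-squares fit of RSS = A - 10*n*log10(d), returned as an
    inverse mapping f_m: RSS -> distance (stand-in for the patent's
    neural-network regressor)."""
    xs = [math.log10(d) for d in dist_list]
    n_pts = len(xs)
    mx = sum(xs) / n_pts
    my = sum(rss_list) / n_pts
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, rss_list))
    slope = sxy / sxx            # equals -10*n in the path-loss model
    A = my - slope * mx          # RSS at d = 1 m
    def f_m(rss):
        return 10 ** ((A - rss) / (-slope))
    return f_m

# Synthetic check: data generated with A = -40 dBm, n = 2
f = fit_rss_distance([-40.0, -60.0, -80.0], [1.0, 10.0, 100.0])
print(round(f(-70.0), 2))  # -> 31.62  (i.e. 10**1.5 metres)
```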
6. The indoor positioning method fusing triple visual matching model and multi-base station regression model according to claim 1, wherein the step (3) is realized by the following steps:
(3.1) a YOLO object-detection system is applied to the images shot at test point t to obtain the recognized-object information of each image and the recognition region of each recognized object; the recognized-object information of the image shot in direction θ of test point t is expressed as:
where c_q, q = 1, 2, ..., Q1, and Q1 denotes the number of recognized objects;
(3.2) similarity matching of the common recognized objects of test point t and reference point r in different directions is carried out, expressed as:
where the first term denotes the recognized objects common to the image shot in direction θ of test point t and the image shot in direction θ1 of reference point r, the second term denotes the recognized-object information of the image shot in direction θ1 of reference point r, and θ1 ∈ {east, west, south, north, southeast, northeast, southwest, northwest};
if the matching condition is satisfied, step (3.3) is executed, where |·| denotes the number of elements of a set; otherwise, the image shot in direction θ1 of reference point r is considered unable to position test point t and is discarded;
(3.3) similarity matching of the direction relation between the recognized-object information of test point t and that of reference point r in different directions is carried out, expressed as:
where the first term denotes the relative pixel direction between the centers of the recognition regions of two recognized objects in the image shot in direction θ of test point t, the second term denotes the relative pixel direction between the centers of the recognition regions of the two recognized objects in the image shot in direction θ1 of reference point r, c_c and c_t denote the two recognized objects, and g(·) is a 0-1 transfer function;
(3.4) similarity matching of the distance relation between the recognized-object information of test point t and that of reference point r in different directions is carried out, expressed as:
where the first term denotes the relative pixel distance between the centers of the recognition regions of two recognized objects in the image shot in direction θ of test point t, and the second term denotes the relative pixel distance between the centers of the recognition regions of the two recognized objects in the image shot in direction θ1 of reference point r;
(3.5) the similarity between the image shot in direction θ of test point t and the image shot in direction θ1 of reference point r is expressed as:
(3.6) the similarities obtained in (3.5) are sorted in descending order, the reference points corresponding to the top K1 similarities are selected and weighted, and the triple-visual-matching fingerprint positioning result of test point t is obtained as:
where (x_k, y_k) is the coordinate of reference point k, and K is the set of reference points corresponding to the top K1 similarities.
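Step (3.6) is a similarity-weighted K-nearest-neighbor estimate over the reference points. A minimal sketch, with hypothetical names and with the similarity values standing in for those computed in (3.5):

```python
def wknn_position(similarities, coords, k1=3):
    """Similarity-weighted average over the top-K1 reference points.
    similarities[r] is the score of reference point r; coords[r] = (x_r, y_r)."""
    top = sorted(range(len(similarities)),
                 key=lambda r: similarities[r], reverse=True)[:k1]
    total = sum(similarities[r] for r in top)
    x = sum(similarities[r] * coords[r][0] for r in top) / total
    y = sum(similarities[r] * coords[r][1] for r in top) / total
    return x, y

sims = [0.9, 0.1, 0.6, 0.5]
pts = [(0.0, 0.0), (9.0, 9.0), (2.0, 0.0), (0.0, 2.0)]
# The low-similarity reference point (9, 9) is excluded from the top 3.
print(wknn_position(sims, pts, k1=3))  # -> (0.6, 0.5)
```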
8. The indoor positioning method fusing the triple visual matching model and the multi-base-station regression model according to claim 1, wherein the step (4) is realized by the following steps:
(4.1) the RSS vector of test point t is RSS_t = [RSS_{1,t}, ..., RSS_{m,t}, ..., RSS_{M,t}]; elements of RSS_t smaller than a threshold α1 are discarded, where M denotes the number of base stations and RSS_{m,t} denotes the RSS value collected by base station m for test point t;
(4.2) the base stations corresponding to the 3 largest RSS values retained in step (4.1) are selected and denoted m1, m2 and m3;
(4.3) multilateration is performed according to the distances from test point t to base stations m1, m2 and m3, with the specific formula:
where d1, d2 and d3 denote the distances from test point t to base stations m1, m2 and m3, respectively, x_{m1}, x_{m2}, x_{m3} and y_{m1}, y_{m2}, y_{m3} denote the abscissas and ordinates of base stations m1, m2 and m3, and (x_t1, y_t1) denotes the multilateration result of test point t.
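The multilateration formula itself did not survive extraction here. A standard three-circle reconstruction, one common way to realize step (4.3) (a sketch under that assumption, not necessarily the patent's exact formula), subtracts the circle equations (x - x_i)^2 + (y - y_i)^2 = d_i^2 pairwise to obtain a 2×2 linear system:

```python
def trilaterate(p1, d1, p2, d2, p3, d3):
    """Solve for (x, y) from three base-station positions and distances
    by pairwise subtraction of the circle equations."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # zero when the base stations are collinear
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Base stations at (0,0), (4,0), (0,4); distances consistent with (1, 2)
print(trilaterate((0, 0), 5**0.5, (4, 0), 13**0.5, (0, 4), 5**0.5))
# -> approximately (1.0, 2.0)
```

In the method above, d1, d2 and d3 would come from the per-station regression models rather than being measured directly.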
9. The indoor positioning method fusing the triple visual matching model and the multi-base-station regression model according to claim 8, wherein d1 = f_{m1}(RSS_{m1,t}), d2 = f_{m2}(RSS_{m2,t}), d3 = f_{m3}(RSS_{m3,t}), where f_{m1}(·), f_{m2}(·) and f_{m3}(·) denote the mappings of the RSS-distance regression models corresponding to base stations m1, m2 and m3.
10. The indoor positioning method fusing the triple vision matching model and the multi-base-station regression model according to claim 7, wherein the multilateral positioning result and the triple vision matching fingerprint positioning result are weighted and fused to obtain the final positioning result of the t test point:
(x, y) = w(x_t1, y_t1) + (1 - w)(x_t2, y_t2)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111550033.XA CN114513746B (en) | 2021-12-17 | 2021-12-17 | Indoor positioning method integrating triple vision matching model and multi-base station regression model |
JP2022077904A JP7479715B2 (en) | 2021-12-17 | 2022-05-11 | 5G indoor smart positioning method combining triple visual matching and multi-base station regression |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114513746A true CN114513746A (en) | 2022-05-17 |
CN114513746B CN114513746B (en) | 2024-04-26 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018167500A1 (en) * | 2017-03-16 | 2018-09-20 | Ranplan Wireless Network Design Ltd | Wifi multi-band fingerprint-based indoor positioning |
CN109814066A (en) * | 2019-01-24 | 2019-05-28 | 西安电子科技大学 | RSSI indoor positioning distance measuring method, indoor positioning platform based on neural network learning |
CN112584311A (en) * | 2020-12-15 | 2021-03-30 | 西北工业大学 | Indoor three-dimensional space fingerprint positioning method based on WKNN fusion |
CN112867066A (en) * | 2021-01-26 | 2021-05-28 | 南京邮电大学 | Edge calculation migration method based on 5G multi-cell deep reinforcement learning |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3165391B2 (en) * | 1996-03-22 | 2001-05-14 | 松下電器産業株式会社 | Mobile radio communication system and method for detecting position of mobile station |
JP5803367B2 (en) | 2011-07-15 | 2015-11-04 | 富士通株式会社 | Self-position estimation apparatus, self-position estimation method and program |
EP3523753A4 (en) | 2017-12-11 | 2019-10-23 | Beijing Didi Infinity Technology and Development Co., Ltd. | Systems and methods for identifying and positioning objects around a vehicle |
US11586212B2 (en) | 2020-02-19 | 2023-02-21 | Ford Global Technologies, Llc | Vehicle device localization |
Non-Patent Citations (1)
Title |
---|
赵龙;陶冶;: "基于WiFi指纹库的室内定位研究进展和展望", 导航定位与授时, no. 03, 7 May 2018 (2018-05-07) * |
Also Published As
Publication number | Publication date |
---|---|
CN114513746B (en) | 2024-04-26 |
JP7479715B2 (en) | 2024-05-09 |
JP2023090610A (en) | 2023-06-29 |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||