CN106793086B - Indoor positioning method - Google Patents
Indoor positioning method
- Publication number
- CN106793086B (application CN201710152882.7A)
- Authority
- CN
- China
- Prior art keywords
- positioning
- wifi
- image
- fingerprint
- point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W64/00—Locating users or terminals or network equipment for network management purposes, e.g. mobility management
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/35—Categorising the entire scene, e.g. birthday party or wedding scene
- G06V20/36—Indoor scenes
Abstract
The present invention is an indoor positioning method in the field of wireless communication network technology for network management. It combines WiFi-fingerprint positioning with marker-based visual positioning: a WiFi location-fingerprint positioning algorithm first yields a WiFi positioning range and WiFi positioning coordinates; feature matching and visual positioning of a test image then yield visual positioning coordinates; finally, the WiFi fingerprint result and the visual result are combined to achieve high-precision indoor positioning. The method overcomes the low accuracy of existing WiFi fingerprint positioning technology and the unsuitability of existing single visual positioning methods for indoor positioning.
Description
Technical Field
The technical solution of the present invention relates to wireless communication network technology for network management, and in particular to an indoor positioning method.
Background Art
Indoor positioning means determining position within an indoor environment, typically by integrating technologies such as wireless communication, base-station positioning, and inertial navigation into an indoor positioning system that monitors the locations of people and objects in indoor space. Because indoor environments are complex and changeable, and GPS (Global Positioning System) signals cannot be received indoors, indoor positioning remains difficult. Where satellite positioning is unavailable indoors, indoor positioning technology serves as an auxiliary, compensating for satellite signals that arrive at ground level too weak to penetrate buildings, and ultimately determines an object's current position.
Among the indoor positioning technologies reported in the published literature, the most actively developed include: Wi-Fi positioning, based on WLAN (Wireless Local Area Network), which requires wireless access points (APs) to be deployed in advance and therefore wastes resources when no positioning is needed; ultra-wideband positioning, based on UWB (Ultra Wide Band), which currently requires at least three signal receivers and an unobstructed path between receivers and transmitter; and inertial navigation positioning, based on inertial sensors, whose accuracy inevitably suffers from micro-electromechanical-system noise.
CN103402256B discloses an indoor positioning method based on WiFi fingerprints; CN106304331A discloses a WiFi fingerprint indoor positioning method; CN103582119B discloses a fingerprint database construction method for a WiFi indoor positioning system. In the published literature on WiFi positioning, fingerprint-based techniques all build fingerprints from MAC addresses together with signal-strength (RSSI) values; the resulting fingerprint databases are complex, and positioning accuracy is easily degraded by changes in the indoor environment.
CN106295512A discloses a marker-based multi-correction-line indoor visual database construction method and indoor positioning method; it is camera-based and achieves positioning and navigation by retrieving images carrying a specific marker, which is hard to realize in a real indoor environment because a series of images bearing the same marker must be deployed, altering the indoor environment. CN106228538A discloses a logo-based binocular-vision indoor positioning method; positioning requires two cameras to capture images, involves calibrating the cameras' intrinsic and extrinsic parameters and converting among the camera, image, and world coordinate systems, and is generally applied to three-dimensional positioning of mobile robots, making it difficult to popularize among the general public. In short, existing single visual positioning methods are not suitable for indoor positioning.
In summary, no economical and mature indoor positioning technology yet exists. With the rapid development of key technologies such as the Internet, wireless communication, computing, surveying and mapping, and equipment manufacturing, indoor positioning will evolve toward complementary combinations of different techniques, each compensating for the weaknesses of another. How to combine different indoor positioning technologies organically is therefore a research hotspot in this field.
Summary of the Invention
The technical problem to be solved by the present invention is to provide an indoor positioning method that combines WiFi-fingerprint positioning with marker-based visual positioning to achieve high-precision indoor positioning, overcoming the low accuracy of existing WiFi fingerprint positioning technology and the unsuitability of existing single visual positioning methods for indoor use.
The technical solution adopted by the present invention is an indoor positioning method combining WiFi-fingerprint positioning with marker-based visual positioning: first, a WiFi location-fingerprint positioning algorithm yields a WiFi positioning range and WiFi positioning coordinates; then feature matching and visual positioning of the test image yield visual positioning coordinates; finally, the WiFi fingerprint result is combined with the visual result. The specific steps are as follows:
Step 1: generate WiFi fingerprints:
An Android app is developed in Java to obtain the MAC addresses of detectable WiFi signals and save them as a txt file on the smartphone, thereby generating a WiFi fingerprint;
Step 2: build the WiFi location-fingerprint database:
A corridor in the indoor environment is selected as the positioning area, and 30 to 60 WiFi sampling points with known coordinates are chosen within it. At each sampling point the installed app detects all receivable WiFi signals and saves their MAC address sequence as a fingerprint in the WiFi location-fingerprint database; the database thus consists of the saved MAC address sequences, with each fingerprint corresponding to a unique position. In the selected positioning area, 60 WiFi sampling points are set, the MAC addresses collected at each point serve as that point's fingerprint, and traversing all sampling points yields 60 fingerprints, which are saved into the database, completing its construction;
Step 3: WiFi fingerprint positioning:
In the positioning stage, let the point x to be located within the positioning area of Step 2 have position coordinates (x, y), at which $N_x$ WiFi signals can be received. The app of Step 1 measures the fingerprint $xf$ of this point. Following the rule of Step 2 that each fingerprint corresponds to unique position information, a fingerprint matching algorithm compares the measured fingerprint $xf$ with the fingerprints in the WiFi location-fingerprint database and returns the three fingerprints with the highest match degree, yielding the WiFi positioning range $(x_0 \sim x_1, y_0 \sim y_1)$ and the WiFi positioning coordinates $(x_w, y_w)$, thus completing WiFi fingerprint positioning; here $x_0$, $x_1$ bound the abscissa range and $y_0$, $y_1$ bound the ordinate range of the point to be located, both in metres;
Step 4: generate the training image set:
Within the positioning area of Step 2, the coordinates of all door plates are known; these door plates are the marker sampling points, i.e., the points to be located. A smartphone photographs each door plate, traversing all marker sampling points to generate the training image set. Since the marker sampling points are a subset of the WiFi sampling points, their coordinates are known, and the coordinates of a training image are those of its marker sampling point;
Step 5: compute the SURF global feature descriptor:
The training images from Step 4 are first preprocessed by normalization and grayscale conversion; the SURF global feature descriptor is then computed in two parts, feature-point localization and descriptor computation. The center point of the normalized image is taken as the feature point and the whole image as that point's single neighborhood, so the descriptor computed from it serves as the SURF global feature descriptor of the entire image;
Step 6: compute the ORB global feature descriptor:
(6.1) Determine the main direction of the feature point:
As in Step 5, the center point of the normalized image is taken as the feature point, and its main direction is computed from image moments. For any feature point, the image moment is $m_{pq}=\sum_{x,y}x^{p}y^{q}I(x,y)$, where $I(x,y)$ is the gray value at point $(x,y)$. The centroid of the feature point's neighborhood image is $C=\left(\tfrac{m_{10}}{m_{00}},\tfrac{m_{01}}{m_{00}}\right)$, and the angle between this centroid and the feature point, $\theta=\arctan2(m_{01},m_{10})$, is the main direction of the feature point;
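The patent discloses no source code; purely as an illustration of this moment computation, the following minimal C++ sketch (all identifiers hypothetical) takes the normalized grayscale patch itself as the feature point's neighborhood and, so that the angle is well defined, measures the first-order moments relative to the patch centre, which is the feature point here:

```cpp
#include <cmath>
#include <vector>

// Grayscale patch stored row-major; I(x, y) = pix[y * w + x].
struct Gray { int w, h; std::vector<unsigned char> pix; };

// Main direction from image moments: m_pq = sum_{x,y} x^p y^q I(x,y),
// theta = atan2(m01, m10), with coordinates taken relative to the centre.
double orientationFromMoments(const Gray& img) {
    const double cx = (img.w - 1) / 2.0, cy = (img.h - 1) / 2.0;  // feature point
    double m10 = 0.0, m01 = 0.0;
    for (int y = 0; y < img.h; ++y)
        for (int x = 0; x < img.w; ++x) {
            double I = img.pix[y * img.w + x];
            m10 += (x - cx) * I;   // first-order moment in x
            m01 += (y - cy) * I;   // first-order moment in y
        }
    return std::atan2(m01, m10);   // main direction of the feature point
}
```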
(6.2) Generate the BRIEF feature descriptor:
The BRIEF descriptor is generated as follows. Let p1 denote a smoothed image neighborhood; the binary test at any pair of positions x and y is the logical result of comparing the intensities at those two points: $\tau(p1;x,y)=\begin{cases}1,& p1(x)<p1(y)\\0,& p1(x)\ge p1(y)\end{cases}$, where $p1(x)$ is the intensity at point x of the neighborhood p1 and $p1(y)$ the intensity at point y. After n binary tests an n-dimensional vector is obtained, the BRIEF descriptor $f_{n}(p1)=\sum_{1\le i\le n}2^{\,i-1}\tau(p1;x_{i},y_{i})$; here n = 256 is chosen, giving a 256-bit binary string;
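As a hedged sketch of these n = 256 binary tests (not code from the patent), assume the test-point pattern is supplied as centre-relative offsets that stay inside the smoothed patch; the fixed sampling pattern that ORB uses is taken as given and not reproduced here:

```cpp
#include <array>
#include <bitset>
#include <utility>
#include <vector>

struct Patch {                        // smoothed grayscale patch, row-major
    int w, h;
    std::vector<unsigned char> pix;
    unsigned char at(int dx, int dy) const {   // sample relative to patch centre
        return pix[(h / 2 + dy) * w + (w / 2 + dx)];
    }
};

using Pt = std::pair<int, int>;
using TestPair = std::pair<Pt, Pt>;   // one test: positions x_i and y_i

// tau(p; x, y) = 1 if p(x) < p(y), else 0, packed into a 256-bit string.
std::bitset<256> briefDescriptor(const Patch& p,
                                 const std::array<TestPair, 256>& tests) {
    std::bitset<256> f;
    for (int i = 0; i < 256; ++i) {
        const auto& [a, b] = tests[i];            // centre-relative test pair
        f[i] = p.at(a.first, a.second) < p.at(b.first, b.second);
    }
    return f;
}
```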
(6.3) Compute the ORB global feature descriptor:
To make the BRIEF descriptor rotation-invariant, its direction is set according to the feature-point direction determined in step (6.1). For the feature set produced by the n-bit binary tests at image pixel pairs $(x_i, y_i)$, define the $2\times n$ matrix $S=\begin{pmatrix}x_{1}&\cdots&x_{n}\\y_{1}&\cdots&y_{n}\end{pmatrix}$. From the main direction $\theta$ of step (6.1) the rotation (affine transformation) matrix $R_{\theta}=\begin{pmatrix}\cos\theta&-\sin\theta\\\sin\theta&\cos\theta\end{pmatrix}$ is computed, giving $S_{\theta}=R_{\theta}S$; $S_{\theta}$ is the rotation-invariant BRIEF test pattern, and the rotation-invariant ORB global feature descriptor is finally $g_{n}(P,\theta):=f_{n}(P)\,|\,(x_{i},y_{i})\in S_{\theta}$, with n = 256;
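Continuing the previous sketch, steering the pattern amounts to rotating every test coordinate by the patch orientation, i.e., $S_\theta = R_\theta S$; production ORB additionally quantizes the angle into 12-degree increments and precomputes the rotated patterns, a detail omitted in this illustrative sketch:

```cpp
#include <array>
#include <cmath>
#include <utility>

using Pt = std::pair<int, int>;
using TestPair = std::pair<Pt, Pt>;

// Steered BRIEF: apply the 2x2 rotation R_theta to every test coordinate.
std::array<TestPair, 256> steerPattern(const std::array<TestPair, 256>& tests,
                                       double theta) {
    const double c = std::cos(theta), s = std::sin(theta);
    auto rot = [&](Pt p) {
        return Pt{ static_cast<int>(std::lround(c * p.first - s * p.second)),
                   static_cast<int>(std::lround(s * p.first + c * p.second)) };
    };
    std::array<TestPair, 256> out{};
    for (std::size_t i = 0; i < tests.size(); ++i)
        out[i] = { rot(tests[i].first), rot(tests[i].second) };
    return out;   // feed to briefDescriptor() to obtain g_n(P, theta)
}
```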
Step 7: capture the test image:
Within the positioning area of Step 2, the smartphone photographs the door plate nearest the point to be located, capturing the test image;
Step 8: feature matching and visual positioning of the test image:
The feature matching method for the test image is: ① compute the Euclidean distance between two SURF global feature descriptors $L_1$, $L_2$: $d(L_1,L_2)=\sqrt{\sum_{i=1}^{64}(L_{1i}-L_{2i})^{2}}$, where i indexes the 64-dimensional feature vector; ② compute the Hamming distance between two ORB global feature descriptors $R_1$, $R_2$: $D(R_{1},R_{2})=\sum_{i=1}^{256}T_{1}(i)\oplus T_{2}(i)$, obtained by bitwise XOR of the two binary strings $T_1$, $T_2$, where i indexes the 256-bit string. The smaller either descriptor distance, the higher the image match;
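Both distances are simple to compute; as an illustrative sketch only (the descriptor types are assumptions matching the dimensions stated above):

```cpp
#include <array>
#include <bitset>
#include <cmath>

// Euclidean distance between two 64-dimensional SURF global descriptors.
double surfDistance(const std::array<double, 64>& a,
                    const std::array<double, 64>& b) {
    double s = 0.0;
    for (int i = 0; i < 64; ++i) s += (a[i] - b[i]) * (a[i] - b[i]);
    return std::sqrt(s);
}

// Hamming distance between two 256-bit ORB descriptors: XOR, then popcount.
int orbDistance(const std::bitset<256>& t1, const std::bitset<256>& t2) {
    return static_cast<int>((t1 ^ t2).count());
}
```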
The visual positioning method is as follows. First, the SURF and ORB global feature descriptors of the test image captured in Step 7 are computed with the methods of Steps 5 and 6. Using the feature matching method above, the distances between these descriptors and the SURF and ORB global feature descriptors of the training images from Step 4 are computed. The KNN algorithm then finds three neighbors in the SURF matching space, i.e., the three Step-4 training images with the smallest Euclidean distance to the test image's SURF descriptor, and two neighbors in the ORB matching space, i.e., the two training images with the smallest Hamming distance to its ORB descriptor. The intersection of the three neighbors and the two neighbors is taken as the training image closest to the test image, called the matching image; the position coordinates of this matching image are the visual positioning coordinates $(x_v, y_v)$, completing visual positioning;
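A minimal sketch of this neighbor selection and intersection, assuming the per-training-image descriptor distances have already been computed with the functions above (all names hypothetical):

```cpp
#include <algorithm>
#include <iterator>
#include <vector>

// Indices of the k training images with the smallest distance to the test image;
// dist[i] is the precomputed descriptor distance to training image i.
std::vector<std::size_t> kNearest(const std::vector<double>& dist, std::size_t k) {
    std::vector<std::size_t> idx(dist.size());
    for (std::size_t i = 0; i < idx.size(); ++i) idx[i] = i;
    k = std::min(k, idx.size());
    std::partial_sort(idx.begin(), idx.begin() + k, idx.end(),
                      [&](std::size_t a, std::size_t b) { return dist[a] < dist[b]; });
    idx.resize(k);
    return idx;
}

// Matching image(s): intersection of the 3 SURF neighbours and 2 ORB neighbours;
// the coordinates of the surviving training image give (xv, yv).
std::vector<std::size_t> matchImages(std::vector<std::size_t> surf3,
                                     std::vector<std::size_t> orb2) {
    std::sort(surf3.begin(), surf3.end());
    std::sort(orb2.begin(), orb2.end());
    std::vector<std::size_t> both;
    std::set_intersection(surf3.begin(), surf3.end(), orb2.begin(), orb2.end(),
                          std::back_inserter(both));
    return both;
}
```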
Step 9: positioning combining WiFi fingerprint positioning and visual positioning:
After the WiFi positioning range is obtained in Step 3, the Step-4 training images whose coordinates lie within that range form the matching image set. If the matching image from Step 8 belongs to this set, the visual positioning coordinates $(x_v, y_v)$ from Step 8 are taken as the final indoor position coordinates; otherwise the WiFi positioning coordinates $(x_w, y_w)$ from Step 3 are taken as the final position coordinates. This completes the positioning that combines WiFi fingerprint positioning with visual positioning.
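The fusion rule itself reduces to a few lines; a sketch under the assumption that the WiFi positioning range is the axis-aligned rectangle $(x_0..x_1, y_0..y_1)$ from Step 3:

```cpp
struct Point { double x, y; };
struct Range { double x0, x1, y0, y1; };   // WiFi positioning range from Step 3

bool inRange(Point p, Range r) {
    return p.x >= r.x0 && p.x <= r.x1 && p.y >= r.y0 && p.y <= r.y1;
}

// Final fix: take the visual coordinates (xv, yv) when the matched training
// image falls inside the WiFi range; otherwise keep the WiFi coordinates (xw, yw).
Point fuse(Point wifi, Point visual, Range wifiRange) {
    return inRange(visual, wifiRange) ? visual : wifi;
}
```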
In the above indoor positioning method, the fingerprint matching algorithm of Step 3 compares the MAC address sequence of the measured fingerprint with the MAC address sequences of all fingerprints in the WiFi location-fingerprint database one by one; identical MAC addresses count as successful matches, with match degree $\mathrm{match}_{l}=\frac{\mathrm{Num}[xf(\mathrm{MAC})=lf(\mathrm{MAC})]}{N_{x}}$, where $xf(\mathrm{MAC})$ is the MAC address sequence of the measured fingerprint, $lf(\mathrm{MAC})$ is the MAC address sequence of the l-th fingerprint in the database, $l=(1,2,\ldots,m)$, and $\mathrm{Num}[xf(\mathrm{MAC})=lf(\mathrm{MAC})]$ is the number of successfully matched MAC addresses. The positions of the three fingerprints with the highest match degrees give the rough positioning range $(x_0 \sim x_1, y_0 \sim y_1)$, on which basis the WiFi positioning coordinates are obtained as their centroid, $(x_w,y_w)=\left(\tfrac{1}{3}\sum_{k=1}^{3}x_{k},\ \tfrac{1}{3}\sum_{k=1}^{3}y_{k}\right)$.
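Purely as an illustration of this matching rule (the patent implements it in a C++ environment but discloses no code; every identifier here is hypothetical), the sketch below scores each stored fingerprint by the fraction of measured MAC addresses it shares and keeps the three best matches, whose coordinates bound the rough range and average to $(x_w, y_w)$:

```cpp
#include <algorithm>
#include <string>
#include <unordered_set>
#include <vector>

// One database entry: a sampling point's coordinates and its MAC fingerprint.
struct Fingerprint {
    double x, y;                    // known coordinates of the sampling point (m)
    std::vector<std::string> macs;  // MAC address sequence recorded by the app
};

// Match degree: shared MAC addresses divided by the N_x measured ones.
double matchDegree(const std::vector<std::string>& measured, const Fingerprint& fp) {
    std::unordered_set<std::string> db(fp.macs.begin(), fp.macs.end());
    int hits = 0;
    for (const auto& mac : measured)
        if (db.count(mac)) ++hits;
    return measured.empty() ? 0.0 : static_cast<double>(hits) / measured.size();
}

// Indices of the three fingerprints with the highest match degree.
std::vector<std::size_t> topThree(const std::vector<std::string>& measured,
                                  const std::vector<Fingerprint>& db) {
    std::vector<std::size_t> idx(db.size());
    for (std::size_t i = 0; i < idx.size(); ++i) idx[i] = i;
    std::size_t k = std::min<std::size_t>(3, idx.size());
    std::partial_sort(idx.begin(), idx.begin() + k, idx.end(),
                      [&](std::size_t a, std::size_t b) {
                          return matchDegree(measured, db[a]) > matchDegree(measured, db[b]);
                      });
    idx.resize(k);
    return idx;
}
```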
In the above indoor positioning method, the SURF global feature descriptor in Step 5 is computed as follows:
(1) Compute the main direction of the feature point: centered on the feature point, within a circular neighborhood of radius 6s, compute the sum $m_{w}=\sum_{w}d_{x}+\sum_{w}d_{y}$ of the Haar wavelet responses in the x and y directions of all points inside a 60-degree sector, Gaussian-weighting the responses; $d_x$ and $d_y$ are the Haar wavelet responses in the x and y directions. The 60-degree sector window slides in 5-degree steps, the resultant vector angle $\theta_{w}=\arctan\!\left(\sum_{w}d_{y}\,/\,\sum_{w}d_{x}\right)$ is computed, and then the maximum resultant vector length over all sector directions is found: $\max_{w}\sqrt{\left(\sum_{w}d_{x}\right)^{2}+\left(\sum_{w}d_{y}\right)^{2}}$; the angle at which this maximum occurs is the main direction of the feature point;
(2) Compute the SURF global feature descriptor: after obtaining the main direction in step (1), take a square window of side 20s around the feature point and divide it into 4×4 subregions; for each subregion, compute the Haar wavelet responses at 5×5 regularly spaced sample points. The feature descriptor of each subregion is $v=(\sum d_{x},\sum d_{y},\sum|d_{x}|,\sum|d_{y}|)$; the descriptors of all 16 subregions together form the SURF feature descriptor of this feature point, and the final SURF global feature descriptor is a 64-dimensional vector $V=\{v_{1},v_{2},\ldots,v_{16}\}$, where $v_i$ (i = 1, 2, ..., 16) is the feature descriptor of the i-th subregion;
Here s is the scale factor.
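A sketch of the descriptor assembly, assuming the 20×20 grid of Haar responses $d_x$, $d_y$ around the feature point at scale s has already been computed (the response computation itself is omitted, and all names are hypothetical):

```cpp
#include <array>
#include <cmath>

// Build the 64-dimensional SURF vector from 4x4 subregions of a 20x20 grid
// of Haar responses; each subregion contributes (sum dx, sum dy, sum|dx|, sum|dy|).
std::array<double, 64> surfDescriptor(const std::array<std::array<double, 20>, 20>& dx,
                                      const std::array<std::array<double, 20>, 20>& dy) {
    std::array<double, 64> V{};
    for (int sr = 0; sr < 4; ++sr)            // subregion row
        for (int sc = 0; sc < 4; ++sc) {      // subregion column
            double sdx = 0, sdy = 0, adx = 0, ady = 0;
            for (int i = sr * 5; i < sr * 5 + 5; ++i)
                for (int j = sc * 5; j < sc * 5 + 5; ++j) {
                    sdx += dx[i][j];                 adx += std::abs(dx[i][j]);
                    sdy += dy[i][j];                 ady += std::abs(dy[i][j]);
                }
            int k = (sr * 4 + sc) * 4;        // v_i occupies four slots of V
            V[k] = sdx; V[k + 1] = sdy; V[k + 2] = adx; V[k + 3] = ady;
        }
    return V;   // in practice usually normalised to unit length afterwards
}
```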
The beneficial effects of the present invention are as follows. Compared with the prior art, the outstanding substantive features of the present invention are:
(1) The indoor positioning method of the present invention uses the principles of WiFi fingerprint positioning and of visual positioning, combining WiFi-fingerprint positioning with marker-based visual positioning: the WiFi location-fingerprint positioning algorithm first yields the WiFi positioning range and coordinates, and feature matching and visual positioning of the test image then yield the visual positioning coordinates, creating a high-precision indoor positioning method based on the fusion of WiFi fingerprints and vision. Both in principle and in practice, WiFi fingerprint positioning and visual positioning are hard to run simultaneously on one platform: WiFi fingerprints must be collected with a smartphone while the fingerprint matching algorithm runs in a C++ environment, and the training and test images must be photographed with a phone while the SURF and ORB global feature descriptors and the feature matching algorithm are likewise computed in C++. Through long and painstaking development, the inventors obtained the WiFi fingerprint positioning result and the visual positioning result separately, combined them, and proposed a method of correcting the positioning result. Experimental results show that the positioning method combining the two techniques is highly accurate with small error.
(2) In the method of the present invention, WiFi fingerprint positioning and visual positioning complement each other: the result obtained from WiFi positioning alone has large error and low accuracy and can be corrected by visual positioning, while when visual positioning occasionally produces a large error, the exact WiFi fingerprint coordinates can replace the visual coordinates, reducing the positioning error and achieving high-precision indoor positioning. The method thus overcomes the low accuracy of existing WiFi fingerprint positioning technology.
Compared with the prior art, the significant advances of the present invention are as follows:
(1) The present invention uses only MAC addresses when constructing WiFi fingerprints, which simplifies the fingerprint database; as long as the wireless access points (APs) are not removed, the fingerprints do not change, and positioning accuracy is unaffected by changes in the indoor environment.
(2) The present invention innovatively combines WiFi fingerprint positioning with visual positioning, exploiting the strengths of each, and conveniently achieves high-precision indoor positioning.
(3) The method can effectively improve indoor positioning precision: traditional indoor positioning based on WiFi fingerprints is generally accurate to about 10 m, whereas the present method, combining WiFi fingerprint positioning with visual positioning, reaches a precision of 6 m with a positioning accuracy of up to 80%.
(4) The method first uses WiFi fingerprint positioning to obtain a rough positioning range and exact position coordinates, and then focuses on using visual positioning to correct those coordinates, so it can also be used in environments with few WiFi signals.
(5) The method does not require deploying dedicated WiFi access points; it is simple to operate, low-cost, needs no additional devices, and requires no changes to the indoor environment.
(6) The method uses indoor door plates for visual positioning and is applicable to any indoor place with door plates, such as conference rooms, indoor activity centers, and office buildings.
Brief Description of the Drawings
The present invention is further described below with reference to the drawings and embodiments.
Fig. 1 is a schematic flow diagram of the steps of the method of the present invention.
Fig. 2 is a schematic illustration of the use of the Android app developed for the method of the present invention.
Fig. 3 is a schematic diagram of the positioning area and the distribution of sampling points in the method of the present invention.
In the figure, 1-60 are the WiFi sampling points within the positioning area; the points indicated by triangular arrows, namely 1, 4, 7, 11, 14, 17, 20, 23, 38, 41, 45, 47, 51, 54, 57, 59, are the marker sampling points for the training images, i.e., the points to be located.
Detailed Description of the Embodiments
The embodiment shown in Fig. 1 shows that the steps of the method are: generate WiFi fingerprints → build the WiFi location-fingerprint database → WiFi fingerprint positioning → generate the training image set → compute the SURF global feature descriptor → compute the ORB global feature descriptor → capture the test image → feature matching and visual positioning of the test image → positioning combining WiFi fingerprint positioning and visual positioning.
The embodiment shown in Fig. 2 shows the Android app developed for the method obtaining the MAC addresses (MAC ADDRESS) of WiFi signals and saving them as a txt file on the smartphone, thereby generating a WiFi fingerprint. The figure shows that the fingerprint at one WiFi sampling point in the WiFi location-fingerprint database consists of a series of MAC addresses; at this sampling point the app receives 17 WiFi signals and correspondingly 17 MAC addresses, and pressing the "Save" button on the smartphone saves this fingerprint as a txt file.
The embodiment shown in Fig. 3 shows the distribution of the positioning area and the sampling points. The positioning area is a section of corridor 60 m long and 3 m wide; its 60 WiFi sampling points are distributed in a grid at 2 m intervals, and there are 16 marker sampling points, namely 1, 4, 7, 11, 14, 17, 20, 23, 38, 41, 45, 47, 51, 54, 57, 59 indicated by the triangular arrows, which are the marker sampling points of the training images and also the points to be located.
Embodiment 1
Step 1: generate WiFi fingerprints:
An Android app is developed in Java to obtain the MAC addresses of WiFi signals and save them as a txt file on the smartphone, thereby generating a WiFi fingerprint. As the embodiment in Fig. 2 shows, the app receives 17 WiFi signals, and correspondingly 17 MAC addresses, at a selected sampling point; pressing the "Save" button on the smartphone saves this fingerprint as a txt file;
Step 2: build the WiFi location-fingerprint database:
A corridor in the indoor environment is selected as the positioning area, and 60 WiFi sampling points with known coordinates are chosen within it. At each sampling point the installed app detects the receivable WiFi signals and saves their MAC address sequence as a fingerprint in the WiFi location-fingerprint database; the database consists of the saved MAC address sequences, with each fingerprint corresponding to unique position information. In the selected positioning area, 60 WiFi sampling points are set, the MAC addresses collected at each point serve as that point's fingerprint, and traversing all points yields 60 fingerprints saved into the database, completing its construction. As the embodiment in Fig. 3 shows, the positioning area of this Embodiment 1 is a section of corridor 60 m long and 3 m wide, with the 60 WiFi sampling points distributed in a grid at 2 m intervals;
Step 3: WiFi fingerprint positioning:
In the positioning stage, let the point x to be located within the positioning area of Step 2 have position coordinates (x, y), at which $N_x$ WiFi signals can be received. The app of Step 1 measures the fingerprint $xf$ of this point. Following the rule of Step 2 that each fingerprint corresponds to unique position information, a fingerprint matching algorithm compares the measured fingerprint $xf$ with the fingerprints in the WiFi location-fingerprint database and returns the three fingerprints with the highest match degree, yielding the WiFi positioning range $(x_0 \sim x_1, y_0 \sim y_1)$ and the WiFi positioning coordinates $(x_w, y_w)$, thus completing WiFi fingerprint positioning; here $x_0$, $x_1$ bound the abscissa range and $y_0$, $y_1$ bound the ordinate range of the point to be located, both in metres;
The fingerprint matching algorithm compares the MAC address sequence of the measured fingerprint with the MAC address sequences of all fingerprints in the WiFi location-fingerprint database one by one; identical MAC addresses count as successful matches, with match degree $\mathrm{match}_{l}=\frac{\mathrm{Num}[xf(\mathrm{MAC})=lf(\mathrm{MAC})]}{N_{x}}$, where $xf(\mathrm{MAC})$ is the MAC address sequence of the measured fingerprint, $lf(\mathrm{MAC})$ is the MAC address sequence of the l-th fingerprint in the database, $l=(1,2,\ldots,m)$, and $\mathrm{Num}[xf(\mathrm{MAC})=lf(\mathrm{MAC})]$ is the number of successfully matched MAC addresses. The positions of the three fingerprints with the highest match degrees give the rough positioning range $(x_0 \sim x_1, y_0 \sim y_1)$, on which basis the WiFi positioning coordinates are obtained as their centroid, $(x_w,y_w)=\left(\tfrac{1}{3}\sum_{k=1}^{3}x_{k},\ \tfrac{1}{3}\sum_{k=1}^{3}y_{k}\right)$;
Step 4: generate the training image set:
Within the positioning area of Step 2, the coordinates of all door plates are known; these door plates are the marker sampling points, i.e., the points to be located. A smartphone photographs each door plate, traversing all marker sampling points to generate the training image set. Since the marker sampling points are a subset of the WiFi sampling points, their coordinates are known, and the coordinates of a training image are those of its marker sampling point. As the embodiment in Fig. 3 shows, this Embodiment 1 has 16 marker sampling points, namely 1, 4, 7, 11, 14, 17, 20, 23, 38, 41, 45, 47, 51, 54, 57, 59 indicated by the triangular arrows, which are the marker sampling points of the training images and also the points to be located;
Step 5: compute the SURF global feature descriptor:
The training images from Step 4 are first preprocessed by normalization and grayscale conversion; the SURF global feature descriptor is then computed in two parts, feature-point localization and descriptor computation. The center point of the normalized image is taken as the feature point and the whole image as that point's single neighborhood, so the descriptor computed from it serves as the SURF global feature descriptor of the entire image;
The SURF global feature descriptor is computed as follows:
(1) Compute the main direction of the feature point: centered on the feature point, within a circular neighborhood of radius 6s, compute the sum $m_{w}=\sum_{w}d_{x}+\sum_{w}d_{y}$ of the Haar wavelet responses in the x and y directions of all points inside a 60-degree sector, Gaussian-weighting the responses; $d_x$ and $d_y$ are the Haar wavelet responses in the x and y directions. The 60-degree sector window slides in 5-degree steps, the resultant vector angle $\theta_{w}=\arctan\!\left(\sum_{w}d_{y}\,/\,\sum_{w}d_{x}\right)$ is computed, and then the maximum resultant vector length over all sector directions is found: $\max_{w}\sqrt{\left(\sum_{w}d_{x}\right)^{2}+\left(\sum_{w}d_{y}\right)^{2}}$; the angle at which this maximum occurs is the main direction of the feature point;
(2) Compute the SURF global feature descriptor: after obtaining the main direction in step (1), take a square window of side 20s around the feature point and divide it into 4×4 subregions; for each subregion, compute the Haar wavelet responses at 5×5 regularly spaced sample points. The feature descriptor of each subregion is $v=(\sum d_{x},\sum d_{y},\sum|d_{x}|,\sum|d_{y}|)$; the descriptors of all 16 subregions together form the SURF feature descriptor of this feature point, and the final SURF global feature descriptor is a 64-dimensional vector $V=\{v_{1},v_{2},\ldots,v_{16}\}$, where $v_i$ (i = 1, 2, ..., 16) is the feature descriptor of the i-th subregion;
Here s is the scale factor;
Step 6: compute the ORB global feature descriptor:
(6.1) Determine the main direction of the feature point:
As in Step 5, the center point of the normalized image is taken as the feature point, and its main direction is computed from image moments. For any feature point, the image moment is $m_{pq}=\sum_{x,y}x^{p}y^{q}I(x,y)$, where $I(x,y)$ is the gray value at point $(x,y)$. The centroid of the feature point's neighborhood image is $C=\left(\tfrac{m_{10}}{m_{00}},\tfrac{m_{01}}{m_{00}}\right)$, and the angle between this centroid and the feature point, $\theta=\arctan2(m_{01},m_{10})$, is the main direction of the feature point;
(6.2) Generate the BRIEF feature descriptor:
The BRIEF descriptor is generated as follows. Let p1 denote a smoothed image neighborhood; the binary test at any pair of positions x and y is the logical result of comparing the intensities at those two points: $\tau(p1;x,y)=\begin{cases}1,& p1(x)<p1(y)\\0,& p1(x)\ge p1(y)\end{cases}$, where $p1(x)$ is the intensity at point x of the neighborhood p1 and $p1(y)$ the intensity at point y. After n binary tests an n-dimensional vector is obtained, the BRIEF descriptor $f_{n}(p1)=\sum_{1\le i\le n}2^{\,i-1}\tau(p1;x_{i},y_{i})$; here n = 256 is chosen, giving a 256-bit binary string;
(6.3) Compute the ORB global feature descriptor:
To make the BRIEF descriptor rotation-invariant, its direction is set according to the feature-point direction determined in step (6.1). For the feature set produced by the n-bit binary tests at image pixel pairs $(x_i, y_i)$, define the $2\times n$ matrix $S=\begin{pmatrix}x_{1}&\cdots&x_{n}\\y_{1}&\cdots&y_{n}\end{pmatrix}$. From the main direction $\theta$ of step (6.1) the rotation (affine transformation) matrix $R_{\theta}=\begin{pmatrix}\cos\theta&-\sin\theta\\\sin\theta&\cos\theta\end{pmatrix}$ is computed, giving $S_{\theta}=R_{\theta}S$; $S_{\theta}$ is the rotation-invariant BRIEF test pattern, and the rotation-invariant ORB global feature descriptor is finally $g_{n}(P,\theta):=f_{n}(P)\,|\,(x_{i},y_{i})\in S_{\theta}$, with n = 256;
Step 7: capture the test image:
Within the positioning area of Step 2, the smartphone photographs the door plate nearest the point to be located, capturing the test image;
Step 8: feature matching and visual positioning of the test image:
The feature matching method for the test image is: ① compute the Euclidean distance between two SURF global feature descriptors $L_1$, $L_2$: $d(L_1,L_2)=\sqrt{\sum_{i=1}^{64}(L_{1i}-L_{2i})^{2}}$, where i indexes the 64-dimensional feature vector; ② compute the Hamming distance between two ORB global feature descriptors $R_1$, $R_2$: $D(R_{1},R_{2})=\sum_{i=1}^{256}T_{1}(i)\oplus T_{2}(i)$, obtained by bitwise XOR of the two binary strings $T_1$, $T_2$, where i indexes the 256-bit string. The smaller either descriptor distance, the higher the image match;
The visual positioning method is as follows. First, the SURF and ORB global feature descriptors of the test image captured in Step 7 are computed with the methods of Steps 5 and 6. Using the feature matching method above, the distances between these descriptors and the SURF and ORB global feature descriptors of the training images from Step 4 are computed. The KNN algorithm then finds three neighbors in the SURF matching space, i.e., the three Step-4 training images with the smallest Euclidean distance to the test image's SURF descriptor, and two neighbors in the ORB matching space, i.e., the two training images with the smallest Hamming distance to its ORB descriptor. The intersection of the three neighbors and the two neighbors is taken as the training image closest to the test image, called the matching image; the position coordinates of this matching image are the visual positioning coordinates $(x_v, y_v)$, completing visual positioning;
Step 9: positioning combining WiFi fingerprint positioning and visual positioning:
After the WiFi positioning range is obtained in Step 3, the Step-4 training images whose coordinates lie within that range form the matching image set. If the matching image from Step 8 belongs to this set, the visual positioning coordinates $(x_v, y_v)$ from Step 8 are taken as the final indoor position coordinates; otherwise the WiFi positioning coordinates $(x_w, y_w)$ from Step 3 are taken as the final position coordinates. This completes the positioning that combines WiFi fingerprint positioning with visual positioning.
In this embodiment, the first floor of an indoor pedestrian street served as the test site; all pictures taken with the phone are 4160×3120 pixels. The positioning results are shown in Table 1.
Table 1. Positioning test results on the first floor of the indoor pedestrian street
Comparison of the true coordinates of the points to be located with the positioning coordinates of this embodiment demonstrates that the indoor positioning method of the present invention, combining WiFi-fingerprint positioning with marker-based visual positioning, achieves high-precision indoor positioning.
Embodiment 2
Identical to Embodiment 1 except that "a corridor area of the indoor environment is selected as the positioning area and 30 WiFi sampling points are chosen within it; in the selected positioning area, 30 WiFi sampling points are set, the MAC addresses collected at each sampling point serve as that point's fingerprint, and traversing all sampling points yields 30 fingerprints, which are saved into the WiFi location-fingerprint database, completing its construction".
Embodiment 3
Identical to Embodiment 1 except that "a corridor area of the indoor environment is selected as the positioning area and 45 WiFi sampling points are chosen within it; in the selected positioning area, 45 WiFi sampling points are set, the MAC addresses collected at each sampling point serve as that point's fingerprint, and traversing all sampling points yields 45 fingerprints, which are saved into the WiFi location-fingerprint database, completing its construction".
Claims (3)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710152882.7A CN106793086B (en) | 2017-03-15 | 2017-03-15 | Indoor positioning method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106793086A CN106793086A (en) | 2017-05-31 |
CN106793086B true CN106793086B (en) | 2020-01-14 |
Family
ID=58961001
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710152882.7A | Indoor positioning method (CN106793086B, Expired - Fee Related) | 2017-03-15 | 2017-03-15 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106793086B (en) |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8996302B2 (en) * | 2012-11-30 | 2015-03-31 | Apple Inc. | Reduction of the impact of hard limit constraints in state space models |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104484887A * | 2015-01-19 | 2015-04-01 | Hebei University of Technology | External parameter calibration method used when camera and two-dimensional laser range finder are used in combined mode |
CN105137389A * | 2015-09-02 | 2015-12-09 | An Ning | Video-assisted radiofrequency positioning method and apparatus |
CN105718549A * | 2016-01-16 | 2016-06-29 | Shenzhen Institutes of Advanced Technology | Airship based three-dimensional WiFi (Wireless Fidelity) fingerprint drawing system and method |
CN105828296A * | 2016-05-25 | 2016-08-03 | Wuhan Yuxun Technology Co., Ltd. | Indoor positioning method based on convergence of image matching and WI-FI |
Non-Patent Citations (2)
Title |
---|
Mei Zhang; Wenbo Shen; Jinhui Zhu. WiFi and magnetic fingerprint positioning algorithm based on KDA-KNN. IEEE, 2016. *
Hu Yuezhi; Li Na; Hu Zhaozheng; Li Yicheng. Fast traffic sign recognition algorithm based on ORB global features and nearest neighbors. Journal of Transport Information and Safety, 2016-01-31, full text. *
Also Published As
Publication number | Publication date |
---|---|
CN106793086A (en) | 2017-05-31 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20200114 |