CN113490146B - SLAM method based on WiFi and visual fusion - Google Patents
- Publication number
- CN113490146B (application CN202110497977.9A)
- Authority
- CN
- China
- Prior art keywords
- image data
- data
- visual
- positioning
- wifi
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
- H04W4/021—Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/30—Services specially adapted for particular environments, situations or purposes
- H04W4/33—Services specially adapted for particular environments, situations or purposes for indoor environments, e.g. buildings
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W64/00—Locating users or terminals or network equipment for network management purposes, e.g. mobility management
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W84/00—Network topologies
- H04W84/02—Hierarchically pre-organised networks, e.g. paging networks, cellular networks, WLAN [Wireless Local Area Network] or WLL [Wireless Local Loop]
- H04W84/10—Small scale networks; Flat hierarchical networks
- H04W84/12—WLAN [Wireless Local Area Networks]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
The invention provides a SLAM method based on WiFi and visual fusion, comprising the following steps: planning a path through the indoor environment and setting a starting point; having the mobile robot traverse the path while simultaneously collecting WiFi and image data; building a WiFi-SLAM map; building a visual SLAM map; placing the mobile robot at an arbitrary position in the indoor environment and collecting WiFi and image data at the current moment; matching the WiFi and image data to determine the current position; and moving the robot, acquiring data at the next moment, and determining and updating the position. The proposed method effectively improves the robustness and accuracy of SLAM.
Description
Technical Field
The invention relates to the technical field of indoor positioning, in particular to a SLAM method based on WiFi and visual fusion.
Background
Because GPS cannot meet the needs of indoor activity, experts and scholars have devoted themselves to developing indoor positioning systems, and indoor positioning technology has advanced rapidly. However, considering cost, accuracy, coverage, and general applicability, no indoor positioning system has yet been widely adopted in the market, and a single sensor cannot meet the positioning requirements of complex indoor environments. Accurate positioning is especially desirable for autonomously moving service robots.
Therefore, it is necessary to provide a SLAM method based on WiFi and visual fusion to solve the above problems.
Term definition:
SLAM: Simultaneous Localization and Mapping.
Disclosure of Invention
To address the above technical problems, the invention provides a WiFi and visual fusion SLAM method with strong robustness and high real-time performance.
The invention provides a SLAM method based on WiFi and visual fusion, comprising the following steps:
s1: setting a motion track of the mobile robot in an indoor environment and setting a starting point;
s2: the mobile robot simultaneously collects WiFi data, image data and relative position information on the set motion track;
s3: carrying out WiFi data preprocessing on the acquired WiFi data, and constructing an indoor WiFi-SLAM map;
s4: preprocessing the acquired image data to construct a visual SLAM map;
s5: placing the mobile robot at any indoor position and simultaneously acquiring WiFi and image data at the current moment to determine its position;
s6: the mobile robot moves, collects WiFi and image data at the next moment, and updates its position.
In a further improvement, the step S1 includes the following steps:
A motion track is arranged in the indoor positioning area; the track is a closed loop that allows the mobile robot to collect image data covering the whole indoor environment, and a starting point is set on it.
In a further improvement, the step S2 includes the following steps:
s21: the mobile robot starts from a starting point and moves along a track;
S22: during the movement, the WiFi data, image data, and relative position information of each move are collected simultaneously at every step.
In a further improvement, step S3 specifically includes:
s31: preprocessing the collected WiFi data, including bad-value elimination and AP screening, to improve WiFi data quality; AP screening methods include, but are not limited to, screening based on information entropy and screening based on ambient signal strength;
s32: establishing a Gaussian mixture model based on the collected WiFi data and relative position information, and constructing an indoor continuous WiFi-SLAM map from the discrete WiFi data through the Gaussian mixture model.
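The entropy-based AP screening of s31 can be sketched as follows. This is a minimal illustration rather than the patent's implementation: the 5 dBm bin width, the 0.5-bit threshold, and the convention that higher-entropy (more position-discriminative) APs are kept are all assumptions.

```python
import math
from collections import Counter

def ap_entropy(rss_readings, bin_width=5):
    """Shannon entropy of one AP's RSS samples (dBm), after binning."""
    bins = Counter(int(r // bin_width) for r in rss_readings)
    n = len(rss_readings)
    return -sum((c / n) * math.log2(c / n) for c in bins.values())

def screen_aps(fingerprints, min_entropy=0.5):
    """Keep APs whose RSS varies across survey points; a near-constant
    signal carries little information about position."""
    return {mac: rss for mac, rss in fingerprints.items()
            if ap_entropy(rss) >= min_entropy}
```

An AP seen at the same strength everywhere would be screened out, while one whose strength changes across the survey path is retained.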
In a further improvement, step S4 specifically includes:
s41: preprocessing the acquired image data, including rejecting bad images and removing duplicate images of the same position, to improve image data quality;
s42: stitching the images based on the acquired image data and relative position information to construct the visual SLAM map.
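The relative-position-based stitching of s42 might look like the sketch below. A real pipeline would refine these coarse placements with feature matching; here the robot's relative positions alone fix the offsets, and the pixels-per-metre scale is an assumed calibration.

```python
def stitch_by_odometry(images, positions, px_per_m=100):
    """Place small grayscale images (lists of rows) on a shared canvas,
    using the robot's relative positions (metres) as coarse offsets."""
    h, w = len(images[0]), len(images[0][0])
    offs = [(round(y * px_per_m), round(x * px_per_m)) for x, y in positions]
    min_r = min(r for r, _ in offs)
    min_c = min(c for _, c in offs)
    H = max(r for r, _ in offs) - min_r + h
    W = max(c for _, c in offs) - min_c + w
    canvas = [[0] * W for _ in range(H)]
    for img, (r0, c0) in zip(images, offs):
        for i in range(h):
            for j in range(w):
                # later frames overwrite earlier ones in overlap regions
                canvas[r0 - min_r + i][c0 - min_c + j] = img[i][j]
    return canvas
```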
In a further improvement, step S5 specifically includes:
s51: placing the mobile robot at any position in an indoor positioning area, and simultaneously acquiring WiFi and image data of the current position;
s52: computing a WiFi position from the WiFi data of the point to be located and the WiFi Gaussian model, establishing a constraint range based on the WiFi positioning result, and matching the image of the point against the keyframes within the constraint range in the visual SLAM map; if the match succeeds, visual positioning is performed and its result is adopted; if the match fails, the WiFi positioning result is adopted.
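The coarse-to-fine decision of s52 can be expressed as a small routine. Here `wifi_model` and `match_fn` are hypothetical stand-ins for the Gaussian-model lookup and the keyframe matcher (e.g. ORB features with RANSAC verification), and the 2 m constraint radius is illustrative.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def locate(wifi_fp, image, wifi_model, keyframes, match_fn, radius=2.0):
    """Coarse-to-fine fusion: WiFi gives a coarse fix that constrains
    which keyframes visual matching must search; if a keyframe matches,
    the visual result wins, otherwise fall back to the WiFi fix."""
    wifi_pos = wifi_model(wifi_fp)                 # coarse (x, y)
    candidates = [kf for kf in keyframes
                  if dist(kf["pos"], wifi_pos) <= radius]
    match = match_fn(image, candidates)            # None if no match
    if match is not None:
        return match["pos"], "visual"
    return wifi_pos, "wifi"
```

Restricting the keyframe search to the WiFi-constrained neighbourhood is what keeps the visual step cheap enough for real-time use.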
In a further improvement, the step S6 includes the following steps:
The mobile robot moves to the next position and simultaneously collects WiFi and image data. If visual positioning succeeded at the previous moment, the rotation and translation matrices between the previous and current images are computed to update the position; if visual positioning failed at the previous moment, the position is updated from the WiFi positioning result.
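Once visual matching recovers the relative rotation and translation between consecutive frames, updating the position amounts to composing that motion onto the previous pose. A 2D sketch under that assumption (the patent's rotation/translation matrices would be the 3D analogue):

```python
import math

def update_pose(prev_pose, dtheta, t):
    """Compose a frame-relative motion (rotation dtheta, translation t,
    both expressed in the robot's previous frame) onto pose (x, y, theta)."""
    x, y, theta = prev_pose
    # Rotate the body-frame translation into the world frame.
    wx = t[0] * math.cos(theta) - t[1] * math.sin(theta)
    wy = t[0] * math.sin(theta) + t[1] * math.cos(theta)
    return (x + wx, y + wy, theta + dtheta)
```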
Compared with the prior art, the SLAM method based on WiFi and visual fusion builds a Gaussian mixture model from the WiFi data to turn discrete samples into a continuous WiFi-SLAM map, stitches the image data into a visual SLAM map, and uses a coupled decision scheme in the positioning and position-update stages. The method effectively fuses WiFi and visual information and significantly improves positioning robustness and real-time performance.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention provides an indoor positioning method based on WiFi and visual fusion, comprising the following steps:
S1: setting a closed-loop track in an indoor environment and setting a starting point.
A traversable path is established in the indoor positioning area, along with a starting point. Step S1 specifically comprises:
arranging a closed-loop track in the indoor positioning area such that the acquisition process captures as much indoor information as possible, and setting a starting point on the track.
S2: the mobile robot simultaneously acquires WiFi data and image data on the set track.
In this embodiment, a closed-loop track is set in the indoor environment; at each step the mobile robot moves a fixed distance and acquires WiFi and image data.
The step S2 specifically includes:
s21: the mobile robot starts from a starting point and moves along a track;
S22: during the movement, the WiFi data, image data, and relative position information of each move are collected simultaneously at every step.
S3: and preprocessing the collected WiFi data, and constructing an indoor WiFi-SLAM map.
The discretely collected WiFi data are converted into a continuous representation of the indoor environment, so that the WiFi signal at points that were never surveyed can be estimated from the continuous model, enabling positioning at any indoor location.
The step S3 specifically includes:
s31: the method comprises the following steps of carrying out relevant preprocessing on the collected WiFi data, such as bad value elimination, AP screening and the like, and improving the quality of the WiFi data, wherein the AP screening mode comprises but is not limited to: screening based on information entropy, screening based on ambient signal strength, and the like;
s32: and establishing a Gaussian mixture model based on the collected WiFi data and the relative position information, and constructing the indoor continuous WiFi-SLAM map by the discrete WiFi data through the model.
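The idea of s32 — turning discrete survey samples into a continuous signal map that can be queried at any position — can be illustrated with a Gaussian-weighted interpolator. This is a simplified stand-in for the patent's Gaussian mixture model, and the kernel width `sigma` is an assumed tuning parameter.

```python
import math

def fit_gaussian_map(samples, sigma=1.0):
    """samples: list of ((x, y), {ap: rss}). Returns a predictor that
    estimates the expected RSS of an AP at an arbitrary position as a
    Gaussian-weighted average of the discrete survey points."""
    def predict(pos, ap):
        num = den = 0.0
        for (sx, sy), fp in samples:
            if ap not in fp:
                continue
            w = math.exp(-((pos[0] - sx) ** 2 + (pos[1] - sy) ** 2)
                         / (2 * sigma ** 2))
            num += w * fp[ap]
            den += w
        return num / den if den else None  # None: AP never observed
    return predict
```

Querying this predictor at an unsurveyed point is what allows the WiFi map to localize positions between the discrete collection points.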
S4: and preprocessing the acquired image data to construct a visual SLAM map.
The discrete image data are serialized, and the acquired images are stitched to construct a continuous map.
The step S4 includes the following steps:
s41: performing relevant preprocessing on the collected image data, such as rejecting bad images and removing duplicate images of the same position, to improve image data quality;
s42: stitching the images based on the acquired image data and relative position information to construct the visual SLAM map.
S5: the mobile robot is placed at any indoor position, WiFi and image data at the current moment are collected at the same time, and position location is achieved.
WiFi and vision are fused for localization: WiFi positioning first constrains the search range of visual positioning, reducing the computational cost of visual matching and improving real-time performance. In visually feature-dense regions, visual positioning is more accurate; in visual blind spots or feature-sparse regions, however, its performance degrades. WiFi positioning accuracy is relatively stable and is unaffected by visual aliasing or illumination changes, so when visual positioning performs poorly, WiFi positioning is used instead.
The step S5 includes the following steps:
s51: placing the mobile robot at any position in an indoor positioning area, and simultaneously acquiring WiFi and image data of the current position;
s52: computing a WiFi position from the WiFi data of the point to be located and the WiFi Gaussian model, establishing a constraint range based on the WiFi positioning result, and matching the image of the point against the keyframes within the constraint range in the visual SLAM map; if the match succeeds, visual positioning is performed and its result is adopted; if the match fails, the WiFi positioning result is adopted.
S6: the mobile robot moves the position and collects WiFi and image data at the next moment, and the position updating is realized by carrying out the steps;
The mobile robot moves to the next moment and simultaneously acquires WiFi data and image data. If the visual positioning at the previous moment succeeded, the two images at the previous and current moments are used to compute rotation and translation matrices, from which the position update is obtained; if the positioning at the previous moment failed, the position update is obtained as in step S5.
While the foregoing is directed to embodiments of the present invention, it will be understood by those skilled in the art that various changes may be made without departing from the spirit and scope of the invention.
Claims (5)
1. A SLAM method based on WiFi and visual fusion, characterized by comprising the following steps:
s1: setting a motion track of the mobile robot in an indoor environment and setting a starting point;
s2: the mobile robot simultaneously collects WiFi data, image data and relative position information on the set motion track;
s3: preprocessing the collected WiFi data and constructing an indoor WiFi-SLAM map;
s31: the WiFi data preprocessing comprises bad-value elimination and AP screening to improve WiFi data quality, the AP screening methods including, but not limited to, screening based on information entropy and screening based on ambient signal strength;
s32: establishing a Gaussian mixture model based on the collected WiFi data and relative position information, and constructing an indoor continuous WiFi-SLAM map from the discrete WiFi data through the Gaussian mixture model;
s4: preprocessing the acquired image data to construct a visual SLAM map;
s5: placing the mobile robot at any indoor position and simultaneously acquiring WiFi and image data at the current moment to determine its position;
s51: placing the mobile robot at any position in the indoor positioning area and simultaneously acquiring WiFi and image data of the current position;
s52: computing a WiFi position from the WiFi data of the point to be located and the WiFi Gaussian model, establishing a constraint range based on the WiFi positioning result, and matching the image of the point against the keyframes within the constraint range in the visual SLAM map; if the match succeeds, visual positioning is performed and its result is adopted; if the match fails, the WiFi positioning result is adopted;
s6: the mobile robot moves and acquires WiFi and image data at the next moment to update the position.
2. The SLAM method based on WiFi and visual fusion of claim 1, wherein the step S1 comprises the following steps:
A motion track is arranged in the indoor positioning area; the track is a closed loop that allows the mobile robot to collect image data covering the whole indoor environment, and a starting point is set on it.
3. The SLAM method based on WiFi and visual fusion of claim 2, wherein the step S2 comprises the following steps:
s21: the mobile robot starts from a starting point and moves along a track;
S22: during the movement, the WiFi data, image data, and relative position information of each move are collected simultaneously at every step.
4. The SLAM method based on WiFi and visual fusion of claim 3, wherein the step S4 specifically comprises:
s41: carrying out image data preprocessing on the acquired image data, wherein the image data preprocessing comprises the elimination of bad images and the elimination of images with the same position so as to improve the quality of the image data;
s42: and splicing the images based on the acquired image data and the relative position information so as to construct a visual SLAM map.
5. The SLAM method based on WiFi and visual fusion of claim 1, wherein the step S6 comprises the following steps:
The mobile robot moves to the next moment and simultaneously collects WiFi data and image data; if the visual positioning at the previous moment succeeded, the two images at the previous and current moments are used to compute rotation and translation matrices, from which the position update is obtained; if the visual positioning at the previous moment failed, the position update is obtained from the WiFi positioning result.
Priority Applications (1)
- CN202110497977.9A — priority/filing date 2021-05-08 — CN113490146B — SLAM method based on WiFi and visual fusion
Publications (2)
- CN113490146A — published 2021-10-08
- CN113490146B — granted 2022-04-22
Family
- ID: 77932794
Family Applications (1)
- CN202110497977.9A — CN113490146B — filed 2021-05-08 — Active
Country Status (1)
- CN — CN113490146B
Families Citing this family (1)
- CN114025320A — priority 2021-11-08, published 2022-02-08 — 易枭零部件科技(襄阳)有限公司 — Indoor positioning method based on 5G signal
Citations (2)
- CN108680177A — priority 2018-05-31, published 2018-10-19 — 安徽工程大学 — Synchronous superposition method and device based on rodent models
- CN112230243A — priority 2020-10-28, published 2021-01-15 — 西南科技大学 — Indoor map construction method for mobile robot
Family Cites Families (5)
- JP2012064131A — priority 2010-09-17, published 2012-03-29 — Tokyo Institute Of Technology — Map generating device, map generation method, movement method of mobile, and robot device
- US11187536B2 — priority 2018-01-12, published 2021-11-30 — The Trustees Of The University Of Pennsylvania — Probabilistic data association for simultaneous localization and mapping
- CN110856112B — priority 2019-11-14, published 2021-06-18 — 深圳先进技术研究院 — Crowd-sourcing perception multi-source information fusion indoor positioning method and system
- CN112325883B — priority 2020-10-19, published 2022-06-21 — 湖南大学 — Indoor positioning method for mobile robot with WiFi and visual multi-source integration
- CN112712107B — priority 2020-12-10, published 2022-06-28 — 浙江大学 — Optimization-based vision and laser SLAM fusion positioning method
Also Published As
- CN113490146A — published 2021-10-08
Legal Events
- PB01 — Publication
- SE01 — Entry into force of request for substantive examination
- GR01 — Patent grant