CN111623765B - Indoor positioning method and system based on multi-mode data - Google Patents

Indoor positioning method and system based on multi-mode data

Info

Publication number
CN111623765B
CN111623765B CN202010420793.8A
Authority
CN
China
Prior art keywords
indoor positioning
positioning result
particle
indoor
computer vision
Prior art date
Legal status
Active
Application number
CN202010420793.8A
Other languages
Chinese (zh)
Other versions
CN111623765A (en)
Inventor
杨铮
陈亨杰
徐京傲
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN202010420793.8A
Publication of CN111623765A
Application granted
Publication of CN111623765B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C21/20: Instruments for performing navigational calculations
    • G01C21/206: Instruments for performing navigational calculations specially adapted for indoor navigation

Abstract

The embodiment of the invention provides an indoor positioning method and system based on multi-modal data, wherein the method comprises the following steps: respectively acquiring an indoor positioning result of an inertial sensor, an indoor positioning result of a wireless signal and an indoor positioning result of computer vision; and fusing the three results based on a preset particle filter algorithm to obtain a multi-modal data fusion indoor positioning result. On the basis of traditional indoor wireless positioning technology, the embodiment of the invention integrates the advantages of video and inertial sensor data and performs the data fusion calculation through a particle filter, thereby realizing high-precision, robust, real-time indoor positioning.

Description

Indoor positioning method and system based on multi-mode data
Technical Field
The invention relates to the technical field of wireless indoor positioning, in particular to an indoor positioning method and system based on multi-mode data.
Background
With the rapid development of Internet of Things technology and of various applications based on indoor location, the demand for high-precision, real-time indoor positioning services has become ever stronger. Over the past decade, numerous indoor positioning technologies based on Wi-Fi, Bluetooth, RFID, sound signals, computer vision, magnetic field signals, inertial sensors and the like have been developed. In the course of this development, however, each of these technologies has gradually exposed its own advantages and bottlenecks. In recent years, work on fusing multi-modal data for positioning has been continuously proposed; its core idea is to let the technologies complement one another, compensating for the shortcomings of any single technology with the others.
Indoor positioning technologies based on signals such as Wi-Fi and Bluetooth mainly infer position by analyzing signal characteristics. However, the signal characteristics that work best for positioning, such as Channel State Information (CSI), cannot be conveniently measured by commercial smartphones. At present, commercial smartphones mainly support signal strength measurement, so wireless signal fingerprinting has become the core method of wireless-signal-based indoor positioning. Although Wi-Fi fingerprint positioning algorithms achieve good results, current positioning systems generally face two important challenges: because the indoor environment and the communication environment change constantly, the fingerprint, i.e., the signal strength, is unstable over time, which introduces large positioning errors; and due to the complexity of the indoor environment and the multipath effect, fingerprints collected at two widely separated positions may be similar, reducing the spatial distinguishability of fingerprints, which can cause mismatches over large distances and reduce the accuracy of the positioning system.
Indoor positioning technology based on inertial sensors uses the micro-electromechanical sensors integrated in a mobile terminal to detect the user's motion pattern and perform Pedestrian Dead Reckoning (PDR), thereby obtaining the user's indoor movement trajectory. Such methods have a clear advantage in relative positioning accuracy, but the user's absolute position on the map must be determined by some other method, which limits their adoption.
Indoor positioning methods based on image recognition depend on the image recognition algorithm and the image matching technology they employ. Commonly used image recognition algorithms can help the computer identify individual persons in an image, enabling active or passive positioning of users. However, such methods typically require long running times, and the passive nature of the positioning also makes it difficult to uniquely determine a user when multiple users are recognized.
Therefore, there is a need for an indoor positioning method and system based on multi-modal data to solve the above problems.
Disclosure of Invention
Aiming at the problems in the prior art, the embodiment of the invention provides an indoor positioning method and system based on multi-mode data.
In a first aspect, an embodiment of the present invention provides an indoor positioning method based on multimodal data, including:
respectively acquiring an indoor positioning result of an inertial sensor, an indoor positioning result of a wireless signal and an indoor positioning result of a computer vision;
and fusing the inertial sensor indoor positioning result, the wireless signal indoor positioning result and the computer vision indoor positioning result based on a preset particle filtering algorithm to obtain a multi-mode data fusion indoor positioning result.
Further, the respectively obtaining an indoor positioning result of an inertial sensor, an indoor positioning result of a wireless signal and an indoor positioning result of a computer vision comprises:
calculating and acquiring the indoor positioning result of the wireless signal based on the distance attenuation of the signal intensity;
based on a machine learning algorithm, calculating and acquiring a computer vision indoor positioning result;
and calculating and acquiring an indoor positioning result of the inertial sensor based on the moving direction and the length measured by the inertial sensor.
Further, the fusing the inertial sensor indoor positioning result, the wireless signal indoor positioning result and the computer vision indoor positioning result based on a preset particle filtering algorithm to obtain a multi-mode data fusion indoor positioning result, including:
based on a preset particle filter algorithm, performing importance sampling to obtain a plurality of particles;
calculating a particle weight of the particle based on the wireless signal indoor positioning result and the computer vision indoor positioning result;
calculating a particle transfer condition of the particle based on the inertial sensor indoor positioning result;
performing iterative resampling until a preset termination condition is reached;
and taking the weighted average of all the particles as the indoor positioning result of the multi-modal data fusion.
Further, the weight calculation formula of the particles is as follows:
w_i = w_i * exp(-Δd^2 / (2θ^2));
wherein, when positioning is based on the wireless signal, Δd is the difference between the real distance and the estimated distance from the particle to the wireless access point (AP); when positioning is based on computer vision, Δd is the distance between the particle and the nearest video point, where a video point is the physical position of an indoor person obtained by computer vision indoor positioning; θ is a preset fixed value, the initial weight of each particle is 1/N, and N is the number of particles.
Further, the calculating a particle transfer condition of the particle based on the inertial sensor indoor positioning result further comprises:
white noise is added to increase the representativeness of the particles when the particle transfer is performed.
Further, the method further comprises:
resolving the relative pose between two indoor cameras based on the structure-from-motion (SfM) technique;
rotating one camera according to the relative pose to construct an equivalent picture;
and acquiring the real-world positions of pixel points on the plane map based on the binocular stereo vision (BSV) algorithm and the equivalent picture, so as to construct an electronic map for data acquisition.
In a second aspect, an embodiment of the present invention provides an indoor positioning system based on multimodal data, including:
the acquisition module is used for respectively acquiring an indoor positioning result of the inertial sensor, an indoor positioning result of the wireless signal and an indoor positioning result of the computer vision;
and the fusion positioning module is used for fusing the indoor positioning result of the inertial sensor, the indoor positioning result of the wireless signal and the indoor positioning result of the computer vision based on a preset particle filter algorithm to obtain a multi-mode data fusion indoor positioning result.
In a third aspect, an embodiment of the present invention provides an electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method provided in the first aspect when executing the program.
In a fourth aspect, an embodiment of the present invention provides a non-transitory computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of the method as provided in the first aspect.
According to the indoor positioning method and system based on the multi-mode data, provided by the embodiment of the invention, the advantages of video and inertial sensor data are integrated on the basis of the traditional indoor wireless positioning technology, and the fusion calculation of the data is carried out through the particle filter, so that the high-precision robust real-time indoor positioning is realized.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a schematic flow chart of an indoor positioning method based on multi-modal data according to an embodiment of the present invention;
FIG. 2 is a CDF plot of the positioning errors of different positioning methods according to an embodiment of the present invention;
fig. 3 is a schematic diagram of SfM calibration according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an indoor positioning system based on multi-modal data according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flow chart of an indoor positioning method based on multi-modal data according to an embodiment of the present invention, and as shown in fig. 1, an embodiment of the present invention provides an indoor positioning method based on multi-modal data, including:
step 101, respectively obtaining an indoor positioning result of an inertial sensor, an indoor positioning result of a wireless signal and an indoor positioning result of computer vision;
and 102, fusing the indoor positioning result of the inertial sensor, the indoor positioning result of the wireless signal and the indoor positioning result of the computer vision based on a preset particle filter algorithm to obtain a multi-mode data fusion indoor positioning result.
In the embodiment of the present invention, step 101 utilizes three positioning technologies simultaneously to obtain their respective indoor positioning results: computer vision can give positioning results accurately and robustly; inertial-sensor-based positioning can accurately depict the user's movement trajectory and performs relative positioning accurately and robustly; and wireless-signal-based positioning can give the approximate position of a user in a room. The inertial sensor data and the wireless signal positioning are obtained through each user's smart device, so that individual users can be distinguished at the hardware level.
Further, regarding step 102, FIG. 2 is a CDF plot of the positioning errors of the different positioning methods provided by the embodiment of the present invention. As shown in FIG. 2, each of the three positioning methods has its own problems; the embodiment of the present invention therefore uses a particle filter to fuse the multi-modal data, which can efficiently fuse the data of the three modalities and thereby achieve efficient, accurate and robust positioning.
According to the indoor positioning method based on the multi-mode data, provided by the embodiment of the invention, the advantages of video and inertial sensor data are fused on the basis of the traditional indoor wireless positioning technology, and the fusion calculation of the data is carried out through the particle filter, so that the real-time indoor positioning with high precision and robustness is realized.
On the basis of the above embodiment, the respectively obtaining an indoor positioning result of an inertial sensor, an indoor positioning result of a wireless signal, and an indoor positioning result of a computer vision includes:
calculating and acquiring the indoor positioning result of the wireless signal based on the distance attenuation of the signal intensity;
based on a machine learning algorithm, calculating and acquiring a computer vision indoor positioning result;
and calculating and acquiring an indoor positioning result of the inertial sensor based on the moving direction and the length measured by the inertial sensor.
As can be seen from the above embodiment, the embodiment of the present invention needs to obtain the positioning results of the three indoor positioning approaches simultaneously.
Specifically, in terms of wireless signals, the embodiment of the present invention selects a positioning method based on signal strength as an input of the wireless signals:
[distance-attenuation formula of signal strength, reproduced as an image in the original]
based on the distance attenuation formula of the signal intensity, the distance between the user and the wireless access point AP can be reversely deduced from the signal intensity received by the user, and the premise is that the physical position of each AP in the map is known in advance.
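The attenuation formula itself is reproduced only as an image in the original. As a minimal sketch of what this inversion could look like, assuming the standard log-distance path-loss model (the reference strength `rssi_at_1m` and path-loss exponent `n` are illustrative values, not taken from the patent):

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, n=2.5):
    """Invert the log-distance path-loss model
    RSSI(d) = RSSI(1 m) - 10*n*log10(d)
    to estimate the distance between the user and an AP from a received RSSI."""
    return 10.0 ** ((rssi_at_1m - rssi_dbm) / (10.0 * n))

# Example: under these illustrative parameters, a -65 dBm reading maps to 10 m.
print(rssi_to_distance(-65.0))
```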
In the aspect of computer vision, the embodiment of the invention uses a machine learning method: with the indoor environment cameras, indoor users are located by Mask R-CNN, and the obtained results are projected onto a two-dimensional physical plane through spatial projection and similar methods. It should be noted that the machine learning method requires a certain computation time, so this data is updated once every 2 to 3 seconds, which is sufficient for updating indoor user location information.
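A minimal sketch of this projection step, assuming the detector output is a person bounding box and that a pixel-to-floor homography is available (the role played by the projection matrix P solved during map construction below); taking the bottom-center of the box as the foot point is an illustrative heuristic, not specified in the patent:

```python
import numpy as np

def detection_to_floor(bbox, H):
    """Project one person detection onto the 2-D physical plane.

    bbox: (x1, y1, x2, y2) box from a detector such as Mask R-CNN.
    H: 3x3 pixel-to-floor homography.
    """
    foot = np.array([(bbox[0] + bbox[2]) / 2.0, bbox[3], 1.0])  # bottom-center, homogeneous
    w = H @ foot
    return w[:2] / w[2]  # dehomogenize to (x, y) floor coordinates
```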
On the basis of the above embodiment, the fusing, based on a preset particle filtering algorithm, the inertial sensor indoor positioning result, the wireless signal indoor positioning result, and the computer vision indoor positioning result to obtain a multi-modal data fusion indoor positioning result includes:
based on a preset particle filter algorithm, performing importance sampling to obtain a plurality of particles;
calculating a particle weight for the particle based on the wireless signal indoor positioning result and the computer vision indoor positioning result;
calculating a particle transfer condition of the particle based on the inertial sensor indoor positioning result;
performing iterative resampling until a preset termination condition is reached;
and taking the weighted average of all the particles as the indoor positioning result of the multi-modal data fusion.
In the embodiment of the invention, the wireless signal indoor positioning result and the computer vision indoor positioning result serve as the main basis for calculating the particle weights. When the positioning algorithm starts, the particles are scattered randomly, and the particle filter iterates once every fixed time period (e.g., every 500 ms). In each iteration, a weight must be calculated for each particle. Before the weight is calculated, the distance between the particle and each AP is computed from the particle's physical position, and the distance between the particle and the nearest user point projected by computer vision is likewise derived.
Further, the distance errors enter the weights through the following rule: for each AP whose received signal strength is greater than a threshold, an estimated distance is computed from the signal strength attenuation formula, and the real distance between the particle and the AP is computed from the physical positions of the particle and the AP; the difference between the two is Δd. On the other hand, for each particle, the nearest computer vision point is found, and the distance between the particle and this nearest video point is also taken as a Δd, where a video point is the physical location of an indoor person located by the machine learning method. The initial weight of each particle is 1/N (N is the number of particles), and for each Δd the particle's weight is multiplied by an error factor, expressed as follows:
w_i = w_i * exp(-Δd^2 / (2θ^2));
wherein, when positioning is based on the wireless signal, Δd is the difference between the real distance and the estimated distance from the particle to the wireless access point (AP); when positioning is based on computer vision, Δd is the distance between the particle and the nearest video point, where a video point is the physical position of an indoor person obtained by computer vision indoor positioning; θ is a preset fixed value, set to 0.9 in this embodiment; the initial weight of each particle is 1/N, with N the number of particles. The particle weights are thus determined by this method.
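Written as code, the weight update is a direct transcription of the stated formula, applied once per observed Δd (a sketch; the set of Δd values per particle comes from the APs above threshold plus the nearest video point):

```python
import math

def update_particle_weight(weight, delta_ds, theta=0.9):
    """Multiply the particle weight by the error factor exp(-Δd^2 / (2θ^2))
    for each observed Δd; θ = 0.9 as stated in this embodiment."""
    for dd in delta_ds:
        weight *= math.exp(-(dd ** 2) / (2.0 * theta ** 2))
    return weight
```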
Further, the iterative process of the particle filter involves particle transfer. The user's movement direction and step length are calculated from the inertial sensor and used as the reference for particle transfer; the particles are then moved in the electronic map according to this direction and length.
On the basis of the foregoing embodiment, the calculating a particle transfer condition of the particle based on the inertial sensor indoor positioning result further comprises:
white noise is added to increase the representativeness of the particles when the particle transfer is performed.
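A sketch of this transfer step; the noise scale `noise_std` is an assumption, since the patent does not quantify the white noise:

```python
import numpy as np

def transfer_particles(positions, step_length, heading, noise_std=0.1):
    """Shift every particle by the PDR step (direction and length measured by
    the inertial sensor) plus additive white Gaussian noise."""
    step = step_length * np.array([np.cos(heading), np.sin(heading)])
    noise = np.random.normal(0.0, noise_std, size=positions.shape)
    return positions + step + noise  # positions: (N, 2) array of particle coordinates
```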
Finally, the particles are continuously eliminated and resampled under the map constraints contained in the electronic map, and the iteration proceeds continuously. In the iterative process, the weighted average of all particles is taken as the positioning result. In one embodiment of the present invention, the specific steps are as follows:
Step S1: initialize with k = 0 and draw N samples from the initial distribution:
x_0^(i) ~ p(x_0), i = 1, ..., N;
set all the corresponding weights to:
w_0^(i) = 1/N, i = 1, ..., N;
then let k = 1.
Step S2: begin importance sampling. For all particles, perform particle transfer according to the motion information, obtaining the N samples at time k:
x_k^(i), i = 1, ..., N.
Then calculate the sample weights: compare all known wireless data and computer vision data to obtain the corresponding series of Δd values, and modify the weight of each particle to:
w_k^(i) = w_{k-1}^(i) * Π_j exp(-Δd_j^2 / (2θ^2));
subsequently, normalize:
w_k^(i) = w_k^(i) / Σ_{j=1}^{N} w_k^(j).
Step S3: judge the number of effective particles; if it is insufficient, execute step S4 to resample; otherwise, execute step S5 to output the result.
Step S4: resampling. Using the weights as reference, draw a new set of particles
x_k^(i*), i = 1, ..., N
from the particle set, where the probability of drawing each particle satisfies
P(x_k^(i*) = x_k^(j)) = w_k^(j).
After the extraction is finished, replace the original particle set with the new one and initialize the weights; then re-estimate the particle weights according to the current information and normalize.
Step S5: output the result, including the particle states and weights. The resulting state is the weighted average position of the particles in the particle set.
Step S6: when the measurement at the next time instant arrives, set k = k + 1 and return to step S2; otherwise, end.
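Steps S3 to S5 can be sketched as follows; the effective-sample-size criterion N_eff = 1/Σw² used for step S3 is a common choice and an assumption here, since the patent only says to "judge the number of effective particles":

```python
import numpy as np

def effective_particle_count(weights):
    """Step S3 criterion (assumed): N_eff = 1 / sum(w_i^2)."""
    return 1.0 / np.sum(weights ** 2)

def resample(positions, weights):
    """Step S4: draw N particles with probability equal to their (normalized)
    weights, then reset all weights to 1/N."""
    n = len(weights)
    idx = np.random.choice(n, size=n, p=weights)
    return positions[idx], np.full(n, 1.0 / n)

def estimate_position(positions, weights):
    """Step S5: the output state is the weighted average particle position."""
    return np.average(positions, axis=0, weights=weights)
```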
On the basis of the above embodiment, the method further includes:
resolving the relative pose between two indoor cameras based on the structure-from-motion (SfM) technique;
rotating one camera according to the relative pose to construct an equivalent picture;
and acquiring the real-world positions of pixel points on the plane map based on the binocular stereo vision (BSV) algorithm and the equivalent picture, so as to construct an electronic map for data acquisition.
In the embodiment of the invention, data acquisition relies on the electronic map. Specifically, the invention calculates the relative pose between the two cameras using the SfM technique and then constructs a virtual picture using the imaging principle. FIG. 3 is the SfM calibration schematic diagram provided in the embodiment of the present invention. As shown in FIG. 3, the SfM algorithm first extracts and matches feature points in the two pictures, and then calculates the relative pose between the two cameras from the feature-point matches by solving a PnP (Perspective-n-Point) problem.
Further, the calculated relative pose is recorded as the transformation matrix T_{L1<-L2}, where L1 and L2 respectively denote the reference coordinate systems of environment camera 1 and camera 2. On this basis, the transformation matrix from the actual camera L2 to the virtual camera L2' can be obtained as:
T_{L2'<-L2} = T_{L2'<-L1} * T_{L1<-L2};
wherein, because the spatial positions of the two cameras are known, T_{L2'<-L1} is a simple translation transformation that can be obtained directly from the distance between the cameras.
Once the transformation matrix T_{L2'<-L2} from the actual camera to the virtual camera is obtained, an equivalent picture can be calculated according to the imaging principle; the equivalent picture can be regarded as having been taken by a camera L2' whose optical axis is parallel to that of L1. Specifically, in the real camera L2, an object point P corresponds to the ray
(u2, v2, f)^T,
and in the virtual camera L2' to the ray
(u2', v2', f)^T,
where f is the focal length of the camera, (u2, v2) is the pixel position of the object point P in the original picture, and (u2', v2') is its pixel position in the equivalent picture. According to the imaging principle, the two rays are collinear once the rotation R of T_{L2'<-L2} is applied, so that:
(u2', v2', f)^T = λ R (u2, v2, f)^T.
Expanding this equation gives three unknowns (u2', v2' and λ) and three equations, so the pixel position of each original pixel point in the equivalent picture can be calculated. Performing the same operation on all pixel points yields the equivalent picture taken by the virtual camera with parallel optical axes.
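A per-pixel version of this computation might look as follows (a sketch; in practice the whole image would be warped, e.g. by inverse mapping, and pixels leaving the frame handled explicitly):

```python
import numpy as np

def warp_pixel(u2, v2, f, R):
    """Map a pixel of the real camera L2 into the virtual camera L2'.

    Solves (u2', v2', f)^T = λ R (u2, v2, f)^T: rotate the ray, then choose
    λ so that the third component equals the focal length f."""
    ray = R @ np.array([u2, v2, f], dtype=float)
    lam = f / ray[2]
    return lam * ray[0], lam * ray[1]  # (u2', v2') in the equivalent picture
```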
Further, after the equivalent picture is obtained, the absolute position of each pixel point in three-dimensional space can be obtained from the parallax using a traditional binocular stereo vision algorithm. After the positions of all matched feature points on the two-dimensional plane map have been found, the projection matrix can be solved through an optimization problem:

minimize: Σ_{i=1}^{N} || p̂_i - p_i ||²,
where: p̂_i = P (u_i, v_i, 1)^T;

in which N is the number of feature points matched in the two pictures, (u_i, v_i) is the pixel position of feature point i in the picture, p̂_i is the position of the feature point on the two-dimensional plane obtained through the projection matrix, p_i is the real position of the feature point on the two-dimensional plane obtained by binocular stereo vision, and P is the projection matrix finally to be solved. The projection matrix associates pixel positions in the picture with two-dimensional absolute positions in the real world, so that the recognition results of computer vision also carry absolute positions, which is the prerequisite for multi-modal data fusion.
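A least-squares sketch of this optimization, under the simplifying assumption that P is an affine 2x3 matrix (the exact form of P appears only in the image placeholders of the original; a projective homography would instead be estimated, e.g., by DLT):

```python
import numpy as np

def fit_projection(pixels, floor_points):
    """Fit P minimizing Σ ||P (u_i, v_i, 1)^T - p_i||².

    pixels: (N, 2) feature-point pixel positions (u_i, v_i).
    floor_points: (N, 2) real 2-D positions from binocular stereo vision.
    """
    n = pixels.shape[0]
    A = np.hstack([pixels, np.ones((n, 1))])      # rows (u_i, v_i, 1)
    X, *_ = np.linalg.lstsq(A, floor_points, rcond=None)
    return X.T                                    # 2x3 projection matrix P
```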
Fig. 4 is a schematic structural diagram of an indoor positioning system based on multimodal data according to an embodiment of the present invention, and as shown in fig. 4, an indoor positioning system based on multimodal data according to an embodiment of the present invention includes an obtaining module 401 and a fusion positioning module 402, where the obtaining module 401 is configured to obtain an indoor positioning result of an inertial sensor, an indoor positioning result of a wireless signal, and an indoor positioning result of a computer vision, respectively; the fusion positioning module 402 is configured to fuse the inertial sensor indoor positioning result, the wireless signal indoor positioning result, and the computer vision indoor positioning result based on a preset particle filtering algorithm to obtain a multi-modal data fusion indoor positioning result.
According to the indoor positioning system based on the multi-mode data, the advantages of video and inertial sensor data are integrated on the basis of the traditional indoor wireless positioning technology, and the fusion calculation of the data is carried out through the particle filter, so that the real-time indoor positioning with high precision and robustness is realized.
The system provided by the embodiment of the present invention is used for executing the above method embodiments; for the specific flow and details, reference is made to the above embodiments, which are not repeated here.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, and referring to fig. 5, the electronic device may include: a processor (processor)501, a communication Interface (Communications Interface)502, a memory (memory)503, and a communication bus 504, wherein the processor 501, the communication Interface 502, and the memory 503 are configured to communicate with each other via the communication bus 504. The processor 501 may call logic instructions in the memory 503 to perform the following method: respectively acquiring an indoor positioning result of an inertial sensor, an indoor positioning result of a wireless signal and an indoor positioning result of a computer vision; and fusing the indoor positioning result of the inertial sensor, the indoor positioning result of the wireless signal and the indoor positioning result of the computer vision based on a preset particle filter algorithm to obtain a multi-mode data fusion indoor positioning result.
In addition, the logic instructions in the memory 503 may be implemented in the form of software functional units and stored in a computer readable storage medium when the logic instructions are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, and various media capable of storing program codes.
In another aspect, an embodiment of the present invention further provides a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program is implemented to perform the method for indoor positioning based on multi-modal data provided in the foregoing embodiments, for example, the method includes: respectively acquiring an indoor positioning result of an inertial sensor, an indoor positioning result of a wireless signal and an indoor positioning result of a computer vision; and fusing the indoor positioning result of the inertial sensor, the indoor positioning result of the wireless signal and the indoor positioning result of the computer vision based on a preset particle filter algorithm to obtain a multi-mode data fusion indoor positioning result.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (7)

1. An indoor positioning method based on multi-modal data, comprising:
respectively obtaining an indoor positioning result of an inertial sensor, an indoor positioning result of a wireless signal and an indoor positioning result of a computer vision;
fusing the indoor positioning result of the inertial sensor, the indoor positioning result of the wireless signal and the indoor positioning result of the computer vision based on a preset particle filter algorithm to obtain a multi-mode data fusion indoor positioning result;
the fusion of the inertial sensor indoor positioning result, the wireless signal indoor positioning result and the computer vision indoor positioning result based on a preset particle filter algorithm to obtain a multi-mode data fusion indoor positioning result comprises:
based on a preset particle filtering algorithm, importance sampling is carried out to obtain a plurality of particles;
calculating a particle weight for the particle based on the wireless signal indoor positioning result and the computer vision indoor positioning result;
calculating a particle transfer condition of the particle based on the inertial sensor indoor positioning result;
performing iterative resampling until a preset termination condition is reached;
taking the weighted average of all the particles as the indoor positioning result of the multi-modal data fusion;
when positioning is carried out based on the wireless signal, the weight calculation formula of the particles is as follows:
w_i = w_i * exp(-Δd^2 / (2θ^2));
wherein, based on wireless signal positioning, Δd is the distance difference between the real distance and the estimated distance between the particle and the wireless access point AP; θ is a preset fixed value, the initial weight of the particles is 1/N, and N is the number of the particles;
when indoor positioning is carried out based on computer vision, the weight calculation formula of the particles is as follows:
w_i = w_i * exp(-Δd^2 / (2θ^2));
wherein, based on computer vision indoor positioning, Δd is the distance between the particle and the nearest video point, and the video point is the physical position of an indoor person obtained based on computer vision indoor positioning.
2. The indoor positioning method based on multi-modal data as claimed in claim 1, wherein the respectively obtaining an inertial sensor indoor positioning result, a wireless signal indoor positioning result and a computer vision indoor positioning result comprises:
calculating and acquiring the indoor positioning result of the wireless signal based on the distance attenuation of the signal intensity;
based on a machine learning algorithm, calculating and acquiring a computer vision indoor positioning result;
and calculating to obtain the indoor positioning result of the inertial sensor based on the moving direction and the length measured by the inertial sensor.
3. The indoor positioning method based on multi-modal data according to claim 1, wherein the calculating a particle transfer condition of the particle based on the inertial sensor indoor positioning result further comprises:
white noise is added to increase the representativeness of the particles when the particle transfer is performed.
4. The indoor positioning method based on multi-modal data as claimed in claim 1, wherein the method further comprises:
resolving the relative pose between two indoor cameras based on the structure-from-motion (SfM) technique;
rotating one camera according to the relative pose to construct an equivalent picture;
and acquiring the real-world positions of pixel points on the plane map based on the binocular stereo vision (BSV) algorithm and the equivalent picture, so as to construct an electronic map for data acquisition.
5. An indoor positioning system based on multimodal data, comprising:
the acquisition module is used for respectively acquiring an indoor positioning result of the inertial sensor, an indoor positioning result of the wireless signal and an indoor positioning result of the computer vision;
the fusion positioning module is used for fusing the inertial sensor indoor positioning result, the wireless signal indoor positioning result and the computer vision indoor positioning result based on a preset particle filtering algorithm to obtain a multi-mode data fusion indoor positioning result;
the fusion positioning module is specifically configured to:
based on a preset particle filter algorithm, performing importance sampling to obtain a plurality of particles;
calculating a particle weight for the particle based on the wireless signal indoor positioning result and the computer vision indoor positioning result;
calculating a particle transfer condition of the particle based on the inertial sensor indoor positioning result;
performing iterative resampling until a preset termination condition is reached;
taking the weighted average of all the particles as the indoor positioning result of the multi-modal data fusion;
when positioning is carried out based on the wireless signal, the weight calculation formula of the particles is as follows:
w_i = w_i * exp(-Δd^2 / (2θ^2));
wherein, based on wireless signal positioning, Δd is the distance difference between the real distance and the estimated distance between the particle and the wireless access point AP; θ is a preset fixed value, the initial weight of the particles is 1/N, and N is the number of the particles;
when indoor positioning is carried out based on computer vision, the weight calculation formula of the particles is as follows:
w_i = w_i * exp(-Δd^2 / (2θ^2));
wherein, based on computer vision indoor positioning, Δd is the distance between the particle and the nearest video point, and the video point is the physical position of an indoor person obtained based on computer vision indoor positioning.
6. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, carries out the steps of the method for indoor positioning based on multimodal data as claimed in any one of claims 1 to 4.
7. A non-transitory computer readable storage medium, having stored thereon a computer program, wherein the computer program, when being executed by a processor, performs the steps of the method for indoor positioning based on multi-modal data as recited in any one of claims 1 to 4.
CN202010420793.8A 2020-05-18 2020-05-18 Indoor positioning method and system based on multi-mode data Active CN111623765B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010420793.8A CN111623765B (en) 2020-05-18 2020-05-18 Indoor positioning method and system based on multi-mode data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010420793.8A CN111623765B (en) 2020-05-18 2020-05-18 Indoor positioning method and system based on multi-mode data

Publications (2)

Publication Number Publication Date
CN111623765A CN111623765A (en) 2020-09-04
CN111623765B (en) 2022-07-01

Family

ID=72270495

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010420793.8A Active CN111623765B (en) 2020-05-18 2020-05-18 Indoor positioning method and system based on multi-mode data

Country Status (1)

Country Link
CN (1) CN111623765B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112711055B (en) * 2020-12-08 2024-03-19 重庆邮电大学 Indoor and outdoor seamless positioning system and method based on edge calculation
CN112284403B (en) * 2020-12-28 2021-09-24 深兰人工智能芯片研究院(江苏)有限公司 Positioning method, positioning device, electronic equipment and storage medium
CN112862818B (en) * 2021-03-17 2022-11-08 合肥工业大学 Underground parking lot vehicle positioning method combining inertial sensor and multi-fisheye camera
CN113949999B (en) * 2021-09-09 2024-01-30 之江实验室 Indoor positioning navigation equipment and method
CN113923596B (en) * 2021-11-23 2024-01-30 中国民用航空总局第二研究所 Indoor positioning method, device, equipment and medium
CN114910081B (en) * 2022-05-26 2023-03-10 阿波罗智联(北京)科技有限公司 Vehicle positioning method and device and electronic equipment

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2585852A1 (en) * 2010-06-25 2013-05-01 Trusted Positioning Inc. Moving platform ins range corrector (mpirc)
CN105222772B (en) * 2015-09-17 2018-03-16 泉州装备制造研究所 A kind of high-precision motion track detection system based on Multi-source Information Fusion
CN106123897B (en) * 2016-06-14 2019-05-03 中山大学 Indoor fusion and positioning method based on multiple features
CN106767791A (en) * 2017-01-13 2017-05-31 东南大学 A kind of inertia/visual combination air navigation aid using the CKF based on particle group optimizing
CN107339989A (en) * 2017-06-23 2017-11-10 江苏信息职业技术学院 A kind of pedestrian's indoor orientation method based on particle filter
CN107255476B (en) * 2017-07-06 2020-04-21 青岛海通胜行智能科技有限公司 Indoor positioning method and device based on inertial data and visual features
CN108632761B (en) * 2018-04-20 2020-03-17 西安交通大学 Indoor positioning method based on particle filter algorithm
CN109298389B (en) * 2018-08-29 2022-09-23 东南大学 Indoor pedestrian combination pose estimation method based on multi-particle swarm optimization
CN109164411B (en) * 2018-09-07 2023-07-11 中国矿业大学 Personnel positioning method based on multi-data fusion
CN110602647B (en) * 2019-09-11 2020-11-24 江南大学 Indoor fusion positioning method based on extended Kalman filtering and particle filtering

Also Published As

Publication number Publication date
CN111623765A (en) 2020-09-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant