CN114088095A - Three-dimensional indoor positioning method based on photodiode

Three-dimensional indoor positioning method based on photodiode

Info

Publication number
CN114088095A
CN114088095A
Authority
CN
China
Prior art keywords
light intensity
fingerprint
trajectory line
photodiode
layer space
Prior art date
Legal status
Granted
Application number
CN202111269921.4A
Other languages
Chinese (zh)
Other versions
CN114088095B (en)
Inventor
崔金强
牛冠冲
尉越
丁玉隆
宋伟伟
孙涛
Current Assignee
Peng Cheng Laboratory
Original Assignee
Peng Cheng Laboratory
Priority date
Filing date
Publication date
Application filed by Peng Cheng Laboratory filed Critical Peng Cheng Laboratory
Priority to CN202111269921.4A priority Critical patent/CN114088095B/en
Publication of CN114088095A publication Critical patent/CN114088095A/en
Application granted granted Critical
Publication of CN114088095B publication Critical patent/CN114088095B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a three-dimensional indoor positioning method based on a photodiode, which comprises the following steps: constructing an indoor single-layer space modular fingerprint library based on the photodiode; acquiring a trajectory line of a target object, and obtaining a candidate light intensity sequence set corresponding to the trajectory line according to the fingerprint points, the light intensity sequences corresponding to the fingerprint points and the trajectory line; and acquiring light intensity data corresponding to the trajectory line, and performing positioning calculation on the light intensity data and the candidate light intensity sequence set based on a dynamic time warping algorithm and a Kalman filtering algorithm to obtain the target position of the target object. An indoor single-layer space modular fingerprint library containing the light intensity sequences of the fingerprint points is constructed based on the photodiode, which reduces both the error and the computational complexity of matching the ambient light intensity against the fingerprints; a candidate light intensity sequence set corresponding to the input trajectory line is then matched in the fingerprint library; and finally the dynamic time warping algorithm and the Kalman filtering algorithm are used to further calibrate the positioning result, so that the error of a single fingerprint point is avoided and the positioning is more accurate.

Description

Three-dimensional indoor positioning method based on photodiode
Technical Field
The invention relates to the technical field of control, in particular to a three-dimensional indoor positioning method based on a photodiode.
Background
As the intelligence of unmanned aerial vehicle (UAV) systems continues to improve, their application prospects are broadening day by day in fields such as industrial-scene inspection, urban counter-terrorism reconnaissance, and traffic monitoring. However, current UAV use is mainly limited to outdoor scenes; for indoor scenes where a satellite navigation system cannot be used, positioning and navigation technology has not yet reached a mature application stage because of its limitations. The application requirements for UAVs in actual indoor scenes are nevertheless very broad, for example warehouse inspection, factory safety inspection, and indoor reconnaissance. Because no satellite navigation signal (such as GPS) is available for accurate positioning of the UAV, mainstream researchers currently use onboard sensors such as laser radars and vision cameras to realize indoor positioning through simultaneous localization and mapping, but the existing indoor positioning methods suffer from low positioning accuracy.
Thus, there is still a need for improvement and development of the prior art.
Disclosure of Invention
The present invention provides a three-dimensional indoor positioning method based on a photodiode, aiming at solving the problem of low positioning accuracy of the indoor positioning method in the prior art.
The technical scheme adopted by the invention for solving the problems is as follows:
in a first aspect, an embodiment of the present invention provides a three-dimensional indoor positioning method based on a photodiode, where the method includes:
constructing an indoor single-layer space modular fingerprint library based on the photodiodes; the fingerprint database comprises a plurality of fingerprint points and light intensity sequences corresponding to the fingerprint points;
acquiring a trajectory line of a target object, and obtaining a candidate light intensity sequence set corresponding to the trajectory line according to the fingerprint points, the light intensity sequences corresponding to the fingerprint points and the trajectory line;
and acquiring light intensity data corresponding to the trajectory line, and performing positioning calculation on the light intensity data and the candidate light intensity sequence set based on a dynamic time warping algorithm and a Kalman filtering algorithm to obtain the target position of the target object.
In one implementation, the photodiode-based construction of an indoor single-layer spatially modular fingerprint library includes:
constructing an indoor single-layer space module;
uniformly dividing the indoor single-layer space module to obtain a plurality of indoor single-layer space sub-modules;
extracting an indoor single-layer space sub-module from a plurality of indoor single-layer space sub-modules, measuring a plurality of light intensity numerical values corresponding to preset positions through a photodiode based on each preset position in the indoor single-layer space sub-modules, and obtaining a light intensity sequence corresponding to the preset positions according to the plurality of light intensity numerical values;
and obtaining an indoor single-layer space modularized fingerprint library according to the preset position and the light intensity sequence.
In one implementation, the extracting an indoor single-layer space sub-module from a plurality of indoor single-layer space sub-modules, based on each preset position in the indoor single-layer space sub-module, measuring a plurality of light intensity values corresponding to the preset position through a photodiode, and obtaining a light intensity sequence corresponding to the preset position according to the plurality of light intensity values includes:
extracting an indoor single-layer space sub-module from a plurality of indoor single-layer space sub-modules;
based on each preset position in the indoor single-layer space submodule, the photodiode generates a plurality of current values corresponding to the preset positions after receiving a plurality of lights;
converting the plurality of current values into a plurality of light intensity numerical values corresponding to the preset positions through a photodiode;
and forming a light intensity sequence corresponding to the preset position by the plurality of light intensity numerical values corresponding to the preset position.
In one implementation, the obtaining an indoor single-layer spatial modular fingerprint library according to the preset position and the light intensity sequence includes:
taking the preset positions as fingerprint points, and storing the fingerprint points and the light intensity sequences corresponding to the fingerprint points to obtain a unit fingerprint library corresponding to the indoor single-layer space sub-module;
and forming an indoor single-layer space modularized fingerprint library by using a plurality of unit fingerprint libraries.
In one implementation, the obtaining a trajectory line of a target object, and obtaining a set of candidate light intensity sequences corresponding to the trajectory line according to the fingerprint point, the light intensity sequence corresponding to the fingerprint point, and the trajectory line includes:
acquiring a trajectory line of a target object through an inertial measurement unit;
obtaining an approximate trajectory line according to the fingerprint points and the trajectory line;
and obtaining a candidate light intensity sequence set corresponding to the trajectory line according to the light intensity sequences corresponding to the approximate trajectory line and the fingerprint point.
In one implementation, the obtaining an approximate trajectory line from the fingerprint points and the trajectory line includes:
extracting a fingerprint point grid formed by the fingerprint points, wherein the fingerprint point grid is used for representing the intervals among the fingerprint points;
and based on the fingerprint point grids, performing approximate operation on the track points in the track line to obtain an approximate track line.
In an implementation manner, the approximating a trace point in the trace line based on the fingerprint point grid to obtain an approximated trace line includes:
and performing approximate operation on the track points in the track line and the fingerprint points in the fingerprint point grid to obtain an approximate track line.
In one implementation, the obtaining a set of candidate light intensity sequences corresponding to the trajectory line according to the light intensity sequences corresponding to the approximate trajectory line and the fingerprint point includes:
traversing and matching the approximate trajectory in the fingerprint database to obtain a plurality of candidate trajectories containing fingerprint points;
for each candidate trajectory line containing the fingerprint points, extracting a light intensity sequence corresponding to the fingerprint points in the candidate trajectory line to obtain a candidate light intensity sequence subset;
a subset of the plurality of candidate light intensity sequences is grouped into a set of candidate light intensity sequences.
In one implementation, the obtaining light intensity data corresponding to the trajectory line, and performing positioning calculation on the light intensity data and the candidate light intensity sequence set based on a dynamic time warping algorithm and a kalman filtering algorithm to obtain the target position of the target object includes:
acquiring light intensity data corresponding to a track point on the track line through a photodiode;
based on the dynamic time warping algorithm, the light intensity data and the candidate light intensity sequence set are subjected to positioning calculation to obtain a first prediction position;
obtaining a second predicted position according to the trajectory line;
and performing data processing on the first predicted position and the second predicted position based on a Kalman filtering algorithm to obtain a target position of the target object.
In one implementation, the performing a positioning calculation on the light intensity data and the candidate light intensity sequence set based on the dynamic time warping algorithm to obtain a first predicted position includes:
and matching the light intensity data with the candidate light intensity sequence set by adopting a dynamic time warping algorithm to obtain a first predicted position.
In one implementation, the deriving a second predicted position from the trajectory line includes:
and calculating the position of the trajectory line through an inertial measurement unit to obtain a second predicted position.
In one implementation, the performing data processing on the first predicted position and the second predicted position based on the Kalman filtering algorithm to obtain the target position of the target object includes:
And based on a Kalman filtering algorithm, fusing the first predicted position and the second predicted position to obtain the target position of the target object.
In a second aspect, an embodiment of the present invention further provides a three-dimensional indoor positioning device based on a photodiode, where the device includes:
the fingerprint library construction module is used for constructing an indoor single-layer space modularized fingerprint library based on the photodiode; the fingerprint database comprises a plurality of fingerprint points and light intensity sequences corresponding to the fingerprint points;
the candidate light intensity sequence set acquisition module is used for acquiring a trajectory line of a target object and acquiring a candidate light intensity sequence set corresponding to the trajectory line according to the fingerprint library and the trajectory line;
and the target position acquisition module of the target object is used for acquiring light intensity data corresponding to the trajectory line, and performing positioning calculation on the light intensity data and the candidate light intensity sequence set based on a dynamic time warping algorithm and a Kalman filtering algorithm to obtain the target position of the target object.
In a third aspect, an embodiment of the present invention further provides an intelligent terminal, including a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by one or more processors, and the one or more programs include instructions for performing the three-dimensional indoor positioning method based on a photodiode according to any one of the above-mentioned embodiments.
In a fourth aspect, embodiments of the present invention further provide a non-transitory computer-readable storage medium, where instructions of the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the three-dimensional indoor positioning method based on a photodiode as described in any one of the above.
The invention has the beneficial effects that: the embodiment of the invention firstly constructs an indoor single-layer space modularized fingerprint library based on the photodiode; the fingerprint database comprises a plurality of fingerprint points and light intensity sequences corresponding to the fingerprint points; then, acquiring a trajectory line of a target object, and obtaining a candidate light intensity sequence set corresponding to the trajectory line according to the fingerprint points, the light intensity sequences corresponding to the fingerprint points and the trajectory line; finally, light intensity data corresponding to the trajectory line are obtained, and the light intensity data and the candidate light intensity sequence set are subjected to positioning calculation based on a dynamic time warping algorithm and a Kalman filtering algorithm to obtain a target position of a target object; therefore, the indoor single-layer space modular fingerprint library containing the light intensity sequences of the fingerprint points is constructed based on the photodiode, the error of the matching of the environmental light intensity to the fingerprint and the calculation complexity are reduced, then the candidate light intensity sequence set corresponding to the input trajectory line is matched in the fingerprint library, and finally the positioning position is further calibrated by utilizing the dynamic time warping algorithm and the Kalman filtering algorithm, so that the error of the single fingerprint point is avoided, and the positioning is more accurate.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments described in the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flow chart of a three-dimensional indoor positioning method based on a photodiode according to an embodiment of the present invention.
Fig. 2 is a schematic flow chart of an implementation manner of a three-dimensional indoor positioning system based on a photodiode according to an embodiment of the present invention.
FIG. 3 is a hardware design provided by an embodiment of the present invention.
Fig. 4 is a schematic diagram of an implementation manner of modular floor modeling provided by the embodiment of the invention.
Fig. 5 is a schematic diagram of an implementation manner of the intra-module positioning experimental design according to the embodiment of the present invention.
Fig. 6 is a schematic diagram of an implementation manner of a light intensity fingerprint data set according to an embodiment of the present invention.
Fig. 7 is a schematic diagram of an implementation manner of DTW data matching according to an embodiment of the present invention.
Fig. 8 is a schematic diagram comparing two sets of data using DTW according to an embodiment of the present invention.
Fig. 9 is a schematic diagram of an implementation manner of the positioning and navigation experimental design provided in the embodiment of the present invention.
Fig. 10 is a schematic block diagram of a three-dimensional indoor positioning device based on a photodiode according to an embodiment of the present invention.
Fig. 11 is a schematic block diagram of an internal structure of an intelligent terminal according to an embodiment of the present invention.
Detailed Description
The invention discloses a three-dimensional indoor positioning method based on a photodiode, and in order to make the purpose, technical scheme and effect of the invention clearer and clearer, the invention is further described in detail below by referring to the attached drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As used herein, the singular forms "a", "an", "the" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In the prior art, outdoor positioning can rely on GPS; unlike the outdoor environment, the indoor environment cannot use satellite navigation signals for positioning an unmanned aerial vehicle, which makes indoor applications of unmanned aerial vehicles more challenging. The rapid development of related technologies in the unmanned aerial vehicle field provides new opportunities for such applications, and developing accurate three-dimensional indoor positioning and navigation will have an important influence on promoting the wide application of unmanned aerial vehicles in indoor environments. Traditional indoor positioning uses onboard sensors such as laser radars and vision cameras and realizes high-precision indoor positioning through simultaneous localization and mapping. However, existing laser radars cannot practically be carried on an unmanned aerial vehicle because of their large size and high cost. Although a vision camera is low in cost and small in size, its positioning accuracy is limited, so it cannot be used either. Meanwhile, indoor positioning based on the geomagnetic field has poor accuracy and cannot realize high-accuracy indoor positioning of an unmanned aerial vehicle. Using only an inertial measurement unit (IMU), while very low in cost, is also unusable because inertial accumulated errors make the position estimate drift quickly. In addition, the traditional fingerprint method performs positioning by collecting light intensity data at each coordinate point and then comparing it with the predicted data. The present method is an improvement on the fingerprint positioning method: the traditional fingerprint positioning method calculates the closest fingerprint point by sequentially comparing the features of each fingerprint point, which has a large error; in particular, when visible light is used as the fingerprint feature, the influence of the environment can cause fingerprint matching errors.
In order to solve the problems of the prior art, the embodiment provides a three-dimensional indoor positioning method based on a photodiode, and by the method, an indoor single-layer space modular fingerprint library containing a light intensity sequence of fingerprint points is constructed based on the photodiode, so that the error and the calculation complexity of the environment light intensity for fingerprint matching are reduced, then a candidate light intensity sequence set corresponding to an input trajectory line is matched in the fingerprint library, and finally, a positioning position is further calibrated by using a dynamic time warping algorithm and a kalman filtering algorithm, so that the error of a single fingerprint point is avoided, and the positioning is more accurate. When the method is specifically implemented, an indoor single-layer space modularized fingerprint library is constructed on the basis of the photodiodes; then, acquiring a trajectory line of a target object, and obtaining a candidate light intensity sequence set corresponding to the trajectory line according to the fingerprint points, the light intensity sequences corresponding to the fingerprint points and the trajectory line; and finally, acquiring light intensity data corresponding to the trajectory line, and performing positioning calculation on the light intensity data and the candidate light intensity sequence set based on a dynamic time warping algorithm and a Kalman filtering algorithm to obtain the target position of the target object.
Exemplary method
The embodiment provides a three-dimensional indoor positioning method based on a photodiode, and the method can be applied to a controlled intelligent terminal. As shown in fig. 1 and 2, the method includes:
s100, constructing an indoor single-layer space modularized fingerprint library based on a photodiode; the fingerprint database comprises a plurality of fingerprint points and light intensity sequences corresponding to the fingerprint points;
in this embodiment, as shown in fig. 3, 3D positioning and navigation of the drone are realized using visible light, which is light generated by the LED lamp. Fingerprinting is the process of fingerprinting a feature or features of a message or signal, usually signal strength. The fingerprint points are used to characterize the position of the Photodiode (PD) when testing light intensity. In practice, a photodiode can be placed at a preset position, and the light intensity of a plurality of LED lamps is sensed by the photodiode, so that a plurality of light intensity data are corresponding to one fingerprint point, that is, the fingerprint point corresponds to a light intensity sequence (VLIS).
In order to obtain a fingerprint library, the construction of the indoor single-layer space modularized fingerprint library based on the photodiode comprises the following steps:
s101, constructing an indoor single-layer space module;
s102, uniformly dividing the indoor single-layer space module to obtain a plurality of indoor single-layer space sub-modules;
s103, extracting an indoor single-layer space sub-module from the indoor single-layer space sub-modules, measuring a plurality of light intensity numerical values corresponding to preset positions through a photodiode based on each preset position in the indoor single-layer space sub-modules, and obtaining a light intensity sequence corresponding to the preset positions according to the light intensity numerical values;
and S104, obtaining an indoor single-layer space modularized fingerprint library according to the preset position and the light intensity sequence.
Specifically, in order to reduce the workload of fingerprint (light intensity sequence) measurement, as shown in fig. 4, and since building structures are regular shapes, the floor is modularized: an indoor single-layer space module is constructed and only one module fingerprint library is generated, so that the structure of the whole floor can be described by that single module fingerprint library. In addition, according to the symmetry of the indoor single-layer space module, the module is uniformly divided to obtain a plurality of indoor single-layer space sub-modules; only the light intensity at each preset position in one indoor single-layer space sub-module needs to be tested, and in this embodiment the light intensity of four LED lamps can be measured by the photodiode at each preset position. Correspondingly, extracting an indoor single-layer space sub-module from the plurality of indoor single-layer space sub-modules, measuring the light intensity values at each preset position through the photodiode, and obtaining the corresponding light intensity sequence includes the following steps: extracting an indoor single-layer space sub-module from the plurality of indoor single-layer space sub-modules; based on each preset position in the indoor single-layer space sub-module, generating, by the photodiode, a plurality of current values corresponding to the preset position after receiving the light of the individual lamps; converting the plurality of current values into a plurality of light intensity values corresponding to the preset position through the photodiode; and forming the light intensity sequence corresponding to the preset position from the plurality of light intensity values.
In practice, one indoor single-layer space sub-module is extracted from the plurality of indoor single-layer space sub-modules; this sub-module contains a plurality of preset positions. For each preset position in the indoor single-layer space sub-module, a control program lights the LED lamps in turn, and the photodiode generates a plurality of current values corresponding to the preset position after receiving the light of each lamp; the current values are then converted through the photodiode into a plurality of light intensity values corresponding to the preset position, which gives the light intensity sequence for that position. In this embodiment there are four intensity values per position, and these four intensity values form one sequence.
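A rough sketch of this acquisition loop follows; the helper functions `switch_led` and `read_photodiode_current` are hypothetical stand-ins for the control program and the photodiode/amplifier chain, and the linear current-to-intensity calibration constant is an assumption:

```python
from typing import Callable, List

def collect_vlis(num_leds: int,
                 switch_led: Callable[[int, bool], None],
                 read_photodiode_current: Callable[[], float],
                 current_to_intensity: float = 1.0) -> List[float]:
    """Light the LED lamps one at a time and record one intensity value per lamp."""
    vlis: List[float] = []
    for led in range(num_leds):
        switch_led(led, True)                 # control program turns on only this lamp
        current = read_photodiode_current()   # photocurrent produced by the photodiode
        vlis.append(current * current_to_intensity)  # convert current to an intensity value
        switch_led(led, False)
    return vlis
```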
After the preset position and the light intensity sequence are obtained, an indoor single-layer space modularized fingerprint library can be obtained according to the preset position and the light intensity sequence, and correspondingly, the step of obtaining the indoor single-layer space modularized fingerprint library according to the preset position and the light intensity sequence comprises the following steps: taking the preset positions as fingerprint points, and storing the fingerprint points and the light intensity sequences corresponding to the fingerprint points to obtain a unit fingerprint library corresponding to the indoor single-layer space sub-module; that is, only the light intensity sequence corresponding to the fingerprint point in the shaded portion in fig. 5 needs to be measured, and then the unit fingerprint libraries are combined into an indoor single-layer space modularized fingerprint library, that is, the rest fingerprint points and the light intensity sequence corresponding to the fingerprint point can be deduced according to the symmetry, so that the calculation amount can be reduced.
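A minimal sketch of the symmetry-based deduction, assuming a rectangular module mirrored about its two centre lines; the `permute` argument, which reorders a VLIS to follow the LED indices seen from the mirrored position, and all names are illustrative assumptions rather than the patent's exact procedure:

```python
from typing import Callable, Dict, List, Tuple

Position = Tuple[float, float, float]
VLIS = List[float]

def extend_by_symmetry(unit_library: Dict[Position, VLIS],
                       width: float, length: float,
                       permute: Callable[[VLIS, str], VLIS]) -> Dict[Position, VLIS]:
    """Deduce the remaining fingerprint points of a symmetric module from the measured
    part by mirroring about the two centre lines x = width/2 and y = length/2."""
    full: Dict[Position, VLIS] = dict(unit_library)
    for (x, y, z), vlis in unit_library.items():
        full.setdefault((width - x, y, z), permute(vlis, "x"))    # mirror across x centre line
        full.setdefault((x, length - y, z), permute(vlis, "y"))   # mirror across y centre line
        full.setdefault((width - x, length - y, z), permute(vlis, "xy"))  # mirror across both
    return full
```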
After the fingerprint database is constructed, the following steps can be executed as shown in fig. 1: s200, acquiring a trajectory line of a target object, and obtaining a candidate light intensity sequence set corresponding to the trajectory line according to the fingerprint points, the light intensity sequences corresponding to the fingerprint points and the trajectory line;
specifically, in this embodiment, the target object is an unmanned aerial vehicle, the unmanned aerial vehicle may form a trajectory when operating indoors, and the trajectory of the unmanned aerial vehicle may be obtained by a laser radar or a visual camera, in practice, only one section of trajectory of a part of trajectory is intercepted, and the trajectory may be matched to a fingerprint library, or the trajectory may be matched with a fingerprint point in the fingerprint library, so as to obtain a candidate light intensity sequence set corresponding to the trajectory. Correspondingly, the step of obtaining the trajectory line of the target object and obtaining the candidate light intensity sequence set corresponding to the trajectory line according to the fingerprint point, the light intensity sequence corresponding to the fingerprint point and the trajectory line comprises the following steps:
s201, acquiring a trajectory line of a target object through an inertia measurement unit; the trajectory line refers to a line segment which does not carry coordinate information and direction information;
s202, obtaining an approximate trajectory line according to the fingerprint points and the trajectory line;
s203, obtaining a candidate light intensity sequence set corresponding to the trajectory line according to the light intensity sequences corresponding to the approximate trajectory line and the fingerprint point.
In this embodiment, a trajectory of the target object is obtained through the inertial measurement unit, where the trajectory is a line segment with a limited length and does not carry coordinate information and direction information, and therefore, the actual position of the target object and the direction to be traveled still cannot be determined. Then obtaining an approximate trajectory line according to the fingerprint points and the trajectory line; correspondingly, the step of obtaining an approximate trajectory line according to the fingerprint points and the trajectory line comprises the following steps: extracting a fingerprint point grid formed by the fingerprint points, wherein the fingerprint point grid is used for representing the intervals among the fingerprint points; and based on the fingerprint point grids, performing approximate operation on the track points in the track line to obtain an approximate track line.
In one implementation, a fingerprint point grid formed by the fingerprint points is extracted according to the length of the trajectory line, as shown in fig. 6; the fingerprint point grid is used for representing the intervals between the fingerprint points. The track points in the trajectory line can then be approximated onto the fingerprint points in the grid to obtain an approximate trajectory line. Assuming that the number of sampled track points is $N$, the trajectory line is expressed as $X = [x_0, \ldots, x_n, \ldots, x_N]$, and the fingerprint point grid is expressed in matrix form as
$$G = \begin{bmatrix} g_{1,1} & \cdots & g_{1,N_L} \\ \vdots & \ddots & \vdots \\ g_{N_W,1} & \cdots & g_{N_W,N_L} \end{bmatrix},$$
so that the approximate trajectory can be expressed as
$$\tilde{X} = [\tilde{x}_0, \ldots, \tilde{x}_n, \ldots, \tilde{x}_N], \qquad \tilde{x}_n = \arg\min_{g \in G} \lVert x_n - g \rVert,$$
where $N_W$ and $N_L$ respectively represent the two dimensions (width and length) of the fingerprint library grid.
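A minimal sketch of this approximation step, assuming the fingerprint points are given as an array of coordinates; function and variable names are illustrative:

```python
import numpy as np

def approximate_trajectory(track_points: np.ndarray, grid_points: np.ndarray) -> np.ndarray:
    """Snap each sampled track point to its nearest fingerprint point.

    track_points: (N + 1, 3) array of positions sampled along the trajectory line X.
    grid_points:  (M, 3) array of fingerprint point coordinates forming the grid.
    Returns an (N + 1,) array of indices into grid_points (the approximate trajectory).
    """
    diffs = track_points[:, None, :] - grid_points[None, :, :]  # pairwise differences
    dists = np.linalg.norm(diffs, axis=-1)                      # Euclidean distances
    return dists.argmin(axis=1)                                 # nearest fingerprint point per track point
```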
After the approximate trajectory line is obtained, a candidate light intensity sequence set corresponding to the trajectory line can be obtained according to the light intensity sequences corresponding to the approximate trajectory line and the fingerprint point. Correspondingly, the step of obtaining the candidate light intensity sequence set corresponding to the trajectory line according to the light intensity sequences corresponding to the approximate trajectory line and the fingerprint point comprises the following steps: traversing and matching the approximate trajectory in the fingerprint database to obtain a plurality of candidate trajectories containing fingerprint points; for each candidate trajectory line containing the fingerprint points, extracting a light intensity sequence corresponding to the fingerprint points in the candidate trajectory line to obtain a candidate light intensity sequence subset; a subset of the plurality of candidate light intensity sequences is grouped into a set of candidate light intensity sequences.
In the present embodiment, as shown in fig. 6, the approximate trajectory line is traversed and matched in the fingerprint library, and all candidate trajectory lines containing fingerprint points with the same trend are found; candidate trajectory lines such as [19, 20, 21, 29, 37] in fig. 6 and [26, 27, 28, 36, 44] in fig. 6, for each candidate trajectory line containing fingerprint points, extracting the light intensity sequence corresponding to the fingerprint point in the candidate trajectory lines to obtain a candidate light intensity sequence subset; for example, the light intensity sequence of the fingerprint point 19, the light intensity sequence of 20, the light intensity sequence of 21, the light intensity sequence of 29 and the light intensity sequence of 37 in fig. 6 are respectively extracted to form a candidate light intensity sequence subset, and then the light intensity sequence of the fingerprint point 26, the light intensity sequence of 27, the light intensity sequence of 28, the light intensity sequence of 36 and the light intensity sequence of 44 in fig. 6 are respectively extracted to form another candidate light intensity sequence subset. All these subsets of candidate light intensity sequences then constitute a set of candidate light intensity sequences.
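A sketch of the traversal matching, under the assumption that the approximate trajectory is described by its pattern of relative grid moves, so that every valid placement of that pattern in the fingerprint grid yields one candidate trajectory line; the names and the (row, column) indexing are illustrative:

```python
from typing import Dict, List, Sequence, Tuple

GridIndex = Tuple[int, int]
VLIS = List[float]

def candidate_light_intensity_set(shape_moves: Sequence[GridIndex],
                                  grid_size: GridIndex,
                                  grid_library: Dict[GridIndex, VLIS]) -> List[List[VLIS]]:
    """Slide the move pattern of the approximate trajectory over the fingerprint grid.

    shape_moves: relative (row, col) offsets of the approximate trajectory, e.g.
                 [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)] for a shape like [19, 20, 21, 29, 37].
    Returns one candidate light intensity sequence subset (a list of VLIS) per valid placement.
    """
    rows, cols = grid_size
    candidates: List[List[VLIS]] = []
    for r0 in range(rows):
        for c0 in range(cols):
            cells = [(r0 + dr, c0 + dc) for dr, dc in shape_moves]
            if all(0 <= r < rows and 0 <= c < cols for r, c in cells):
                # Collect the VLIS of every fingerprint point on this candidate trajectory line.
                candidates.append([grid_library[(r, c)] for r, c in cells])
    return candidates
```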
After the candidate light intensity sequence sets are obtained, the following steps can be performed as shown in fig. 1: s300, obtaining light intensity data corresponding to the trajectory line, and performing positioning calculation on the light intensity data and the candidate light intensity sequence set based on a dynamic time warping algorithm and a Kalman filtering algorithm to obtain a target position of a target object. Correspondingly, the step of obtaining the light intensity data corresponding to the trajectory line, and performing positioning calculation on the light intensity data and the candidate light intensity sequence set based on a dynamic time warping algorithm and a kalman filtering algorithm to obtain the target position of the target object includes the following steps:
s301, acquiring light intensity data corresponding to the track point on the track line through a photodiode;
s302, positioning calculation is carried out on the light intensity data and the candidate light intensity sequence set based on the dynamic time warping algorithm, and a first prediction position is obtained;
s303, obtaining a second predicted position according to the trajectory line;
s304, data processing is carried out on the first prediction position and the second prediction position based on a Kalman filtering algorithm, and the target position of the target object is obtained.
Specifically, the photodiode first senses the change of light intensity, so that light intensity data corresponding to the track points on the trajectory line are obtained; each track point corresponds to one light intensity sequence, and in this embodiment one track point corresponds to four light intensity values (because the LED lamps are driven at a high frequency and can flash 100 times per second). The light intensity data are then matched against the candidate light intensity sequence set using a dynamic time warping algorithm to obtain a first predicted position. As shown in fig. 7, when the light intensity data are matched against the candidate light intensity sequence set, the Dynamic Time Warping (DTW) algorithm is used to find the minimum matching value; since the space is three-dimensional, it is layered to better display the result, and fig. 7 uses only two layers of light intensity sequences (VLIS) for display. When the light intensity data match one element of the candidate light intensity sequence set, the DTW score is minimal; the DTW score can be calculated by the corresponding functions in MATLAB and Python. The DTW algorithm dynamically warps vector data to compare two sets of data, as shown in fig. 8, so that a first moving trajectory corresponding to the light intensity data is obtained, and the last track point of this first moving trajectory is taken as the first predicted position. The DTW method realizes positioning by comparing the variation trends across different fingerprint points, thereby avoiding the error of a single fingerprint point. Since the DTW computation is slow while in actual use the drone needs to be positioned immediately, the trajectory line is also processed by the Inertial Measurement Unit (IMU) to obtain a second predicted position; the IMU is a sensor with a high sampling frequency but low precision, whereas the visible-light measurement of the invention has a low sampling frequency but high precision. Finally, the first predicted position and the second predicted position are fused based on a Kalman filtering algorithm to obtain the target position of the target object. The fusion expressions are as follows:
$$\hat{x}_{t|t-1} = A\,\hat{x}_{t-1|t-1},$$
$$\Sigma_{t|t-1} = A\,\Sigma_{t-1|t-1}A^{T} + Q,$$
$$e_{t} = z_{t} - C\,\hat{x}_{t|t-1},$$
$$E_{t} = C\,\Sigma_{t|t-1}C^{T} + \sigma^{2} I_{M},$$
$$K_{t} = \Sigma_{t|t-1}C^{T}E_{t}^{-1},$$
$$\hat{x}_{t|t} = \hat{x}_{t|t-1} + K_{t}e_{t},$$
$$\Sigma_{t|t} = \Sigma_{t|t-1} - K_{t}C\,\Sigma_{t|t-1},$$
where $A$, $Q$, and $C$ denote the state transition matrix, the process noise covariance, and the observation matrix, and $z_{t}$ is the position measurement; $I_{3}$ represents the third-order identity matrix and the superscript $T$ represents the transposed matrix; $w$ is a Gaussian noise vector and $\sigma^{2}$ is the variance of the Gaussian noise. $\hat{x}_{t|t-1}$ is the a priori estimated state vector, whose corresponding covariance matrix is $\Sigma_{t|t-1}$; the measurement residual $e_{t}$ has covariance $E_{t}$; $K_{t}$ is the Kalman filter gain; and $\Sigma_{t|t}$ is the a posteriori estimated covariance of the output result $\hat{x}_{t|t}$.
Assume an initial position $p_{0} = [x_{0}, y_{0}, z_{0}]^{T}$ and an initial velocity $v_{0} = [v_{x,0}, v_{y,0}, v_{z,0}]^{T}$. The initial state vector is represented as $\hat{x}_{0} = [p_{0}^{T}, v_{0}^{T}]^{T}$ and its covariance is $\Sigma_{0}$.
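As a concrete illustration of the two steps, the sketch below computes a plain DTW score to pick the first predicted position and then fuses it with the IMU-derived second predicted position in a simplified Kalman step; the constant-position model, scalar noise parameters, and all names are assumptions rather than the patent's exact implementation:

```python
import numpy as np

def dtw_score(seq_a: np.ndarray, seq_b: np.ndarray) -> float:
    """Classic dynamic time warping distance between two light intensity sequences."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(float(seq_a[i - 1]) - float(seq_b[j - 1]))
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

def first_predicted_position(measured_vlis, candidate_subsets, candidate_positions):
    """Pick the candidate light intensity sequence subset with the minimum DTW score and
    return the position of its last fingerprint point as the first predicted position."""
    query = np.concatenate([np.asarray(v, dtype=float) for v in measured_vlis])
    scores = [dtw_score(query, np.concatenate([np.asarray(v, dtype=float) for v in cand]))
              for cand in candidate_subsets]
    best = int(np.argmin(scores))
    return np.asarray(candidate_positions[best][-1], dtype=float)

def kalman_fuse(P_prev, z_dtw, x_imu, q=1e-2, r=1e-1):
    """One simplified Kalman step: the IMU-based second predicted position is used as the
    predicted state and the DTW-based first predicted position as the measurement."""
    x_pred = np.asarray(x_imu, dtype=float)       # prediction (dead reckoning)
    P_pred = P_prev + q * np.eye(3)               # predicted covariance
    e = np.asarray(z_dtw, dtype=float) - x_pred   # measurement residual e_t
    E = P_pred + r * np.eye(3)                    # residual covariance E_t
    K = P_pred @ np.linalg.inv(E)                 # Kalman gain K_t
    x_post = x_pred + K @ e                       # fused target position
    P_post = P_pred - K @ P_pred                  # posterior covariance
    return x_post, P_post
```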
In one implementation, as shown in fig. 9, the light intensity measurement module of the present invention senses the change of light intensity with a photodiode (PD), and an amplifier amplifies the circuit signal. The IMU is used to measure acceleration and velocity, and an Arduino board serves as the control end and transmits the digitized signal to the computer.
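On the computer side, the digitized samples streamed by the Arduino could be read roughly as follows (a sketch using the pyserial package; the serial port name, baud rate, and comma-separated line format are assumptions not specified by the patent):

```python
import serial  # pyserial

# Assumed format: the Arduino streams one comma-separated line per sample,
# e.g. "312.4,208.7,155.1,98.3" for the four LED intensity channels.
with serial.Serial("/dev/ttyUSB0", 115200, timeout=1.0) as port:
    while True:
        line = port.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue
        intensities = [float(v) for v in line.split(",")]
        print(intensities)  # hand the sample to the DTW / Kalman positioning pipeline
```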
Exemplary device
As shown in fig. 10, the embodiment of the present invention provides a three-dimensional indoor positioning apparatus based on a photodiode, the apparatus includes a fingerprint library construction module 401, a candidate light intensity sequence set acquisition module 402 and a target position acquisition module 403 of a target object, wherein:
the fingerprint database building module 401 is used for building an indoor single-layer space modular fingerprint database based on the photodiode; the fingerprint database comprises a plurality of fingerprint points and light intensity sequences corresponding to the fingerprint points;
a candidate light intensity sequence set obtaining module 402, configured to obtain a trajectory line of a target object, and obtain a candidate light intensity sequence set corresponding to the trajectory line according to the fingerprint library and the trajectory line;
and a target position obtaining module 403 of the target object, configured to obtain light intensity data corresponding to the trajectory line, and perform positioning calculation on the light intensity data and the candidate light intensity sequence set based on a dynamic time warping algorithm and a kalman filtering algorithm to obtain a target position of the target object.
Based on the above embodiment, the present invention further provides an intelligent terminal, and a schematic block diagram thereof may be as shown in fig. 11. The intelligent terminal comprises a processor, a memory, a network interface, a display screen and a temperature sensor which are connected through a system bus. Wherein, the processor of the intelligent terminal is used for providing calculation and control capability. The memory of the intelligent terminal comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the intelligent terminal is used for being connected and communicated with an external terminal through a network. The computer program is executed by a processor to implement a photodiode based three-dimensional indoor positioning method. The display screen of the intelligent terminal can be a liquid crystal display screen or an electronic ink display screen, and the temperature sensor of the intelligent terminal is arranged inside the intelligent terminal in advance and used for detecting the operating temperature of internal equipment.
Those skilled in the art will appreciate that the schematic diagram of fig. 11 is merely a block diagram of a part of the structure related to the solution of the present invention, and does not constitute a limitation of the intelligent terminal to which the solution of the present invention is applied, and a specific intelligent terminal may include more or less components than those shown in the figure, or combine some components, or have different arrangements of components.
In one embodiment, an intelligent terminal is provided that includes a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for: constructing an indoor single-layer space modular fingerprint library based on the photodiodes; the fingerprint database comprises a plurality of fingerprint points and light intensity sequences corresponding to the fingerprint points;
acquiring a trajectory line of a target object, and obtaining a candidate light intensity sequence set corresponding to the trajectory line according to the fingerprint points, the light intensity sequences corresponding to the fingerprint points and the trajectory line;
and acquiring light intensity data corresponding to the trajectory line, and performing positioning calculation on the light intensity data and the candidate light intensity sequence set based on a dynamic time warping algorithm and a Kalman filtering algorithm to obtain the target position of the target object.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, databases, or other media used in embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
In summary, the present invention discloses a three-dimensional indoor positioning method based on photodiodes, the method comprising: constructing an indoor single-layer space modular fingerprint library based on the photodiodes; acquiring a trajectory line of a target object, and obtaining a candidate light intensity sequence set corresponding to the trajectory line according to the fingerprint points, the light intensity sequences corresponding to the fingerprint points and the trajectory line; and acquiring light intensity data corresponding to the trajectory line, and performing positioning calculation on the light intensity data and the candidate light intensity sequence set based on a dynamic time warping algorithm and a Kalman filtering algorithm to obtain the target position of the target object. The indoor single-layer space modular fingerprint library containing the light intensity sequences of the fingerprint points is constructed based on the photodiode, the error of the matching of the environmental light intensity to the fingerprint and the calculation complexity are reduced, then a candidate light intensity sequence set corresponding to an input trajectory line is matched in the fingerprint library, and finally the positioning position is further calibrated by utilizing a dynamic time warping algorithm and a Kalman filtering algorithm, so that the error of the single fingerprint point is avoided, and the positioning is more accurate.
Based on the above embodiments, the present invention discloses a three-dimensional indoor positioning method based on photodiodes, it should be understood that the application of the present invention is not limited to the above examples, and it is obvious to those skilled in the art that modifications and changes can be made based on the above description, and all such modifications and changes are intended to fall within the scope of the appended claims.

Claims (15)

1. A three-dimensional indoor positioning method based on a photodiode is characterized by comprising the following steps:
constructing an indoor single-layer space modular fingerprint library based on the photodiodes; the fingerprint database comprises a plurality of fingerprint points and light intensity sequences corresponding to the fingerprint points;
acquiring a trajectory line of a target object, and obtaining a candidate light intensity sequence set corresponding to the trajectory line according to the fingerprint points, the light intensity sequences corresponding to the fingerprint points and the trajectory line;
and acquiring light intensity data corresponding to the trajectory line, and performing positioning calculation on the light intensity data and the candidate light intensity sequence set based on a dynamic time warping algorithm and a Kalman filtering algorithm to obtain the target position of the target object.
2. The three-dimensional indoor positioning method based on the photodiode as claimed in claim 1, wherein the photodiode-based building of the indoor single-layer space modular fingerprint library comprises:
constructing an indoor single-layer space module;
uniformly dividing the indoor single-layer space module to obtain a plurality of indoor single-layer space sub-modules;
extracting an indoor single-layer space sub-module from a plurality of indoor single-layer space sub-modules, measuring a plurality of light intensity numerical values corresponding to preset positions through a photodiode based on each preset position in the indoor single-layer space sub-modules, and obtaining a light intensity sequence corresponding to the preset positions according to the plurality of light intensity numerical values;
and obtaining an indoor single-layer space modularized fingerprint library according to the preset position and the light intensity sequence.
3. The three-dimensional indoor positioning method based on the photodiode according to claim 2, wherein the extracting an indoor single-layer space sub-module from a plurality of indoor single-layer space sub-modules, based on each preset position in the indoor single-layer space sub-module, measuring a plurality of light intensity values corresponding to the preset position through the photodiode, and obtaining a light intensity sequence corresponding to the preset position according to the plurality of light intensity values comprises:
extracting an indoor single-layer space sub-module from a plurality of indoor single-layer space sub-modules;
based on each preset position in the indoor single-layer space submodule, the photodiode generates a plurality of current values corresponding to the preset positions after receiving a plurality of lights;
converting the plurality of current values into a plurality of light intensity numerical values corresponding to the preset positions through a photodiode;
and forming a light intensity sequence corresponding to the preset position by the plurality of light intensity numerical values corresponding to the preset position.
4. The three-dimensional indoor positioning method based on the photodiode as claimed in claim 2, wherein the obtaining of the indoor single-layer space modularized fingerprint library according to the preset position and the light intensity sequence comprises:
taking the preset positions as fingerprint points, and storing the fingerprint points and the light intensity sequences corresponding to the fingerprint points to obtain a unit fingerprint library corresponding to the indoor single-layer space sub-module;
and forming an indoor single-layer space modularized fingerprint library by using a plurality of unit fingerprint libraries.
5. The three-dimensional indoor positioning method based on the photodiode as claimed in claim 1, wherein the obtaining a trajectory line of the target object and obtaining a candidate light intensity sequence set corresponding to the trajectory line according to the fingerprint point, the light intensity sequence corresponding to the fingerprint point and the trajectory line comprises:
acquiring a trajectory line of a target object through an inertial measurement unit;
obtaining an approximate trajectory line according to the fingerprint points and the trajectory line;
and obtaining a candidate light intensity sequence set corresponding to the trajectory line according to the light intensity sequences corresponding to the approximate trajectory line and the fingerprint point.
6. The three-dimensional indoor photodiode-based positioning method according to claim 5, wherein the obtaining an approximate trajectory line from the fingerprint points and the trajectory line comprises:
extracting a fingerprint point grid formed by the fingerprint points, wherein the fingerprint point grid is used for representing the intervals among the fingerprint points;
and based on the fingerprint point grids, performing approximate operation on the track points in the track line to obtain an approximate track line.
7. The three-dimensional indoor positioning method based on the photodiode according to claim 6, wherein the approximating operation of the trace points in the trace line based on the fingerprint point grid to obtain an approximated trace line comprises:
and performing approximate operation on the track points in the track line and the fingerprint points in the fingerprint point grid to obtain an approximate track line.
8. The three-dimensional indoor positioning method based on the photodiode as claimed in claim 5, wherein the obtaining the set of candidate light intensity sequences corresponding to the trace line according to the light intensity sequences corresponding to the approximate trace line and the fingerprint point comprises:
traversing and matching the approximate trajectory in the fingerprint database to obtain a plurality of candidate trajectories containing fingerprint points;
for each candidate trajectory line containing the fingerprint points, extracting a light intensity sequence corresponding to the fingerprint points in the candidate trajectory line to obtain a candidate light intensity sequence subset;
and grouping the plurality of candidate light intensity sequence subsets into the candidate light intensity sequence set.
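One possible, non-limiting reading of the traversal matching step: the approximate trajectory line is treated as a rigid pattern of grid offsets and anchored in turn at every fingerprint point of the unit fingerprint library; each placement whose points all exist in the library yields a candidate trajectory line and its candidate light intensity sequence subset. This sliding-anchor interpretation is an assumption made for illustration.

```python
# Illustrative sketch only: enumerate candidate trajectory lines and collect their
# light intensity sequences from the unit fingerprint library.
from typing import Dict, List, Tuple

Point2D = Tuple[float, float]
UnitLibrary = Dict[Point2D, List[float]]   # fingerprint point -> light intensity sequence

def candidate_light_intensity_set(approx_traj: List[Point2D],
                                  library: UnitLibrary) -> List[List[List[float]]]:
    ox, oy = approx_traj[0]
    offsets = [(x - ox, y - oy) for x, y in approx_traj]        # shape of the trajectory
    candidates: List[List[List[float]]] = []
    for ax, ay in library:                                      # anchor at every fingerprint point
        placed = [(round(ax + dx, 6), round(ay + dy, 6)) for dx, dy in offsets]
        if all(p in library for p in placed):                   # placement stays inside the grid
            candidates.append([library[p] for p in placed])     # candidate light intensity subset
    return candidates
```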
9. The three-dimensional indoor positioning method based on the photodiode as claimed in claim 1, wherein the obtaining of the light intensity data corresponding to the trajectory line, and the positioning calculation of the light intensity data and the candidate light intensity sequence set based on the dynamic time warping algorithm and the Kalman filtering algorithm to obtain the target position of the target object, comprises:
acquiring light intensity data corresponding to a trajectory point on the trajectory line through the photodiode;
performing the positioning calculation on the light intensity data and the candidate light intensity sequence set based on the dynamic time warping algorithm to obtain a first predicted position;
obtaining a second predicted position according to the trajectory line;
and performing data processing on the first predicted position and the second predicted position based on a Kalman filtering algorithm to obtain a target position of the target object.
10. The three-dimensional indoor positioning method based on the photodiode as claimed in claim 9, wherein the performing the positioning calculation on the light intensity data and the candidate light intensity sequence set based on the dynamic time warping algorithm to obtain the first predicted position comprises:
and matching the light intensity data with the candidate light intensity sequence set by adopting a dynamic time warping algorithm to obtain a first predicted position.
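A non-limiting sketch of this matching step: a classic dynamic time warping distance is computed between the measured light intensity data and each candidate light intensity sequence, and the closest candidate is selected; the fingerprint point associated with that candidate would then give the first predicted position. Flattening each candidate subset into a single sequence beforehand is an illustrative choice.

```python
# Illustrative sketch only: classic O(n*m) dynamic time warping plus best-match selection.
from typing import List

def dtw_distance(a: List[float], b: List[float]) -> float:
    """Accumulated-cost dynamic time warping distance between two sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def best_candidate(measured: List[float], candidates: List[List[float]]) -> int:
    """Index of the candidate light intensity sequence closest to the measurement."""
    return min(range(len(candidates)), key=lambda k: dtw_distance(measured, candidates[k]))
```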
11. The three-dimensional indoor positioning method based on the photodiode as claimed in claim 9, wherein the obtaining of a second predicted position according to the trajectory line comprises:
and calculating the position of the trajectory line through an inertial measurement unit to obtain a second predicted position.
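A non-limiting sketch of one way the inertial measurement unit could yield the second predicted position, using simple pedestrian dead reckoning over step-length and heading increments; the step/heading interface is an assumption made for illustration.

```python
# Illustrative sketch only: dead reckoning from an initial position and IMU-derived steps.
import math
from typing import List, Tuple

def dead_reckon(start: Tuple[float, float],
                steps: List[Tuple[float, float]]) -> Tuple[float, float]:
    """steps: (step_length_m, heading_rad) pairs; returns the final (second predicted) position."""
    x, y = start
    for length, heading in steps:
        x += length * math.cos(heading)
        y += length * math.sin(heading)
    return x, y
```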
12. The three-dimensional indoor positioning method based on the photodiode as claimed in claim 9, wherein the performing data processing on the first predicted position and the second predicted position based on the Kalman filtering algorithm to obtain the target position of the target object comprises:
and fusing the first predicted position and the second predicted position based on the Kalman filtering algorithm to obtain the target position of the target object.
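A non-limiting sketch of the fusion step, using a per-axis Kalman update in which the IMU-derived second predicted position serves as the prior and the DTW-derived first predicted position as the measurement; the variance values are illustrative tuning parameters, not figures from this disclosure.

```python
# Illustrative sketch only: one Kalman-style update fusing the two position predictions.
from typing import Tuple

def kalman_fuse(imu_pos: Tuple[float, float], imu_var: float,
                dtw_pos: Tuple[float, float], dtw_var: float) -> Tuple[float, float]:
    """Fuse prior (IMU) and measurement (DTW) positions with a scalar Kalman gain per axis."""
    gain = imu_var / (imu_var + dtw_var)
    x = imu_pos[0] + gain * (dtw_pos[0] - imu_pos[0])
    y = imu_pos[1] + gain * (dtw_pos[1] - imu_pos[1])
    return (x, y)

# Example: the drifting IMU track is trusted less than the fingerprint fix.
target_position = kalman_fuse(imu_pos=(3.2, 4.1), imu_var=0.8,
                              dtw_pos=(3.0, 4.0), dtw_var=0.2)
```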
13. A three-dimensional indoor positioning device based on photodiodes, the device comprising:
the fingerprint library construction module is used for constructing an indoor single-layer space modularized fingerprint library based on the photodiode; the fingerprint library comprises a plurality of fingerprint points and light intensity sequences corresponding to the fingerprint points;
the candidate light intensity sequence set acquisition module is used for acquiring a trajectory line of a target object and acquiring a candidate light intensity sequence set corresponding to the trajectory line according to the fingerprint library and the trajectory line;
and the target position acquisition module of the target object is used for acquiring light intensity data corresponding to the trajectory line, and performing positioning calculation on the light intensity data and the candidate light intensity sequence set based on a dynamic time warping algorithm and a Kalman filtering algorithm to obtain the target position of the target object.
14. An intelligent terminal comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs comprising instructions for performing the method of any of claims 1-12.
15. A non-transitory computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method of any of claims 1-12.
CN202111269921.4A 2021-10-29 2021-10-29 Three-dimensional indoor positioning method based on photodiode Active CN114088095B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111269921.4A CN114088095B (en) 2021-10-29 2021-10-29 Three-dimensional indoor positioning method based on photodiode

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111269921.4A CN114088095B (en) 2021-10-29 2021-10-29 Three-dimensional indoor positioning method based on photodiode

Publications (2)

Publication Number Publication Date
CN114088095A true CN114088095A (en) 2022-02-25
CN114088095B CN114088095B (en) 2023-07-25

Family

ID=80298197

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111269921.4A Active CN114088095B (en) 2021-10-29 2021-10-29 Three-dimensional indoor positioning method based on photodiode

Country Status (1)

Country Link
CN (1) CN114088095B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103399298A (en) * 2013-07-30 2013-11-20 中国科学院深圳先进技术研究院 Device and method for positioning multiple sensors in room on basis of light intensity
CN104270816A (en) * 2014-10-14 2015-01-07 西北工业大学 Self-adaptive dynamic fingerprint library construction method of LED visible light indoor positioning system
US20170276767A1 (en) * 2014-12-10 2017-09-28 University Of South Australia Visible Light Based Indoor Positioning System
CN105044659A (en) * 2015-07-21 2015-11-11 深圳市西博泰科电子有限公司 Indoor positioning device based on environment spectrum fingerprint
WO2016145880A1 (en) * 2015-09-30 2016-09-22 中兴通讯股份有限公司 Terminal positioning method and device
CN106646366A (en) * 2016-12-05 2017-05-10 深圳市国华光电科技有限公司 Visible light positioning method and system based on particle filter algorithm and intelligent equipment
CN106610490A (en) * 2016-12-30 2017-05-03 北京大学 Optical positioning method, system and device based on LED and image sensor
CN109917404A (en) * 2019-02-01 2019-06-21 中山大学 A kind of indoor positioning environmental characteristic point extracting method
CN109975757A (en) * 2019-03-29 2019-07-05 努比亚技术有限公司 Indoor positioning air navigation aid, terminal and computer storage medium
CN110187308A (en) * 2019-06-20 2019-08-30 华南师范大学 A kind of indoor orientation method based on received signals fingerprint library, device, equipment and medium
CN110726968A (en) * 2019-09-08 2020-01-24 天津大学 Visible light sensing passive indoor positioning method based on clustering fingerprint method
CN111356082A (en) * 2020-03-10 2020-06-30 西安电子科技大学 Indoor mobile terminal positioning method based on WIFI and visible light communication

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
束沁冬; 戴欢; 周泽仑; 史文华: "Research on indoor floor plan construction method based on light intensity information" (基于光强信息的室内平面图构建方法研究), Journal of Suzhou University of Science and Technology (Natural Science Edition), vol. 37, no. 01, pages 79-84 *

Also Published As

Publication number Publication date
CN114088095B (en) 2023-07-25

Similar Documents

Publication Publication Date Title
Zhao et al. Detection, tracking, and geolocation of moving vehicle from uav using monocular camera
CN107255795B (en) Indoor mobile robot positioning method and device based on EKF/EFIR hybrid filtering
US10895458B2 (en) Method, apparatus, and system for determining a movement of a mobile platform
CN110146909A (en) A kind of location data processing method
CN111415387B (en) Camera pose determining method and device, electronic equipment and storage medium
Veth et al. Fusion of low-cost imaging and inertial sensors for navigation
CN110470333B (en) Calibration method and device of sensor parameters, storage medium and electronic device
US20180075614A1 (en) Method of Depth Estimation Using a Camera and Inertial Sensor
CN108680160B (en) Indoor positioning and navigation method and device, storage medium and computer equipment
CN109507706B (en) GPS signal loss prediction positioning method
Veth et al. Fusing Low‐Cost Image and Inertial Sensors for Passive Navigation
CN107607091A (en) A kind of method for measuring unmanned plane during flying flight path
CN111080682A (en) Point cloud data registration method and device
CN114383605B (en) Indoor positioning and optimizing method based on MEMS sensor and sparse landmark point
CN115588144A (en) Real-time attitude capturing method, device and equipment based on Gaussian dynamic threshold screening
KR102288609B1 (en) Method and system for position estimation of unmanned aerial vehicle using graph structure based on multi module
CN114111776A (en) Positioning method and related device
CN114820793A (en) Target detection and target point positioning method and system based on unmanned aerial vehicle
CN117518196A (en) Motion compensation method, device, system, equipment and medium for laser radar
CN105807083A (en) Real-time speed measuring method and system for unmanned aerial vehicle
CN113344954A (en) Boundary detection method and device, computer equipment, storage medium and sensor
CN114088095B (en) Three-dimensional indoor positioning method based on photodiode
CN113790711B (en) Unmanned aerial vehicle low-altitude flight pose uncontrolled multi-view measurement method and storage medium
CN114705223A (en) Inertial navigation error compensation method and system for multiple mobile intelligent bodies in target tracking
CN111880576B (en) Unmanned aerial vehicle flight control method and device based on vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant