CN115333624A - Visible light indoor positioning method and system based on spectrum estimation detection and computer readable medium - Google Patents
- Publication number
- CN115333624A CN115333624A CN202210965366.7A CN202210965366A CN115333624A CN 115333624 A CN115333624 A CN 115333624A CN 202210965366 A CN202210965366 A CN 202210965366A CN 115333624 A CN115333624 A CN 115333624A
- Authority
- CN
- China
- Prior art keywords
- light source
- positioning
- visible light
- indoor positioning
- led
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04B—TRANSMISSION
- H04B10/00—Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
- H04B10/11—Arrangements specific to free-space transmission, i.e. transmission through air or vacuum
- H04B10/114—Indoor or close-range type systems
- H04B10/116—Visible light communication
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04B—TRANSMISSION
- H04B10/00—Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
- H04B10/07—Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems
- H04B10/075—Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems using an in-service signal
- H04B10/079—Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems using an in-service signal using measurements of the data signal
- H04B10/0795—Performance monitoring; Measurement of transmission parameters
- H04B10/07955—Monitoring or measuring power
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Abstract
The invention discloses a visible light indoor positioning method, system and computer readable medium based on spectrum estimation detection. A visible light indoor communication link system model is built; an LED channel diffuse reflection model comprising a direct line-of-sight (LOS) link and a first-order reflection (NLOS) link is established; on the basis of this model, the LED installation spacing is set, the positioning area is divided, and an indoor VLC positioning system channel model is constructed. A plurality of positioning points are selected in the positioning area, the superposed multi-light-source signals are separated by a spectrum estimation detection method to obtain the power value of each light source signal, and the power values and calibration coordinates of each positioning point are stored in a database. The stored data serve as the training data set of a neural network; a visible light indoor positioning neural network model is constructed and trained, and the trained model is used for visible light indoor positioning, realizing accurate positioning in a complex indoor environment.
Description
Technical Field
The invention belongs to the technical field of visible light indoor positioning, and relates to a visible light indoor positioning method and system based on spectrum estimation detection and a computer readable medium.
Background
With the gradual maturation of visible light communication technology and deepening research into indoor positioning applications, indoor positioning has attracted increasing attention from researchers at home and abroad. Statistics show that more than half of human activity takes place indoors, and the demand for positioning and navigation services in large indoor venues such as libraries, hospitals, supermarkets and underground parking lots is growing steadily. Although many wireless indoor positioning technologies already exist, such as Wi-Fi, infrared, ultrasonic, Bluetooth and ultra-wideband positioning, most wireless signals suffer from electromagnetic interference and multipath fading during positioning, so the positioning accuracy of such systems cannot be guaranteed; their device energy consumption is also high, which imposes certain limitations. Indoor positioning using visible light is a novel technology that combines illumination with communication. It offers abundant spectrum resources and freedom from electromagnetic interference, giving it advantages that traditional radio-frequency communication cannot match, and it has become a new research hotspot in the wireless communication field in recent years. Although still at an early stage, it has developed rapidly alongside visible light communication technology, is discussed as one of the indoor access modes of fifth-generation mobile communication systems, and has very broad application prospects.
According to the type of receiver, visible light communication (VLC) indoor positioning technologies can be divided into two main categories: imaging indoor positioning based on an image sensor (IS) and indoor positioning based on a high-precision photodetector (PD). The accuracy of the image-sensor approach depends on the measurement accuracy of the actual device; the device cost is high, and the method is only suitable for positioning indoor objects that are static or move slowly. Among photodetector-based methods, algorithms based on angle of arrival (AOA), time of arrival (TOA) and received signal strength (RSS) are the conventional positioning algorithms. RSS-based positioning is widely used because it is theoretically simple to implement and highly portable, but when the light source signals are superposed under the constraint of maintaining illumination, the individual light source information cannot be separated accurately, making accurate positioning in a complex indoor environment difficult to achieve.
Disclosure of Invention
The embodiments of the invention aim to provide a visible light indoor positioning method, system and computer readable medium based on spectrum estimation detection, so as to solve the problem that existing RSS-based positioning algorithms cannot accurately separate superposed light source signals under the constraint of maintaining illumination and therefore struggle to achieve accurate positioning in a complex indoor environment.
The first technical scheme adopted by the embodiment of the invention is as follows: the visible light indoor positioning method based on spectrum estimation detection is carried out according to the following steps:

Step 1, building a visible light indoor communication link system model;

Step 2, establishing an LED channel diffuse reflection model comprising a direct line-of-sight (LOS) link and a first-order reflection (NLOS) link;

Step 3, setting the LED installation spacing and dividing the positioning area on the basis of the LED channel diffuse reflection model, and constructing an indoor VLC positioning system channel model;

Step 4, selecting a plurality of positioning points in the positioning area, separating the multi-light-source signals by a spectrum estimation detection method to obtain the power values of the different light source signals, and storing the obtained power values and the calibration coordinates of each positioning point in a database;

Step 5, taking the data stored in the database as the training data set of a neural network, constructing and training a visible light indoor positioning neural network model, and performing visible light indoor positioning with the trained model.
The second technical scheme adopted by the embodiment of the invention is as follows: a visible light indoor positioning system based on spectrum estimation detection, comprising a plurality of LED lamps and a photodetector, and further comprising:

a memory for storing instructions executable by the processor; and

a processor for executing the instructions to implement the visible light indoor positioning method based on spectrum estimation detection described above.

The third technical scheme adopted by the embodiment of the invention is as follows: a computer readable medium storing computer program code which, when executed by a processor, implements the visible light indoor positioning method based on spectrum estimation detection described above.
The beneficial effects of the embodiments of the invention are as follows. Aiming at the problem that traditional RSS-based visible light indoor positioning cannot accurately obtain the attenuation factor of each LED lamp during spatial transmission, the Pisarenko harmonic decomposition algorithm is combined with a neural network on the basis of signal strength, yielding a visible light indoor positioning model based on the combination of PSD detection and a neural network. The model performs well at estimating the frequencies and extracting the powers of multiple LED light sources loaded with sinusoidal signals of different frequencies under colored noise and low signal-to-noise ratio. Training and testing the separated LED power values in the neural network significantly improves the indoor positioning accuracy of the visible light system and realizes accurate positioning. The method solves the problem that existing RSS-based positioning algorithms cannot accurately separate superposed light source signals under the constraint of maintaining illumination and struggle to achieve accurate positioning in a complex indoor environment.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic diagram of a visible light communication link system model.
Fig. 2 is a schematic diagram of a diffuse reflection model of an LED channel.
Fig. 3 is a schematic diagram of an indoor VLC positioning system channel model.
Fig. 4 is a block diagram of a spectral estimation detection method.
Fig. 5 is a block diagram of an indoor positioning method based on spectrum estimation detection and a neural network.
Fig. 6 is a frequency-power distribution plot of each light source measured at coordinates (4,1,0) by the indoor positioning method based on spectrum estimation detection and a neural network, at a signal-to-noise ratio of 10 dB.
Fig. 7 is a frequency-power distribution plot of each light source measured at coordinates (2,3,0) by the indoor positioning method based on spectrum estimation detection and a neural network, at a signal-to-noise ratio of 10 dB.
Fig. 8 is a three-dimensional distribution diagram of the positioning error at H = 0 m for the indoor positioning method based on spectrum estimation detection and a neural network.
Fig. 9 is a measured positioning error distribution plot for the indoor positioning method based on spectrum estimation detection and a neural network.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art without creative effort on the basis of these embodiments fall within the protection scope of the present invention.
Example 1
The embodiment provides a visible light indoor positioning method based on spectrum estimation detection, which comprises the following steps.

The visible light source is simplified to a Lambertian source whose luminous intensity at emission angle $\phi_i$ is

$$I(\phi_i) = I_0 \cos^m \phi_i \qquad (1)$$

where $I_0$ denotes the central luminous intensity of the LED, $\phi_i$ the emission angle of the $i$-th LED, and $m$ the Lambertian order of the source. The direct horizontal illuminance $E_j$ of the $j$-th photodetector (PD) under the multi-LED light source is then

$$E_j = \sum_{i=1}^{M} E_{ij} \qquad (2)$$

where $E_{ij}$ denotes the illuminance of the $i$-th LED received at the $j$-th PD and depends on the incidence angle of the direct light at the $j$-th PD, whose coordinate is $(x_j, y_j, 0)$; $(X_i, Y_i, Z_i)$ is the coordinate of the $i$-th LED and $M$ is the total number of LEDs.
Assuming the secondary light source generated by any LED light source is at point $P$ with coordinates $P(X_1, Y_1, Z_1)$, the reflected light intensity $I'_i$ of the secondary source produced by the $i$-th LED light source at point $P$ is defined as

$$I'_i = k I_i \cos^{m-1} \beta \qquad (3)$$

where $I_i$ denotes the intensity of light from the $i$-th LED arriving directly at point $P$, $k$ is the reflection coefficient of the wall surface, $\beta$ is the emission angle of the secondary source, and the order $m-1$ accounts for the secondary reflection. The reflected-light illuminance $E'_j$ of the $j$-th PD under the multiple secondary sources is then

$$E'_j = \sum_{i=1}^{M} E'_{ij} \qquad (4)$$

where $E'_{ij}$ denotes the illuminance of the secondary source of the $i$-th LED received at the $j$-th PD.

The total illuminance $E$ of any $M$ LED light sources received at the $j$-th PD is then

$$E = E_j + E'_j \qquad (5)$$
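The LOS part of this channel model can be sketched numerically. The following is an illustrative implementation, not the patent's own code: the function names and the downward-pointing-LED / upward-facing-PD geometry (so that emission and incidence angles coincide) are assumptions made for this example.

```python
import numpy as np

def los_illuminance(led_pos, pd_pos, I0=1.0, m=1):
    """Direct (LOS) horizontal illuminance from one LED at a PD.

    Assumes the LED points straight down and the PD faces straight up,
    so the emission angle phi and the incidence angle are equal.
    """
    d_vec = np.asarray(pd_pos, float) - np.asarray(led_pos, float)
    d = np.linalg.norm(d_vec)            # LED-PD distance
    cos_phi = abs(d_vec[2]) / d          # cos of emission = incidence angle
    # Lambertian point source: E_ij = I0 * cos^m(phi) * cos(incidence) / d^2
    return I0 * cos_phi**m * cos_phi / d**2

def total_los_illuminance(led_positions, pd_pos, I0=1.0, m=1):
    """E_j: sum of the E_ij contributions over all LEDs (direct link only)."""
    return sum(los_illuminance(p, pd_pos, I0, m) for p in led_positions)
```

For a PD directly below an LED mounted 2 m above it, `cos_phi = 1` and the illuminance reduces to `I0 / 4`, which is a convenient sanity check of the geometry.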
First, the LED light sources are modulated so that each matrix light source, formed by combining a plurality of lamp beads, carries a different carrier signal. Assuming the $i$-th LED light source at the transmitting end operates at frequency $f_i$, the modulation signal it generates is

$$x_i(n) = A_i \sin(2\pi f_i n), \quad 1 \le i \le M \qquad (6)$$

where $A_i$ is the amplitude of the modulation signal, i.e. of the optical power signal generated by the $i$-th LED light source, $f_i$ is the time-domain frequency of that signal, and $n$ is the independent time-domain variable, generally $n = 1, 2, 3, \ldots$
From the linearity property of sinusoidal signals,

$$\sin(2\pi f_i n + \theta) + \sin[2\pi f_i (n-2) + \theta] = 2\cos(2\pi f_i)\sin[2\pi f_i (n-1) + \theta] \qquad (7)$$

where $\theta$ is the phase shift of the sinusoidal signal.

Substituting equation (6) into equation (7) yields the difference equation

$$x_i(n) - 2\cos(2\pi f_i)\,x_i(n-1) + x_i(n-2) = 0 \qquad (8)$$

where $x_i(n-1)$ is $x_i(n)$ with the argument $n$ delayed by 1 unit and $x_i(n-2)$ is $x_i(n)$ with the argument $n$ delayed by 2 units.
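The recurrence (8) can be checked numerically: any pure sinusoid annihilates it up to floating-point error. The amplitude and normalized frequency below are arbitrary example values.

```python
import numpy as np

# Numerical check of x(n) - 2*cos(2*pi*f)*x(n-1) + x(n-2) = 0 (equation (8))
# for a pure sinusoid x(n) = A*sin(2*pi*f*n).
A, f = 1.5, 0.1                       # example amplitude and frequency
n = np.arange(100)
x = A * np.sin(2 * np.pi * f * n)
residual = x[2:] - 2 * np.cos(2 * np.pi * f) * x[1:-1] + x[:-2]
max_residual = np.max(np.abs(residual))   # ~0 up to floating-point error
```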
Applying the Z transform to both sides of equation (8) gives

$$[1 - 2\cos(2\pi f_i)\, z^{-1} + z^{-2}]\, X_i(z) = 0 \qquad (9)$$

where $X_i(z)$ is the Z transform of $x_i(n)$ and $z$ is a complex variable, so that

$$1 - 2\cos(2\pi f_i)\, z^{-1} + z^{-2} = 0 \qquad (10)$$

whose roots are the conjugate pair $z_i = e^{\pm j 2\pi f_i}$ on the unit circle, from which the frequency is recovered as

$$f_i = \frac{1}{2\pi} \arctan\frac{\mathrm{Im}(z_i)}{\mathrm{Re}(z_i)} \qquad (11)$$

where $\mathrm{Re}(z_i)$ and $\mathrm{Im}(z_i)$ denote the real and imaginary parts of $z_i$, the root of the Z transform of the modulation signal $x_i(n)$ generated by the $i$-th LED light source.
Extending this derivation to all $M$ LED lamps driven simultaneously by signals of different frequencies, the roots of the Z transforms of the modulation signals $x_i(n)$ are determined by the characteristic polynomial

$$\prod_{i=1}^{M} (1 - z_i z^{-1})(1 - z_i^{*} z^{-1}) = \sum_{i=0}^{2M} a_i z^{-i} = 0 \qquad (12)$$

where $z_i^{*}$ is the complex conjugate of $z_i$, $a_0 = 1$, and the coefficients are symmetric, i.e. $a_i = a_{2M-i}$ $(i = 0, 1, \ldots, M)$.
The difference equation corresponding to equation (12), satisfied by the superposed signal $x(n) = \sum_{i=1}^{M} x_i(n)$, is

$$\sum_{i=0}^{2M} a_i\, x(n-i) = 0 \qquad (13)$$

where $x(n-i)$ is $x(n)$ with the argument $n$ delayed by $i$ units.
Next, a plurality of positioning points are selected in the positioning area. Each point is subject to visible-light channel interference, and the signal received at the receiving end is

$$y(n) = H(0)\, x(n) + e(n) \qquad (14)$$

where $y(n)$ is the observed signal received by the PD, $e(n)$ is white noise, $H(0)$ is the optical channel DC gain, and $\sigma_w^2$ is the total noise variance. Substituting into equation (13) gives

$$\sum_{i=0}^{2M} a_i\, y(n-i) = \sum_{i=0}^{2M} a_i\, e(n-i) \qquad (15)$$

where $y(n-i)$ and $e(n-i)$ are $y(n)$ and $e(n)$ with the argument $n$ delayed by $i$ units. Writing equation (15) in matrix form gives

$$Y^T A = E^T A \qquad (16)$$

where

$$Y = [y(n)\; y(n-1)\; \cdots\; y(n-2M)]^T, \quad E = [e(n)\; e(n-1)\; \cdots\; e(n-2M)]^T, \quad A = [a_0\; a_1\; \cdots\; a_{2M}]^T \qquad (17)$$
multiplying the vector Y by the left of the formula (16), and obtaining the expectation of mathematics at the two sides to obtain E { YY T The method is as follows:
wherein R is Y Is a matrix formed by autocorrelation functions of the received observed signal, i.e. an autocorrelation matrix, R y (0) Representing the signal x from the 0 th light source 0 (n) the result of the autocorrelation operation;
wherein the content of the first and second substances,is the autocorrelation matrix R of the set of observed signals y (n) Y Characteristic value λ of i And the coefficient vector A of the characteristic polynomial is corresponding to the characteristic value lambda i The dimension of the noise subspace is 1, which is determined by the minimum eigenvalueThe corresponding feature vector constitutes, and therefore the coefficient vector a can be calculated.
In practice the Pisarenko harmonic decomposition method generally starts from a $p \times p$ $(p > 2M)$ autocorrelation matrix $R_Y$. To avoid the coefficient vector $A$ having multiple solutions caused by repeated eigenvalues, $R_Y$ must be reduced in dimension to $(2M+1) \times (2M+1)$:

$$R_Y = \begin{bmatrix} R_y(0) & R_y(1) & \cdots & R_y(2M) \\ R_y(1) & R_y(0) & \cdots & R_y(2M-1) \\ \vdots & \vdots & \ddots & \vdots \\ R_y(2M) & R_y(2M-1) & \cdots & R_y(0) \end{bmatrix} \qquad (20)$$

After solving equation (20) at the PD receiving end for its minimum eigenvalue, the coefficient vector $A$ is obtained from equation (19); its components $a_i$ are the polynomial coefficients associated with the autocorrelation of each harmonic component of the observation signal received by the PD. Applying the Z transform to equation (13) then yields the roots $z_i$, from which $f_i$ is obtained by equation (11).
Since the signal $x_i(n)$ is statistically independent of the white noise $e(n)$, the autocorrelation matrix of the received signal $y(n)$ of equation (14) is

$$R_Y = R_x + R_e = \sum_{i=1}^{M} P_i\, e_i e_i^H + \sigma_w^2 I \qquad (21)$$

where $R_x$ is the autocorrelation of the signal $x_i(n)$, $R_e$ is the autocorrelation of the white noise $e(n)$, $\sigma_w^2$ is the white noise variance, and $I$ is the identity matrix; $e_i = [1\; e^{j2\pi f_i}\; \cdots\; e^{j2\pi f_i \cdot 2M}]^T$ are $M$ linearly independent vectors computed from the obtained $f_i$. $P_i$ is the power of the optical signal emitted by the $i$-th LED lamp as detected at the receiving end, $P_i = |A_i|^2$.
The eigendecomposition of $R_Y$ satisfies

$$R_Y a_i = \lambda_i a_i, \quad i = 1, 2, \ldots, M \qquad (22)$$

Substituting $R_Y$ from equation (21) into the above gives

$$\left(\sum_{k=1}^{M} P_k\, e_k e_k^H + \sigma_w^2 I\right) a_i = \lambda_i a_i \qquad (23)$$

which, for unit-norm eigenvectors, simplifies to

$$\lambda_i = \sum_{k=1}^{M} P_k\, |e_k^H a_i|^2 + \sigma_w^2 \qquad (24)$$

where $|A_i(e^{j2\pi f_k})|^2 = |e_k^H a_i|^2$ is the squared magnitude, at frequency $f_k$, of the DTFT of the signal-subspace eigenvector $a_i$, i.e.

$$A_i(e^{j2\pi f_k}) = \sum_{n} a_i(n)\, e^{-j2\pi f_k n} \qquad (25)$$

Having found $a_i$ and $\lambda_i$, equation (24) can be rewritten as

$$\lambda_i - \sigma_w^2 = \sum_{k=1}^{M} P_k\, |A_i(e^{j2\pi f_k})|^2, \quad i = 1, 2, \ldots, M \qquad (26)$$

This represents $M$ linear equations in the $M$ unknown powers $P_k$, written in matrix form as

$$\begin{bmatrix} |A_1(e^{j2\pi f_1})|^2 & \cdots & |A_1(e^{j2\pi f_M})|^2 \\ \vdots & \ddots & \vdots \\ |A_M(e^{j2\pi f_1})|^2 & \cdots & |A_M(e^{j2\pi f_M})|^2 \end{bmatrix} \begin{bmatrix} P_1 \\ \vdots \\ P_M \end{bmatrix} = \begin{bmatrix} \lambda_1 - \sigma_w^2 \\ \vdots \\ \lambda_M - \sigma_w^2 \end{bmatrix} \qquad (27)$$

Solving this system yields the power matrix

$$P = [P_1\; P_2\; \cdots\; P_i\; \cdots\; P_M] \qquad (28)$$

completing the separation of the multiple light source signals, where $P_i$ corresponds to the power emitted by the $i$-th LED light source as received at the PD.
And finally, storing the obtained power values of the different light source signals of each positioning point and the calibration coordinates thereof into a database to complete the establishment of the database.
Step 5: based on the visible light indoor positioning framework of spectrum estimation detection and a neural network, the database established in step 4 is used as the training data set of the neural network, which fits the functional relationship between optical power and calibration coordinates. The neural network is a three-layer BP network comprising an input layer, a hidden layer and an output layer; the input layer consists of M neurons and the output layer of three neurons. The input of the input layer is the power matrix P obtained from the Pisarenko-based data separation, and the output layer outputs the optimized relative coordinates of the unknown positioning point.
Q points are randomly selected on the plane (the xy plane) at height H = 0 m of the positioning area as reference points of the fingerprint data set, and the powers of the M LED light sources at each reference point are separated by the Pisarenko harmonic decomposition method to form the fingerprint data set. The record corresponding to the $n$-th fingerprint point $(1 \le n \le Q)$ is

$$F_n = (n, x_n, y_n, z_n, P_{n1}, P_{n2}, \ldots, P_{nM}), \quad 1 \le n \le Q \qquad (29)$$

where $(x_n, y_n, z_n)$ represents the true coordinate position of the $n$-th fingerprint point and $P_{nm}$ the received optical power value of the $m$-th LED light source at the $n$-th fingerprint point $(x_n, y_n, z_n)$.
Let the number of training data set samples be K and the number of test data set samples be L (L + K = Q). The matrix formed by the optical power values of the M LED light sources received at each fingerprint point is the input training set. During training, the output layer continuously back-propagates the positioning error, and by continuously correcting the weights and thresholds of each layer the network approaches the actual positioning coordinates, improving positioning accuracy and yielding the trained neural network model. The input training set matrix $X_T$ can be expressed as

$$X_T = [X_1\; X_2\; \cdots\; X_K]^T \qquad (30)$$

where $X_d = (P_{d1}, P_{d2}, \ldots, P_{dM})$ $(1 \le d \le K)$ represents the Pisarenko power estimates of the M LED light sources received at the $d$-th position reference point in the training set. For each training sample, the output of each neuron is computed from front to back.
The excitation function is the unipolar sigmoid function

$$f(x) = \frac{1}{1 + e^{-\alpha x}} \qquad (31)$$

where $\alpha$ is the coefficient of the unipolar sigmoid function.
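The unipolar sigmoid and its derivative are simple to express; the derivative written in terms of the output is exactly the $O_k(1-O_k)$ factor that appears in the weight corrections below. The function names and the symbol `alpha` for the coefficient are this sketch's choices:

```python
import numpy as np

def sigmoid(x, alpha=1.0):
    """Unipolar sigmoid excitation: f(x) = 1 / (1 + exp(-alpha * x))."""
    return 1.0 / (1.0 + np.exp(-alpha * x))

def sigmoid_grad(y):
    """Derivative in terms of the output: f'(x) = f(x) * (1 - f(x)),
    i.e. the O*(1-O) factor used in the BP weight corrections."""
    return y * (1.0 - y)
```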
The output-layer weight correction is

$$\Delta\theta_k = -\eta\, \beta' O_k (1 - O_k)(d_k - O_k) \qquad (32)$$

where $\beta'$ is the input vector of the output layer, i.e. the output of the hidden layer, $d_k$ denotes the expected value of the $k$-th neuron of the output layer, and $\eta$ is the learning rate. The input of the $k$-th output neuron is $I_k = \sum_{j=1}^{Q} w_{jk} O_j$ and its output is $O_k = f(I_k)$, where $O_j$ is the output of the $j$-th neuron of the preceding layer, $Q$ is the total number of neurons in that layer, $w_{jk}$ is the weight between the hidden layer and the output layer, and $f(\cdot)$ is the unipolar sigmoid function; feeding $I_k$ through the activation function helps prevent the gradients of the neural network from vanishing.

The hidden-layer weight correction is

$$\Delta w_{ij} = -\eta\, O_i\, O_j (1 - O_j) \sum_{k} \delta_k w_{jk} \qquad (33)$$

where $O_i$ is the output of the $i$-th neuron feeding the hidden layer, $O_j$ is the output of the $j$-th hidden neuron, $i$ and $j$ are respectively the row and column indices of the connection weight matrix, and $L$ is the number of input data; $\delta_k$ is the partial derivative of the error transfer function with respect to the $k$-th output neuron, $\beta'$ is the input vector of the current hidden layer, i.e. the output of the previous layer, and $w_{jk}$ are the weights between the hidden layer and the output layer.
The trained neural network model is tested with the test set data; if the performance reaches the standard, the training process is complete, otherwise the hyperparameters are adjusted and training continues until the test performance reaches the standard. The reference points of the test set, together with the corresponding matrix P of received optical power values of the M LEDs, are fed into the trained BP neural network model. The input matrix $X_T^{\mathrm{test}}$ of the test data set can be expressed as

$$X_T^{\mathrm{test}} = [X_1^{\mathrm{test}}\; X_2^{\mathrm{test}}\; \cdots\; X_L^{\mathrm{test}}]^T \qquad (34)$$

where $X_q^{\mathrm{test}}$ represents the optical power values of the M LED light sources received at the $q$-th reference fingerprint point in the test set. The corresponding output matrix is

$$Y^{\mathrm{test}} = [(\hat{x}_1, \hat{y}_1, \hat{z}_1)\; \cdots\; (\hat{x}_L, \hat{y}_L, \hat{z}_L)]^T \qquad (35)$$

where $(\hat{x}_q, \hat{y}_q, \hat{z}_q)$ denotes the predicted coordinates of the $q$-th reference fingerprint point in the test set.
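The fingerprint-training workflow can be condensed into a minimal numpy sketch. Everything below is an illustrative assumption, not the patent's setup: the room geometry, the 4-16-2 network size, the inverse-square stand-in for the received powers, and the scaling of coordinates into the sigmoid's output range.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the fingerprint database: grid points on the
# receiver plane and, for each point, the received power of 4 ceiling
# LEDs modeled by a simple inverse-square falloff (assumed here purely
# so that the sketch is self-contained).
leds = np.array([[0.0, 0.0], [0.0, 4.0], [4.0, 0.0], [4.0, 4.0]])
pts = np.array([[x, y] for x in np.linspace(0, 4, 9)
                       for y in np.linspace(0, 4, 9)])
d2 = ((pts[:, None, :] - leds[None, :, :]) ** 2).sum(-1) + 4.0  # + H^2, H = 2 m
P = 1.0 / d2
P = (P - P.mean(0)) / P.std(0)          # standardize the power features
T = 0.1 + 0.8 * pts / 4.0               # coordinates scaled into (0, 1)

def f(x):                               # unipolar sigmoid excitation
    return 1.0 / (1.0 + np.exp(-x))

# Three-layer BP network: 4 power inputs -> 16 hidden -> 2 coordinates,
# trained by plain full-batch gradient-descent backpropagation.
W1 = rng.normal(0.0, 0.5, (4, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 2)); b2 = np.zeros(2)
eta = 1.0
losses = []
for epoch in range(5000):
    H = f(P @ W1 + b1)                  # hidden-layer outputs
    O = f(H @ W2 + b2)                  # predicted (scaled) coordinates
    err = O - T
    losses.append(float((err ** 2).mean()))
    dO = err * O * (1 - O)              # output-layer delta (O*(1-O) factor)
    dH = (dO @ W2.T) * H * (1 - H)      # hidden-layer delta
    W2 -= eta * H.T @ dO / len(P); b2 -= eta * dO.mean(0)
    W1 -= eta * P.T @ dH / len(P); b1 -= eta * dH.mean(0)
```

Since the four powers determine the position on this synthetic grid, the training loss drops well below the variance of the coordinates themselves, which is the behavior the fingerprint method relies on.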
And 6, performing visible light indoor positioning by using the neural network model trained and tested in the step 5.
The performance of the visible light indoor positioning method based on spectrum estimation detection is improved over other traditional visible light indoor positioning methods. As shown in figs. 6 and 7, at a signal-to-noise ratio of 10 dB the predicted frequency and power values show no large deviations: the maximum frequency error is 1.253 Hz, the power varies sensitively with the distance of the positioning point, and the maximum power estimation error is 0.125 W. The extracted power value of each LED is stable as training input data and highly distinguishable, indicating that the Pisarenko harmonic decomposition method can be well applied to the extraction and separation of the multi-LED mixed optical signal in visible light indoor positioning.
As shown in fig. 8, which presents the overall result of three-dimensional positioning at H = 0 m, the hollow circles represent the real position coordinates and the stars represent the position coordinates measured by the BP neural network. From the results in the figure, the predicted positions show no large deviation from the actual position coordinates: the average error at this height is 2.12 cm, and the overall three-dimensional average positioning error is 3.81 cm.
The positioning accuracy and stability of the positioning system are further verified through an actual measurement positioning experiment. A three-dimensional test space with a side length of 0.8 m is built, and 4 LED light sources with a power of 5 W are uniformly arranged at the top of the space. On the bottom plane of the space, a two-dimensional coordinate system is established with one corner of the plane as the origin and 5 cm intervals; lattice points are drawn at these intervals, and 289 equally spaced measurement positions are selected. A signal generator produces sinusoidal signals of different frequencies, which are loaded onto the corresponding LED light sources through the LED drivers so that the LEDs periodically transmit optical signals. At the receiving end, a PD serving as the signal receiver is placed horizontally at the 289 equally spaced measurement positions in the positioning area and converts the received optical signal into an electrical signal; the electrical signal passes through an amplifier and is acquired by an oscilloscope, and the optical power vector of the 4 LED light sources corresponding to each position is obtained by separation with the Pisarenko harmonic decomposition algorithm. For each light source, 40 power values are acquired, the extreme values are removed after sorting, and the average is taken as the power value of that light source at that point. Finally, 289 groups of training data and 20 groups of test position data are selected, and the position coordinates of the PD are obtained through multiple tests. The 289 groups of data are then fed into the neural network for processing and training according to the neural network positioning algorithm proposed in this embodiment.
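The per-point power conditioning step described above (sort the 40 repeated readings of each LED, drop the extremes, average the rest) can be sketched as follows; the number of values trimmed from each end is an illustrative assumption, since the embodiment only states that extreme values are removed after sorting:

```python
import numpy as np

def trimmed_power(samples, n_trim=5):
    """Sort one LED's repeated power readings, drop the n_trim smallest
    and n_trim largest values, and average the remainder."""
    s = np.sort(np.asarray(samples, dtype=float))
    return float(np.mean(s[n_trim:len(s) - n_trim]))

# 40 readings around 0.125 W for one LED, with two spurious extremes.
rng = np.random.default_rng(0)
readings = 0.125 + 0.002 * rng.standard_normal(40)
readings[3] = 0.5    # spike
readings[17] = 0.0   # dropout
p_hat = trimmed_power(readings, n_trim=5)
```

Applied to each of the 4 LEDs at each of the 289 lattice points, this yields one 4-element power vector per point for the fingerprint database.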
Through multiple positioning tests on the 289 selected groups of training data and 20 non-coincident groups of position data selected along a square track, the positioning error distribution shown in fig. 9 is obtained. The results show that the probability of an error below 5 cm is 60% and the probability of an error below 2 cm is 10%, with an average positioning error of 4.28 cm, indicating that the method of this embodiment achieves high actual measurement positioning accuracy and a stable positioning effect.
Example 2
This embodiment provides a visible light indoor positioning system based on spectrum estimation detection, comprising a plurality of LED lamps and a photodetector, and further comprising: a memory for storing instructions executable by a processor; and a processor for executing the instructions to implement the visible light indoor positioning method based on spectrum estimation detection described in embodiment 1.
A visible light indoor positioning system based on spectrum estimation detection may include an internal communication bus, a processor, a read-only memory (ROM), a random access memory (RAM), a communication port, and a hard disk. The internal communication bus enables data communication between the components of the system. The processor performs the computations and issues the prompts; in some embodiments, the processor may consist of one or more processors. The communication port enables data communication with devices outside the system; in some embodiments, the system may also send and receive information and data from the network through the communication port. The system may further comprise program storage units and data storage units in different forms, such as a hard disk, ROM, and RAM, capable of storing various data files used for computer processing and/or communication, as well as the program instructions executed by the processor. The processor executes these instructions to implement the main parts of the method, and the processed result is transmitted through the communication port to the user's device and displayed on its interface.
The above visible light indoor positioning method based on spectrum estimation detection can be implemented as a computer program, stored on a hard disk, and loaded into a processor for execution. Accordingly, embodiments of the present invention also provide a computer-readable medium having computer program code stored thereon which, when executed by a processor, implements the visible light indoor positioning method based on spectrum estimation detection described above.
When implemented as a computer program, the visible light indoor positioning method based on spectrum estimation detection may also be stored in a computer-readable storage medium as an article of manufacture. For example, computer-readable storage media may include, but are not limited to, magnetic storage devices (e.g., hard disk, floppy disk, magnetic strip), optical disks (e.g., compact disk (CD), digital versatile disk (DVD)), smart cards, and flash memory devices (e.g., electrically erasable programmable read-only memory (EEPROM), card, stick, key drive). In addition, the various storage media described herein can represent one or more devices and/or other machine-readable media for storing information. The term "machine-readable medium" can include, without being limited to, wireless channels and various other media (and/or storage media) capable of storing, containing, and/or carrying code and/or instructions and/or data.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
Claims (10)
1. A visible light indoor positioning method based on spectrum estimation detection, characterized by comprising the following steps:
step 1, building a visible light indoor communication link system model;
step 2, establishing an LED channel diffuse reflection model comprising a direct line-of-sight (LOS) link and a first-order reflection (NLOS) link;
step 3, setting the installation distance of the LED and dividing a positioning area on the basis of the LED channel diffuse reflection model, and constructing an indoor VLC positioning system channel model;
step 4, selecting a plurality of positioning points in the positioning area, separating the multiple light source signals based on a spectrum estimation detection method, obtaining power values of different light source signals, and storing the obtained power values and calibration coordinates of the different light source signals of each positioning point into a database;
and 5, taking the data stored in the database as a training data set of the neural network, constructing and training a visible light indoor positioning neural network model, and performing visible light indoor positioning by using the trained visible light indoor positioning neural network model.
2. The visible light indoor positioning method based on spectrum estimation detection as claimed in claim 1, wherein the LED channel diffuse reflection model established in step 2 and comprising a direct line-of-sight link LOS and a first-order reflection link NLOS is:
I(φ_i) = I_0 cos^m φ_i (1)
I′_i = kI_i cos^(m-1) β (2)
wherein I_0 denotes the central luminous intensity of the LED, φ_i denotes the emission angle of the ith LED, m denotes the Lambertian reflection order, I′_i denotes the reflected light intensity of the secondary light source generated by the ith LED light source directly irradiating point P of the wall surface, I_i denotes the illumination intensity at point P of the wall surface directly irradiated by the ith LED light source, the coordinate of point P being P(X_1, Y_1, Z_1), k is the reflection coefficient of the wall surface, and β denotes the emission angle of the secondary light source; E is the illuminance of any of the M LED light sources received by the jth photodetector (PD), E_j is the direct horizontal illuminance of the jth PD under the multiple LED light sources, and E′_j is the reflected light illuminance of the jth PD under the multiple light sources; ψ_j denotes the incident angle of the direct light source received by the jth PD, whose coordinate is (x_j, y_j, 0); the coordinate of the ith LED is (X_i, Y_i, Z_i), and M is the total number of LEDs.
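For illustration, the Lambertian pattern of equation (1) combines with the receiver geometry to give the line-of-sight DC channel gain H(0) that appears later in claim 6. A minimal sketch under the usual assumptions (LED pointing straight down, PD facing straight up; the 60° semi-angle, the detector area, and the 0.8 m geometry are illustrative choices, not claim limitations):

```python
import math

def lambert_order(half_angle_deg):
    # Lambertian order m from the LED semi-angle at half power.
    return -math.log(2.0) / math.log(math.cos(math.radians(half_angle_deg)))

def los_dc_gain(led, pd, area_pd, m):
    """Standard line-of-sight DC gain for one LED-PD pair:
    H(0) = (m + 1) * A / (2 * pi * d^2) * cos(phi)**m * cos(psi),
    with phi the emission angle and psi the incidence angle."""
    dx, dy, dz = (pd[0] - led[0], pd[1] - led[1], pd[2] - led[2])
    d = math.sqrt(dx * dx + dy * dy + dz * dz)
    cos_phi = abs(dz) / d  # LED normal points straight down
    cos_psi = abs(dz) / d  # PD normal points straight up
    return (m + 1) * area_pd / (2 * math.pi * d * d) * cos_phi ** m * cos_psi

m = lambert_order(60.0)  # 60 deg semi-angle gives m = 1
h0 = los_dc_gain((0.4, 0.4, 0.8), (0.4, 0.4, 0.0), 1e-4, m)
```

Summing such gains (plus the first-order reflections of equation (2)) over the M LEDs gives the received power model used to build the fingerprint database.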
3. The visible light indoor positioning method based on spectral estimation detection according to claim 1, wherein in step 4, a Pisarenko harmonic decomposition method is used for multi-light source signal separation to obtain power values of different light source signals.
4. The method of claim 1, wherein in step 4 the multi-light source signals are separated based on the spectrum estimation detection method and the power values of the different light source signals are obtained by solving equation (28) for the power matrix P = [P_1 P_2 … P_i … P_M], wherein P_i is the power emitted by the ith LED lamp as received by the PD, whereby the multi-light source signal separation is completed:
wherein X_i(e^{jw}) is obtained by performing the DTFT on the received modulation signal generated by the ith LED light source, A_i is the amplitude of the modulation signal generated by the ith LED light source received by the PD, a_i is the intensity of that modulation signal, w_i is the frequency coefficient of that modulation signal in the complex frequency domain, f_i is the frequency of that modulation signal in the complex frequency domain, λ_i is an eigenvalue of the autocorrelation matrix R_Y of the set of observed signals received by the PD, the signal received by the PD is an observed signal containing white noise e(n), and σ_ω is the variance of the white noise e(n).
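Equation (28) itself is not reproduced in this excerpt, so the sketch below uses the classic Pisarenko power step as an assumed stand-in: once the angular frequencies w_i are known, the per-LED sinusoid powers P_i satisfy the linear system r(k) = sum_i P_i cos(w_i k) in the autocorrelation lags, which is solved for the power matrix P.

```python
import numpy as np

def sinusoid_powers(freqs, lags):
    """Solve r(k) = sum_i P_i * cos(w_i * k), k = 1..M, for the powers
    P = [P_1 ... P_M] of M sinusoids with known angular frequencies,
    given the autocorrelation lags r(1)..r(M)."""
    M = len(freqs)
    C = np.array([[np.cos(w * k) for w in freqs] for k in range(1, M + 1)])
    return np.linalg.solve(C, np.asarray(lags)[:M])

# Synthetic check with two known sinusoid powers.
w_true = np.array([0.4, 1.1])
P_true = np.array([2.0, 0.5])
r = np.array([np.sum(P_true * np.cos(w_true * k)) for k in range(1, 3)])
P_est = sinusoid_powers(w_true, r)
```

In the positioning system, the M entries of the recovered power matrix P form the input vector of the BP neural network of claim 7.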
5. The visible light indoor positioning method based on spectrum estimation detection as claimed in claim 4, wherein, at the PD receiving end, a vector e_i is first constructed from the time domain frequency f_i of the modulation signal generated by the ith LED light source, the DTFT is then performed on the vector e_i to obtain X_i(e^{jw}), and the result is then calculated according to the following formula:
wherein, at the PD receiving end, the time domain frequency f_i of the emission signal loaded on the ith LED light source is calculated by the following steps:
when M LED lamps are driven by different frequency signals at the same time, the modulation signal x generated by the ith LED light source i The root of the Z transform of (n) is determined by equation (12):
wherein z_i* is the complex conjugate of z_i; a_0 = 1, a_i = a_{2M-i}, i = 0, 1, …, M;
The difference equation corresponding to equation (12) is:
wherein x_i(n-i) is the delayed signal obtained by delaying the argument n of x_i(n) by i units;
The Z transformation is performed on both sides of equation (13) simultaneously, and the roots z_i of the Z transform of the modulation signal x_i(n) generated by the ith LED light source are obtained by solving;
Finally, the time domain frequency f_i of the emission signal loaded on the ith LED light source is obtained through the following formula:
Wherein, re (z) i ) Denotes z i Real part of, im (z) i ) Denotes z i The imaginary part of (c).
6. The visible light indoor positioning method based on spectrum estimation detection according to claim 5, wherein the signal intensity a_i of x_i(n) is determined by the following procedure:
A plurality of positioning points are selected in the positioning area; the positioning points are subject to interference from the visible light channel, and the signal received at the receiving end is:
where y(n) is the observed signal received by the PD containing white noise e(n), H(0) is the optical channel DC gain, and σ² is the total noise variance;
wherein y(n-i) is the delayed signal obtained by delaying the argument n of y(n) by i units, and e(n-i) is the delayed signal obtained by delaying the argument n of e(n) by i units;
Writing equation (15) in matrix form yields
Y T A=E T A (16)
Wherein:
multiplying the vector Y by the left of the formula (16), and obtaining the expectation of mathematics at the two sides to obtain E { YY T The method is as follows:
wherein R_Y is the autocorrelation matrix of the received observed signal, and R_y(i) represents the result of the autocorrelation operation on the signal x_i(n);
wherein λ_i is an eigenvalue of the autocorrelation matrix R_Y of the set of observed signals y(n), and the coefficient vector A of the characteristic polynomial is the eigenvector corresponding to the eigenvalue λ_i;
R_Y is subjected to dimensionality reduction processing, namely:
The characteristic polynomial in equation (20) is solved, and the coefficient vector A is then obtained through equation (19), so that the signal intensity a_i of x_i(n) is determined.
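The procedure of equations (16) to (20), namely taking the eigenvector of the smallest eigenvalue of the autocorrelation matrix R_Y as the coefficient vector A and reading the frequencies off the roots of its characteristic polynomial, can be sketched as follows (the two-tone lag values and the noise variance are illustrative numbers, not taken from the patent):

```python
import numpy as np

def pisarenko(r):
    """Pisarenko eigen step. r holds the autocorrelation lags
    r(0)..r(2M) of M real sinusoids in white noise. The eigenvector of
    the smallest eigenvalue of the (2M+1)x(2M+1) Toeplitz matrix R_Y is
    the coefficient vector A; the sinusoid frequencies are the angles
    of the roots of its polynomial, and the smallest eigenvalue is the
    white-noise variance."""
    p = len(r)
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    eigval, eigvec = np.linalg.eigh(R)  # eigenvalues in ascending order
    A = eigvec[:, 0]                    # eigenvector of the min eigenvalue
    w = np.abs(np.angle(np.roots(A)))   # conjugate root pairs -> |angle|
    return np.sort(np.unique(np.round(w, 8))), eigval[0]

# Exact lags for two sinusoids (powers 0.5 and 0.245, angular
# frequencies 0.5 and 1.3 rad/sample) plus noise of variance 1e-3.
k = np.arange(5)
r = 0.5 * np.cos(0.5 * k) + 0.245 * np.cos(1.3 * k)
r[0] += 1e-3
w_est, noise_var = pisarenko(r)
```

With the frequencies recovered here, the vector e_i of claim 5 and the power step of claim 4 complete the separation of the mixed multi-LED signal.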
7. The visible light indoor positioning method based on spectrum estimation detection according to any one of claims 1 to 6, wherein in step 5, the constructed visible light indoor positioning neural network model is a BP neural network, the input of the BP neural network is a power matrix P of multi-light source signals obtained based on spectrum estimation detection, and the output is relative coordinates of unknown positioning points;
the BP neural network comprises an input layer, an output layer and at least one hidden layer, wherein the input layer is composed of M neurons, and the output layer is composed of three neurons.
8. The visible light indoor positioning method based on spectrum estimation detection according to any one of claims 1 to 6, wherein when the visible light indoor positioning neural network model is constructed and trained in the step 5, the excitation function is a unipolar sigmoid function, and the expression is as follows:
9. A visible light indoor positioning system based on spectrum estimation detection, comprising a plurality of LED lamps and a photodetector, characterized by further comprising:
a memory for storing instructions executable by the processor;
a processor for executing the instructions to implement the method of any one of claims 1 to 6.
10. A computer-readable medium, characterized in that computer program code is stored thereon which, when executed by a processor, implements the method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210965366.7A CN115333624B (en) | 2022-08-12 | 2022-08-12 | Visible light indoor positioning method, system and computer readable medium based on spectrum estimation detection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115333624A true CN115333624A (en) | 2022-11-11 |
CN115333624B CN115333624B (en) | 2024-04-12 |
Family
ID=83924593
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210965366.7A Active CN115333624B (en) | 2022-08-12 | 2022-08-12 | Visible light indoor positioning method, system and computer readable medium based on spectrum estimation detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115333624B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108732537A (en) * | 2018-05-08 | 2018-11-02 | 北京理工大学 | A kind of indoor visible light localization method based on neural network and received signal strength |
CN111090074A (en) * | 2019-12-23 | 2020-05-01 | 武汉邮电科学研究院有限公司 | Indoor visible light positioning method and equipment based on machine learning |
CN111664853A (en) * | 2020-06-22 | 2020-09-15 | 北京大学 | Linear regression model-based NLOS interference-resistant visible light positioning method and system |
CN112468954A (en) * | 2020-11-03 | 2021-03-09 | 西安工业大学 | Visible light indoor stereo positioning method based on neural network |
CN113709661A (en) * | 2021-07-30 | 2021-11-26 | 西安交通大学 | Single-site indoor hybrid positioning method and system based on LOS (line of Signaling) identification |
Non-Patent Citations (4)
Title |
---|
ZHANG LINGYAN: "Research on indoor positioning and tracking technology based on WiFi channel state information", China Doctoral Dissertations Full-text Database, Information Science and Technology *
WANG XUDONG et al.: "Indoor visible light positioning with cooperation of multiple illumination areas", Journal of Optoelectronics·Laser, 15 April 2017 (2017-04-15) *
ZHAO LI et al.: "Research on visible light indoor stereo positioning based on neural networks", Chinese Journal of Lasers *
ZHAO LI et al.: "Visible light indoor positioning method based on a neural network optimized by the beetle antennae search algorithm", Optical Communication Technology *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108629380B (en) | Cross-scene wireless signal sensing method based on transfer learning | |
CN109001679B (en) | Indoor sound source area positioning method based on convolutional neural network | |
Njima et al. | Indoor localization using data augmentation via selective generative adversarial networks | |
Jin et al. | Indoor localization with channel impulse response based fingerprint and nonparametric regression | |
CN106970379B (en) | Based on Taylor series expansion to the distance-measuring and positioning method of indoor objects | |
CN112468954B (en) | Visible light indoor stereo positioning method based on neural network | |
Finstad et al. | Fast parameter estimation of binary mergers for multimessenger follow-up | |
Kılıç et al. | Through‐Wall Radar Classification of Human Posture Using Convolutional Neural Networks | |
Majeed et al. | Passive indoor visible light positioning system using deep learning | |
Dubey et al. | An enhanced approach to imaging the indoor environment using WiFi RSSI measurements | |
Luo et al. | A space-frequency joint detection and tracking method for line-spectrum components of underwater acoustic signals | |
CN113591760B (en) | Gait monitoring method of far-field multiple human bodies based on millimeter waves | |
Cui et al. | Research on indoor positioning system based on VLC | |
CN111753660A (en) | Terahertz millimeter wave-based human face bone identification method | |
Cappelli et al. | Enhanced visible light localization based on machine learning and optimized fingerprinting in wireless sensor networks | |
CN113567975B (en) | Human body rapid security inspection method based on vortex electromagnetic wave mode scanning | |
Jacobson et al. | Using the world wide lightning location network (WWLLN) to study very low frequency transmission in the earth-ionosphere waveguide: 1. Comparison with a full-wave model | |
CN115333624B (en) | Visible light indoor positioning method, system and computer readable medium based on spectrum estimation detection | |
CN107595260A (en) | Contactless sign detection method, device, storage medium and its computer equipment | |
CN113219408A (en) | Improved RBF neural network indoor visible light positioning method and system | |
CN116466335A (en) | Indoor visible light positioning method and system based on clustering and deep neural network | |
CN116930963A (en) | Through-wall imaging method based on wireless communication system | |
Li et al. | Passive Multi-user Gait Identification through micro-Doppler Calibration using mmWave Radar | |
Lau et al. | Novel indoor localisation using an unsupervised Wi-Fi signal clustering method | |
CN107071897A (en) | It is a kind of based on ring-like Wi Fi indoor orientation methods |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||