CN113283490A - Channel state information deep learning positioning method based on front-end fusion - Google Patents

Channel state information deep learning positioning method based on front-end fusion

Info

Publication number
CN113283490A
Authority
CN (China)
Prior art keywords
CSI, training, amplitude, image, matrix
Legal status
Pending (assumed; no legal analysis has been performed)
Application number
CN202110544012.0A
Other languages
Chinese (zh)
Inventors
颜俊 (Yan Jun), 方文杰 (Fang Wenjie), 曹艳华 (Cao Yanhua)
Current and original assignee
Nanjing University of Posts and Telecommunications
Priority and filing date
2021-05-19
Publication date
2021-08-20

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/25 - Fusion techniques
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 - Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/20 - Instruments for performing navigational calculations
    • G01C 21/206 - Instruments for performing navigational calculations specially adapted for indoor navigation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods


Abstract

The invention discloses a channel state information (CSI) deep learning positioning method based on front-end fusion, comprising an offline training stage and an online positioning stage executed in sequence. The offline training stage includes preprocessing the training data and offline learning. The online positioning stage includes constructing a CSI actual amplitude difference image and a CSI actual phase difference image, performing front-end fusion on them, and substituting the resulting CSI actual fusion image into a regression model based on the X-axis coordinate position and a regression model based on the Y-axis coordinate position to obtain the position estimate. The method comprehensively exploits the amplitude and phase information of the CSI and combines it with deep learning, realizing accurate positioning of indoor targets, significantly improving positioning accuracy, shortening positioning time, and reducing the complexity of the positioning process.

Description

Channel state information deep learning positioning method based on front-end fusion
Technical Field
The invention relates to a target positioning method, and in particular to a target positioning method that performs front-end fusion of CSI amplitude and phase information and combines it with deep learning; it belongs to the intersection of positioning/navigation and machine learning.
Background
In recent years, services based on positioning technology have gradually permeated people's lives and brought great convenience. However, for indoor environments, where satellite positioning cannot work effectively, no standard positioning solution has yet been established, which prevents LBS (Location Based Services) technology from developing further indoors. Common indoor positioning technologies are mainly based on infrared or Bluetooth, but they suffer from high cost, high power consumption, and poor stability. For example, although infrared-based indoor positioning can achieve high accuracy, infrared signals are susceptible to interference from sunlight and fluorescent light, and infrared cannot penetrate obstacles such as walls, so only short-range line-of-sight positioning is possible. Bluetooth-based indoor positioning mainly relies on the strength of the received signal; but because signal attenuation depends not only on distance but also on factors such as multipath loss, the method is unstable and the positioning performance is poor.
With the continuous maturation of Wi-Fi technology and the widespread deployment of wireless devices indoors, indoor positioning based on wireless signals has received a great deal of attention from industry. Position-fingerprint-based positioning is a focus of current indoor positioning research thanks to its low implementation cost, wide applicability, and freedom from additional hardware. Now that hardware devices can support the acquisition of physical-layer Channel State Information, fingerprint positioning based on CSI (Channel State Information) offers a new direction for indoor positioning research. The CSI describes how the signal is affected along each transmission path, e.g. by scattering, distance attenuation, and environmental attenuation. Unlike conventional schemes that judge the received signal strength, CSI is measured for each OFDM subcarrier of every packet received over the radio link, so more stable information can be obtained. In addition, the CSI carries the amplitude and phase of the subcarriers in the frequency domain, describing fine-grained physical information that is more sensitive to the environment.
In summary, how to design a brand-new target positioning method that, in light of the current state of research, comprehensively exploits the amplitude and phase information of the CSI and combines it with deep learning to achieve target positioning is a problem of common concern to those skilled in the art.
Disclosure of Invention
In view of the foregoing defects in the prior art, the object of the present invention is to provide a target positioning method that combines front-end fusion of CSI amplitude and phase information with deep learning, as follows.
A channel state information deep learning positioning method based on front-end fusion comprises two stages executed in sequence, offline training and online positioning,
in the off-line training phase, the method comprises the following steps,
s11, dividing the positioning area into a plurality of reference points according to the X-axis and Y-axis directions, and acquiring CSI training measured values of signals transmitted by the router at each reference point by using the receiving end;
s12, extracting amplitude values and phase values of the CSI training measured values, and respectively constructing a CSI training amplitude difference image and a CSI training phase difference image;
s13, performing front-end fusion on the CSI training amplitude difference image and the CSI training phase difference image to form a CSI training fusion image which is used as a position fingerprint;
s14, learning a nonlinear relation between the CSI training fusion image and the X-axis coordinate position in the reference point by using the CNN to obtain a regression model based on the X-axis coordinate position;
s15, learning a nonlinear relation between the CSI training fusion image and the Y-axis coordinate position in the reference point by using the CNN to obtain a regression model based on the Y-axis coordinate position;
in the on-line positioning stage, the method comprises the following steps:
s21, acquiring a CSI actual measurement value of a signal transmitted by a router on a reference point where a target is located by using a receiving end, extracting an amplitude value and a phase value of the CSI actual measurement value, and respectively constructing a CSI actual amplitude difference image and a CSI actual phase difference image;
s22, performing front-end fusion on the CSI actual amplitude difference image and the CSI actual phase difference image to obtain a CSI actual fusion image;
and S23, substituting the CSI actual fusion image into the regression model based on the X-axis coordinate position and into the regression model based on the Y-axis coordinate position, respectively, to obtain the position estimate.
Preferably, constructing the CSI training amplitude difference image and the CSI training phase difference image in S12 respectively comprises the following steps:
s121, constructing a CSI training amplitude difference matrix and a CSI training phase difference matrix;
and S122, converting each element in the CSI training amplitude difference matrix and the CSI training phase difference matrix into an image pixel value by using a linear mapping method to respectively obtain a CSI training amplitude difference image and a CSI training phase difference image.
Preferably, constructing the CSI training amplitude difference matrix and the CSI training phase difference matrix in S121 comprises the following steps:
s1211, defining N_T as the number of transmitting-end antennas, N_R as the number of receiving-end antennas, N_k as the number of subcarriers of the CSI training measurements, and N_p as the number of CSI training measurement data packets;
s1212, for each CSI training measurement data packet, constructing the training amplitude matrix of a single transmitting-end antenna by taking the amplitude values of the CSI training measurements extracted at each receiving-end antenna as the rows of the matrix, the training amplitude matrix having dimension N_R × N_k; then, taking the amplitude values of the CSI training measurements of the first receiving-end antenna as the reference, subtracting this reference from every row of the training amplitude matrix and deleting the resulting all-zero row, to obtain a training amplitude difference submatrix of dimension (N_R - 1) × N_k;
s1213, performing the operation in S1212 for the remaining transmitting-end antennas to obtain N_T training amplitude difference submatrices of dimension (N_R - 1) × N_k, and merging them row-wise into a matrix of dimension [(N_R - 1) × N_T] × N_k;
s1214, performing the operations in S1212 - S1213 for all N_p CSI training measurement data packets to obtain N_p matrices of dimension [(N_R - 1) × N_T] × N_k, and merging them row-wise into the CSI training amplitude difference matrix of dimension [(N_R - 1) × N_T × N_p] × N_k;
s1215, for the phase values of the CSI training measurements extracted at the receiving-end antennas, performing the operations in S1212 - S1214 to obtain the CSI training phase difference matrix.
Preferably, the front-end fusion of the CSI training amplitude difference image and the CSI training phase difference image in S13 includes the following steps:
s131, performing Laplacian pyramid decomposition on the CSI training amplitude difference image and the CSI training phase difference image respectively, and establishing Laplacian pyramids of the images;
s132, fusion processing is carried out on all decomposition layers of the Laplacian pyramid corresponding to the CSI training amplitude difference image and the CSI training phase difference image respectively;
and S133, carrying out image reconstruction on the fused Laplacian pyramid to obtain a CSI training fusion image.
The advantages of the invention are mainly embodied in the following aspects:
according to the channel state information deep learning positioning method based on front-end fusion, provided by the invention, the amplitude and phase information of the CSI is fused by using a Laplacian pyramid fusion algorithm, so that the positioning precision of an indoor target is obviously improved. And the amplitude and phase information of the CEI can be directly extracted from the received signal without adding any additional hardware equipment in the system, thereby reducing the implementation cost of the scheme.
Meanwhile, the CSI amplitude difference and phase difference images are constructed from the CSI amplitude difference and phase difference matrices. This difference-based preprocessing reduces the influence of measurement noise on the image fingerprints during CSI image construction, so the subsequent offline learning is considerably more efficient, the positioning time is shortened, and the complexity of the positioning process is reduced.
In addition, the invention provides a new line of thought for research on and application of indoor target positioning technology, offers a reference for other related problems in the same field, can serve as a basis for extension and further study, and therefore has very broad application prospects.
The following detailed description of embodiments of the invention, taken together with the accompanying drawings, is provided to facilitate understanding of the technical solutions of the invention.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2 is a schematic diagram of the fusion of a CSI amplitude difference image and a CSI phase difference image in the present invention;
FIG. 3 is a schematic diagram of the structure of a convolutional neural network used in the present invention;
FIG. 4 is a graph of accumulated error analysis according to the present invention.
Detailed Description
The invention discloses a channel state information deep learning positioning method based on front-end fusion. During actual testing it was found that the CSI amplitude difference and phase difference images differ markedly between positions and can therefore serve as fingerprints of the reference positions. The method accordingly exploits both the amplitude and the phase information of the CSI and combines them with deep learning, realizing accurate positioning of indoor targets, significantly improving positioning accuracy, shortening positioning time, and reducing the complexity of the positioning process.
The specific scheme of the invention is as follows.
As shown in fig. 1, the channel state information deep learning positioning method based on front-end fusion includes two stages executed in sequence: offline training and online positioning.
In the offline training stage, the method comprises the following steps:
S11, dividing the positioning area into a plurality of reference points along the X-axis and Y-axis directions, assuming the target is located on the reference points, and acquiring CSI training measurements of the signals transmitted by the router at each reference point using the receiving end, while extracting the amplitude and phase values of the CSI training measurements for every transmitting-antenna-to-receiving-antenna pair. In this embodiment, the receiving-end device may be a computer equipped with an Intel 5300 wireless network card, and the CSI measurements are extracted with the CSI-Tool software.
And S12, extracting amplitude values and phase values of the CSI training measured values, and respectively constructing a CSI training amplitude difference image and a CSI training phase difference image.
Constructing the CSI training amplitude difference image and the CSI training phase difference image comprises the following steps.
s121, constructing the CSI training amplitude difference matrix and the CSI training phase difference matrix. This step can be further refined as follows:
s1211, defining N_T as the number of transmitting-end antennas, N_R as the number of receiving-end antennas, N_k as the number of subcarriers of the CSI training measurements, and N_p as the number of CSI training measurement data packets;
s1212, for each CSI training measurement data packet, constructing the training amplitude matrix of a single transmitting-end antenna by taking the amplitude values of the CSI training measurements extracted at each receiving-end antenna as the rows of the matrix, the training amplitude matrix having dimension N_R × N_k; then, taking the amplitude values of the CSI training measurements of the first receiving-end antenna as the reference, subtracting this reference from every row of the training amplitude matrix and deleting the resulting all-zero row, to obtain a training amplitude difference submatrix of dimension (N_R - 1) × N_k;
s1213, performing the operation in S1212 for the remaining transmitting-end antennas to obtain N_T training amplitude difference submatrices of dimension (N_R - 1) × N_k, and merging them row-wise into a matrix of dimension [(N_R - 1) × N_T] × N_k;
s1214, performing the operations in S1212 - S1213 for all N_p CSI training measurement data packets to obtain N_p matrices of dimension [(N_R - 1) × N_T] × N_k, and merging them row-wise into the CSI training amplitude difference matrix of dimension [(N_R - 1) × N_T × N_p] × N_k;
s1215, for the phase values of the CSI training measurements extracted at the receiving-end antennas, performing the operations in S1212 - S1214 to obtain the CSI training phase difference matrix.
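As an illustration of steps S1211 - S1215, the following NumPy sketch builds the difference matrices from a raw CSI array. The array name csi, its shape convention (packets × TX antennas × RX antennas × subcarriers), and the toy dimensions are assumptions for illustration, not part of the patent.

```python
import numpy as np

def csi_difference_matrix(values):
    """Build the [(N_R - 1) * N_T * N_p] x N_k difference matrix from
    `values` of shape (N_p, N_T, N_R, N_k) (amplitudes or phases)."""
    n_p, n_t, n_r, n_k = values.shape
    rows = []
    for p in range(n_p):                # S1214: loop over all packets
        for t in range(n_t):            # S1213: loop over TX antennas
            sub = values[p, t]          # S1212: N_R x N_k matrix, one row per RX antenna
            diff = sub - sub[0]         # subtract the first RX antenna as reference
            rows.append(diff[1:])       # drop the resulting all-zero reference row
    return np.vstack(rows)              # dimension [(N_R - 1) * N_T * N_p] x N_k

# Toy data: N_p = 50 packets, N_T = 2, N_R = 3, N_k = 30 subcarriers.
csi = np.random.randn(50, 2, 3, 30) + 1j * np.random.randn(50, 2, 3, 30)
amp_diff = csi_difference_matrix(np.abs(csi))                        # amplitude differences
pha_diff = csi_difference_matrix(np.unwrap(np.angle(csi), axis=-1))  # phases unwrapped first (S1215)
```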
And S122, converting each element of the CSI training amplitude difference matrix and the CSI training phase difference matrix into an image pixel value by linear mapping, to obtain the CSI training amplitude difference image and the CSI training phase difference image, respectively. Note that although the CSI training phase difference image is obtained in the same way as the amplitude difference image, the phase values of the CSI training measurements must first be unwrapped.
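A minimal sketch of the linear mapping in S122, assuming min-max scaling to 8-bit grey values; the patent specifies only that a linear mapping is used, so the exact scaling constants here are an assumption.

```python
import numpy as np

def matrix_to_image(mat):
    """Linearly map matrix elements to pixel values in [0, 255]."""
    lo, hi = float(mat.min()), float(mat.max())
    scaled = (mat - lo) / (hi - lo + 1e-12)           # linear map to [0, 1]
    return np.round(scaled * 255.0).astype(np.uint8)  # 8-bit image pixels

amp_image = matrix_to_image(np.random.randn(4, 30))   # e.g. an amplitude difference matrix
```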
And S13, performing front-end fusion on the CSI training amplitude difference image and the CSI training phase difference image, forming a CSI training fusion image and taking the CSI training fusion image as a position fingerprint.
Compared with simple image fusion algorithms, an image fusion algorithm based on pyramid decomposition fuses the images separately at different scales, spatial resolutions, and decomposition layers, and therefore achieves a better fusion effect. The front-end fusion of the CSI training amplitude difference image and the CSI training phase difference image accordingly comprises the following steps,
s131, performing Laplacian pyramid decomposition on the CSI training amplitude difference image and the CSI training phase difference image respectively, and establishing Laplacian pyramids of the images.
Suppose that the original image G_0 is layer 0 of the Gaussian pyramid; the l-th layer image G_l of the Gaussian pyramid is then

    G_l(i,j) = \sum_{m=-2}^{2} \sum_{n=-2}^{2} \omega(m,n) \, G_{l-1}(2i+m,\ 2j+n)        (1)

where 0 < l ≤ N, 0 ≤ i < C_l, 0 ≤ j < R_l. N denotes the index of the top layer of the Gaussian pyramid; with N = 4 in this embodiment, the constructed pyramid has 5 layers in total. C_l is the number of columns of the l-th layer image of the Gaussian pyramid; R_l is the number of rows of the l-th layer; G_{l-1} is the image of layer l - 1; and ω(m, n) is a window function with low-pass characteristics, generated from a 1 × 5 kernel:

    \omega(m,n) = \hat{\omega}(m)\,\hat{\omega}(n), \qquad \hat{\omega} = \frac{1}{16}\,[1,\ 4,\ 6,\ 4,\ 1]        (2)
the principle of gaussian pyramid decomposition of an image is to convolve a lower layer image with a window function ω (m, n) in sequence, and then perform interlaced 2-down sampling on the convolution result. Because the shape of the window function ω (m, n) is similar to a gaussian distribution function, ω (m, n) is also called a gaussian weight matrix, and the gaussian pyramid is also named accordingly.
The Laplacian pyramid of the image is then built from the Gaussian pyramid.
First, G_l is interpolated and enlarged to obtain the enlarged image G_l^*, whose size equals that of G_{l-1}:

    G_l^{*}(i,j) = 4 \sum_{m=-2}^{2} \sum_{n=-2}^{2} \omega(m,n)\, G_l\left(\frac{i+m}{2},\ \frac{j+n}{2}\right)        (3)

where 0 < l ≤ N, 0 ≤ i < C_{l-1}, 0 ≤ j < R_{l-1}, and

    G_l\left(\frac{i+m}{2},\ \frac{j+n}{2}\right) = \begin{cases} G_l\left(\frac{i+m}{2},\ \frac{j+n}{2}\right), & \text{if } \frac{i+m}{2} \text{ and } \frac{j+n}{2} \text{ are integers} \\ 0, & \text{otherwise} \end{cases}
As can be seen from equation (3), the gray value of a new pixel interpolated between the original pixels is determined by a weighted average of the gray values of those original pixels. Because G_l is obtained from G_{l-1} by low-pass filtering, G_l^* contains less detail than G_{l-1}.
Then let:

    \begin{cases} LP_l = G_l - G_{l+1}^{*}, & 0 \le l < N \\ LP_N = G_N, & l = N \end{cases}        (4)

where N is the index of the top layer of the Laplacian pyramid and LP_l is the l-th layer image of the Laplacian pyramid decomposition.
From the above formula, the pyramid formed by LP_0, LP_1, LP_2, ..., LP_N is the Laplacian pyramid. Each of its layers is the difference between a Gaussian pyramid image and the interpolated, enlarged version of the image one level above it, which is equivalent to band-pass filtering; Laplacian pyramid decomposition is therefore also called band-pass pyramid decomposition.
In summary, building the Laplacian pyramid decomposition of an image involves four steps: low-pass filtering, downsampling, interpolation, and band-pass filtering.
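These four steps can be condensed with OpenCV, whose pyrDown/pyrUp functions implement the low-pass filtering, downsampling, and interpolation described above. A sketch assuming float32 single-channel images, with N = 4 (a 5-layer pyramid) as in the embodiment:

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Return [LP_0, ..., LP_{N-1}, LP_N = G_N] per equations (1)-(4)."""
    gauss = [np.asarray(img, dtype=np.float32)]
    for _ in range(levels):                       # Gaussian pyramid G_0 .. G_N
        gauss.append(cv2.pyrDown(gauss[-1]))
    lap = []
    for l in range(levels):                       # LP_l = G_l - expand(G_{l+1})
        h, w = gauss[l].shape[:2]
        up = cv2.pyrUp(gauss[l + 1], dstsize=(w, h))
        lap.append(gauss[l] - up)
    lap.append(gauss[-1])                         # top layer: LP_N = G_N
    return lap
```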
S132, reconstructing an original image by the Laplacian pyramid corresponding to the CSI training amplitude difference image and the CSI training phase difference image, and fusing decomposition layers of the Laplacian pyramid respectively;
as can be seen from the formula (4):
Figure BDA0003072890630000101
as can be seen from equation (5), the top layer of the laplacian pyramid is recursive from top to bottom, and the corresponding laplacian pyramid can be restored, so as to obtain the original image G0
The image fusion method based on Laplacian pyramid decomposition is shown in fig. 2. In the invention it is used to fuse the CSI training amplitude difference image and the CSI training phase difference image.
And S133, carrying out image reconstruction on the fused Laplacian pyramid to obtain a CSI training fusion image.
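A hedged sketch of S132 - S133 follows. The per-layer fusion rule appears only in fig. 2, so the rules below (maximum absolute value for the band-pass layers, averaging for the top layer) are a common choice assumed for illustration; laplacian_pyramid is the helper from the previous sketch.

```python
import cv2
import numpy as np

def fuse_and_reconstruct(lap_a, lap_b):
    """Fuse two Laplacian pyramids layer by layer, then invert per equation (5)."""
    fused = [np.where(np.abs(la) >= np.abs(lb), la, lb)   # keep the stronger detail response
             for la, lb in zip(lap_a[:-1], lap_b[:-1])]
    fused.append(0.5 * (lap_a[-1] + lap_b[-1]))           # top layer: simple average
    img = fused[-1]                                       # G_N = LP_N
    for lp in reversed(fused[:-1]):                       # G_l = LP_l + expand(G_{l+1})
        h, w = lp.shape[:2]
        img = lp + cv2.pyrUp(img, dstsize=(w, h))
    return img                                            # the fused image G_0
```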
S14, learning a nonlinear relation between the CSI training fusion image and the X-axis coordinate position in the reference point by utilizing the constructed CNN (Convolutional Neural Networks) to obtain a regression model based on the X-axis coordinate position;
and S15, learning the nonlinear relation between the CSI training fusion image and the Y-axis coordinate position in the reference point by using the CNN to obtain a regression model based on the Y-axis coordinate position.
FIG. 3 depicts the structure of the convolutional neural network employed in this embodiment, where C denotes a convolutional layer, P a pooling layer, FC a fully connected layer, and Output the output layer. The CSI amplitude difference / phase difference fusion image is fed into the convolutional neural network for training: it first passes through two convolutional layers and a pooling layer, then three further convolutional layers and another pooling layer, and finally three fully connected layers to produce the output. Because the algorithm uses regression learning, the output layer does not use a softmax function but the linear mapping y = wx + b, and the loss function used is not normalized; specifically,

    [Equation (6): the loss function, given in the original only as an unrendered image; it is a function of the label value Y and the network output wx + b with parameters a and c.]

where Y is the label value, wx + b is the output value of the network, a = 10, and c = 0.2.
The obtained fusion images, together with the X-axis and Y-axis coordinates of the reference points, are respectively fed into the convolutional neural network for offline regression learning, yielding the regression models based on the X-axis and Y-axis coordinate positions.
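The following PyTorch sketch mirrors the layer ordering of fig. 3 (C-C-P, C-C-C-P, then three fully connected layers with a linear output y = wx + b). The channel widths, kernel sizes, and input resolution are assumptions, and plain mean squared error stands in for the loss of equation (6), whose exact form is not recoverable from the original.

```python
import torch
import torch.nn as nn

class CsiRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                           # C, C, P
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                           # C, C, C, P
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(256), nn.ReLU(),             # FC x 3; the last layer is
            nn.Linear(256, 64), nn.ReLU(),             # linear (no softmax), i.e.
            nn.Linear(64, 1),                          # y = wx + b
        )

    def forward(self, x):
        return self.regressor(self.features(x))

model_x, model_y = CsiRegressor(), CsiRegressor()      # one regressor per axis
out = model_x(torch.randn(8, 1, 64, 64))               # toy batch of fused images
loss = nn.MSELoss()(out, torch.zeros(8, 1))            # stand-in regression loss
```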
In the on-line positioning stage, the method comprises the following steps:
s21, acquiring a CSI actual measurement value of a signal transmitted by a router on a reference point where a target is located by using a receiving end, extracting an amplitude value and a phase value of the CSI actual measurement value, and respectively constructing a CSI actual amplitude difference image and a CSI actual phase difference image.
And S22, performing front-end fusion on the CSI actual amplitude difference image and the CSI actual phase difference image to obtain a CSI actual fusion image.
And S23, substituting the CSI actual fusion image into the regression model based on the X-axis coordinate position and into the regression model based on the Y-axis coordinate position, respectively, to obtain the position estimate.
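Tying the pieces together, a hedged end-to-end sketch of the online stage S21 - S23; every helper and model name here comes from the illustrative sketches above, not from the patent.

```python
import numpy as np
import torch

def locate(csi_packets, model_x, model_y):
    """Estimate (X, Y) for one test point from its raw CSI packets."""
    amp_img = matrix_to_image(csi_difference_matrix(np.abs(csi_packets)))       # S21
    pha_img = matrix_to_image(csi_difference_matrix(np.unwrap(np.angle(csi_packets), axis=-1)))
    fused = fuse_and_reconstruct(laplacian_pyramid(amp_img),                    # S22: front-end fusion
                                 laplacian_pyramid(pha_img))
    x = torch.from_numpy(fused).float()[None, None]                             # 1 x 1 x H x W batch
    return model_x(x).item(), model_y(x).item()                                 # S23: position estimate
```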
FIG. 4 is a cumulative distribution function (CDF) plot of the positioning error for the present invention. Compared with methods that use only the CSI amplitude image or only the CSI phase image, the proposed method, which feeds the CSI amplitude difference / phase difference fusion image into the convolutional neural network, achieves the best estimation results.
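For reference, a CDF curve like the one in fig. 4 is typically produced by sorting the per-test-point localization errors into an empirical distribution; a small sketch with illustrative variable names:

```python
import numpy as np

def error_cdf(estimates, truths):
    """Return (sorted errors, empirical CDF values) ready for a CDF plot."""
    errors = np.linalg.norm(np.asarray(estimates) - np.asarray(truths), axis=1)
    xs = np.sort(errors)                           # positioning error per test point
    ys = np.arange(1, len(xs) + 1) / len(xs)       # fraction of points below each error
    return xs, ys
```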
In summary, the channel state information deep learning positioning method based on front-end fusion provided by the invention fuses the amplitude and phase information of the CSI with a Laplacian pyramid fusion algorithm, significantly improving the positioning accuracy for indoor targets. Moreover, the amplitude and phase information of the CSI can be extracted directly from the received signal without adding any extra hardware to the system, which reduces the implementation cost of the scheme.
Meanwhile, the CSI amplitude difference and phase difference images are constructed from the CSI amplitude difference and phase difference matrices. This difference-based preprocessing reduces the influence of measurement noise on the image fingerprints during CSI image construction, so the subsequent offline learning is considerably more efficient, the positioning time is shortened, and the complexity of the positioning process is reduced.
In addition, the invention provides a new line of thought for research on and application of indoor target positioning technology, offers a reference for other related problems in the same field, can serve as a basis for extension and further study, and therefore has very broad application prospects.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
Finally, it should be understood that although the present description refers to embodiments, not every embodiment contains only a single technical solution, and such description is for clarity only, and those skilled in the art should integrate the description, and the technical solutions in the embodiments can be appropriately combined to form other embodiments understood by those skilled in the art.

Claims (4)

1. A channel state information deep learning positioning method based on front end fusion comprises two stages of off-line training and on-line positioning which are executed in sequence, and is characterized in that:
in the off-line training phase, the method comprises the following steps,
s11, dividing the positioning area into a plurality of reference points according to the X-axis and Y-axis directions, and acquiring CSI training measured values of signals transmitted by the router at each reference point by using the receiving end;
s12, extracting amplitude values and phase values of the CSI training measured values, and respectively constructing a CSI training amplitude difference image and a CSI training phase difference image;
s13, performing front-end fusion on the CSI training amplitude difference image and the CSI training phase difference image to form a CSI training fusion image which is used as a position fingerprint;
s14, learning a nonlinear relation between the CSI training fusion image and the X-axis coordinate position in the reference point by using the CNN to obtain a regression model based on the X-axis coordinate position;
s15, learning a nonlinear relation between the CSI training fusion image and the Y-axis coordinate position in the reference point by using the CNN to obtain a regression model based on the Y-axis coordinate position;
in the on-line positioning stage, the method comprises the following steps:
s21, acquiring a CSI actual measurement value of a signal transmitted by a router on a reference point where a target is located by using a receiving end, extracting an amplitude value and a phase value of the CSI actual measurement value, and respectively constructing a CSI actual amplitude difference image and a CSI actual phase difference image;
s22, performing front-end fusion on the CSI actual amplitude difference image and the CSI actual phase difference image to obtain a CSI actual fusion image;
and S23, respectively bringing the CSI actual fusion image into a regression model based on the X-axis coordinate position and a regression model based on the Y-axis coordinate position to obtain a position estimation value.
2. The channel state information deep learning positioning method based on front-end fusion according to claim 1, wherein constructing the CSI training amplitude difference image and the CSI training phase difference image in S12 comprises the following steps:
s121, constructing a CSI training amplitude difference matrix and a CSI training phase difference matrix;
and S122, converting each element in the CSI training amplitude difference matrix and the CSI training phase difference matrix into an image pixel value by using a linear mapping method to respectively obtain a CSI training amplitude difference image and a CSI training phase difference image.
3. The method according to claim 2, wherein constructing the CSI training amplitude difference matrix and the CSI training phase difference matrix in S121 comprises the following steps:
s1211, defining N_T as the number of transmitting-end antennas, N_R as the number of receiving-end antennas, N_k as the number of subcarriers of the CSI training measurements, and N_p as the number of CSI training measurement data packets;
s1212, for each CSI training measurement data packet, constructing the training amplitude matrix of a single transmitting-end antenna by taking the amplitude values of the CSI training measurements extracted at each receiving-end antenna as the rows of the matrix, the training amplitude matrix having dimension N_R × N_k; then, taking the amplitude values of the CSI training measurements of the first receiving-end antenna as the reference, subtracting this reference from every row of the training amplitude matrix and deleting the resulting all-zero row, to obtain a training amplitude difference submatrix of dimension (N_R - 1) × N_k;
s1213, performing the operation in S1212 for the remaining transmitting-end antennas to obtain N_T training amplitude difference submatrices of dimension (N_R - 1) × N_k, and merging them row-wise into a matrix of dimension [(N_R - 1) × N_T] × N_k;
s1214, performing the operations in S1212 - S1213 for all N_p CSI training measurement data packets to obtain N_p matrices of dimension [(N_R - 1) × N_T] × N_k, and merging them row-wise into the CSI training amplitude difference matrix of dimension [(N_R - 1) × N_T × N_p] × N_k;
s1215, for the phase values of the CSI training measurements extracted at the receiving-end antennas, performing the operations in S1212 - S1214 to obtain the CSI training phase difference matrix.
4. The channel state information deep learning positioning method based on front-end fusion according to claim 1, wherein the front-end fusion of the CSI training amplitude difference image and the CSI training phase difference image in S13 comprises the following steps:
s131, performing Laplacian pyramid decomposition on the CSI training amplitude difference image and the CSI training phase difference image respectively, and establishing Laplacian pyramids of the images;
s132, performing fusion processing on the corresponding decomposition layers of the Laplacian pyramids of the CSI training amplitude difference image and the CSI training phase difference image, respectively;
and S133, carrying out image reconstruction on the fused Laplacian pyramid to obtain a CSI training fusion image.

Priority Applications (1)

Application Number: CN202110544012.0A; Priority Date: 2021-05-19; Filing Date: 2021-05-19; Title: Channel state information deep learning positioning method based on front-end fusion

Publications (1)

Publication Number: CN113283490A; Publication Date: 2021-08-20

Family

ID: 77279847

Family Applications (1)

Application Number: CN202110544012.0A; Title: Channel state information deep learning positioning method based on front-end fusion; Status: Pending

Country Status (1)

CN: CN113283490A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112153736A (en) * 2020-09-14 2020-12-29 南京邮电大学 Personnel action identification and position estimation method based on channel state information
CN112147573A (en) * 2020-09-14 2020-12-29 山东科技大学 Passive positioning method based on amplitude and phase information of CSI (channel State information)
CN112734675A (en) * 2021-01-19 2021-04-30 西安理工大学 Image rain removing method based on pyramid model and non-local enhanced dense block


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113822350A (en) * 2021-09-14 2021-12-21 南京邮电大学 Equipment-free personnel action identification and position estimation method based on multi-task learning
CN113822350B (en) * 2021-09-14 2024-04-30 南京邮电大学 Method for identifying actions and estimating positions of equipment-free personnel based on multitask learning


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 2021-08-20)