CN114125698B - Positioning method based on channel state information and depth image - Google Patents
- Publication number
- CN114125698B (application number CN202110493860.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- csi
- target
- depth
- channel state
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
- H04W4/021—Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04B—TRANSMISSION
- H04B17/00—Monitoring; Testing
- H04B17/30—Monitoring; Testing of propagation channels
- H04B17/309—Measuring or estimating channel quality parameters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W24/00—Supervisory, monitoring or testing arrangements
- H04W24/08—Testing, supervising or monitoring using real traffic
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
- H04W4/023—Services making use of location information using mutual or relative location information between multiple location based services [LBS] targets or of distance thresholds
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/30—Services specially adapted for particular environments, situations or purposes
- H04W4/33—Services specially adapted for particular environments, situations or purposes for indoor environments, e.g. buildings
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W64/00—Locating users or terminals or network equipment for network management purposes, e.g. mobility management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
- G06T2207/20032—Median filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20192—Edge enhancement; Edge preservation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Abstract
The invention provides a positioning method based on channel state information and depth images. In an off-line stage, depth images and the channel state information (CSI) of WiFi signals are collected at different reference position points using a binocular camera and a wireless network card; the collected positioning data are preprocessed to form two training data sets, (target depth image, position label) and (CSI image, reference point category); a convolutional neural network then performs position-based classification learning on each data set to obtain two position-based classification models; finally, decision-level data fusion of the two models yields the final position-based classification model.
Description
Technical Field
The invention relates to a positioning method based on channel state information and depth images, in particular to a method that uses depth images obtained by a binocular camera together with the Channel State Information (CSI) of WiFi signals to estimate the position of indoor personnel through a deep learning algorithm, and belongs to the technical field of positioning and navigation.
Background
Position estimation of people has found widespread use in daily life, for example in person tracking and elderly monitoring, and has attracted considerable attention in academia and industry. Traditional indoor positioning technologies include infrared positioning, ultrasonic positioning, Bluetooth positioning, ZigBee positioning, Ultra-Wideband (UWB) positioning, WiFi positioning and image-based positioning. However, the accuracy of these methods in indoor positioning is limited, and each has certain shortcomings.
In recent years, with the widespread deployment of wireless networks and binocular cameras, related technologies have developed rapidly. Research shows that the depth image acquired by a binocular camera contains depth information and can be used for personnel positioning. A wireless network can be used not only for transmitting data but also for indoor positioning. In a complex indoor environment, the wireless signal from the transmitter does not reach the receiver along a single line-of-sight path; instead it propagates over multiple paths created by reflection, scattering and diffraction off human bodies, furniture and other obstacles. The multipath-superimposed signal obtained at the receiver therefore carries characteristic information reflecting the environment. By combining the depth image from the binocular camera with the Channel State Information (CSI) of WiFi signals through a deep learning algorithm, the position information contained in the depth map and the CSI amplitude image can be fully exploited, thereby improving positioning accuracy.
A search of the prior art shows that Chinese patent CN112040397A discloses a CSI indoor fingerprint positioning method based on adaptive Kalman filtering, which acquires CSI data of reference points with a wireless network card and determines the coordinates of the point to be located with a KNN matching algorithm; Chinese patent CN112489128A discloses an RGB-D indoor unmanned aerial vehicle positioning method based on unsupervised deep learning, which uses depth maps as the input of a neural network and designs a loss function for RCNN training to realize indoor UAV positioning. However, the indoor positioning accuracy of both methods is somewhat lacking.
Disclosure of Invention
The invention aims to solve the technical problem of overcoming the defects of the prior art and providing a positioning method based on channel state information and depth images.
The positioning method based on channel state information and depth images provided by the invention consists of an off-line stage and an on-line stage, wherein the off-line stage comprises the following steps:
Step 101, acquiring target pictures and the Channel State Information (CSI) of WiFi signals while the target is at different reference position points;
102, preprocessing the acquired data to form multi-source positioning information;
Step 103, performing Convolutional Neural Network (CNN)-based classification learning using the depth images and channel state information images obtained in step 102 to obtain multiple position-based classification models, and performing decision-level data fusion on the classification models to obtain the CNN classification model;
the online phase comprises the following steps:
Step 201, acquiring online the target pictures of the target at different reference position points and the channel state information of the WiFi signals, and preprocessing them to obtain the target depth information image and the CSI image;
Step 202, substituting the target depth information image and the CSI image into the CNN classification model trained in step 103 to obtain the final target position estimation result.
The method of the present invention includes an offline stage and an online stage. In the offline stage, depth images and the channel state information of WiFi signals are obtained at different reference position points using a binocular camera and a wireless network card; the collected positioning data are then preprocessed to form the training data sets (target depth image, position label) and (CSI image, reference point category); position-based classification learning is performed on the two data sets with a convolutional neural network to obtain two position-based classification models; finally, decision-level data fusion of the two models yields the final position-based classification model. In the online stage, a target depth information image and a CSI image are first constructed through data preprocessing, then substituted into the position classification model trained in the offline stage, and the position estimation result is obtained. The method uses deep learning and data fusion, thereby improving positioning accuracy.
The invention further adopts the technical scheme that:
In steps 102 and 201, the specific data preprocessing method is as follows: according to the amplitude information of the CSI measurement values, a CSI image is constructed using the time, space and frequency domains of the CSI; target segmentation is performed on the depth picture, and the target depth information is extracted.
Further, the preprocessing of the channel state information includes the following steps:
(1) Selecting the first antenna of the receiver and the transmitter as the receiving end and the transmitting end respectively, generating a data stream and extracting its amplitude information; arranging the data stream amplitude values of the N_K subcarriers into one row to form a 1×N_K vector; performing row-based splicing of the vectors obtained from successive data packets to form an N_P×N_K CSI amplitude matrix, wherein N_K and N_P respectively denote the number of subcarriers and the number of data packets;
(2) Preprocessing the CSI amplitude matrix: removing abnormal values in the CSI amplitude matrix through Hampel filtering, removing Gaussian noise through smoothing filtering, and filtering impulse noise through median filtering while retaining matrix edge information;
(3) Converting the element values of the preprocessed CSI amplitude matrix into different colors according to their magnitudes using a linear mapping method, forming the CSI image.
Further, the depth image segmentation preprocessing includes the following steps:
(1) Resizing the RGB image and the depth image acquired by the binocular camera so that they are consistent in size;
(2) Segmenting the RGB image acquired by the binocular camera using a trained DeepLabv3+ segmentation network model to obtain a target-segmented RGB image;
(3) Modifying the matrix corresponding to the segmented RGB image by setting the target positions to 1 and the background to 0, and multiplying the resulting mask element-wise with the image matrix of the depth image, thereby extracting the target in the depth image and removing the background.
In step 103, the specific steps of the classification learning based on the convolutional neural network are as follows:
Step 1031, taking the reference position category information as a training sample label, respectively constructing a training data set 1 containing a target depth image and a reference point position category and a training data set 2 containing a CSI image and a reference point position category, and respectively utilizing the training data set 1 and the training data set 2 to perform position-based classification learning to obtain a position-based classification model 1 and a position-based classification model 2;
Step 1032, performing decision-level data fusion on the location-based classification model 1 and the location-based classification model 2 to obtain a final location-based classification model.
In step 1032, the decision level data fusion includes the following steps:
(1) Obtaining a position estimation vector 1 by using the target depth image and the position-based classification model 1; simultaneously obtaining a position estimation vector 2 by using the CSI image and the position-based classification model 2;
(2) Adding the position estimation vector 1 and the position estimation vector 2 of the two classification models with weights using a linear weighting method, wherein the position class corresponding to the largest element of the fused position estimation vector is taken as the target position estimate, as written out below.
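Written out, with p_1 and p_2 denoting the two position estimation vectors and w the fusion weight (these symbols are introduced here only for illustration; the patent text does not fix a notation):

```latex
\hat{\mathbf{p}} = w\,\mathbf{p}_1 + (1 - w)\,\mathbf{p}_2, \qquad
\hat{c} = \arg\max_i \hat{p}_i, \qquad w \in [0, 1]
```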
Compared with the prior art, the technical scheme provided by the invention has the following technical effects:
(1) The invention estimates the position of the target using the depth image acquired by a binocular camera together with the channel state information of the WiFi signal, fully exploiting the position information in both the depth image and the CSI and thereby improving the indoor positioning accuracy;
(2) The invention constructs the position training data set from the segmented depth image, which fully exploits the personnel position information in the depth image while removing useless background information and filtering out the influence of the environment; the segmented depth image therefore improves offline learning performance.
In summary, the invention combines the depth image with the CSI and fully exploits the position information of both to improve the indoor positioning accuracy. At the same time, the original depth image is segmented to remove useless background information and filter out environmental influences, so the segmented depth image improves offline learning performance.
Drawings
The invention is further described below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of the present invention.
Fig. 2 is a view of depth images at different positions according to the present invention.
Fig. 3 is CSI images at different positions in the present invention.
Fig. 4 is a performance diagram of the location classification of the present invention.
Detailed Description
The technical scheme of the invention is further described in detail below with reference to the accompanying drawings. This embodiment is implemented on the premise of the technical scheme of the invention, and a detailed implementation and a specific operation process are given, but the protection scope of the invention is not limited to the following embodiments.
The embodiment provides a positioning method based on channel state information and a depth image, as shown in fig. 1, comprising the following steps:
step 1, offline stage
Step 101, data acquisition
The target stands at different reference position points; a binocular camera is used to capture the target pictures (an RGB image and a depth image), and the wireless network card simultaneously receives the Channel State Information (CSI) of the WiFi signal.
102, Preprocessing data to form multi-source positioning information
According to amplitude information of the CSI measurement value, a CSI image is constructed by utilizing a time domain, a space domain and a frequency domain of the CSI; and carrying out target segmentation on the depth picture, and extracting target depth information.
The preprocessing of the channel state information is specifically as follows:
step (1), constructing a CSI amplitude matrix
The first antenna of the receiver and the first antenna of the transmitter are selected as the receiving end and the transmitting end respectively; a data stream is generated and its amplitude information extracted. The data stream amplitude values of the N_K subcarriers are arranged in a row to form a 1×N_K vector. Finally, the vectors obtained from successive data packets are spliced row by row into an N_P×N_K matrix, where N_K and N_P respectively denote the number of subcarriers and the number of data packets.
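For illustration, a minimal numpy sketch of this step, assuming the per-packet CSI of the chosen antenna pair is already available as complex vectors (the function and variable names are hypothetical):

```python
import numpy as np

def build_csi_amplitude_matrix(csi_packets):
    """Stack per-packet subcarrier amplitudes into an N_P x N_K matrix.

    csi_packets: iterable of length-N_K complex vectors, one per data
    packet, holding the CSI of the first transmit/receive antenna pair.
    """
    # |H| of each subcarrier gives one 1 x N_K row per packet
    rows = [np.abs(np.asarray(pkt)) for pkt in csi_packets]
    # row-based splicing over packets -> (N_P, N_K) amplitude matrix
    return np.vstack(rows)
```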
Step (2), preprocessing the amplitude matrix
Abnormal values in the CSI amplitude matrix are removed through Hampel filtering, Gaussian noise is removed through smoothing filtering, and impulse noise is filtered through median filtering while matrix edge information is retained.
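A sketch of this filtering chain with scipy; the patent names the three filters but not their parameters, so the window sizes and outlier threshold below are assumptions:

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def hampel(col, window=7, n_sigma=3.0):
    """Hampel identifier: replace outliers with the local median."""
    med = median_filter(col, size=window, mode='nearest')
    mad = median_filter(np.abs(col - med), size=window, mode='nearest')
    bad = np.abs(col - med) > n_sigma * 1.4826 * mad
    out = col.copy()
    out[bad] = med[bad]
    return out

def denoise_amplitude(csi_amp):
    """Outlier removal -> smoothing -> median filtering on the N_P x N_K matrix."""
    out = np.apply_along_axis(hampel, 0, csi_amp)  # per-subcarrier Hampel filtering
    out = uniform_filter(out, size=(5, 1))         # moving-average smoothing vs. Gaussian noise
    out = median_filter(out, size=(3, 3))          # suppress impulse noise, keep edges
    return out
```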
Step (3), constructing CSI image
The element values of the amplitude matrix are converted into different colors according to their magnitudes using a linear mapping method, forming the CSI image.
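One possible realization of the linear color mapping; the choice of the 'jet' colormap is an assumption, as the patent only requires that amplitude magnitude map linearly to color:

```python
import numpy as np
import matplotlib.pyplot as plt

def to_csi_image(amp):
    """Linearly rescale amplitudes to [0, 1] and color-map them to RGB."""
    norm = (amp - amp.min()) / (amp.max() - amp.min() + 1e-12)
    rgba = plt.get_cmap('jet')(norm)               # (N_P, N_K, 4) floats in [0, 1]
    return (rgba[..., :3] * 255).astype(np.uint8)  # drop alpha, 8-bit RGB image
```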
The depth image segmentation preprocessing is specifically as follows:
step (1), image resizing
And adjusting the RGB image and the depth image obtained by the binocular camera to be consistent in size.
Step (2) of dividing RGB image
The RGB image acquired by the binocular camera is segmented using the trained DeepLabv3+ segmentation network model to obtain a target-segmented RGB image.
Step (3), segmenting the depth image
The matrix corresponding to the segmented RGB picture is modified by setting the target positions to 1 and the background to 0; the resulting mask is multiplied element-wise with the image matrix of the depth map, thereby extracting the target in the depth map and removing the background.
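A sketch of this masking step, assuming `segmenter` is a callable wrapping the trained DeepLabv3+ model that returns per-pixel class ids, and that the person class uses the Pascal VOC id 15 (both assumptions):

```python
import numpy as np
import cv2

PERSON_CLASS = 15  # assumed Pascal VOC class id for "person"

def extract_target_depth(rgb, depth, segmenter):
    """Zero out the background of the depth image using an RGB person mask."""
    # step (1): resize the depth image so it lines up with the RGB image
    depth = cv2.resize(depth, (rgb.shape[1], rgb.shape[0]),
                       interpolation=cv2.INTER_NEAREST)
    # step (2): per-pixel class labels from the segmentation network
    labels = segmenter(rgb)
    # step (3): target -> 1, background -> 0, then element-wise multiply
    mask = (labels == PERSON_CLASS).astype(depth.dtype)
    return depth * mask
```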
Step 103, classification learning based on Convolutional Neural Network (CNN)
Step 1031, location-based classification learning
Taking the reference position category information as the training sample label, a training data set containing the target depth images and reference point categories is constructed, and classification learning with a CNN yields position-based classification model 1.
Likewise, taking the reference position category information as the training sample label, a training data set containing the CSI images and reference point categories is constructed, and classification learning with the CNN yields position-based classification model 2.
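The patent does not disclose the CNN architecture; the small PyTorch network below is one plausible shape for both classifiers, with model 1 trained on segmented depth images and model 2 on CSI images (the layer sizes are assumptions; 18 locations matches the experiment described later):

```python
import torch
import torch.nn as nn

class PositionCNN(nn.Module):
    """Classify an input image into one of n_locations reference points."""
    def __init__(self, in_channels=3, n_locations=18):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_locations)

    def forward(self, x):
        return self.classifier(torch.flatten(self.features(x), 1))

# model 1 learns from single-channel depth images, model 2 from RGB CSI images
model_depth = PositionCNN(in_channels=1, n_locations=18)
model_csi = PositionCNN(in_channels=3, n_locations=18)
```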
Step 1032 decision-level based position estimation fusion
Decision-level data fusion is performed on the two classification model results obtained in step 1031 to obtain the position estimation result, i.e., the CNN classification model.
The decision level data fusion is specifically as follows:
(1) Generating a position estimate intermediate value
Obtaining a position estimation vector 1 by using the target depth image and the position classification model 1; obtaining a position estimation vector 2 by using the CSI image and the position classification model 2;
(2) Decision-level fusion strategy
The position estimation vector 1 and vector 2 of the two classification models are added with weights using a linear weighting method; the position class corresponding to the largest element of the fused position estimation vector is the target position estimate. The optimal weight coefficient can be obtained by cross-validation, as sketched below.
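A sketch of the fusion rule and the cross-validation weight search (the grid resolution and helper names are assumptions):

```python
import numpy as np

def fuse_position(p_depth, p_csi, w):
    """Linear weighted fusion; returns the class of the largest fused element."""
    fused = w * np.asarray(p_depth) + (1.0 - w) * np.asarray(p_csi)
    return int(np.argmax(fused)), fused

def best_weight(val_pairs, val_labels, grid=np.linspace(0.0, 1.0, 21)):
    """Pick the fusion weight with the highest validation accuracy."""
    def accuracy(w):
        preds = [fuse_position(pd, pc, w)[0] for pd, pc in val_pairs]
        return float(np.mean(np.asarray(preds) == np.asarray(val_labels)))
    return max(grid, key=accuracy)
```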
Step 2, on-line stage
Step 201, multisource positioning information construction
A picture of the target is acquired online and the CSI of the target's WiFi signal is received; the target depth information image and the CSI image are then obtained following the preprocessing method of offline step 102: according to the amplitude information of the CSI measurement values, a CSI image is constructed using the time, space and frequency domains of the CSI, and target segmentation is performed on the depth image to extract the target depth information image.
Step 202, target position estimation
The target depth information image and the CSI image are substituted into the CNN classification model trained in offline step 103 to obtain the final position estimation result, as sketched below.
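Putting the online stage together, reusing the hypothetical helpers sketched above:

```python
import torch

def estimate_position(depth_img, csi_img, model_depth, model_csi, w):
    """Online stage: run both CNNs and fuse their class-probability vectors.

    depth_img, csi_img: (1, C, H, W) tensors produced by the same
    preprocessing as in offline step 102.
    """
    model_depth.eval()
    model_csi.eval()
    with torch.no_grad():
        p1 = torch.softmax(model_depth(depth_img), dim=1).numpy().ravel()
        p2 = torch.softmax(model_csi(csi_img), dim=1).numpy().ravel()
    location, _ = fuse_position(p1, p2, w)  # fuse_position from the sketch above
    return location
```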
In this embodiment, the target stands on different position reference points in turn; the binocular camera collects depth images at the different positions and the wireless network card receives the corresponding CSI data. The depth images are then segmented, and the CSI images are constructed from the time-domain, space-domain and frequency-domain information of the CSI amplitude. Fig. 2 shows depth images at different positions (Fig. 2a at position 1, Fig. 2b at position 2) and Fig. 3 shows CSI images at different positions (Fig. 3a at position 1, Fig. 3b at position 2); as can be seen from Figs. 2 and 3, depth images and CSI images differ markedly between positions, so they can be used for position estimation. Fig. 4 depicts the performance of the location classification algorithm. In the experiment, 3600 depth images and 3600 CSI amplitude images were acquired at 18 different positions. Based on the (target depth image, reference point position category) training data set, CNN classification learning yields a classification accuracy of 0.969; based on the (CSI image, reference point position category) training data set, CNN classification learning yields an accuracy of 0.9402. The fused classification accuracy obtained by the method is 0.9875. The method therefore effectively improves positioning precision.
The foregoing is merely an illustration of embodiments of the present invention, and the protection scope of the present invention is not limited thereto. Any modification or substitution that a person skilled in the art can readily conceive of within the technical scope disclosed herein falls within the protection scope of the present invention, which is defined by the appended claims.
Claims (3)
1. A positioning method based on channel state information and depth pictures, characterized by comprising an off-line stage and an on-line stage, wherein the off-line stage comprises the following steps:
Step 101, acquiring target pictures and the channel state information of WiFi signals while the target is at different reference position points;
102, preprocessing the acquired data to form multi-source positioning information;
Step 103, performing convolutional neural network-based classification learning by utilizing the depth image and the channel state information image obtained in the step 102 to obtain a plurality of position-based classification models, and performing decision-level data fusion on the classification models to obtain a CNN classification model;
the online phase comprises the following steps:
Step 201, acquiring target pictures of targets at different reference position points and channel state information of WiFi signals on line, and preprocessing the target pictures and the channel state information to obtain target depth information images and CSI images;
step 202, substituting the target depth information image and the CSI image into the CNN classification model trained in the step 103 to obtain a final target position estimation result;
In the steps 102 and 201, the specific method for preprocessing data is as follows: according to amplitude information of the CSI measurement value, a CSI image is constructed by utilizing a time domain, a space domain and a frequency domain of the CSI; performing target segmentation on the depth picture, and extracting target depth information;
The preprocessing of the channel state information comprises the following steps:
(1) Selecting the first antenna of the receiver and the transmitter as the receiving end and the transmitting end respectively, generating a data stream and extracting its amplitude information; arranging the data stream amplitude values of the N_K subcarriers into one row to form a 1×N_K vector; performing row-based splicing of the vectors obtained from successive data packets to form an N_P×N_K CSI amplitude matrix, wherein N_K and N_P respectively denote the number of subcarriers and the number of data packets;
(2) Preprocessing the CSI amplitude matrix: removing abnormal values in the CSI amplitude matrix through Hampel filtering, removing Gaussian noise through smoothing filtering, and filtering impulse noise through median filtering while retaining matrix edge information;
(3) Converting the element values of the preprocessed CSI amplitude matrix into different colors according to their magnitudes using a linear mapping method, forming the CSI image;
The target picture comprises an RGB image and a depth image, and the depth image segmentation preprocessing comprises the following steps:
(1) The method comprises the steps of performing size adjustment on an RGB image and a depth image acquired by a binocular camera, so that the RGB image and the depth image are adjusted to be consistent in size;
(2) Segmenting the RGB image acquired by the binocular camera using a trained DeepLabv3+ segmentation network model to obtain a target-segmented RGB image;
(3) Modifying the matrix corresponding to the segmented RGB image by setting the target positions to 1 and the background to 0, and multiplying the resulting mask element-wise with the image matrix of the depth image, thereby extracting the target in the depth image and removing the background.
2. The positioning method based on channel state information and depth pictures according to claim 1, wherein in step 103, the specific steps of classification learning based on convolutional neural network are as follows:
Step 1031, taking the reference position category information as a training sample label, respectively constructing a training data set 1 containing a target depth image and a reference point position category and a training data set 2 containing a CSI image and a reference point position category, and respectively utilizing the training data set 1 and the training data set 2 to perform position-based classification learning to obtain a position-based classification model 1 and a position-based classification model 2;
Step 1032, performing decision-level data fusion on the location-based classification model 1 and the location-based classification model 2 to obtain a final location-based classification model.
3. The positioning method based on channel state information and depth pictures according to claim 2, wherein in the step 1032, the decision level data fusion comprises the steps of:
(1) Obtaining a position estimation vector 1 by using the target depth image and the position-based classification model 1; simultaneously obtaining a position estimation vector 2 by using the CSI image and the position-based classification model 2;
(2) Adding the position estimation vector 1 and the position estimation vector 2 of the two classification models with weights using a linear weighting method, wherein the position class corresponding to the largest element of the fused position estimation vector is the target position estimate.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110493860.3A CN114125698B (en) | 2021-05-07 | 2021-05-07 | Positioning method based on channel state information and depth image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110493860.3A CN114125698B (en) | 2021-05-07 | 2021-05-07 | Positioning method based on channel state information and depth image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114125698A CN114125698A (en) | 2022-03-01 |
CN114125698B true CN114125698B (en) | 2024-05-17 |
Family
ID=80359513
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110493860.3A Active CN114125698B (en) | 2021-05-07 | 2021-05-07 | Positioning method based on channel state information and depth image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114125698B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115278518B (en) * | 2022-07-04 | 2024-08-23 | 南京邮电大学 | Indoor positioning method for channel state information based on subcarrier selection |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110458025A (en) * | 2019-07-11 | 2019-11-15 | 南京邮电大学 | A kind of personal identification and localization method based on binocular camera |
WO2019237646A1 (en) * | 2018-06-14 | 2019-12-19 | 清华大学深圳研究生院 | Image retrieval method based on deep learning and semantic segmentation |
CN112153736A (en) * | 2020-09-14 | 2020-12-29 | 南京邮电大学 | Personnel action identification and position estimation method based on channel state information |
CN112261719A (en) * | 2020-07-24 | 2021-01-22 | 大连理智科技有限公司 | Area positioning method combining SLAM technology with deep learning |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019237646A1 (en) * | 2018-06-14 | 2019-12-19 | 清华大学深圳研究生院 | Image retrieval method based on deep learning and semantic segmentation |
CN110458025A (en) * | 2019-07-11 | 2019-11-15 | 南京邮电大学 | A kind of personal identification and localization method based on binocular camera |
CN112261719A (en) * | 2020-07-24 | 2021-01-22 | 大连理智科技有限公司 | Area positioning method combining SLAM technology with deep learning |
CN112153736A (en) * | 2020-09-14 | 2020-12-29 | 南京邮电大学 | Personnel action identification and position estimation method based on channel state information |
Non-Patent Citations (4)
Title |
---|
A passive indoor fingerprint positioning algorithm based on channel state information; Dang Xiaochao; Si Xiong; Hao Zhanjun; Huang Yaning; Computer Engineering; 2018-07-15 (07); full text *
Passive indoor fingerprint localization based on the amplitude and phase of channel state information; Jiang Xiaoping; Wang Miaoyu; Ding Hao; Li Chenghua; Journal of Electronics & Information Technology; 2020-05-15 (05); full text *
A semantic segmentation model based on binocular images and cross-level feature guidance; Zhang Di; Lu Jianfeng; Computer Engineering; 2020-10-15 (10); full text *
Research on wireless sensing technology based on support vector machines and channel state information; Wu Kang; China Master's Theses Full-text Database, Information Science and Technology; 2021-02-15; full text *
Also Published As
Publication number | Publication date |
---|---|
CN114125698A (en) | 2022-03-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107862705B (en) | Unmanned aerial vehicle small target detection method based on motion characteristics and deep learning characteristics | |
CN108872984B (en) | Human body identification method based on multi-base radar micro Doppler and convolutional neural network | |
CN112153736B (en) | Personnel action identification and position estimation method based on channel state information | |
CN109341694A (en) | A kind of autonomous positioning air navigation aid of mobile sniffing robot | |
CN107220611B (en) | Space-time feature extraction method based on deep neural network | |
Deng et al. | GaitFi: Robust device-free human identification via WiFi and vision multimodal learning | |
CN107240122A (en) | Video target tracking method based on space and time continuous correlation filtering | |
CN112596024B (en) | Motion identification method based on environment background wireless radio frequency signal | |
CN113837131B (en) | Multi-scale feature fusion gesture recognition method based on FMCW millimeter wave radar | |
CN105760825A (en) | Gesture identification system and method based on Chebyshev feed forward neural network | |
CN111275740B (en) | Satellite video target tracking method based on high-resolution twin network | |
CN113158943A (en) | Cross-domain infrared target detection method | |
CN107862295A (en) | A kind of method based on WiFi channel condition informations identification facial expression | |
CN113901931B (en) | Behavior recognition method of infrared and visible light video based on knowledge distillation model | |
CN106839881B (en) | A kind of anti-unmanned plane method based on dynamic image identification | |
CN114125698B (en) | Positioning method based on channel state information and depth image | |
CN109323697A (en) | A method of particle fast convergence when starting for Indoor Robot arbitrary point | |
CN111598028A (en) | Method for identifying earth surface vegetation distribution based on remote sensing imaging principle | |
CN112767267B (en) | Image defogging method based on simulation polarization fog-carrying scene data set | |
Wang et al. | Detection of passageways in natural foliage using biomimetic sonar | |
CN116343261A (en) | Gesture recognition method and system based on multi-modal feature fusion and small sample learning | |
Abbasi et al. | Novel cascade cnn algorithm for uwb signal denoising, compressing, and toa estimation | |
CN113420778B (en) | Identity recognition method based on Wi-Fi signal and depth camera | |
Sangari et al. | Deep learning-based Object Detection in Underwater Communications System | |
Nie et al. | An Efficient Nocturnal Scenarios Beamforming Based on Multi-Modal Enhanced by Object Detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |