CN104898089A - Device-free localization method based on space migration compressive sensing - Google Patents


Info

Publication number
CN104898089A
Authority
CN
China
Prior art date
Legal status
Granted
Application number
CN201510157843.7A
Other languages
Chinese (zh)
Other versions
CN104898089B (en)
Inventor
常俪琼
房鼎益
陈晓江
王举
邢天璋
聂卫科
王薇
任宇辉
Current Assignee
Northwest University
Original Assignee
Northwest University
Priority date
Filing date
Publication date
Application filed by Northwest University filed Critical Northwest University
Priority to CN201510157843.7A priority Critical patent/CN104898089B/en
Publication of CN104898089A publication Critical patent/CN104898089A/en
Application granted granted Critical
Publication of CN104898089B publication Critical patent/CN104898089B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/02Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using radio waves
    • G01S5/0278Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using radio waves involving statistical or probabilistic considerations

Landscapes

  • Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Testing Or Calibration Of Command Recording Devices (AREA)
  • Testing Of Devices, Machine Parts, Or Other Structures Thereof (AREA)

Abstract

The invention discloses a device-free localization method based on space migration compressive sensing, and belongs to the field of device-free localization. The method comprises the following steps: deploying sensor nodes; collecting RSS matrices at reference positions in a sample area and a to-be-monitored area to obtain a migration function; migrating the sensing matrix of the sample area and the measurement vector of the to-be-monitored area according to the migration function to obtain a migrated sensing matrix and a migrated measurement vector; and recovering the position of the target from the migrated sensing matrix and the migrated measurement vector by the theory of compressive sensing. Because the sensing matrix of the sample area and the measurement vector of the to-be-monitored area are migrated, and the position of the target in the monitored area is determined by a compressive sensing localization method, the manpower consumption and communication overhead of rebuilding a sensing matrix for each to-be-monitored area are avoided, and the feasibility of localizing different areas through compressive sensing is improved.

Description

Passive positioning method based on space migration compressed sensing
Technical Field
The invention relates to the field of passive positioning, in particular to a passive positioning method based on space migration compressed sensing.
Background
In recent years, Device-Free Localization (DFL) technology has received great attention from both academia and industry because it requires the user neither to wear any wireless device nor to actively participate in the localization process. The mainstream passive positioning approach locates a target through the disturbance it causes to wireless signals in the monitoring area, and generally comprises two steps: in the training stage, a positioning model (a priori knowledge base) is established based on the relation between Received Signal Strength (RSS for short) and the position of the target; in the positioning stage, the position of the target is determined by matching the real-time RSS value against the prior knowledge base.
However, the existing DFL methods share a common premise: the trained prior knowledge base is obtained for a given region. Once the size of the positioning area changes, the link lengths formed by the deployed nodes change, and the disturbance the target causes to the wireless signal changes accordingly, so the new area must be retrained to obtain its own prior knowledge base, and a large amount of time must be spent scanning each position of the monitoring area, bringing problems of large data volume, high energy consumption and huge manpower consumption. In real-world applications, however, acquiring prior knowledge of all monitored areas is clearly impractical and infeasible, since the monitored areas differ from application to application.
Many passive positioning methods do not take this into account, and they are broadly classified into the following 3 categories:
the first type: learning-based passive localization, represented by jolt et al. The nodes are deployed as adjacent equilateral triangles forming a plurality of hexagons, and a priori knowledge base is established from the interference of targets at different grid positions with the signals exchanged between intermediate nodes and hexagon-vertex nodes. However, this method requires denser node deployment, incurring higher cost, and the prior knowledge base must be re-established whenever the monitoring area changes, so the problems of large data volume, high energy consumption and huge manpower consumption remain unsolved.
The second type: tomographic passive localization as represented by Joseph Wilson et al. The nodes are uniformly arranged around the monitoring area, pairwise communication is carried out between all the nodes, a tomography knowledge base is established according to interference of targets on the pairwise communication nodes at different positions, and the positions of the targets are displayed by combining a tomography method, so that positioning is realized. However, in the method, communication is needed between every two nodes, and a tomography knowledge base is needed to be established for different monitoring areas, so that the problems of large data volume, high energy consumption and huge manpower consumption are not solved.
The third type: passive localization based on Compressive Sensing (CS for short), represented by sandingyi. The same number of nodes are deployed on the two sides of the positioning area, and only nodes with the same label communicate. Before positioning, the RSS values of all links are recorded with the target at each grid to construct a sensing matrix; during positioning, all links collect one group of RSS values, and the position of the target is accurately obtained from this group of RSS values and the sensing matrix. This method requires neither pairwise communication between all nodes nor many deployed nodes, so the data volume is greatly reduced and energy consumption lowered. However, when the monitored area changes, a sensing matrix must be constructed for each different area, so the problem of high manpower consumption remains unsolved.
In summary, the three types of positioning methods do not consider the problem of monitoring area change, i.e. a positioning model established for a given area cannot be used for new areas with different sizes. Moreover, it is very unrealistic to establish a corresponding positioning model for all regions with different sizes in the real situation. Therefore, passive positioning for practical applications in the face of multiple monitoring areas requires new techniques.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a passive positioning method based on spatial migration compressed sensing, which comprises the following steps:
step one, deploying sensor nodes in a sample region and a region to be monitored respectively;
step two, acquiring RSS matrixes at reference positions in a sample area and an area to be monitored through the sensor nodes;
thirdly, obtaining a migration function according to the RSS matrixes of the sample region and the region to be monitored;
step four, collecting sample RSS values in a sample area through the sensor nodes, and combining the sample RSS values into a sensing matrix;
step five, acquiring positioning RSS values in an area to be monitored through the sensor nodes, and combining the positioning RSS values into measurement vectors;
step six, migrating the sensing matrix of the sample area and the measurement vector in the area to be monitored according to the migration function to obtain a migrated sensing matrix and a migrated measurement vector;
seventhly, when the area of the sample area is smaller than that of the area to be monitored, carrying out grid interpolation processing on the migrated sensing matrix to obtain a migrated high-resolution sensing matrix;
and step eight, recovering the position of the target by using a compressed sensing theory according to the migrated sensing matrix and the migrated measurement vector.
Optionally, the passive positioning method based on spatial migration compressed sensing further includes:
when the area of the sample region is not smaller than the area of the region to be monitored, after step six is completed, step eight is directly performed.
Optionally, the deploying the sensor nodes in the sample region and the region to be monitored respectively includes:
set the area size of the sample region to l × a and the area size of the region to be monitored to u × b; the length of the wireless links formed by the nodes deployed in the sample region is l, the length of the wireless links formed by the nodes deployed in the region to be monitored is u, with l not equal to u, and the number of links deployed in each of the sample region and the region to be monitored is M.
Optionally, the acquiring, by the sensor node, RSS matrices at reference positions in a sample area and an area to be monitored includes:
firstly, dividing the sample area and the area to be monitored into N square grids each, and then selecting the same reference position points in the sample area and the area to be monitored, denoted by 1, 2, …, n and 1′, 2′, …, n′ respectively, with n ≤ N.
Then, the target measures, in turn at the grids selected in the sample area and the area to be monitored, the RSS matrix $s^l$ of the sample area and the RSS matrix $s^u$ of the area to be monitored, where:
$$s^l=\left(s_{11}^l,\cdots,s_{M1}^l,\cdots,s_{1j}^l,\cdots,s_{Mj}^l,\cdots,s_{1n}^l,\cdots,s_{Mn}^l\right),$$
$$s^u=\left(s_{11'}^u,\cdots,s_{M1'}^u,\cdots,s_{1j'}^u,\cdots,s_{Mj'}^u,\cdots,s_{1n'}^u,\cdots,s_{Mn'}^u\right),$$
and $s_{ij}=\{s_{ij}(1),\ldots,s_{ij}(q),\ldots,s_{ij}(Q)\}^T$ represents the Q consecutive RSS values of the ith link when the target is in the jth grid.
Optionally, obtaining a migration function according to the RSS matrix of the sample region and the region to be monitored, includes:
according to the RSS matrix $s^l$ of the sample region and the RSS matrix $s^u$ of the region to be monitored, project $s^l$ and $s^u$ respectively to obtain $y^l=Ws^l$ and $y^u=Ws^u$; let $x=(x^l,x^u)$ and $y=(y^l,y^u)$, and construct the function $y=Wx$ such that $y^l$ and $y^u$ are as similar as possible in the projection space, i.e.
$$W=\arg\min_W F(W),\qquad(1)$$
where $F(W)$ is an optimization function measuring the distance between the distribution $p^l(y)$ of $y^l$ and the distribution $p^u(y)$ of $y^u$;
substituting the projection-space distribution distance function $D_W(p^l\|p^u)$ into formula (1) gives
$$W=\arg\min_W D_W(p^l\|p^u),$$
i.e. the migration function.
Optionally, the collecting, by the sensor node, sample RSS values in a sample area, and combining the sample RSS values into a sensing matrix includes:
let the target measure the RSS values at each grid to obtain a perception matrix at all grids of the sample area
Wherein s isij={sij(1),…,sij(q),…,sij(Q)}T
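As an illustration of step four, the assembly of the M × N × Q sensing matrix can be sketched as follows. The function `measure_rss` and the array sizes are hypothetical stand-ins: in the patent the values come from real radios, with the target placed on each grid in turn.

```python
import numpy as np

M, N, Q = 4, 9, 5  # links, grids, consecutive RSS samples (illustrative sizes)
rng = np.random.default_rng(0)

def measure_rss(link, grid, q_samples):
    """Hypothetical stand-in for collecting Q consecutive RSS readings on one
    link while the target stands on one grid (real values come from radios)."""
    return -50.0 + rng.normal(0.0, 1.0, q_samples)

# Sensing matrix S: S[i, j] is the length-Q vector s_ij of the ith link's
# RSS readings with the target on the jth grid, giving an M x N x Q array.
S = np.stack([[measure_rss(i, j, Q) for j in range(N)] for i in range(M)])
```

Each grid is visited once during training, so the collection cost is M · N · Q RSS readings for the sample area.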
Optionally, the acquiring, by the sensor node, a positioning RSS value in an area to be monitored, and combining the positioning RSS value into a measurement vector includes:
recording the RSS value of each link when the target is in the area to be monitored to obtain the measurement vector $R_{M\times 1\times Q}=[r_1,\ldots,r_i,\ldots,r_M]^T$, where $r_i=\{r_i(1),\ldots,r_i(q),\ldots,r_i(Q)\}^T$.
Optionally, the migrating the sensing matrix of the sample region and the measurement vector in the region to be monitored according to the migration function to obtain a migrated sensing matrix and a migrated measurement vector, including:
respectively multiplying the sensing matrix and the measurement vector by the migration function W to obtain the migrated sensing matrix
$$S'_{M\times N\times Q}=\left(Ws_{ij}^l\right),\quad i\in[1,M],\ j\in[1,N],$$
and the migrated measurement vector
$$R'_{M\times 1\times Q}=\left(Wr_i^u\right),\quad i\in[1,M];$$
And performing dimensionality reduction processing on the migrated sensing matrix and the migrated measurement vector to obtain a dimensionality reduced sensing matrix and a dimensionality reduced measurement vector.
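The migration and dimensionality-reduction step can be sketched as follows. Here W is a stand-in matrix rather than a learned migration function, and averaging over the Q samples is only one possible dimensionality reduction — the patent does not fix a particular one.

```python
import numpy as np

M, N, Q = 4, 9, 5
rng = np.random.default_rng(1)
S = rng.normal(-50.0, 2.0, (M, N, Q))  # sample-area sensing matrix (step four)
R = rng.normal(-48.0, 2.0, (M, Q))     # monitored-area RSS, one Q-vector per link (step five)
W = np.eye(Q) + 0.1 * rng.normal(size=(Q, Q))  # stand-in migration function

# Migrate: left-multiply every length-Q RSS vector by W.
S_mig = np.einsum('pq,mnq->mnp', W, S)  # S' = (W s_ij)
R_mig = R @ W.T                         # R' = (W r_i)

# Reduce the Q dimension; averaging is one choice, the patent does not name one.
S_red = S_mig.mean(axis=2)  # M x N matrix used by the CS recovery
R_red = R_mig.mean(axis=1)  # length-M measurement vector
```

The einsum contracts the shared Q axis, so each of the M · N cells of S and each of the M rows of R is transformed by the same W, matching the per-vector products $Ws_{ij}^l$ and $Wr_i^u$ above.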
Optionally, when the area of the sample region is smaller than the area of the region to be monitored, performing grid interpolation processing on the sample region to obtain a migrated high-resolution sensing matrix, including:
firstly, when the area l × a of the sample region is smaller than the area u × b of the region to be monitored, and the number of grids is N, the side length of the grid of the sample region is ωlThe length of the grid side of the region to be monitored is omegau
Secondly, dividing each grid in the area to be monitored intoSub-grids, each sub-grid having a side length ofThe number of the sub-grids is
and finally, for each sub-grid i′, selecting the grid in which it lies and the 8 nearest adjacent grids, the 9 grids forming its neighbourhood, and obtaining the RSS value of sub-grid i′ by interpolation, thereby obtaining the migrated high-resolution sensing matrix.
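The sub-grid interpolation can be sketched as below. The exact interpolation kernel is not specified in this excerpt, so simple averaging over the 3 × 3 neighbourhood of each grid is used as an assumption.

```python
import numpy as np

def subgrid_interpolate(grid_vals, factor):
    """Upsample a g x g grid of migrated RSS values by `factor` per side.
    Each grid cell is first smoothed with its (up to) 8 adjacent cells --
    the 9-cell neighbourhood named in the text; plain 3x3 averaging is an
    assumed kernel -- then every sub-grid inherits its enclosing cell's value."""
    g = grid_vals.shape[0]
    # Pad with edge values so border cells also see 9 neighbours.
    padded = np.pad(grid_vals, 1, mode='edge')
    smoothed = np.zeros_like(grid_vals, dtype=float)
    for r in range(g):
        for c in range(g):
            smoothed[r, c] = padded[r:r + 3, c:c + 3].mean()
    # Assign each sub-grid the smoothed value of its enclosing grid.
    return np.repeat(np.repeat(smoothed, factor, axis=0), factor, axis=1)

high_res = subgrid_interpolate(np.arange(9.0).reshape(3, 3), factor=2)
```

A 3 × 3 grid upsampled by a factor of 2 per side yields a 6 × 6 high-resolution grid, raising the data density of the migrated sensing matrix as the section requires.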
Optionally, recovering the position of the target by using a compressed sensing theory according to the migrated sensing matrix and the migrated measurement vector, including:
by using a compressed sensing reconstruction algorithm (Algorithm) to obtain a position vector θ:
wherein,is a pseudo-inverse operator, c>0 is a constant which is also a constant but does not tend to 1, the positioning of the target in the area to be monitored is completed by obtaining theta, and
θ=[θ1,…,θj,…θN]T
wherein, thetajE {0,1}, when there is a target on the jth gridTime thetajOtherwise, it is 0.
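Since the excerpt does not name the reconstruction algorithm, the sketch below uses a standard greedy (OMP-style) recovery as a stand-in; for a 1-sparse θ it reduces to choosing the best-matching column of the migrated sensing matrix. The toy data (8 links, 6 grids, near-orthogonal columns) is purely illustrative.

```python
import numpy as np

def recover_position(S, R, k=1):
    """OMP-style greedy recovery of a k-sparse position vector theta from
    R ~ S @ theta. The patent's exact reconstruction algorithm is not named
    in this excerpt, so this is a standard stand-in."""
    M, N = S.shape
    theta = np.zeros(N)
    residual = R.astype(float).copy()
    support = []
    for _ in range(k):
        # Pick the grid (column) most correlated with the residual.
        j = int(np.argmax(np.abs(S.T @ residual)))
        support.append(j)
        # Least-squares fit on the support (internally uses a pseudo-inverse).
        coef, *_ = np.linalg.lstsq(S[:, support], R, rcond=None)
        residual = R - S[:, support] @ coef
    theta[support] = 1.0
    return theta

# Toy check: with the target on grid 3, the recovered theta flags grid 3.
rng = np.random.default_rng(2)
S_demo = np.eye(8)[:, :6]                     # 8 links, 6 grids
R_demo = S_demo[:, 3] + 0.01 * rng.normal(size=8)
theta = recover_position(S_demo, R_demo)
```

The support size k corresponds to the number of targets; the patent's single-target setting matches k = 1.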
The technical scheme provided by the invention has the beneficial effects that:
by transferring the sensing matrix of the sample region and the measurement vector of the monitoring region and using a positioning method of compressed sensing, the manpower consumption and communication overhead caused by reconstruction of the sensing matrix of the region to be monitored are avoided, and the feasibility of realizing positioning of different regions by using compressed sensing is improved.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on the drawings without creative efforts.
FIG. 1 is a flowchart of a passive positioning method based on spatial migration compressed sensing according to the present invention;
FIG. 2 is a schematic diagram of a specific deployment of a sensor node provided by the present invention;
FIG. 3 is a diagram of the RSS pre-migration and post-migration profiles for different link lengths provided by the present invention;
FIG. 4 is a migration scheme provided by the present invention;
FIG. 5 is a geometric diagram of the Bregman divergence provided by the present invention;
FIG. 6 is a schematic diagram of grid interpolation provided by the present invention;
FIG. 7 is a cumulative probability distribution (CDF) plot of the positioning error of several methods after the link length migrates from 4 m to 12 m;
FIG. 8 shows the time overhead of three migration scenarios provided by the present invention;
fig. 9 compares the energy consumption of several methods after the link length migrates from 4 m to 12 m.
Detailed Description
To make the structure and advantages of the present invention clearer, the structure of the present invention will be further described with reference to the accompanying drawings.
Example one
The invention provides a passive positioning method based on space migration compressed sensing, as shown in fig. 1, the passive positioning method based on space migration compressed sensing comprises the following steps:
step one, deploying sensor nodes in a sample region and a region to be monitored respectively;
step two, acquiring RSS matrixes at reference positions in a sample area and an area to be monitored through the sensor nodes;
thirdly, obtaining a migration function according to the RSS matrixes of the sample region and the region to be monitored;
step four, collecting sample RSS values in a sample area through the sensor nodes, and combining the sample RSS values into a sensing matrix;
step five, acquiring positioning RSS values in an area to be monitored through the sensor nodes, and combining the positioning RSS values into measurement vectors;
step six, migrating the sensing matrix of the sample area and the measurement vector in the area to be monitored according to the migration function to obtain a migrated sensing matrix and a migrated measurement vector;
seventhly, when the area of the sample area is smaller than that of the area to be monitored, carrying out grid interpolation processing on the migrated sensing matrix to obtain a migrated high-resolution sensing matrix;
and step eight, recovering the position of the target by using a compressed sensing theory according to the migrated sensing matrix and the migrated measurement vector.
In implementation, on the basis of the CS-based DFL method, the invention provides a DFL method based on spatial-migration CS, referred to as TCL. Among the various CS-based DFL methods, TCL retains the advantages of CS theory in addressing high energy consumption and large manpower consumption. The present invention focuses on solving the high energy consumption and large manpower consumption incurred when performing CS-based DFL in different monitoring areas.
The RSS values at a small number of reference positions of the original monitoring area and the new monitoring area are collected first, and the obtained RSS values are then used to acquire the migration function. TCL can migrate the previously acquired sensing matrix of the original monitored area and reuse it in a different monitored area, thereby reducing the energy consumption and manpower consumption caused by reconstructing the sensing matrix in a new monitoring area. The migration portion of TCL can also be used in other DFL methods, and applies not only to localization across different areas but also to other domain changes, e.g., different target classes or variation over time.
Based on the above, the invention provides a passive positioning method based on space migration compressed sensing. A region where a sensing matrix has been acquired is selected as the sample region, and a region where the target position needs to be determined is selected as the region to be monitored; sensor nodes are deployed in both regions, with the specific deployment shown in fig. 2. Because the areas of the sample region and the region to be monitored differ, the lengths of the wireless communication links formed by the deployed nodes also differ. RSS values at a small number of reference positions of the sample region and the region to be monitored are collected to obtain a migration function that makes the RSS distributions of the two regions identical, i.e., aligns the RSS distributions corresponding to different link lengths, as shown in fig. 3. The RSS values of all positions in the sample area form a sensing matrix; this sensing matrix and the measurement vector formed by the corresponding RSS values when the target is in the area to be monitored are migrated and mapped into the same space according to the migration function, and compressed sensing theory recovers the position information of the target from the migrated measurement vector and the migrated sensing matrix, thereby localizing the target in the area to be monitored.
Compared with passive positioning methods in the prior art, the method migrates a previously obtained sensing matrix into a mapping space through the migration function, so that different monitoring areas (or different link lengths) can share the same migrated sensing matrix. In doing so, the manpower consumption of reconstructing the sensing matrix in different monitoring areas is greatly reduced.
Optionally, the passive positioning method based on spatial migration compressed sensing further includes:
and when the area of the sample region is larger than that of the region to be monitored, directly performing the step eight after the step six is completed.
In the implementation, step seven proposes that when the area of the sample region is smaller than the area of the region to be monitored, grid interpolation processing needs to be performed on the migrated sensing matrix to obtain the migrated high-resolution sensing matrix.
Grid interpolation processing is needed on the migrated sensing matrix because a key step of the scheme is to migrate the sensing matrix of the sample area and the measurement vector of the area to be monitored into the same mapping space, so that the target position in the area to be monitored can be calculated from the migrated sensing matrix and the migrated measurement vector. Since the grid sizes of the sample area and the area to be monitored differ, and the grid size sets the lowest resolution of the positioning result, the question arises whether accuracy degrades after migration.
If the area of the sample region is larger than that of the region to be monitored, namely the grid size of the sample region is larger than that of the region to be monitored, the positioning resolution of the region to be monitored is increased, so that the positioning accuracy is not reduced; if the area of the sample region is smaller than the area of the region to be monitored, that is, the grid size of the sample region is smaller than the grid size of the region to be monitored, after the sensing matrix is migrated, the size of the grid position corresponding to each matrix element data is increased, so that the data density in the region to be monitored is too low, the positioning resolution is reduced, and the positioning accuracy is reduced. Therefore, when the area of the sample region is smaller than that of the region to be monitored, grid interpolation processing needs to be performed on the perception matrix after the sample region is correspondingly migrated; and when the area of the sample area is not smaller than the area of the area to be monitored, the positioning accuracy of the migrated sensing matrix in the area to be monitored can be ensured not to be reduced, so that after the step six is completed, grid interpolation processing is not needed, and the positioning operation in the step eight is directly performed, so that the effect of saving resource consumption is achieved.
Optionally, the sensor nodes are respectively deployed in the sample region and the region to be monitored, and the method includes:
sensor nodes are respectively deployed on two sides of the sample region and the region to be monitored, as shown in fig. 2. Setting the area size l × a of the sample region, the area size of the region to be monitored as u × b, the length of a wireless link formed by nodes deployed in the sample region as l, the length of a wireless link formed by nodes deployed in the region to be monitored as u, wherein l is not equal to u, and the number of the nodes deployed in the sample region and the number of the nodes deployed in the region to be monitored are respectively 2M, so as to form a one-to-one correspondence relationship to construct M links ({ TXi,RXi},i∈[1,M]) As shown in fig. 4.
Optionally, acquiring, by the sensor node, RSS matrices at reference positions in a sample area and an area to be monitored, includes:
firstly, dividing the sample region and the region to be monitored into N square grids, wherein the grid size ratio of the sample region to the region to be monitored is l/u. Then the same reference position points (grid) are chosen in the sample area and the area to be monitored, denoted by 1,2,. N and 1 ', 2,. N', respectively, N ≦ N, as shown in fig. 2.
Then, the target measures, in turn at the grid positions selected in the sample area and the area to be monitored, the RSS matrix of the sample area and the RSS matrix of the area to be monitored, where:
$$s^l=\left(s_{11}^l,\cdots,s_{M1}^l,\cdots,s_{1j}^l,\cdots,s_{Mj}^l,\cdots,s_{1n}^l,\cdots,s_{Mn}^l\right),$$
$$s^u=\left(s_{11'}^u,\cdots,s_{M1'}^u,\cdots,s_{1j'}^u,\cdots,s_{Mj'}^u,\cdots,s_{1n'}^u,\cdots,s_{Mn'}^u\right),$$
and $s_{ij}=\{s_{ij}(1),\ldots,s_{ij}(q),\ldots,s_{ij}(Q)\}^T$ denotes the Q consecutive RSS values of the ith link when the target is in the jth grid.
Optionally, obtaining a migration function according to the RSS matrix of the sample region and the region to be monitored, includes:
According to the RSS matrix $s^l$ of the sample region and the RSS matrix $s^u$ of the region to be monitored, project $s^l$ and $s^u$ respectively to obtain $y^l=Ws^l$ and $y^u=Ws^u$. Let $x=(x^l,x^u)$ and $y=(y^l,y^u)$, and construct the function $y=Wx$ such that $y^l$ and $y^u$ are as similar as possible in the projection space, i.e.
$$W=\arg\min_W F(W),\qquad(1)$$
where $F(W)$ is an optimization function measuring the distance between the distribution $p^l(y)$ of $y^l$ and the distribution $p^u(y)$ of $y^u$.
Substituting the projection-space distribution distance function $D_W(p^l\|p^u)$ into formula (1) gives
$$W=\arg\min_W D_W(p^l\|p^u),$$
i.e. the migration function.
In implementation, in order to locate a target in the region to be monitored using the sensing matrix of the sample region, the sensing matrix must be migrated. To obtain the migration function, the following steps are performed:
The matrix $W$ in formula (1) is the original expression of the migration function; the migration function actually obtained differs for different pairs of sample regions and regions to be monitored.
For the RSS matrix $s^l$ of the sample region and the RSS matrix $s^u$ of the region to be monitored, the distance between the post-migration distributions $p_l(y)$ and $p_u(y)$ is computed with the Bregman divergence. If $y^l$ and $y^u$ follow the same Gaussian distribution, the corresponding one-dimensional Gaussian distributions are also the same, i.e. the RSS distributions at corresponding positions of the sample region and the region to be monitored are the same. The problem is therefore converted into minimizing the distance between $p_l(y)$ and $p_u(y)$, where $D_W(p_l \| p_u)$ is the Bregman divergence function measuring the distance between the distributions $p_l(y)$ and $p_u(y)$ in the projection space.
Next, the migration function is optimized based on Bregman divergence theory:
In Bregman divergence theory, the divergence is given by
$D_W(f, g) = \int d(\xi(f), \xi(g))\, dv(y)$,
$d(\xi(f), \xi(g)) = \Phi(\xi(g)) - \Phi'(\xi(f))\{\xi(g) - \xi(f)\} - \Phi(\xi(f))$,
where $dv = dv(y)$ is the Lebesgue measure, and $d(\xi(f), \xi(g))$ is the difference between the value of the function $\Phi$ at the point $\xi(g)$ and $\Phi'(\xi(f))\{\xi(g) - \xi(f)\} + \Phi(\xi(f))$, the latter being the value at $\xi(g)$ of the tangent of $\Phi$ drawn at the point $(\xi(f), \Phi(\xi(f)))$, as shown in fig. 5. Let $g = p_u(y)$ and $f = p_l(y)$; then the Bregman-divergence-based distance between $p_l(y)$ and $p_u(y)$ can be expressed as
$D_W(p_l \| p_u) = \int \left\{\Phi(\xi(p_u(y))) - \Phi(\xi(p_l(y))) - \Phi'(\xi(p_l(y)))\left[\xi(p_u(y)) - \xi(p_l(y))\right]\right\} dv(y)$.
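To ground this definition, the pointwise Bregman divergence can be sketched directly from the tangent-gap form above; `phi` and `dphi` are illustrative inputs, and choosing $\Phi(t) = t^2$ collapses the divergence to the squared difference that appears in the quadratic form of equation (2).

```python
def bregman_pointwise(phi, dphi, f, g):
    # d(f, g) = Phi(g) - Phi(f) - Phi'(f) * (g - f): the gap between Phi at g
    # and the tangent of Phi drawn at (f, Phi(f)), evaluated at g.
    return phi(g) - phi(f) - dphi(f) * (g - f)

# With the quadratic generator Phi(t) = t^2 the divergence is (f - g)^2.
phi = lambda t: t * t
dphi = lambda t: 2.0 * t
gap = bregman_pointwise(phi, dphi, 1.5, 4.0)
```

For any other strictly convex generator the divergence stays nonnegative and vanishes only at $f = g$, which is what makes it usable as a distribution distance here.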
According to the above formula, the RSS values $s^l$ and $s^u$ can be migrated into the mapping space by minimizing the distance between $y^l$ and $y^u$.
To obtain $D_W(p_l \| p_u)$ more easily, we select the function $\Phi(y) = y^2$. Substituting this function into the above equation, the corresponding Bregman divergence can be written in a quadratic form:
$D_W(p_l \| p_u) = \int (p_l(y) - p_u(y))^2\, dy = \int p_l^2(y) - 2 p_l(y) p_u(y) + p_u^2(y)\, dy$. (2)
Considering single-sample noise, and to mitigate sample variation caused by the environment, the probability density is computed with the Kernel Density Estimation (KDE) method, which expresses the density as a weighted sum of kernels between the argument and the other samples. The probability density functions $p_l(y)$ and $p_u(y)$ are
$p_l(y) = \frac{1}{Mn \cdot \sigma_l} \sum_{i=1}^{Mn} G_{\Sigma_l}\left(\frac{y - y_i}{\sigma_l}\right)$, (3)
$p_u(y) = \frac{1}{Mn \cdot \sigma_u} \sum_{i'=1}^{Mn} G_{\Sigma_u}\left(\frac{y - y_{i'}}{\sigma_u}\right)$. (4)
Substituting equations (3) and (4) into equation (2), the Bregman divergence between the distributions $p_l(y)$ and $p_u(y)$ becomes
$D_W(p_l \| p_u) = \int \left(\frac{1}{Mn\sigma_l} \sum_{i=1}^{Mn} G_{\Sigma_l}\left(\frac{y - y_i}{\sigma_l}\right)\right)^2 dy + \int \left(\frac{1}{Mn\sigma_u} \sum_{i'=1}^{Mn} G_{\Sigma_u}\left(\frac{y - y_{i'}}{\sigma_u}\right)\right)^2 dy - \frac{2}{M^2 n^2 \sigma_u \sigma_l} \sum_{i=1}^{Mn} \sum_{i'=1}^{Mn} \int G_{\Sigma_l}\left(\frac{y - y_i}{\sigma_l}\right) G_{\Sigma_u}\left(\frac{y - y_{i'}}{\sigma_u}\right) dy$.
For a Gaussian kernel, the following identity holds:
$\int G_{\Sigma_l}\left(\frac{y - y_i}{\sigma_l}\right) G_{\Sigma_u}\left(\frac{y - y_{i'}}{\sigma_u}\right) dy = G_{\sigma_l^2 \Sigma_l + \sigma_u^2 \Sigma_u}(y_i - y_{i'})$.
Therefore, combining the above formulas, we finally obtain
$D_W(p_l \| p_u) = \frac{1}{M^2 n^2 \sigma_l^2} \sum_{i=1}^{Mn} \sum_{i'=1}^{Mn} G_{2\sigma_l^2 \Sigma_l}(y_i - y_{i'}) + \frac{1}{M^2 n^2 \sigma_u^2} \sum_{i=1}^{Mn} \sum_{i'=1}^{Mn} G_{2\sigma_u^2 \Sigma_u}(y_i - y_{i'}) - \frac{2}{M^2 n^2 \sigma_u \sigma_l} \sum_{i=1}^{Mn} \sum_{i'=1}^{Mn} G_{\sigma_l^2 \Sigma_l + \sigma_u^2 \Sigma_u}(y_i - y_{i'})$. (5)
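As a numerical sanity check (not part of the patent), the one-dimensional case of equations (3)–(5) with identity $\Sigma$ can be sketched as follows; `kde` implements the kernel density estimate and `quadratic_bregman` the closed form in which every pairwise product of Gaussian kernels integrates to a single Gaussian. All sample values are invented for illustration.

```python
import math

def normal_pdf(x, var):
    # Gaussian density N(x; 0, var).
    return math.exp(-x * x / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def kde(y, samples, sigma):
    # Kernel density estimate, cf. equations (3)-(4): mean of Gaussian kernels
    # of bandwidth sigma centred on the projected samples.
    return sum(normal_pdf(y - yi, sigma * sigma) for yi in samples) / len(samples)

def quadratic_bregman(yl, yu, sigma_l, sigma_u):
    # Closed form of D_W(p_l || p_u) = ∫ (p_l(y) - p_u(y))^2 dy, cf. eq. (5):
    # each pairwise product of Gaussian kernels integrates to one Gaussian.
    n, m = len(yl), len(yu)
    t_ll = sum(normal_pdf(a - b, 2.0 * sigma_l ** 2) for a in yl for b in yl) / (n * n)
    t_uu = sum(normal_pdf(a - b, 2.0 * sigma_u ** 2) for a in yu for b in yu) / (m * m)
    t_lu = sum(normal_pdf(a - b, sigma_l ** 2 + sigma_u ** 2)
               for a in yl for b in yu) / (n * m)
    return t_ll + t_uu - 2.0 * t_lu
```

Integrating $(p_l(y) - p_u(y))^2$ numerically over a fine grid reproduces the closed-form value, which is how the Gaussian-product identity above can be verified.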
Substituting formula (5) into formula (1), the actually required migration function $W$ can be obtained.
Further, in the process of obtaining the migration function $W$, the main idea is to refine the solutions generated by a genetic algorithm with a gradient descent algorithm so as to reach the minimum of the iteration. The gradient descent algorithm finds the local minimum closest to the initial guess, while the genetic algorithm provides good candidate solutions but may miss the local minimum; we therefore combine the advantages of the two schemes and solve the problem in a hybrid way, namely using gradient descent to refine the solutions generated by the genetic algorithm.
First, an initial solution is generated by a genetic algorithm.
A genetic algorithm is a randomized search method modeled on the laws of biological evolution, in which organisms evolve through crossover and mutation over a number of generations. Each solution of the algorithm comprises $W$, $\sigma_u$ and $\sigma_l$, and the fitness of each solution is measured by $D_W(p_l \| p_u)$. The following takes the solution of $W$ as an example:
1) Retention: retain the 10% of solutions with the highest fitness;
2) Selection: randomly generate 10% of the solutions;
3) Crossover: randomly select two solutions from the parent generation and generate 60% of the solutions by the linear combination
$W_{new} = \tau \cdot W_{old}(1) + (1 - \tau) \cdot W_{old}(2)$, $\tau \in (0, 1)$;
4) Mutation: randomly select a solution from the parent generation and randomly increase or decrease it by a value drawn from an exponential distribution, producing 20% of the solutions.
The procedure for $\sigma_u$ and $\sigma_l$ is the same as for $W$. The genetic algorithm terminates when the solutions of five consecutive generations show no improvement. Since the genetic algorithm's search for the optimal solution is time-consuming, to reduce the time overhead we constrain $\sigma_u$ and $\sigma_l$ to the noise range of the RSS values, thereby narrowing the search space and speeding up convergence.
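The four generation-update steps can be sketched on a toy one-dimensional objective; the population size, search interval, objective function, and the fixed 30-generation loop (used here in place of the five-generation no-improvement test) are illustrative assumptions, not the patent's parameters.

```python
import random

def evolve(fitness, pop):
    # One generation: 1) keep best 10%; 2) random 10%; 3) crossover 60% via
    # W_new = tau*W_old(1) + (1 - tau)*W_old(2); 4) mutate ~20% by an
    # exponentially distributed increase or decrease.
    pop = sorted(pop, key=fitness)            # smaller fitness value = better
    n = len(pop)
    keep = pop[: n // 10]
    rand = [random.uniform(-10.0, 10.0) for _ in range(n // 10)]
    cross = []
    for _ in range(6 * n // 10):
        w1, w2 = random.sample(pop, 2)
        tau = random.random()
        cross.append(tau * w1 + (1.0 - tau) * w2)
    mut = []
    for _ in range(n - len(keep) - len(rand) - len(cross)):
        w = random.choice(pop)
        mut.append(w + random.choice([-1.0, 1.0]) * random.expovariate(5.0))
    return keep + rand + cross + mut

random.seed(1)
objective = lambda w: (w - 3.0) ** 2          # toy stand-in for D_W(p_l || p_u)
population = [random.uniform(-10.0, 10.0) for _ in range(50)]
for _ in range(30):
    population = evolve(objective, population)
best = min(population, key=objective)
```

Because the best 10% are always retained, the best fitness in the population never worsens from one generation to the next.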
Secondly, the solution generated by the genetic algorithm is refined with a gradient descent algorithm.
The iterative process of obtaining the optimal $W$ by gradient descent can be expressed as:
$W_{k+1} = W_k - \eta_k \left(\partial_W D_W(p_l \| p_u)\right)$,
where $\eta_k$ is the learning rate controlling the gradient step size at the $k$-th iteration, with $\eta_k = \eta_0 / k$, and $\partial_W D_W(p_l \| p_u)$ denotes the gradient with respect to $W$. Once the gradient of $D_W(p_l \| p_u)$ is known, the optimal solution of $W$ can be obtained. For equation (5):
$\frac{\partial}{\partial y_i} G_{\Sigma_l + \Sigma_u}(y_i - y_{i'}) = (y_{i'} - y_i)(\Sigma_l + \Sigma_u)^{-1} G_{\Sigma_l + \Sigma_u}(y_i - y_{i'})$,
and the derivative of $D_W(p_l \| p_u)$ is then
$\frac{\partial D_W(p_l \| p_u)}{\partial W} = \frac{\partial D_W(p_l \| p_u)}{\partial y} \cdot \frac{\partial y}{\partial W} = \sum_{i=1}^{Mn} \frac{\partial D_W(p_l \| p_u)}{\partial y_i} \cdot \frac{\partial y_i}{\partial W} + \sum_{i'=1}^{Mn} \frac{\partial D_W(p_l \| p_u)}{\partial y_{i'}} \cdot \frac{\partial y_{i'}}{\partial W} = \sum_{i=1}^{Mn} \frac{\partial D_W(p_l \| p_u)}{\partial y_i} \cdot x_i + \sum_{i'=1}^{Mn} \frac{\partial D_W(p_l \| p_u)}{\partial y_{i'}} \cdot x_{i'}$,
where
$\frac{\partial D_W(p_l \| p_u)}{\partial y_i} = \frac{(\Sigma_l)^{-1}}{M^2 n^2 \sigma_l^4} \sum_{i'=1}^{Mn} (y_{i'} - y_i) G_{2\sigma_l^2 \Sigma_l}(y_i - y_{i'}) - \frac{2(\sigma_l^2 \Sigma_l + \sigma_u^2 \Sigma_u)^{-1}}{M^2 n^2 \sigma_l \sigma_u} \sum_{i'=1}^{Mn} (y_{i'} - y_i) G_{\sigma_l^2 \Sigma_l + \sigma_u^2 \Sigma_u}(y_i - y_{i'})$,
$\frac{\partial D_W(p_l \| p_u)}{\partial y_{i'}} = \frac{(\Sigma_u)^{-1}}{M^2 n^2 \sigma_u^4} \sum_{i=1}^{Mn} (y_{i'} - y_i) G_{2\sigma_u^2 \Sigma_u}(y_i - y_{i'}) - \frac{2(\sigma_l^2 \Sigma_l + \sigma_u^2 \Sigma_u)^{-1}}{M^2 n^2 \sigma_l \sigma_u} \sum_{i=1}^{Mn} (y_{i'} - y_i) G_{\sigma_l^2 \Sigma_l + \sigma_u^2 \Sigma_u}(y_i - y_{i'})$.
At this point, the optimal migration function $W$ can be obtained through continuous iteration.
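The decaying-step iteration $W_{k+1} = W_k - \eta_k \partial_W D_W$ with $\eta_k = \eta_0 / k$ can be sketched on a toy differentiable objective; `w0` stands in for the solution handed over by the genetic algorithm, and the objective, $\eta_0$, and iteration count are illustrative assumptions.

```python
def refine(grad, w0, eta0=0.3, iters=200):
    # Gradient descent with the decaying learning rate eta_k = eta0 / k:
    # W_{k+1} = W_k - eta_k * grad(W_k).
    w = w0
    for k in range(1, iters + 1):
        w -= (eta0 / k) * grad(w)
    return w

# Toy objective D(w) = (w - 3)^2 with analytic gradient 2*(w - 3); its
# minimum at w = 3 plays the role of the optimal migration function.
w_star = refine(lambda w: 2.0 * (w - 3.0), w0=2.6)
```

As the text notes, this descent only reaches the local minimum nearest the initial guess, which is why the genetic algorithm supplies the starting point.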
Optionally, collecting sample RSS values in the sample region by the sensor nodes and combining the sample RSS values into a sensing matrix includes:
In implementation, with the target placed at each grid in turn, the RSS values are measured to obtain the sensing matrix $S_{M \times N \times Q}$ over all grids of the sample region, where $s_{ij} = \{s_{ij}(1), \ldots, s_{ij}(q), \ldots, s_{ij}(Q)\}^T$ denotes the $Q$ consecutive RSS values of link $i$ when the target is in grid $j$.
Optionally, collecting positioning RSS values in the region to be monitored by the sensor nodes and combining the positioning RSS values into a measurement vector includes:
In implementation, the RSS value of each link is recorded while the target is in the region to be monitored, yielding the measurement vector $R_{M \times 1 \times Q} = [r_1, \ldots, r_i, \ldots, r_M]^T$, where $r_i = \{r_i(1), \ldots, r_i(q), \ldots, r_i(Q)\}^T$.
Optionally, migrating the sensing matrix of the sample region and the measurement vector of the region to be monitored according to the migration function to obtain a migrated sensing matrix and a migrated measurement vector includes:
In implementation, the sensing matrix and the measurement vector are each multiplied by the migration function $W$ to obtain the migrated sensing matrix
$S'_{M \times N \times Q} = (W s^l_{ij}),\ i \in [1, M],\ j \in [1, N]$, (6)
and the migrated measurement vector
$R'_{M \times 1 \times Q} = (W r^u_i),\ i \in [1, M]$. (7)
Dimension reduction is then applied to the three-dimensional sensing matrix of formula (6) and the two-dimensional measurement vector of formula (7) by taking the maximum-probability value, using the following formulas:
$s_{ij} = \arg\max_{1 \le q \le Q} p(s_{ij}(q))$,
$r_i = \arg\max_{1 \le q \le Q} p(r_i(q))$,
where $p(\cdot)$ is a Gaussian probability estimate. After dimension reduction, a two-dimensional sensing matrix $S'_{M \times N}$ and a one-dimensional measurement vector $R'_{M \times 1}$ are obtained.
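The maximum-probability reduction of a single entry can be sketched as follows, assuming $p(\cdot)$ is a Gaussian density fitted to the entry's own $Q$ readings (so the retained reading is the one closest to the sample mean); the function name and sample values are illustrative.

```python
import math

def max_probability_value(samples):
    # Keep the reading s(q) with the largest Gaussian probability p(s(q))
    # under N(mu, var) fitted to the Q readings of this entry.
    q = len(samples)
    mu = sum(samples) / q
    var = sum((s - mu) ** 2 for s in samples) / q or 1e-12  # guard zero variance
    pdf = lambda s: math.exp(-(s - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)
    return max(samples, key=pdf)
```

On RSS traces this discards outlier readings (e.g. a single deep fade) in favour of the most typical value, which is the point of the reduction.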
Optionally, when the area of the sample region is smaller than the area of the region to be monitored, performing grid interpolation processing on the migrated sensing matrix to obtain a migrated high-resolution sensing matrix, including:
In implementation, since the number of grids is the same in different regions, the grid size grows as the region area or link length grows, which lowers the positioning resolution and in turn the positioning accuracy. Therefore, when the region area increases, to achieve at least the same grid resolution as the original region, the number of links in the new region must be increased so that more grids can be divided.
When the number of grids in the new region increases, the number of elements of the migrated sensing matrix of the original region must be increased accordingly, so as to improve the positioning accuracy. Because the RSS values of a link are similar when the target is at adjacent positions, the RSS values of adjacent positions are interpolated to obtain the RSS values of all newly added grids, yielding the migrated high-resolution sensing matrix.
Specifically, taking the sample region and the region to be monitored as an example: when the area $l \times a$ of the sample region is smaller than the area $u \times b$ of the region to be monitored and the number of grids is $N$, the grid side length of the sample region is $\omega_l$ and that of the region to be monitored is $\omega_u$; that is, as the link length grows from $l$ to $u$, the grid side length grows from $\omega_l$ to $\omega_u$.
To increase the grid resolution of the region to be monitored, each of its grids must be divided into a number of sub-grids of correspondingly smaller side length. In the specific implementation, more grids are divided by increasing the number of links of the region to be monitored, and $M \times \lceil u/l \rceil$ links need to be deployed to achieve the same grid resolution as the sample region.
Correspondingly, the migrated sensing matrix must be interpolated. The grid containing sub-grid $i'$ and its 8 nearest adjacent grids are selected; these 9 grids form a neighbourhood, as shown in fig. 6, in which each grid is divided into 4 sub-grids. The RSS value of sub-grid $i'$ is obtained by
$s_{i'} = \sum_{j=1}^{9} \frac{s_j}{d_j D}$,
where $s_{i'}$ and $s_j$ are the RSS values of sub-grid $i'$ and grid $j$ respectively, and $d_j$ is the Euclidean distance between the two. The RSS value of sub-grid $i'$ obtained by this interpolation then yields the migrated high-resolution sensing matrix.
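The neighbourhood interpolation can be sketched as follows; the patent does not define the normalizer $D$, so this sketch assumes $D = \sum_j 1/d_j$, which makes the inverse-distance weights $1/(d_j D)$ sum to one (an assumption, flagged again in the code).

```python
def interpolate_subgrid(neighbour_rss, distances):
    # s_i' = sum_j s_j / (d_j * D) over the 9 neighbouring grids.
    # Assumption: D = sum_j 1/d_j, so the inverse-distance weights sum to 1.
    assert len(neighbour_rss) == len(distances) == 9
    D = sum(1.0 / d for d in distances)
    return sum(s / (d * D) for s, d in zip(neighbour_rss, distances))
```

With this normalization a constant RSS field interpolates to the same constant, and closer grids dominate the estimate, matching the "adjacent positions have similar RSS" assumption above.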
Optionally, recovering the position of the target by compressed sensing theory from the migrated sensing matrix and the migrated measurement vector includes:
In implementation, according to CS theory,
$R'_{M \times 1} = S'_{M \times N} \cdot \theta_{N \times 1} + N$,
where $S'_{M \times N}$ and $R'_{M \times 1}$ are the sensing matrix and measurement vector after dimension reduction, $N$ is the noise, and $\theta_{N \times 1} = [\theta_1, \ldots, \theta_j, \ldots, \theta_N]^T$ is the position vector with $\theta_j \in \{0, 1\}$: $\theta_j = 1$ when there is a target on the $j$-th grid and $\theta_j = 0$ otherwise. The position vector $\theta$ is obtained with a compressed sensing reconstruction algorithm, whose formula involves a pseudo-inverse operator and a constant $c > 0$; obtaining $\theta$ completes the localization of the target in the region to be monitored.
It is noted that the above method of recovering the target position by compressed sensing theory is mature prior art and is therefore not described in detail here.
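Since the patent leaves the reconstruction algorithm to prior art, the following is for orientation only: a minimal matching-pursuit-style sketch for the single-target case ($K = 1$, so $\theta$ has one nonzero entry and $R'$ approximates one column of $S'$). The matrix and readings are invented for illustration.

```python
def locate_single_target(S, R):
    # 1-sparse recovery: pick the grid j whose sensing-matrix column best
    # explains the measurement vector (theta_j = 1 implies R ~ j-th column),
    # then set theta_j = 1 and all other entries to 0.
    M, N = len(S), len(S[0])
    best_j = min(range(N),
                 key=lambda j: sum((R[i] - S[i][j]) ** 2 for i in range(M)))
    theta = [0] * N
    theta[best_j] = 1
    return theta

# Illustrative 3x3 sensing matrix and a noisy reading of its third column.
S = [[1.0, 4.0, 2.0],
     [3.0, 0.0, 5.0],
     [2.0, 2.0, 2.0]]
theta_hat = locate_single_target(S, [2.1, 4.9, 2.05])
```

General $K$-sparse recovery would instead use a full reconstruction algorithm such as orthogonal matching pursuit or $\ell_1$ minimization, as the CS literature the patent relies on describes.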
In the device-free localization method based on spatial-migration compressed sensing provided in this embodiment, the sensing matrix of the sample region and the measurement vector of the region to be monitored are migrated and a compressed-sensing-based localization method is used, which avoids the labor and communication overhead of rebuilding the sensing matrix for the region to be monitored and improves the feasibility of localizing targets in different regions with compressed sensing.
In the migration-based device-free localization process, two theorems serve as the theoretical basis, specifically:
Theorem one: if the RSS values of each row of the migrated sensing matrix S′ follow a Gaussian distribution and M = O(K log(N/K)), then for all N-dimensional K-sparse vectors Θ, S′ satisfies
<math> <mrow> <mrow> <mo>(</mo> <mn>1</mn> <mo>-</mo> <mi>&delta;</mi> <mo>)</mo> </mrow> <mo>&le;</mo> <mfrac> <msubsup> <mrow> <mo>|</mo> <mo>|</mo> <msup> <mi>S</mi> <mo>&prime;</mo> </msup> <mi>&Theta;</mi> <mo>|</mo> <mo>|</mo> </mrow> <mn>2</mn> <mn>2</mn> </msubsup> <msubsup> <mrow> <mo>|</mo> <mo>|</mo> <mi>&Theta;</mi> <mo>|</mo> <mo>|</mo> </mrow> <mn>2</mn> <mn>2</mn> </msubsup> </mfrac> <mo>&le;</mo> <mrow> <mo>(</mo> <mn>1</mn> <mo>+</mo> <mi>&delta;</mi> <mo>)</mo> </mrow> </mrow> </math>
with probability close to 1, wherein δ ∈ [0,1].
And (3) proving that: to demonstrate the above theorem, we first experimentally demonstrate that each row of the migrated perceptual matrix S' obeys a gaussian distribution. Then, to simplify the proof, S' is normalized toE (S'ij)μ,Var(S′ij)=E((S′ij)2) When σ is satisfied, the inner productIs the expectation and variance of
<math> <mrow> <mi>E</mi> <mrow> <mo>(</mo> <mo>&lt;</mo> <mfrac> <mn>1</mn> <msqrt> <mi>&sigma;M</mi> </msqrt> </mfrac> <msubsup> <mi>S</mi> <mi>i</mi> <mo>&prime;</mo> </msubsup> <mo>,</mo> <mi>&Theta;</mi> <mo>></mo> <mo>)</mo> </mrow> <mo>=</mo> <mfrac> <mn>1</mn> <msqrt> <mi>&sigma;M</mi> </msqrt> </mfrac> <munderover> <mi>&Sigma;</mi> <mrow> <mi>j</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>N</mi> </munderover> <mi>E</mi> <mrow> <mo>(</mo> <msubsup> <mi>S</mi> <mi>ij</mi> <mo>&prime;</mo> </msubsup> <mo>)</mo> </mrow> <msub> <mi>&theta;</mi> <mi>j</mi> </msub> <mo>=</mo> <mfrac> <mi>K&mu;</mi> <msqrt> <mi>&sigma;M</mi> </msqrt> </mfrac> <mo>,</mo> </mrow> </math>
<math> <mrow> <mi>Var</mi> <mrow> <mo>(</mo> <mo>&lt;</mo> <mfrac> <mn>1</mn> <msqrt> <mi>&sigma;M</mi> </msqrt> </mfrac> <msubsup> <mi>S</mi> <mi>i</mi> <mo>&prime;</mo> </msubsup> <mo>,</mo> <mi>&Theta;</mi> <mo>></mo> <mo>)</mo> </mrow> <mo>=</mo> <mfrac> <mn>1</mn> <msqrt> <mi>&sigma;M</mi> </msqrt> </mfrac> <munderover> <mi>&Sigma;</mi> <mrow> <mi>j</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>N</mi> </munderover> <mi>Var</mi> <mrow> <mo>(</mo> <msubsup> <mi>S</mi> <mi>ij</mi> <mo>&prime;</mo> </msubsup> <mo>)</mo> </mrow> <msubsup> <mi>&theta;</mi> <mi>j</mi> <mn>2</mn> </msubsup> <mo>=</mo> <mfrac> <msubsup> <mrow> <mo>|</mo> <mo>|</mo> <mi>&Theta;</mi> <mo>|</mo> <mo>|</mo> </mrow> <mn>2</mn> <mn>2</mn> </msubsup> <mi>M</mi> </mfrac> <mo>.</mo> </mrow> </math>
We further obtain the expectation of ||S′Θ||²_2:
<math> <mrow> <mi>E</mi> <mrow> <mo>(</mo> <msubsup> <mrow> <mo>|</mo> <mo>|</mo> <msup> <mi>S</mi> <mo>&prime;</mo> </msup> <mi>&Theta;</mi> <mo>|</mo> <mo>|</mo> </mrow> <mn>2</mn> <mn>2</mn> </msubsup> <mo>)</mo> </mrow> <mo>=</mo> <munderover> <mi>&Sigma;</mi> <mrow> <mi>i</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>N</mi> </munderover> <mi>Var</mi> <mrow> <mo>(</mo> <mo>&lt;</mo> <msqrt> <mfrac> <mn>1</mn> <mi>&sigma;M</mi> </mfrac> <msubsup> <mi>S</mi> <mi>i</mi> <mo>&prime;</mo> </msubsup> </msqrt> <mo>,</mo> <mi>&Theta;</mi> <mo>></mo> <mo>)</mo> </mrow> <mo>=</mo> <msubsup> <mrow> <mo>|</mo> <mo>|</mo> <mi>&Theta;</mi> <mo>|</mo> <mo>|</mo> </mrow> <mn>2</mn> <mn>2</mn> </msubsup> <mo>.</mo> </mrow> </math>
From the concentration results in M. A. Davenport, "Random Observations on Random Observations: Sparse Signal Acquisition and Processing", we have:
<math> <mrow> <mi>p</mi> <mrow> <mo>(</mo> <mo>|</mo> <mfrac> <msubsup> <mrow> <mo>|</mo> <mo>|</mo> <msup> <mi>s</mi> <mo>&prime;</mo> </msup> <mi>&Theta;</mi> <mo>|</mo> <mo>|</mo> </mrow> <mn>2</mn> <mn>2</mn> </msubsup> <msubsup> <mrow> <mo>|</mo> <mo>|</mo> <mi>&Theta;</mi> <mo>|</mo> <mo>|</mo> </mrow> <mn>2</mn> <mn>2</mn> </msubsup> </mfrac> <mo>-</mo> <mn>1</mn> <mo>|</mo> <mo>&GreaterEqual;</mo> <mi>&delta;</mi> <mo>)</mo> </mrow> <mo>&le;</mo> <mn>2</mn> <mi>exp</mi> <mrow> <mo>(</mo> <mfrac> <mrow> <mo>-</mo> <mi>M</mi> <msup> <mi>&delta;</mi> <mn>2</mn> </msup> </mrow> <mi>c</mi> </mfrac> <mo>)</mo> </mrow> <mo>,</mo> </mrow> </math>
wherein c is a constant. Then, over all N-dimensional K-sparse vectors Θ, the probability that <math> <mrow> <mo>|</mo> <mfrac> <msubsup> <mrow> <mo>|</mo> <mo>|</mo> <msup> <mi>s</mi> <mo>&prime;</mo> </msup> <mi>&Theta;</mi> <mo>|</mo> <mo>|</mo> </mrow> <mn>2</mn> <mn>2</mn> </msubsup> <msubsup> <mrow> <mo>|</mo> <mo>|</mo> <mi>&Theta;</mi> <mo>|</mo> <mo>|</mo> </mrow> <mn>2</mn> <mn>2</mn> </msubsup> </mfrac> <mo>-</mo> <mn>1</mn> <mo>|</mo> <mo>&GreaterEqual;</mo> <mi>&delta;</mi> </mrow> </math> holds is at most:
<math> <mrow> <msup> <mrow> <mo>(</mo> <mfrac> <mi>eN</mi> <mi>K</mi> </mfrac> <mo>)</mo> </mrow> <mi>K</mi> </msup> <mo>&CenterDot;</mo> <mn>2</mn> <mi>exp</mi> <mrow> <mo>(</mo> <mfrac> <mrow> <mo>-</mo> <mi>M</mi> <msup> <mi>&delta;</mi> <mn>2</mn> </msup> </mrow> <mi>c</mi> </mfrac> <mo>)</mo> </mrow> <mo>=</mo> <mn>2</mn> <mi>exp</mi> <mrow> <mo>(</mo> <mfrac> <mrow> <mo>-</mo> <mi>M</mi> <msup> <mi>&delta;</mi> <mn>2</mn> </msup> </mrow> <mi>c</mi> </mfrac> <mo>+</mo> <mi>K</mi> <mi>log</mi> <mrow> <mo>(</mo> <mfrac> <mi>N</mi> <mi>K</mi> </mfrac> <mo>)</mo> </mrow> <mo>+</mo> <mn>1</mn> <mo>)</mo> </mrow> </mrow> </math>
Thus, when M = O(K log(N/K)), for all N-dimensional K-sparse vectors Θ, S′ satisfies
<math> <mrow> <mrow> <mo>(</mo> <mn>1</mn> <mo>-</mo> <mi>&delta;</mi> <mo>)</mo> </mrow> <mo>&le;</mo> <mfrac> <msubsup> <mrow> <mo>|</mo> <mo>|</mo> <msup> <mi>s</mi> <mo>&prime;</mo> </msup> <mi>&Theta;</mi> <mo>|</mo> <mo>|</mo> </mrow> <mn>2</mn> <mn>2</mn> </msubsup> <msubsup> <mrow> <mo>|</mo> <mo>|</mo> <mi>&Theta;</mi> <mo>|</mo> <mo>|</mo> </mrow> <mn>2</mn> <mn>2</mn> </msubsup> </mfrac> <mo>&le;</mo> <mrow> <mo>(</mo> <mn>1</mn> <mo>+</mo> <mi>&delta;</mi> <mo>)</mo> </mrow> </mrow> </math>
with probability approaching 1, which completes the proof.
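Theorem one can be checked numerically. The sketch below (illustrative only; all parameters are arbitrary) draws matrices with i.i.d. Gaussian entries scaled by 1/√M — standing in for the normalized migrated sensing matrix — and counts how often the norm ratio in the theorem stays inside [1−δ, 1+δ] for a fixed K-sparse Θ:

```python
import numpy as np

rng = np.random.default_rng(42)
M, N, K, delta = 200, 400, 5, 0.5
theta = np.zeros(N)
theta[rng.choice(N, size=K, replace=False)] = 1.0   # K-sparse {0,1} vector

trials, hits = 200, 0
for _ in range(trials):
    S = rng.normal(size=(M, N)) / np.sqrt(M)        # normalized Gaussian rows
    ratio = np.linalg.norm(S @ theta) ** 2 / np.linalg.norm(theta) ** 2
    hits += (1 - delta) <= ratio <= (1 + delta)
frac = hits / trials   # fraction of draws satisfying the RIP-style bound
```

For these sizes the ratio concentrates around 1 with standard deviation roughly √(2/M) ≈ 0.1, so essentially every draw lands inside the interval.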
Theorem two: for the M×N migrated sensing matrix S′, the M×1 migrated measurement vector R′, and the K-sparse vector Θ, let Θ̂ be the solution of the l_1-minimization algorithm (the solution of equation (8)). Then, with at least the probability of the event A defined in the proof below, the recovery error satisfies:
<math> <mrow> <msubsup> <mrow> <mo>|</mo> <mo>|</mo> <mi>&Theta;</mi> <mover> <mo>-</mo> <mo>^</mo> </mover> <mi>&Theta;</mi> <mo>|</mo> <mo>|</mo> </mrow> <mn>2</mn> <mn>2</mn> </msubsup> <mo>&le;</mo> <mfrac> <mrow> <mn>8</mn> <msup> <mrow> <mo>(</mo> <msqrt> <mn>2</mn> <mi>log</mi> <mi>M</mi> </msqrt> <mo>+</mo> <mn>2</mn> <mo>)</mo> </mrow> <mn>2</mn> </msup> </mrow> <msup> <mrow> <mo>(</mo> <mn>1</mn> <mo>-</mo> <mrow> <mo>(</mo> <mn>2</mn> <mi>K</mi> <mo>-</mo> <mn>1</mn> <mo>)</mo> </mrow> <mi>&mu;</mi> <mo>)</mo> </mrow> <mn>2</mn> </msup> </mfrac> <mrow> <mo>(</mo> <mn>1</mn> <mo>+</mo> <munderover> <mi>&Sigma;</mi> <mrow> <mi>i</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>M</mi> </munderover> <mi>min</mi> <mrow> <mo>(</mo> <msup> <mrow> <mo>|</mo> <mi>&Theta;</mi> <mrow> <mo>(</mo> <mi>i</mi> <mo>)</mo> </mrow> <mo>|</mo> </mrow> <mn>2</mn> </msup> <mo>,</mo> <mn>1</mn> <mo>)</mo> </mrow> <mo>)</mo> </mrow> <mo>,</mo> </mrow> </math>
wherein μ = max_{i≠j} |⟨S′_i, S′_j⟩| is the mutual coherence of S′, and S′_i and S′_j are the i-th and j-th column vectors of S′, respectively.
And (3) proving that: from time to time, generally, let 1, | Θ (1) | ≧ Θ (2) | ≧ Θ. Order to <math> <mrow> <mi>a</mi> <mo>=</mo> <msup> <mi>R</mi> <mo>&prime;</mo> </msup> <mo>-</mo> <msup> <mi>S</mi> <mo>&prime;</mo> </msup> <mover> <mi>&Theta;</mi> <mo>^</mo> </mover> <mo>.</mo> </mrow> </math> Defining eventsThe standard range on the gaussian tail probability indicates:
That is, event A occurs with at least the probability given by this bound. In what follows, we assume that event A occurs.
Let k_0 = |{i : |Θ(i)| ≥ 1}|, β_1 = {Θ(i): i = 1, 2, …, k_0}, β_2 = {Θ(i): i = k_0 + 1, …, K}, and Θ = β_1 + β_2. Obviously, when i > k_0, |Θ(i)| < 1. Therefore,
<math> <mrow> <msub> <mrow> <mo>|</mo> <mo>|</mo> <msub> <mi>&beta;</mi> <mn>2</mn> </msub> <mo>|</mo> <mo>|</mo> </mrow> <mn>1</mn> </msub> <mo>=</mo> <munderover> <mi>&Sigma;</mi> <mrow> <mi>i</mi> <mo>=</mo> <msub> <mi>k</mi> <mn>0</mn> </msub> <mo>+</mo> <mn>1</mn> </mrow> <mi>K</mi> </munderover> <mo>|</mo> <mi>&Theta;</mi> <mrow> <mo>(</mo> <mi>i</mi> <mo>)</mo> </mrow> <mo>|</mo> <mo>&lt;</mo> <mi>K</mi> <mo>-</mo> <msub> <mi>k</mi> <mn>0</mn> </msub> </mrow> </math>
<math> <mrow> <msub> <mrow> <mo>|</mo> <mo>|</mo> <msub> <mi>&beta;</mi> <mn>2</mn> </msub> <mo>|</mo> <mo>|</mo> </mrow> <mn>2</mn> </msub> <mo>=</mo> <msqrt> <munderover> <mi>&Sigma;</mi> <mrow> <mi>i</mi> <mo>=</mo> <msub> <mi>k</mi> <mn>0</mn> </msub> <mo>+</mo> <mn>1</mn> </mrow> <mi>K</mi> </munderover> <msup> <mrow> <mo>|</mo> <mi>&Theta;</mi> <mrow> <mo>(</mo> <mi>i</mi> <mo>)</mo> </mrow> <mo>|</mo> </mrow> <mn>2</mn> </msup> </msqrt> <mo>=</mo> <msqrt> <munderover> <mi>&Sigma;</mi> <mrow> <mi>i</mi> <mo>=</mo> <msub> <mi>k</mi> <mn>0</mn> </msub> <mo>+</mo> <mn>1</mn> </mrow> <mi>K</mi> </munderover> <mi>min</mi> <mrow> <mo>(</mo> <msup> <mrow> <mo>|</mo> <mi>&Theta;</mi> <mrow> <mo>(</mo> <mi>i</mi> <mo>)</mo> </mrow> <mo>|</mo> </mrow> <mn>2</mn> </msup> <mo>,</mo> <mn>1</mn> <mo>)</mo> </mrow> </msqrt> <mo>&le;</mo> <msqrt> <msub> <mi>k</mi> <mn>0</mn> </msub> </msqrt> </mrow> </math>
First, we prove that β_1 is a feasible solution to equation (8). In fact, for the i-th (1 ≤ i ≤ M) column vector S′_i of S′, we have:
By the results of D. L. Donoho on stable recovery of sparse decompositions in the presence of noise, we have
<math> <mrow> <mo>|</mo> <mo>&lt;</mo> <msubsup> <mi>S</mi> <mi>i</mi> <mo>&prime;</mo> </msubsup> <mo>,</mo> <msup> <mi>S</mi> <mo>&prime;</mo> </msup> <msub> <mi>&beta;</mi> <mn>1</mn> </msub> <mo>-</mo> <msup> <mi>R</mi> <mo>&prime;</mo> </msup> <mo>></mo> <mo>|</mo> <mo>&le;</mo> <msqrt> <mn>2</mn> <mi>log</mi> <mi>M</mi> </msqrt> <mo>+</mo> <mfrac> <mn>3</mn> <mn>2</mn> </mfrac> <mo>&le;</mo> <msqrt> <mn>2</mn> <mi>log</mi> <mi>M</mi> </msqrt> </mrow> </math>
The above formula shows that β_1 is a feasible solution to equation (8). Thus, by L. Wang's "Stable recovery of sparse signals and an oracle inequality", we obtain
<math> <mrow> <msub> <mrow> <mo>|</mo> <mo>|</mo> <mrow> <mover> <mi>&Theta;</mi> <mo>^</mo> </mover> <mo>-</mo> <msub> <mi>&beta;</mi> <mn>1</mn> </msub> </mrow> <mo>|</mo> <mo>|</mo> </mrow> <mn>2</mn> </msub> <mo>&le;</mo> <mfrac> <mrow> <mn>2</mn> <msqrt> <mn>2</mn> </msqrt> <mrow> <mo>(</mo> <msqrt> <mn>2</mn> <mi>log</mi> <mi>M</mi> </msqrt> <mo>+</mo> <mn>2</mn> <mo>)</mo> </mrow> <msqrt> <msub> <mi>k</mi> <mn>0</mn> </msub> </msqrt> </mrow> <mrow> <mn>1</mn> <mo>-</mo> <mrow> <mo>(</mo> <msub> <mrow> <mn>2</mn> <mi>k</mi> </mrow> <mn>0</mn> </msub> <mo>-</mo> <mn>1</mn> <mo>)</mo> </mrow> <mi>&mu;</mi> </mrow> </mfrac> <mo>-</mo> <msqrt> <msub> <mi>k</mi> <mn>0</mn> </msub> </msqrt> </mrow> </math>
Then we have
<math> <mrow> <msub> <mrow> <mo>|</mo> <mo>|</mo> <mrow> <mover> <mi>&Theta;</mi> <mo>^</mo> </mover> <mo>-</mo> <mi>&Theta;</mi> </mrow> <mo>|</mo> <mo>|</mo> </mrow> <mn>2</mn> </msub> <mo>&le;</mo> <msub> <mrow> <mo>|</mo> <mo>|</mo> <mover> <mi>&Theta;</mi> <mo>^</mo> </mover> <mo>-</mo> <msub> <mi>&beta;</mi> <mn>1</mn> </msub> <mo>|</mo> <mo>|</mo> </mrow> <mn>2</mn> </msub> <mo>+</mo> <msub> <mrow> <mo>|</mo> <mo>|</mo> <msub> <mi>&beta;</mi> <mn>2</mn> </msub> <mo>|</mo> <mo>|</mo> </mrow> <mn>2</mn> </msub> <mfrac> <mrow> <mn>2</mn> <msqrt> <mn>2</mn> </msqrt> <mrow> <mo>(</mo> <msqrt> <mn>2</mn> <mi>log</mi> <mi>M</mi> </msqrt> <mo>+</mo> <mn>2</mn> <mo>)</mo> </mrow> <msqrt> <msub> <mi>k</mi> <mn>0</mn> </msub> </msqrt> </mrow> <mrow> <mn>1</mn> <mo>-</mo> <mrow> <mo>(</mo> <msub> <mrow> <mn>2</mn> <mi>k</mi> </mrow> <mn>0</mn> </msub> <mo>-</mo> <mn>1</mn> <mo>)</mo> </mrow> <mi>&mu;</mi> </mrow> </mfrac> </mrow> </math>
Thus the recovery error satisfies
<math> <mrow> <msubsup> <mrow> <mo>|</mo> <mo>|</mo> <mover> <mi>&Theta;</mi> <mo>^</mo> </mover> <mo>-</mo> <mi>&Theta;</mi> <mo>|</mo> <mo>|</mo> </mrow> <mn>2</mn> <mn>2</mn> </msubsup> <mo>&le;</mo> <mfrac> <mrow> <mn>8</mn> <msup> <mrow> <mo>(</mo> <msqrt> <mn>2</mn> <mi>log</mi> <mi>M</mi> </msqrt> <mo>+</mo> <mn>2</mn> <mo>)</mo> </mrow> <mn>2</mn> </msup> <msqrt> <msub> <mi>k</mi> <mn>0</mn> </msub> </msqrt> </mrow> <msup> <mrow> <mrow> <mo>(</mo> <mn>1</mn> <mo>-</mo> <mrow> <mo>(</mo> <mrow> <mn>2</mn> <mi>K</mi> </mrow> <mo>-</mo> <mn>1</mn> <mo>)</mo> </mrow> <mi>&mu;</mi> <mo>)</mo> </mrow> </mrow> <mn>2</mn> </msup> </mfrac> <mo>&le;</mo> <mfrac> <mrow> <mn>8</mn> <msup> <mrow> <mo>(</mo> <msqrt> <mn>2</mn> <mi>log</mi> <mi>M</mi> </msqrt> <mo>+</mo> <mn>2</mn> <mo>)</mo> </mrow> <mn>2</mn> </msup> </mrow> <msup> <mrow> <mo>(</mo> <mn>1</mn> <mo>-</mo> <mrow> <mo>(</mo> <mn>2</mn> <mi>K</mi> <mo>-</mo> <mn>1</mn> <mo>)</mo> </mrow> <mo>)</mo> </mrow> <mn>2</mn> </msup> </mfrac> <mrow> <mo>(</mo> <mn>1</mn> <mo>+</mo> <munderover> <mi>&Sigma;</mi> <mrow> <mi>i</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>M</mi> </munderover> <mrow> <mo>(</mo> <msup> <mrow> <mo>|</mo> <mi>&Theta;</mi> <mrow> <mo>(</mo> <mi>i</mi> <mo>)</mo> </mrow> <mo>|</mo> </mrow> <mn>2</mn> </msup> <mo>,</mo> <mi>i</mi> <mo>)</mo> </mrow> <mo>)</mo> </mrow> <mo>.</mo> </mrow> </math>
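The theorem-two bound is straightforward to evaluate numerically. The helper below (a hypothetical name, not from the patent) plugs concrete M, K, the coherence μ, and Θ into the right-hand side; it is only meaningful when (2K−1)·μ < 1, as the theorem requires.

```python
import math

def recovery_error_bound(M, K, mu, theta):
    """Right-hand side of the theorem-two recovery-error bound."""
    if (2 * K - 1) * mu >= 1:
        raise ValueError("bound requires (2K - 1) * mu < 1")
    lead = 8 * (math.sqrt(2 * math.log(M)) + 2) ** 2 / (1 - (2 * K - 1) * mu) ** 2
    tail = 1 + sum(min(t * t, 1.0) for t in theta)
    return lead * tail

# example: 30 links, 2 targets, coherence 0.1 (illustrative values)
bound = recovery_error_bound(30, 2, 0.1, [1.0, 1.0])
```

As expected, the bound tightens as the coherence μ of the migrated matrix shrinks and loosens as log M grows.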
In summary, the passive positioning method based on spatial migration compressed sensing provided by the present invention is supported by the above theoretical basis and is realizable.
Evaluation of Invention Performance
We evaluate the present invention from three aspects: positioning performance, manpower consumption, and communication overhead.
Positioning performance: fig. 7 shows the cumulative distribution function (CDF) of the positioning error when the link length migrates from 4 m to 12 m. CS w/o Trans means that sensing matrices are constructed separately for the positioning areas with link lengths of 4 m and 12 m and positioning is performed directly with the compressed sensing method. RTI is a method that positions using radio tomographic imaging, where RTI w/Trans denotes with migration and RTI w/o Trans without migration; RASS is a learning-based method that positions using a support vector machine, where RASS w/Trans denotes with migration and RASS w/o Trans without migration. Comparing the TCL method proposed by the present invention with the other methods, the results show that the performance of TCL is close to CS w/o Trans, with positioning errors of 0.87 m and 1.23 m at the 50th and 80th percentiles, respectively. Compared with no migration, migration improves the positioning error of the RTI and RASS methods at the 80th percentile by 58% and 66%, respectively, which shows that a sensing matrix obtained in advance can, after the area of the positioning region changes, be reused in the new monitoring area through migration.
Manpower consumption: typically, the RSS measurements for a new monitored area must be obtained manually. We use the time spent before deployment to gauge manpower consumption. The positioning area is divided into grids with a side length of 0.5 m, and 100 measurements are collected consecutively in each grid, each measurement taking 1.5 s. Thus for the 4 m × 4 m and 12 m × 12 m areas, the time cost of constructing the sensing matrix is at least 2.67 and 24 hours, respectively. Fig. 8 compares the time consumption for three different link-length migrations: manpower consumption is reduced by 41% when the link length migrates from 3 m to 6 m, by 88% from 4 m to 12 m, and by 93% from 3 m to 12 m. Therefore, the migration method provided by the invention can greatly reduce the manpower consumption of retraining the sensing matrix when the positioning area changes.
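The time costs quoted above follow directly from the stated parameters (0.5 m grids, 100 measurements per grid, 1.5 s per measurement); a quick check:

```python
def training_hours(side_m, grid_m=0.5, samples=100, t_sample_s=1.5):
    """Pre-deployment time to fingerprint a square side_m x side_m area."""
    grids = (side_m / grid_m) ** 2          # number of grid cells
    return grids * samples * t_sample_s / 3600.0

h4 = training_hours(4.0)     # 4 m x 4 m   -> 64 grids  -> 2.67 h
h12 = training_hours(12.0)   # 12 m x 12 m -> 576 grids -> 24 h
```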
Communication overhead: the energy consumption of TCL, RASS w/Trans, and RTI w/Trans is compared by increasing the number of links until the positioning accuracy reaches a fixed value. According to a first-order radio model, the energy consumed by each packet on a link is calculated as E_radio = e_l·B·b² + 2·B·E_elc, where B is the packet size in bits, b is the link length, e_l = 100 pJ/(bit/m²), and E_elc = 50 nJ/bit. In the experiment, B = 320 bits, b = 12 m, and 100 packets are transmitted each time, so the energy consumption of M links is M × 3.66 mJ. The number of links required to achieve the same positioning accuracy differs between methods. Fig. 9 compares the energy consumption of the methods under different positioning errors. When the positioning error is less than 1 m, the energy consumption of TCL, RASS w/Trans, and RTI w/Trans is 18.3 mJ, 47.59 mJ, and 54.91 mJ, respectively, indicating that the RTI and RASS methods require more measurements than TCL; the TCL method therefore reduces the energy consumption caused by communication overhead.
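The per-link energy figure above can be reproduced from the first-order radio model with the stated constants; the exact grouping of terms, E_radio = e_l·B·b² + 2·B·E_elc, is an assumption consistent with the quoted M × 3.66 mJ figure:

```python
def packet_energy_j(B_bits=320, b_m=12.0, e_l=100e-12, E_elc=50e-9):
    """First-order radio model: transmit + electronics energy per packet."""
    return e_l * B_bits * b_m ** 2 + 2 * B_bits * E_elc

per_link_mj = packet_energy_j() * 100 * 1e3   # 100 packets, in millijoules
```

This gives about 3.66 mJ per link, so the quoted 18.3 mJ for TCL corresponds to five links.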
In summary, the embodiment of the present invention provides a passive positioning method based on spatial migration compressed sensing, in which a sensing matrix of a sample region and a measurement vector of a monitoring region are migrated, and a positioning method of compressed sensing is used to determine position information of a target in the monitoring region, so that human consumption and communication overhead caused by reconstruction of the sensing matrix of the region to be monitored are avoided, and feasibility of realizing positioning of different regions by using compressed sensing is improved.
It should be noted that: the embodiment described above is only one illustration of the practical application of the migration-based passive positioning method; the method may also be used in other application scenarios according to practical needs, and the specific implementation process is similar to the above embodiment and is not repeated here.
The serial numbers in the above embodiments are merely for description, and do not represent the sequence of the assembly or the use of the components.
The above description is only exemplary of the present invention and should not be taken as limiting the invention, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A passive positioning method based on space migration compressed sensing is characterized in that the passive positioning method based on space migration compressed sensing comprises the following steps:
respectively deploying sensor nodes in a sample region and a region to be monitored;
step two, acquiring RSS matrixes at reference positions in a sample area and an area to be monitored through the sensor nodes;
thirdly, obtaining a migration function according to the RSS matrixes of the sample region and the region to be monitored;
step four, collecting sample RSS values in a sample area through the sensor nodes, and combining the sample RSS values into a sensing matrix;
acquiring positioning RSS values in an area to be monitored through the sensor nodes, and combining the positioning RSS values into measurement vectors;
step six, migrating the sensing matrix of the sample area and the measurement vector in the area to be monitored according to the migration function to obtain a migrated sensing matrix and a migrated measurement vector;
seventhly, when the area of the sample area is smaller than that of the area to be monitored, carrying out grid interpolation processing on the migrated sensing matrix to obtain a migrated high-resolution sensing matrix;
and step eight, recovering the position of the target by using a compressed sensing theory according to the migrated sensing matrix and the migrated measurement vector.
2. The passive positioning method based on spatial migration compressed sensing according to claim 1, further comprising:
when the area of the sample region is not smaller than the area of the region to be monitored, after step six is completed, step eight is directly performed.
3. The passive positioning method based on spatial migration compressed sensing according to claim 1, wherein the deploying sensor nodes in the sample region and the region to be monitored respectively comprises:
the area size l × a of the sample region is set, the area size of the region to be monitored is u × b, the length of a wireless link formed by nodes deployed in the sample region is l, the length of a wireless link formed by nodes deployed in the region to be monitored is u, l is not equal to u, and the number of links deployed in the sample region and the region to be monitored is M.
4. The passive positioning method based on spatial migration compressed sensing according to claim 1, wherein the acquiring, by the sensor node, RSS matrices at reference positions in a sample area and an area to be monitored comprises:
firstly, dividing the sample area and the area to be monitored into N square grids, and then selecting reference position points in the sample area and the area to be monitored, represented by 1, 2, …, n and 1′, 2′, …, n′ respectively, wherein n is equal to or less than n′.
Then, with the target standing successively on the selected grids in the sample area and the area to be monitored, the RSS matrix s^l of the sample area and the RSS matrix s^u of the area to be monitored are measured respectively, wherein:
<math> <mrow> <msup> <mi>s</mi> <mi>l</mi> </msup> <mo>=</mo> <mrow> <mo>(</mo> <msubsup> <mi>s</mi> <mn>11</mn> <mi>l</mi> </msubsup> <mo>,</mo> <mo>&CenterDot;</mo> <mo>&CenterDot;</mo> <mo>&CenterDot;</mo> <mo>,</mo> <msubsup> <mi>s</mi> <mrow> <mi>M</mi> <mn>1</mn> </mrow> <mi>l</mi> </msubsup> <mo>,</mo> <mo>&CenterDot;</mo> <mo>&CenterDot;</mo> <mo>&CenterDot;</mo> <mo>,</mo> <msubsup> <mi>s</mi> <mrow> <mn>1</mn> <mi>j</mi> </mrow> <mi>l</mi> </msubsup> <mo>,</mo> <mo>&CenterDot;</mo> <mo>&CenterDot;</mo> <mo>&CenterDot;</mo> <mo>,</mo> <msubsup> <mi>s</mi> <mi>Mj</mi> <mi>l</mi> </msubsup> <mo>,</mo> <mo>&CenterDot;</mo> <mo>&CenterDot;</mo> <mo>&CenterDot;</mo> <mo>,</mo> <msubsup> <mi>s</mi> <mi>ln</mi> <mi>l</mi> </msubsup> <mo>,</mo> <mo>&CenterDot;</mo> <mo>&CenterDot;</mo> <mo>&CenterDot;</mo> <mo>,</mo> <msubsup> <mi>s</mi> <mi>Mn</mi> <mi>l</mi> </msubsup> <mo>)</mo> </mrow> <mo>,</mo> </mrow> </math>
<math> <mrow> <msup> <mi>s</mi> <mi>u</mi> </msup> <mo>=</mo> <mrow> <mo>(</mo> <msubsup> <mi>s</mi> <mrow> <mn>1</mn> <msup> <mn>1</mn> <mo>&prime;</mo> </msup> </mrow> <mi>u</mi> </msubsup> <mo>,</mo> <mo>&CenterDot;</mo> <mo>&CenterDot;</mo> <mo>&CenterDot;</mo> <mo>,</mo> <msubsup> <mi>s</mi> <mrow> <mi>M</mi> <msup> <mn>1</mn> <mo>&prime;</mo> </msup> </mrow> <mi>u</mi> </msubsup> <mo>,</mo> <mo>&CenterDot;</mo> <mo>&CenterDot;</mo> <mo>&CenterDot;</mo> <mo>,</mo> <msubsup> <mi>s</mi> <mrow> <mn>1</mn> <msup> <mi>j</mi> <mo>&prime;</mo> </msup> </mrow> <mi>u</mi> </msubsup> <mo>,</mo> <mo>&CenterDot;</mo> <mo>&CenterDot;</mo> <mo>&CenterDot;</mo> <mo>,</mo> <msubsup> <mi>s</mi> <mrow> <mi>M</mi> <msup> <mi>j</mi> <mo>&prime;</mo> </msup> </mrow> <mi>u</mi> </msubsup> <mo>,</mo> <mo>&CenterDot;</mo> <mo>&CenterDot;</mo> <mo>&CenterDot;</mo> <mo>,</mo> <msubsup> <mi>s</mi> <mrow> <mi>l</mi> <msup> <mi>n</mi> <mo>&prime;</mo> </msup> </mrow> <mi>u</mi> </msubsup> <mo>,</mo> <mo>&CenterDot;</mo> <mo>&CenterDot;</mo> <mo>&CenterDot;</mo> <mo>,</mo> <msubsup> <mi>s</mi> <mrow> <mi>M</mi> <msup> <mi>n</mi> <mo>&prime;</mo> </msup> </mrow> <mi>u</mi> </msubsup> <mo>)</mo> </mrow> <mo>,</mo> </mrow> </math>
and s_ij = {s_ij(1), …, s_ij(q), …, s_ij(Q)}^T represents the Q consecutive RSS values of the i-th link when the target is in the j-th grid.
5. The passive localization method based on spatial migration compressed sensing according to claim 1, wherein obtaining a migration function according to the RSS matrix of the sample region and the region to be monitored comprises:
the RSS matrix s according to the sample regionlAnd the RSS matrix s of the area to be monitoreduA 1 is tolAnd slRespectively projecting to obtain yl=Wsl,yu=WsuLet x be (x)l,xu),y=(yl,yu) Constructing the function y ═ Wx, such that ylAnd yuAre as similar as possible in the projection space, i.e. are
F (W) is measurement ylDistribution p ofl(y) and yuDistribution p ofu(y) an optimization function of the distance between;
distributing the projection to a distance measurement function DW(pl||pu) Substituting into formula (1) to obtain
I.e. a migration function.
6. The passive localization method based on spatial migration compressed sensing of claim 1, wherein the collecting, by the sensor nodes, sample RSS values in a sample area, combining the sample RSS values into a sensing matrix, comprises:
let the target measure the RSS values at each grid to obtain a perception matrix at all grids of the sample area
<math> <mrow> <msub> <mi>S</mi> <mrow> <mi>M</mi> <mo>&times;</mo> <mi>N</mi> <mo>&times;</mo> <mi>Q</mi> </mrow> </msub> <mo>=</mo> <mfenced open='[' close=']'> <mtable> <mtr> <mtd> <msub> <mi>S</mi> <mn>11</mn> </msub> </mtd> <mtd> <mo>.</mo> <mo>.</mo> <mo>.</mo> </mtd> <mtd> <msub> <mi>S</mi> <mrow> <mi>M</mi> <mn>1</mn> </mrow> </msub> </mtd> </mtr> <mtr> <mtd> <mo>.</mo> </mtd> <mtd> </mtd> <mtd> <mo>.</mo> </mtd> </mtr> <mtr> <mtd> <mo>.</mo> </mtd> <mtd> <msub> <mi>S</mi> <mi>ij</mi> </msub> </mtd> <mtd> <mo>.</mo> </mtd> </mtr> <mtr> <mtd> <mo>.</mo> </mtd> <mtd> </mtd> <mtd> <mo>.</mo> </mtd> </mtr> <mtr> <mtd> <msub> <mi>S</mi> <mrow> <mi>M</mi> <mn>1</mn> </mrow> </msub> </mtd> <mtd> <mo>.</mo> <mo>.</mo> <mo>.</mo> </mtd> <mtd> <msub> <mi>S</mi> <mi>MN</mi> </msub> </mtd> </mtr> </mtable> </mfenced> <mo>,</mo> </mrow> </math>
wherein s_ij = {s_ij(1), …, s_ij(q), …, s_ij(Q)}^T.
7. The passive positioning method based on spatial migration compressed sensing according to claim 1, wherein the step of collecting positioning RSS values in an area to be monitored through the sensor nodes and combining the positioning RSS values into a measurement vector comprises:
recording the RSS value of each link when the target is in the area to be monitored to obtain the measurement vector R_{M×1×Q} = [r_1, …, r_i, …, r_M]^T, wherein r_i = {r_i(1), …, r_i(q), …, r_i(Q)}^T.
8. The passive localization method based on spatial migration compressed sensing according to claim 1, wherein the migrating the sensing matrix of the sample region and the measurement vector in the region to be monitored according to the migration function to obtain a migrated sensing matrix and a migrated measurement vector comprises:
respectively multiplying the perception matrix and the measurement vector by a transfer function W to obtain a transferred perception matrix <math> <mrow> <msubsup> <mi>S</mi> <mrow> <mi>M</mi> <mo>&times;</mo> <mi>N</mi> <mo>&times;</mo> <mi>Q</mi> </mrow> <mo>&prime;</mo> </msubsup> <mo>=</mo> <mrow> <mo>(</mo> <msubsup> <mi>Ws</mi> <mi>ij</mi> <mi>l</mi> </msubsup> <mo>)</mo> </mrow> <mo>,</mo> <mi>i</mi> <mo>&Element;</mo> <mo>[</mo> <mn>1</mn> <mo>,</mo> <mi>M</mi> <mo>]</mo> <mo>,</mo> <mi>j</mi> <mo>&Element;</mo> <mo>[</mo> <mn>1</mn> <mo>,</mo> <mi>M</mi> <mo>]</mo> </mrow> </math> And the measurement vector after migration <math> <mrow> <msubsup> <mi>R</mi> <mrow> <mi>M</mi> <mo>&times;</mo> <mn>1</mn> <mo>&times;</mo> <mi>Q</mi> </mrow> <mo>&prime;</mo> </msubsup> <mo>=</mo> <mrow> <mo>(</mo> <msubsup> <mi>Wr</mi> <mi>i</mi> <mi>u</mi> </msubsup> <mo>)</mo> </mrow> <mo>,</mo> <mi>i</mi> <mo>&Element;</mo> <mo>[</mo> <mn>1</mn> <mo>,</mo> <mi>M</mi> <mo>]</mo> <mo>;</mo> </mrow> </math>
And performing dimensionality reduction processing on the migrated sensing matrix and the migrated measurement vector to obtain a dimensionality reduced sensing matrix and a dimensionality reduced measurement vector.
9. The passive localization method based on spatial migration compressed sensing according to claim 1, wherein when the area of the sample region is smaller than the area of the region to be monitored, grid interpolation is performed on the sample region to obtain a migrated high-resolution sensing matrix, including:
firstly, when the area l × a of the sample region is smaller than the area u × b of the region to be monitored and the number of grids is N, the grid side length of the sample region is ω_l and the grid side length of the region to be monitored is ω_u;
secondly, dividing each grid in the area to be monitored into sub-grids of side length ω_l, so that each grid contains (ω_u/ω_l)² sub-grids;
And finally, selecting a grid where the sub-grid i 'is located and 8 adjacent grids closest to the grid, wherein 9 grids form an adjacent grid, and obtaining the RSS value of the sub-grid i' through interpolation so as to obtain the high-resolution sensing matrix after the migration.
10. The passive positioning method based on spatial migration compressed sensing according to claim 1, wherein the recovering the position of the target by using a compressed sensing theory according to the sensing matrix after migration and the measurement vector after migration comprises:
the position vector θ is obtained by using a compressed sensing reconstruction algorithm (the l_1-minimization algorithm):
wherein (·)^† is a pseudo-inverse operator and c > 0 is a constant; the positioning of the target in the area to be monitored is completed by obtaining θ, and
θ=[θ1,…,θj,…θN]T
wherein θ_j ∈ {0,1}: θ_j = 1 when there is a target on the j-th grid, and θ_j = 0 otherwise.
CN201510157843.7A 2015-04-03 2015-04-03 A kind of passive type localization method based on spatial migration compressed sensing Active CN104898089B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510157843.7A CN104898089B (en) 2015-04-03 2015-04-03 A kind of passive type localization method based on spatial migration compressed sensing

Publications (2)

Publication Number Publication Date
CN104898089A true CN104898089A (en) 2015-09-09
CN104898089B CN104898089B (en) 2017-07-28

Family

ID=54030854

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510157843.7A Active CN104898089B (en) 2015-04-03 2015-04-03 A kind of passive type localization method based on spatial migration compressed sensing

Country Status (1)

Country Link
CN (1) CN104898089B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106199514A (en) * 2016-07-13 2016-12-07 西北大学 A kind of passive type indoor orientation method based on the change of fingerprint adaptive environment
CN106454750A (en) * 2016-11-23 2017-02-22 湖南大学 Multi-region indoor safety positioning method based on compressed sensing technology
CN108871329A (en) * 2017-12-19 2018-11-23 北京邮电大学 A kind of indoor orientation method, device, electronic equipment and storage medium
CN110022527A (en) * 2019-04-10 2019-07-16 中国人民解放军陆军工程大学 Compressed sensing passive target positioning method based on measured value quantization
CN110568445A (en) * 2019-08-30 2019-12-13 浙江大学 Laser radar and vision fusion perception method of lightweight convolutional neural network
US11943834B2 (en) 2020-03-31 2024-03-26 Samsung Electronics Co., Ltd. Electronic device and control method thereof

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103702282A * 2013-12-04 2014-04-02 Northwest University Multi-category multi-target device-free localization method based on migration compressive sensing

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103702282A * 2013-12-04 2014-04-02 Northwest University Multi-category multi-target device-free localization method based on migration compressive sensing

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JU WANG: "LCS: Compressive Sensing based Device-Free Localization for Multiple Targets in Sensor Networks", 2013 Proceedings IEEE INFOCOM *
LIQIONG CHANG: "NDP – A Novel Device-Free Localization Method With Little Efforts", International Symposium on Information Processing in Sensor Networks *
NEAL PATWARI AND JOEY WILSON: "RF Sensor Networks for Device-Free Localization: Measurements, Models, and Algorithms", Proceedings of the IEEE *
MA GUORONG: "Research and Implementation of RSS-Based Device-Free Passive Localization in Wireless Sensor Networks", China Master's Theses Full-Text Database, Information Science and Technology *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106199514A * 2016-07-13 2016-12-07 Northwest University Device-free indoor localization method based on fingerprints adaptive to environmental changes
CN106199514B * 2016-07-13 2019-03-29 Northwest University Device-free indoor localization method based on fingerprints adaptive to environmental changes
CN106454750A * 2016-11-23 2017-02-22 Hunan University Multi-region indoor secure localization method based on compressive sensing
CN106454750B * 2016-11-23 2019-05-17 Hunan University Multi-region indoor secure localization method based on compressive sensing
CN108871329A * 2017-12-19 2018-11-23 Beijing University of Posts and Telecommunications Indoor positioning method and apparatus, electronic device, and storage medium
CN108871329B * 2017-12-19 2020-07-28 Beijing University of Posts and Telecommunications Indoor positioning method and apparatus, electronic device, and storage medium
CN110022527A * 2019-04-10 2019-07-16 Army Engineering University of PLA Compressive sensing device-free target localization method based on quantized measurements
CN110568445A * 2019-08-30 2019-12-13 Zhejiang University LiDAR and vision fusion perception method based on a lightweight convolutional neural network
US11943834B2 (en) 2020-03-31 2024-03-26 Samsung Electronics Co., Ltd. Electronic device and control method thereof

Also Published As

Publication number Publication date
CN104898089B (en) 2017-07-28

Similar Documents

Publication Publication Date Title
CN104898089B (en) Device-free localization method based on spatial migration compressive sensing
CN111510157A (en) Quantum error correction decoding method, device and chip based on neural network
CN107563422A Polarimetric SAR classification method based on semi-supervised convolutional neural networks
CN110346654B (en) Electromagnetic spectrum map construction method based on common kriging interpolation
CN107798383B Improved localization method based on kernel extreme learning machine
CN106327345A (en) Social group discovering method based on multi-network modularity
CN103455612B (en) Based on two-stage policy non-overlapped with overlapping network community detection method
CN106940895B (en) Estimation method of degradation function applied to wireless tomography system
CN110705029A (en) Flow field prediction method of oscillating flapping wing energy acquisition system based on transfer learning
Bakirtzis et al. DeepRay: Deep learning meets ray-tracing
Zhao et al. Fast decentralized gradient descent method and applications to in-situ seismic tomography
Liu et al. Research note—A robustness assessment of global city network connectivity rankings
CN107392863A (en) SAR image change detection based on affine matrix fusion Spectral Clustering
CN115859805A (en) Self-adaptive sequential test design method and device based on mixed point adding criterion
US20230386098A1 (en) Three-dimensional spectrum situation completion method and device based on generative adversarial network
Bowman et al. Emulation of multivariate simulators using thin-plate splines with application to atmospheric dispersion
CN117609770B (en) Electromagnetic spectrum map construction method and system based on variogram structure
CN111541572A (en) Accurate reconstruction method of random opportunity network graph under low constraint condition
CN109033181B (en) Wind field geographic numerical simulation method for complex terrain area
CN106503386A Method and device for evaluating the performance of optical power prediction algorithms
US11544425B2 (en) Systems and methods for expediting design of physical components through use of computationally efficient virtual simulations
US11295046B2 (en) Systems and methods for expediting design of physical components through use of computationally efficient virtual simulations
CN109088796B (en) Network flow matrix prediction method based on network tomography technology
Lin et al. Spatially clustered varying coefficient model
Zhang et al. Dynamic structure evolution of time-dependent network

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant