CN113093162B - Personnel trajectory tracking system based on AIOT and video linkage - Google Patents

Personnel trajectory tracking system based on AIOT and video linkage

Info

Publication number
CN113093162B
CN113093162B (application CN202110397706.6A)
Authority
CN
China
Prior art keywords
node
image
nodes
reference node
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110397706.6A
Other languages
Chinese (zh)
Other versions
CN113093162A (en
Inventor
徐乙馨 (Xu Yixin)
徐致远 (Xu Zhiyuan)
沈昀 (Shen Yun)
Current Assignee
Guoneng Smart Technology Development Jiangsu Co ltd
Original Assignee
Guoneng Smart Technology Development Jiangsu Co ltd
Priority date
Filing date
Publication date
Application filed by Guoneng Smart Technology Development Jiangsu Co ltd filed Critical Guoneng Smart Technology Development Jiangsu Co ltd
Priority to CN202110397706.6A priority Critical patent/CN113093162B/en
Publication of CN113093162A publication Critical patent/CN113093162A/en
Application granted granted Critical
Publication of CN113093162B publication Critical patent/CN113093162B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S11/00Systems for determining distance or velocity not using reflection or reradiation
    • G01S11/02Systems for determining distance or velocity not using reflection or reradiation using radio waves
    • G01S11/06Systems for determining distance or velocity not using reflection or reradiation using radio waves using intensity measurements
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a personnel trajectory tracking system based on AIOT and video linkage, comprising: a node setting module for setting reference nodes and the blind node to be positioned; an environment attenuation calculation module for obtaining the unobstructed environment attenuation index between the blind node and the reference nodes, analyzing the scene image to obtain pedestrian key points, vehicle key points and the first reference nodes, obtaining an environment attenuation compensation index n' from the occlusion of the propagation path by each vehicle contour, and calculating the distance from the blind node to each first reference node from the ranging model; a spatial matrix acquisition module for generating a reference node matrix from the reference node image, obtaining a first positioning range image and a second positioning range image from the pedestrian key point, and performing pixel assignment to obtain a spatial feature matrix; and a distance compensation module for analyzing the spatial feature matrix to obtain distance compensation values and obtaining the accurate coordinates of the blind node with a weighted centroid algorithm combined with the previously obtained distances d.

Description

Personnel trajectory tracking system based on AIOT and video linkage
Technical Field
The application relates to the field of wireless positioning, in particular to a personnel trajectory tracking system based on AIOT and video linkage.
Background
With the continuous development of wireless communication technology, short-range wireless technologies such as Bluetooth and ZigBee have matured; they feature low power consumption and easy integration. AIOT (the artificial-intelligence Internet of Things) is now rising rapidly, and AIOT needs to collect the positioning information of terminals for integrated data processing, so accurate positioning in a small area with short-range wireless communication technology is essential to AIOT.
The prior art generally estimates the distances from the wireless transceiver to be positioned to several reference transceivers via RSSI (received signal strength indication) and positions the transceiver from those distances. Positioning with RSSI is accurate in an obstacle-free environment, but when an obstacle is present the transmitted signal suffers shadow fading, the RSSI changes, and the positioning accuracy becomes unsatisfactory. The RSSI value is also affected by the signal's propagation distance: over a long path the signal suffers multipath fading and path loss, and if the loss along the path is ignored, positioning accuracy is likewise reduced.
Disclosure of Invention
In view of these problems, the invention provides a personnel trajectory tracking system based on AIOT and video linkage. The system comprises: a node setting module for setting reference nodes and the blind node to be positioned; an environment attenuation calculation module for obtaining the unobstructed environment attenuation index n between the blind node and the reference nodes, analyzing the scene image to obtain pedestrian key points, vehicle key points and the first reference nodes, obtaining an environment attenuation compensation index n' from the occlusion of the propagation path by each vehicle contour, and calculating the distance d from the blind node to each first reference node from the RSSI ranging model; a spatial matrix acquisition module for generating a reference node matrix from the reference node image, obtaining a first positioning range image centred on the pedestrian key point, obtaining a second positioning range image by translation, and performing pixel assignment to obtain a spatial feature matrix; and a distance compensation module for analyzing the spatial feature matrix to obtain distance compensation values and obtaining the accurate coordinates of the blind node with a weighted centroid algorithm combined with the previously obtained d.
A person trajectory tracking system based on AIOT and video linkage, the system comprising:
the node setting module is used for setting a reference node and a blind node to be positioned;
the environment attenuation calculation module is used for obtaining an unobstructed environment attenuation index n between the blind node and the reference nodes, analyzing a scene image to obtain pedestrian key points and vehicle key points, selecting the 4 first reference nodes closest to the pedestrian key point, connecting the vehicle key points to obtain vehicle contours, obtaining an environment attenuation compensation index n' from the length over which each vehicle contour occludes a propagation path, the propagation path being the line segment connecting the pedestrian key point and a first reference node, and calculating the distance from the blind node to each first reference node from the RSSI ranging model together with n' and n;
the spatial matrix acquisition module is used for generating a reference node matrix according to a reference node image formed by enclosing of first reference nodes, obtaining a first positioning range image by taking a pedestrian key point as a center, translating the first positioning range image to obtain a second positioning range image, assigning values to pixels in the first positioning range image and the second positioning range image by using a Gaussian kernel, outputting the first positioning range matrix and the second positioning range matrix, and superposing the reference node matrix, the first positioning range matrix and the second positioning range matrix to obtain a spatial feature matrix;
and the distance compensation module is used for analyzing the spatial characteristic matrix to obtain a distance compensation value, and obtaining accurate coordinates of the blind nodes by utilizing a weighted centroid algorithm in combination with the distances from the blind nodes to the first reference nodes.
The reference nodes are distributed in a grid mode, and the blind nodes are carried by pedestrians.
The method for obtaining the unobstructed environment attenuation index is as follows: when the first reference nodes are not occluded, the unobstructed environment attenuation index of the propagation path between every two first reference nodes is calculated from the RSSI ranging model. The i-th first reference node A_i is connected to the pedestrian key point to obtain the propagation path l'_i; the two first reference nodes other than A_i nearest to the pedestrian key point, A_i1 and A_i2, are found and connected to A_i to obtain the line segments A_iA_i1 and A_iA_i2; perpendiculars l_i1 and l_i2 are dropped from the pedestrian key point onto A_iA_i1 and A_iA_i2. The weight of the propagation path A_iA_ij corresponding to the i-th first reference node is then

    w_ij = (1/l_ij) / (1/l_i1 + 1/l_i2)

where j takes the value 1 or 2. The unobstructed environment attenuation index of the propagation path from A_i to the pedestrian key point is

    n_i = w_i1 · n_i1 + w_i2 · n_i2

where n_ij is the unobstructed environment attenuation index of the propagation path A_iA_ij.
The method for acquiring the environment attenuation compensation index is as follows: for the propagation path l'_i between the i-th first reference node and the pedestrian key point,

    n' = Σ_{g=1}^{m_g} l_g · E_g

where m_g is the total number of vehicles in the scene image, l_g is the length over which the g-th vehicle occludes the propagation path l'_i, and E_g is the environment attenuation compensation index per unit occlusion length of the g-th vehicle. Finally, a modified environment attenuation index n″ = n + n' is obtained on the propagation path from each first reference node to the pedestrian key point.
E_g is calculated as follows: all vehicles entering the scene image are numbered; when the g-th vehicle occludes a propagation path l_A between two reference nodes, the environment attenuation index n″_g of l_A is calculated, and from the unobstructed environment attenuation index n_g of l_A and the length l'_A over which the vehicle contour occludes l_A,

    E_g = (n″_g − n_g) / l'_A
The aspect ratio of the first positioning range image is the same as that of the reference node image, and the ratio of the perimeter of the first positioning range image to that of the reference node image is k:

    k = μ · Σ_{g=1}^{m_g} Σ_{i=1}^{4} l_gi

where l_gi is the length over which the g-th vehicle contour occludes the propagation path between the i-th first reference node and the pedestrian key point, μ is a mapping factor, and m_g is the total number of vehicles in the scene image.
The translation of the first positioning range image to obtain the second positioning range image is specifically: the offsets between the pedestrian key point (x_e, y_e) and the reference node image centre (x_0, y_0) are calculated as Δx = x_e − x_0 and Δy = y_e − y_0; the translation amount of the abscissa is kΔx and that of the ordinate is kΔy, and the first positioning range image is translated by these amounts to obtain the second positioning range image.
The assignment of pixels in the first and second positioning range images with the Gaussian kernel is specifically: the size of the Gaussian kernel equals the side length of the longest edge of the first positioning range image; taking the centre points of the two positioning range images as the kernel centres, the values in the Gaussian kernel are assigned to the corresponding pixels of the first and second positioning range images.
The distance compensation module comprises: a two-dimensional convolutional encoder layer for fitting the spatial features in the spatial feature matrix and producing a one-dimensional feature vector after a flattening operation; a fully connected network layer for processing the one-dimensional feature vector to obtain the distance compensation values; and a positioning algorithm layer for calculating the accurate coordinates of the blind node with a weighted centroid algorithm from the compensated distances and the coordinates of the first reference nodes.
The weighted centroid algorithm is:

    x = [ Σ_{i=1}^{4} x_i / (d_i + d'_i) ] / [ Σ_{i=1}^{4} 1 / (d_i + d'_i) ]
    y = [ Σ_{i=1}^{4} y_i / (d_i + d'_i) ] / [ Σ_{i=1}^{4} 1 / (d_i + d'_i) ]

where (x, y) are the accurate coordinates of the blind node, (x_1, y_1), (x_2, y_2), (x_3, y_3), (x_4, y_4) are the coordinates of the first reference nodes, d_1, d_2, d_3, d_4 are the distances from the blind node to the four first reference nodes obtained by the environment attenuation calculation module, and d'_1, d'_2, d'_3, d'_4 are the distance compensation values for each first reference node.
Compared with the prior art, the invention has the following beneficial effects:
(1) The environment attenuation index is weighted according to distances obtained from the image, taking the distance from the blind node to the reference nodes into account and improving positioning accuracy even when no obstacle is present.
(2) Obstacles in the image are recognised by a neural network, and the environment attenuation index is adjusted according to how the obstacle contours occlude the signal propagation paths between nodes. This dynamic adjustment makes the positioning method more adaptable, allowing the index to be tuned under different occlusion conditions.
(3) The pedestrian key point is roughly located from the video, a positioning range matrix is generated by combining the distances from the pedestrian key point to the four nodes of the node area, and a neural network extracts from the matrix the spatial relation between points in the error range and the four nodes to compensate the weighted centroid positioning algorithm, accurately obtaining the blind node's position.
Drawings
Fig. 1 is a system configuration diagram.
Fig. 2 is a diagram of a first reference node bounding region.
FIG. 3 is a vehicle occlusion map.
Fig. 4 is a diagram showing a structure of a distance compensation module.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The first embodiment is as follows:
the invention mainly aims to realize accurate positioning of blind nodes.
In order to realize the content of the invention, the invention designs a person trajectory tracking system based on AIOT and video linkage, and the system structure diagram is shown in FIG. 1.
RSSI is the received signal strength indication. The invention aims to locate a person carrying a signal transceiver in an area with vehicles coming and going; in actual signal propagation, the propagation distance can be determined from the received signal strength. The ranging formula is

    RSSI = A − 10 · n · lg(d / d_0)

where RSSI is the strength of the signal received by the receiving node, d is the distance from the sending node to the receiving node, and d_0 is a reference distance, usually taken as 1 m for computational convenience. A then represents the signal strength value received when the receiving node is 1 m from the sending node. n is the environment attenuation index, which represents the influence of the environment on signal strength and is related to the complexity of the current environment. Once n, A and the RSSI value are known, the inter-node distance d can be calculated from the ranging formula.
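As a quick sanity check, the ranging model just described can be inverted to recover d from a measured RSSI. A minimal sketch (the values of A, n and the RSSI readings are illustrative, not taken from the patent):

```python
def rssi_to_distance(rssi, a=-45.0, n=2.5, d0=1.0):
    """Invert the log-distance model RSSI = A - 10*n*lg(d/d0).

    a  : signal strength (dBm) received at the reference distance d0 (1 m)
    n  : environment attenuation index (larger = lossier environment)
    """
    return d0 * 10 ** ((a - rssi) / (10 * n))

# At the reference distance the model returns d0 itself.
print(rssi_to_distance(-45.0))   # 1.0
# A weaker received signal maps to a larger estimated distance.
print(rssi_to_distance(-70.0))   # 10.0
```

Note that raising n for the same RSSI reading shrinks the estimated distance, which is exactly the correction the modified index n″ applies on occluded paths.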
And a node setting module.
The node setting module is used for receiving blind-node signals. In the actual application scenario, the node to be positioned is called a blind node, and the nodes used to position it are called reference nodes, whose positions are known. The blind node is a wireless signal transceiver carried by a pedestrian. Positioning the blind node requires the distances d from several reference nodes to it; the position jointly determined by these distances is the position of the blind node.
Firstly, setting reference nodes and recording the positions of the reference nodes, wherein the reference nodes are in grid distribution and are sequentially connected to form a grid area.
An environmental attenuation calculation module.
The environment attenuation calculation module is used for calculating a corrected environment attenuation index from the blind node to the reference node according to the video so as to improve the accuracy of RSSI ranging. The invention detects whether the occlusion exists in the environment through the image data, so a camera needs to be arranged. The camera is a wide-angle camera, the shooting range is large, the pose of the wide-angle camera is fixed, and the whole monitoring area can be covered. The blind node is carried by the pedestrian, so that the approximate position of the blind node can be obtained according to the position of the pedestrian in the image.
The invention calculates an environment attenuation compensation index n' for the condition that vehicles come and go exist in a monitored area. The vehicle is moving continuously in the monitored area, so that the shadow fading caused by the vehicle occlusion is dynamic, and the environment attenuation compensation index should also be dynamic. The method calculates the environment attenuation compensation index n' by detecting the shielding length of the vehicle to the signal transmission path in real time through video identification.
The wide-angle camera collects scene images, and key points are detected in them by a key point recognition network. The key points comprise pedestrian key points and vehicle key points: a pedestrian key point is the midpoint of the line connecting the centres of the two feet of a human body, and the vehicle key points are the points where a vehicle's wheels contact the ground. The pedestrian key point serves as the rough position of the blind node.
The training steps of the key point identification network are as follows: taking a plurality of scene images as a data set; labeling the data set to be pedestrian key points and vehicle key points of each automobile in the scene image to generate labeled data; training is performed using a mean square error loss function.
After training is finished, the scene image is input into the trained key point identification network, the pedestrian key points and the vehicle key points of each automobile are detected, and the coordinates of the key points in the image coordinate system are obtained. And connecting the key points of the vehicles belonging to the same automobile in sequence to obtain a vehicle outline, and outputting a key point image.
A projection transformation through a homography matrix converts the key point image into an image from the overhead viewing angle. Since the wide-angle camera's pose is fixed, the homography matrix can be obtained from the camera's intrinsic and extrinsic parameters, converting the key point image into a top view. Projective transformation methods are various and well known and are not claimed as protected content of the present invention. An image coordinate system is established in the top view, and the coordinates of the key points and the reference nodes are acquired.
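The projection step amounts to applying a fixed 3×3 homography to the key point coordinates. A minimal NumPy sketch (the matrix used here is a placeholder; in practice H comes from the wide-angle camera's intrinsic and extrinsic calibration):

```python
import numpy as np

def to_top_view(points, H):
    """Map image-plane key points to the top-view plane via homography H.

    points : (N, 2) array of pixel coordinates in the camera image
    H      : 3x3 homography matrix
    """
    pts = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                 # perspective divide

# The identity homography leaves coordinates unchanged (sanity check).
pts = np.array([[100.0, 200.0], [320.0, 240.0]])
print(to_top_view(pts, np.eye(3)))
```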
Under the same conditions, the shorter the signal propagation path, the smaller the propagation loss, so not all reference nodes are needed to position the blind node from the RSSI; only the reference nodes closer to the pedestrian key point are needed. The method selects the 4 first reference nodes A closest to the pedestrian key point; the area image they enclose is shown in fig. 2. Connecting the pedestrian key point with each first reference node A gives four signal propagation paths l'.
First, the unobstructed environment attenuation index n between every two first reference nodes is calculated from the RSSI mathematical model when no occlusion exists. Then the i-th first reference node A_i is connected to the pedestrian key point to obtain the line l'_i; the two first reference nodes other than A_i nearest to the pedestrian key point, A_i1 and A_i2, are found and connected to A_i to obtain the propagation paths A_iA_i1 and A_iA_i2, whose unobstructed environment attenuation indices n_i1 and n_i2 are known. The invention performs a weighted calculation on n_i1 and n_i2 to obtain the unobstructed environment attenuation index corresponding to l'_i. Perpendicular segments l_i1 and l_i2 are dropped from the pedestrian key point onto A_iA_i1 and A_iA_i2, and the weight of the signal propagation path A_iA_ij is

    w_ij = (1/l_ij) / (1/l_i1 + 1/l_i2)

with j equal to 1 or 2. After the weights of the two propagation paths are obtained, each weight is multiplied by the corresponding unobstructed environment attenuation index and the products are added, giving the unobstructed environment attenuation index of the propagation path from A_i to the pedestrian key point:

    n_i = w_i1 · n_i1 + w_i2 · n_i2
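The weighting step can be sketched as follows. The weight formula did not survive extraction from the patent's equation image, so the inverse-perpendicular-distance form used here (the closer path gets more weight) is an assumption consistent with the surrounding text:

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Perpendicular distance from point p to the segment ab."""
    p, a, b = map(np.asarray, (p, a, b))
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def weighted_unobstructed_index(p, ai, ai1, ai2, n_i1, n_i2):
    """Blend the indices of paths Ai-Ai1 and Ai-Ai2 into one for path Ai-p."""
    l1 = point_segment_distance(p, ai, ai1)   # perpendicular l_i1
    l2 = point_segment_distance(p, ai, ai2)   # perpendicular l_i2
    w1 = (1.0 / l1) / (1.0 / l1 + 1.0 / l2)   # assumed inverse-distance weights
    w2 = 1.0 - w1
    return w1 * n_i1 + w2 * n_i2

# Pedestrian equidistant from both paths -> plain average of the two indices.
n_i = weighted_unobstructed_index([1.0, 1.0], [0.0, 0.0], [2.0, 0.0], [0.0, 2.0], 2.0, 3.0)
print(n_i)  # 2.5
```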
When a vehicle enters the shooting range of the camera from the outside, namely enters a scene image, the outline of the vehicle is detected, each vehicle outline is numbered, and the vehicle outline with the corresponding number can be found in a subsequent scene image by using a target tracking technology.
The length, width and internal structure of each vehicle differ, so the degree to which each occludes a propagation path differs, i.e. the corresponding environment attenuation indices differ. The environment attenuation indices between reference nodes are known, so when a vehicle contour occludes a propagation path between reference nodes, the environment attenuation compensation index E_g per unit occlusion length of that vehicle can be calculated. The specific method is: all vehicles in the scene image are numbered; when, and only while, the g-th vehicle occludes the propagation path l_A between two reference nodes, the environment attenuation index n″_g of l_A is calculated and the occlusion length l'_A of the vehicle contour on l_A is obtained simultaneously. With the unobstructed environment attenuation index n_g of l_A known, the environment attenuation compensation index per unit occlusion length of the g-th vehicle is

    E_g = (n″_g − n_g) / l'_A
For the propagation path l'_i between the i-th first reference node and the pedestrian key point, the length over which each vehicle contour occludes l'_i must then be detected and combined with E_g to calculate the environment attenuation compensation index n'; the vehicle occlusion map is shown in fig. 3.

    n' = Σ_{g=1}^{m_g} l_g · E_g

where m_g is the total number of vehicles in the scene image, l_g is the length over which the g-th vehicle occludes the propagation path l'_i, and E_g is the environment attenuation compensation index per unit occlusion length of the g-th vehicle. Finally, a modified environment attenuation index n″ = n + n' is obtained on the propagation path from each first reference node to the pedestrian key point.
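A sketch of the occlusion-compensation arithmetic, following the definitions of E_g and n' above (all lengths and index values are illustrative numbers):

```python
def unit_occlusion_index(n_occluded, n_free, occlusion_len):
    """E_g: extra attenuation per unit occlusion length, calibrated on a
    reference-to-reference path that the g-th vehicle currently blocks."""
    return (n_occluded - n_free) / occlusion_len

def compensated_index(n_free, occlusions):
    """n'' = n + n', where n' sums l_g * E_g over every occluding vehicle.

    occlusions : list of (l_g, E_g) pairs for the path being corrected
    """
    n_prime = sum(l_g * e_g for l_g, e_g in occlusions)
    return n_free + n_prime

# Calibrate E_g on a blocked reference path, then correct a pedestrian path.
e_1 = unit_occlusion_index(n_occluded=3.0, n_free=2.0, occlusion_len=4.0)
print(e_1)                                                # 0.25
print(compensated_index(2.0, [(4.0, e_1), (2.0, 0.1)]))   # 3.2
```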
Combining the RSSI values of the signals between the pedestrian key point and the reference nodes with the modified environment attenuation index n″, the distances d from the blind node to the nodes can be calculated from the RSSI ranging model; the four distances from the blind node to the corresponding first reference nodes are d_1, d_2, d_3 and d_4.
And a spatial matrix acquisition module.
After the distance d from each node to the blind node is obtained, the blind node's coordinates could be calculated with the weighted centroid algorithm. However, the d obtained by the environment attenuation calculation module does not account for multipath fading during signal propagation and is therefore not accurate enough, so a distance compensation module is employed to improve the accuracy of the weighted centroid algorithm.
The invention characterises the spatial relation between the blind node and the reference nodes through a spatial feature matrix, which is obtained as follows:
and surrounding the first reference node into a rectangular reference node image, generating a reference node matrix according to the reference node image, wherein one pixel in the reference node image corresponds to one element in the reference node matrix, and assigning 0 values to all elements in the reference node matrix.
A first positioning range image, a rectangle centred on the pedestrian key point, is then obtained. Its aspect ratio is the same as that of the reference node image, and its long side is parallel to the long side of the reference node image. The ratio of the perimeters of the first positioning range image and the reference node image is k:

    k = μ · Σ_{g=1}^{m_g} Σ_{i=1}^{4} l_gi

where l_gi is the length over which the g-th vehicle contour occludes the propagation path between the i-th first reference node and the pedestrian key point, m_g is the total number of vehicles in the scene image, and μ is a mapping factor with a preferred value of 0.0045. After k is calculated, the length and width of the reference node image are multiplied by k to obtain the length and width of the first positioning range matrix.
The first positioning range image is translated to obtain the second positioning range image, specifically: the offsets between the pedestrian key point (x_e, y_e) and the reference node image centre (x_0, y_0) are calculated as Δx = x_e − x_0 and Δy = y_e − y_0; the first positioning range image is translated by these abscissa and ordinate offsets to obtain the second positioning range image.
Pixels in the first and second positioning range images are assigned values with a Gaussian kernel: the kernel size equals the side length of the longest edge of the first positioning range image; taking the centre point of each positioning range image as the kernel centre, the values in the Gaussian kernel are assigned to the corresponding pixels of the two images. The first and second positioning range matrices are then output.
And superposing the reference node matrix, the first positioning range matrix and the second positioning range matrix to obtain a spatial characteristic matrix.
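The three-channel construction above can be sketched with NumPy. The description fixes the kernel size (the longest side of the first positioning range image) and the superposition of the three matrices; the image shapes, centres and the Gaussian σ below are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel(size, sigma=None):
    """Square Gaussian kernel with peak value 1 at its centre."""
    if sigma is None:
        sigma = size / 6.0            # assumed spread; not fixed by the patent
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))

def spatial_feature_matrix(ref_shape, centre1, centre2, range_shape):
    """Stack the reference-node channel (all zeros) with the two
    positioning-range channels painted by a Gaussian kernel."""
    H, W = ref_shape
    ker = gaussian_kernel(max(range_shape))   # sized to the longest edge
    size = ker.shape[0]
    channels = [np.zeros((H, W))]
    for cy, cx in (centre1, centre2):
        ch = np.zeros((H, W))
        top, left = cy - size // 2, cx - size // 2
        y0, x0 = max(top, 0), max(left, 0)    # clip the kernel to the image
        y1, x1 = min(top + size, H), min(left + size, W)
        ch[y0:y1, x0:x1] = ker[y0 - top:y1 - top, x0 - left:x1 - left]
        channels.append(ch)
    return np.stack(channels)                 # shape (3, H, W)

m = spatial_feature_matrix((32, 32), (10, 10), (15, 18), (9, 9))
print(m.shape, m[1].max())  # (3, 32, 32) 1.0
```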
And a distance compensation module. It comprises a two-dimensional convolutional encoder layer, a fully connected network layer and a positioning algorithm layer; the structure of the distance compensation module is shown in fig. 4. It is used to analyse the spatial feature matrix and calculate the accurate coordinates of the blind node.
The spatial feature matrix is input into the distance compensation network. In operation, the two-dimensional convolutional encoder layer fits the spatial features in the feature matrix, and a one-dimensional feature vector is obtained after a flattening operation; the one-dimensional feature vector is input into the fully connected network layer, which outputs the distance compensation values d' for the 4 reference nodes; d' is input into the positioning algorithm layer, and the accurate coordinates of the blind node are obtained through the compensated weighted centroid algorithm. The weighted centroid algorithm is as follows:
    x = [ Σ_{i=1}^{4} x_i / (d_i + d'_i) ] / [ Σ_{i=1}^{4} 1 / (d_i + d'_i) ]
    y = [ Σ_{i=1}^{4} y_i / (d_i + d'_i) ] / [ Σ_{i=1}^{4} 1 / (d_i + d'_i) ]

where (x, y) are the accurate coordinates of the blind node, (x_1, y_1), (x_2, y_2), (x_3, y_3), (x_4, y_4) are the coordinates of the first reference nodes, d_1, d_2, d_3, d_4 are the distances from the blind node to the four first reference nodes obtained by the environment attenuation calculation module, and d'_1, d'_2, d'_3, d'_4 are the distance compensation values for each first reference node.
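The compensated weighted-centroid step can be sketched as follows, assuming the usual inverse-distance weights 1/(d_i + d'_i); the exact weight form in the patent's formula image is not recoverable, so this standard form is an assumption:

```python
def weighted_centroid(anchors, d, d_comp):
    """Weighted-centroid estimate of the blind node.

    anchors : four (x, y) coordinates of the first reference nodes
    d       : distances from the environment attenuation calculation module
    d_comp  : per-node compensation values from the distance compensation net
    """
    w = [1.0 / (di + dci) for di, dci in zip(d, d_comp)]
    s = sum(w)
    x = sum(wi * xi for wi, (xi, _) in zip(w, anchors)) / s
    y = sum(wi * yi for wi, (_, yi) in zip(w, anchors)) / s
    return x, y

# Equal compensated distances to the four corners -> the square's centre.
anchors = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]
print(weighted_centroid(anchors, [7.0] * 4, [0.07] * 4))
```

Nodes with smaller compensated distances get larger weights, pulling the estimate toward the references the blind node is actually nearest.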
The training steps of the distance compensation network are as follows: taking a plurality of spatial feature matrixes as a data set; labeling the data set to be the real coordinates of the blind nodes, and generating labeled data; training is performed using a mean square error loss function.
And after the training is finished, inputting the spatial characteristic matrix into the trained distance compensation network, calculating a distance compensation value according to the spatial characteristic, and finally outputting the accurate coordinates of the blind node.
The accurate position of the blind node is displayed on a two-dimensional GIS map; a designated blind node can be tracked to obtain its trajectory, i.e. the movement track of the person carrying the signal receiving device.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included therein.

Claims (9)

1. A person trajectory tracking system based on AIOT and video linkage, the system comprising:
the node setting module is used for setting a reference node and a blind node to be positioned;
the environment attenuation calculation module is used for obtaining the unobstructed environment attenuation index n between the blind node and the reference nodes; analyzing a scene image to obtain pedestrian key points and vehicle key points; selecting the 4 first reference nodes closest to the pedestrian key point; connecting the vehicle key points to obtain vehicle contours; obtaining an environment attenuation compensation index n' according to the length by which each vehicle contour occludes the propagation path, the propagation path being the line segment connecting the pedestrian key point and a first reference node; and calculating the distance from the blind node to each first reference node according to n', n and an RSSI ranging model;
the spatial matrix acquisition module is used for generating a reference node matrix from the reference node image enclosed by the first reference nodes; obtaining a first positioning range image centered on the pedestrian key point; translating the first positioning range image to obtain a second positioning range image; assigning values to the pixels in the first and second positioning range images with a Gaussian kernel and outputting the first and second positioning range matrices; and superposing the reference node matrix, the first positioning range matrix and the second positioning range matrix to obtain a spatial feature matrix;
and the distance compensation module is used for analyzing the spatial feature matrix to obtain distance compensation values, and obtaining the accurate coordinates of the blind node by a weighted centroid algorithm in combination with the distances from the blind node to the first reference nodes.
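The RSSI ranging model that claim 1 relies on is not spelled out in this excerpt. A common choice, sketched here as an assumption, is the log-distance path-loss model, in which the attenuation index n (or the corrected n'' = n + n' of claim 4) is the path-loss exponent:

```python
import math

def rssi_to_distance(rssi, rssi_at_1m, n):
    """Log-distance path-loss model (an assumed stand-in for the patent's
    RSSI ranging model): RSSI(d) = RSSI(1 m) - 10 * n * log10(d), hence
    d = 10 ** ((RSSI(1 m) - RSSI) / (10 * n)).
    n is the environment attenuation index; after vehicle-occlusion
    compensation it would be n'' = n + n'."""
    return 10 ** ((rssi_at_1m - rssi) / (10.0 * n))

# With a free-space-like exponent n = 2, a 20 dB drop from the 1 m
# reference corresponds to 10 m.
print(rssi_to_distance(-60.0, -40.0, 2.0))  # 10.0
```

A larger n'' (more occlusion) maps the same RSSI reading to a shorter distance, which is exactly the correction the compensation index is meant to provide.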
2. The system of claim 1, wherein the reference nodes are distributed in a grid pattern, and the blind nodes are carried by pedestrians.
3. The system of claim 1, wherein the unobstructed environmental attenuation exponent is obtained by:
when the first reference nodes are not shielded, calculating the shielding-free environment attenuation index of the propagation path between every two first reference nodes according to the RSSI ranging model;
the ith first reference node AiIs connected with a pedestrian key point to obtain a propagation path l'iFind out except AiTwo first reference nodes A which are nearest to key points of pedestriansi1、Ai2Connected to obtain a line segment AiAi1、AiAi2Passing pedestrian key point AiAi1、AiAi2Perpendicular line li1、li2Then the propagation path A corresponding to the ith first reference nodeiAijThe weight is
Figure FDA0003019176490000011
j takes the value of 1 or 2;
Aiunobstructed environmental attenuation index to pedestrian keypoint propagation path
Figure FDA0003019176490000012
Figure FDA0003019176490000013
Is a propagation path AiAijIs the unobstructed ambient attenuation index of (a).
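Claim 3's perpendicular-distance weighting can be sketched as follows. The normalized weighting used here (the edge with the smaller perpendicular distance, i.e. the edge the pedestrian lies nearer to, contributes the larger weight) is an assumption, since the patent's weight formula is published only as an image:

```python
import math

def perp_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
    return num / math.hypot(bx - ax, by - ay)

def blended_attenuation(p, a_i, a_i1, a_i2, n_i1, n_i2):
    """Blend the per-edge unobstructed attenuation indices n_i1, n_i2 into
    one index for the A_i -> pedestrian path. Assumed weighting: the edge
    with the smaller perpendicular distance gets the larger weight."""
    l1 = perp_distance(p, a_i, a_i1)
    l2 = perp_distance(p, a_i, a_i2)
    w1 = l2 / (l1 + l2)   # small l1 -> path A_i A_i1 dominates
    w2 = l1 / (l1 + l2)
    return w1 * n_i1 + w2 * n_i2

# Pedestrian lying on edge A_i A_i1: that edge fully determines the result.
print(blended_attenuation((1.0, 0.0), (0.0, 0.0), (2.0, 0.0),
                          (0.0, 2.0), 2.0, 3.0))  # 2.0
```

The weights sum to 1 by construction, so the blended index stays between the two per-edge indices.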
4. The system of claim 3, wherein the environmental attenuation compensation index is obtained by:
for the propagation path l'_i between the i-th first reference node and the pedestrian key point,

n'_i = Σ_{g=1}^{m_g} E_g · l_g,

where m_g is the total number of vehicles in the scene image, l_g is the length by which the g-th vehicle occludes the propagation path l'_i, and E_g is the environment attenuation compensation index per unit occlusion length of the g-th vehicle;
finally, the modified environment attenuation index n'' = n + n' on the propagation path from each first reference node to the pedestrian key point is obtained.
5. The system of claim 4, wherein E_g is calculated as follows:
all vehicles entering the scene image are numbered; when the g-th vehicle occludes a propagation path l_A between two reference nodes, the environment attenuation index n''_g of the occluded path l_A is calculated, and from the unobstructed environment attenuation index n_g of l_A and the occlusion length l'_A of the vehicle contour on l_A,

E_g = (n''_g − n_g) / l'_A.
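Claims 4 and 5 together amount to a calibrate-then-apply procedure: estimate each vehicle's per-unit-length compensation E_g on a known reference-node path, then sum the contributions over the blind node's path. A sketch following the quantities named in the claims (the exact expressions appear only as images in the source):

```python
def calibrate_vehicle_compensation(n_obstructed, n_clear, occlusion_len):
    """E_g for vehicle g (claim 5): on a calibration path l_A between two
    reference nodes, the attenuation index rises from n_g (clear) to n''_g
    (occluded) over an occluded length l'_A, so E_g = (n''_g - n_g) / l'_A."""
    return (n_obstructed - n_clear) / occlusion_len

def compensation_index(occlusions):
    """n'_i for a propagation path (claim 4): sum over vehicles of E_g
    times the length l_g by which vehicle g occludes the path."""
    return sum(e_g * l_g for e_g, l_g in occlusions)

# One vehicle: calibrated where n rose 3.0 -> 3.6 over 2 m of occlusion,
# then applied to a blind-node path it occludes for 1.5 m.
e = calibrate_vehicle_compensation(3.6, 3.0, 2.0)  # 0.3 per metre
print(compensation_index([(e, 1.5)]))              # ~ 0.45
```

The corrected index for the blind-node path is then n'' = n + n', which feeds the RSSI ranging model of claim 1.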
6. The system of claim 1, wherein the aspect ratio of the first positioning range image is the same as that of the reference node image, and the ratio of the perimeter of the first positioning range image to that of the reference node image is k,

k = 1 + μ · Σ_{g=1}^{m_g} l_gi,

where l_gi is the length by which the g-th vehicle contour occludes the propagation path between the i-th first reference node and the pedestrian key point, μ is a mapping factor, and m_g is the total number of vehicles in the scene image.
7. The system of claim 6, wherein translating the first location range image to obtain the second location range image comprises:
calculating the translation amounts between the pedestrian key point (x_e, y_e) and the reference node image center point (x_0, y_0): Δx = x_e − x_0, Δy = y_e − y_0;
And translating the first positioning range image according to the translation amounts of the horizontal coordinate and the vertical coordinate to obtain a second positioning range image.
8. The system of claim 1, wherein assigning values to the pixels in the first and second positioning range images using the Gaussian kernel is specifically: the size of the Gaussian kernel equals the side length of the longest edge of the first positioning range image; with the center point of each of the two positioning range images taken as the center of the Gaussian kernel, the values in the Gaussian kernel are assigned to the corresponding pixels of the first and second positioning range images.
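Claim 8's kernel stamping can be sketched as follows. The σ choice and matrix sizes are illustrative assumptions; the claim fixes only that the kernel size equals the longest side of the first positioning range image:

```python
import numpy as np

def gaussian_kernel(size, sigma=None):
    """Square Gaussian kernel normalized to peak 1 at its centre."""
    if sigma is None:
        sigma = size / 6.0  # assumed default, not specified by the patent
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))

def stamp_gaussian(canvas, center, kernel):
    """Assign the kernel's values to the pixels of `canvas` centred on
    `center` (row, col), clipping at the borders - one stamp per
    positioning range image."""
    k = kernel.shape[0]
    r0, c0 = center[0] - k // 2, center[1] - k // 2
    for i in range(k):
        for j in range(k):
            r, c = r0 + i, c0 + j
            if 0 <= r < canvas.shape[0] and 0 <= c < canvas.shape[1]:
                canvas[r, c] = kernel[i, j]
    return canvas

# Two positioning range images stamped onto one 32x32 matrix layer;
# the kernel's peak (1.0) lands at each centre.
layer = np.zeros((32, 32))
k = gaussian_kernel(9)
stamp_gaussian(layer, (10, 10), k)
stamp_gaussian(layer, (16, 14), k)
print(layer[16, 14])  # 1.0
```

Stacking this layer with the reference node matrix then yields the spatial feature matrix that claim 1's spatial matrix acquisition module outputs.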
9. The system of claim 1, wherein the distance compensation module comprises:
the two-dimensional convolution encoder layer is used for fitting the spatial features in the spatial feature matrix and obtaining a one-dimensional feature vector after expansion operation;
the full-connection network layer is used for processing the one-dimensional characteristic vector to obtain a distance compensation value;
the positioning algorithm layer is used for calculating the accurate coordinates of the blind nodes by adopting a weighted centroid algorithm according to the compensated distance and the coordinates of the first reference nodes;
the weighted centroid algorithm is:
x = [ Σ_{i=1}^{4} x_i/(d_i + d'_i) ] / [ Σ_{i=1}^{4} 1/(d_i + d'_i) ]
y = [ Σ_{i=1}^{4} y_i/(d_i + d'_i) ] / [ Σ_{i=1}^{4} 1/(d_i + d'_i) ]
(x, y) are the exact coordinates of the blind node; (x_1, y_1), (x_2, y_2), (x_3, y_3), (x_4, y_4) are the coordinates of the four first reference nodes; d_1, d_2, d_3, d_4 are the distances from the blind node to the four first reference nodes obtained by the environment attenuation calculation module; and d'_1, d'_2, d'_3, d'_4 are the distance compensation values of the first reference nodes.
CN202110397706.6A 2021-04-14 2021-04-14 Personnel trajectory tracking system based on AIOT and video linkage Active CN113093162B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110397706.6A CN113093162B (en) 2021-04-14 2021-04-14 Personnel trajectory tracking system based on AIOT and video linkage


Publications (2)

Publication Number Publication Date
CN113093162A CN113093162A (en) 2021-07-09
CN113093162B (en) 2022-04-01

Family

ID=76677015

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110397706.6A Active CN113093162B (en) 2021-04-14 2021-04-14 Personnel trajectory tracking system based on AIOT and video linkage

Country Status (1)

Country Link
CN (1) CN113093162B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113947123B (en) * 2021-11-19 2022-06-28 南京紫金体育产业股份有限公司 Personnel trajectory identification method, system, storage medium and equipment
CN114399537B (en) * 2022-03-23 2022-07-01 东莞先知大数据有限公司 Vehicle tracking method and system for target personnel

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110030397A (en) * 2009-09-16 2011-03-23 동국대학교 산학협력단 Apparatus and method for estimating position based on self organization algorithm, and recording medium thereof
CN107734479A (en) * 2017-09-11 2018-02-23 广东广业开元科技有限公司 A kind of fire fighter's localization method, system and device based on wireless sensor technology
CN109782227A (en) * 2019-02-20 2019-05-21 核芯互联科技(青岛)有限公司 A kind of indoor orientation method based on Bluetooth signal RSSI
CN110070027A (en) * 2019-04-17 2019-07-30 南京邮电大学 Pedestrian based on intelligent internet of things system recognition methods again
CN110109055A (en) * 2019-05-23 2019-08-09 南通云之建智能科技有限公司 A kind of indoor orientation method based on RSSI ranging
CN110166935A (en) * 2019-05-23 2019-08-23 南通云之建智能科技有限公司 A kind of weighted mass center location algorithm based on RSSI ranging
CN112184771A (en) * 2020-09-30 2021-01-05 青岛聚好联科技有限公司 Community personnel trajectory tracking method and device




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A Personnel Trajectory Tracking System Based on AIOT and Video Linkage

Granted publication date: 20220401

Pledgee: Bank of China Limited Yixing branch

Pledgor: Guoneng smart technology development (Jiangsu) Co.,Ltd.

Registration number: Y2024980012078
