CN113420720B - High-precision low-delay large-scale indoor stadium crowd distribution calculation method - Google Patents

High-precision low-delay large-scale indoor stadium crowd distribution calculation method

Info

Publication number
CN113420720B
CN113420720B (application CN202110822900.4A)
Authority
CN
China
Prior art keywords
wifi
grid
scene
value
period
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110822900.4A
Other languages
Chinese (zh)
Other versions
CN113420720A (en)
Inventor
马军
孙斌
吴沂
任春磊
何升强
钱东海
张兴晔
李柯
卞瑶
林育芹
黄攀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Zhaohua International Exhibition Development Co ltd
China Information Consulting and Designing Institute Co Ltd
Original Assignee
Shenzhen Zhaohua International Exhibition Development Co ltd
China Information Consulting and Designing Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Zhaohua International Exhibition Development Co ltd, China Information Consulting and Designing Institute Co Ltd filed Critical Shenzhen Zhaohua International Exhibition Development Co ltd
Priority to CN202110822900.4A
Publication of CN113420720A
Application granted
Publication of CN113420720B
Active legal status (current)
Anticipated expiration legal status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a high-precision, low-delay method for calculating crowd distribution in large indoor stadiums, comprising the following steps: step 1, scene gridding; step 2, equipment deployment; step 3, sample data acquisition and processing; step 4, designing a recurrent neural network (RNN); step 5, setting the calculation and training periods; step 6, training the RNN; step 7, calculating crowd distribution with the trained RNN. The crowd distribution obtained by the method closely matches the real distribution and has practical value.

Description

High-precision low-delay large-scale indoor stadium crowd distribution calculation method
Technical Field
The invention relates to a high-precision low-delay large-scale indoor stadium crowd distribution calculation method.
Background
With the rapid development of the economy, large and complex modern buildings such as exhibition centers, stadiums, transportation hubs and business complexes have become calling cards of urban development. These venues often carry heavy passenger flows and dense crowds, and because they are large and structurally complex, the crowd distribution is hard to perceive quickly and accurately. Operational command is therefore untimely or inaccurate, which can lead to adverse events such as crowd congestion or trampling; the resulting safety hazards put great pressure on venue operation and management. To solve these problems, various crowd positioning technologies have been advanced. The mainstream crowd-distribution positioning schemes currently fall into two categories: positioning based on image recognition and positioning based on radio-frequency signals. The image-recognition scheme calculates the number or positions of people in a scene by periodically capturing the images of each video monitor. The radio-frequency scheme applies low-power wireless positioning technologies such as Bluetooth and ZigBee: wireless base stations are deployed in the scene in advance, and the people to be positioned carry wireless tags that periodically transmit signals; once a tag enters the scene, the base stations detect its signals and, combined with base-station positions or other position references, locate the crowd.
In current engineering practice, neither scheme, positioning based on image recognition or positioning based on radio frequency, meets the requirements of crowd positioning in large indoor stadiums well. The image-recognition scheme mainly calculates the number and positions of people by running image recognition on video-monitoring frames, thereby obtaining the personnel distribution of the monitored area. The radio-frequency scheme generally applies low-power wireless positioning technologies such as Bluetooth, WiFi or ZigBee: wireless communication base stations are deployed at a certain density in the venue, while the people to be positioned must carry matching positioning tags or devices such as wristbands, and the base stations position them by collecting the wireless signals those tags transmit. Its real-time performance is good, but because the venue area is large and the passenger flow heavy, implementation requires a large number of base stations and tags; the one-time investment and the operation and maintenance costs are high, which hinders popularization, so this scheme is likewise not applicable.
Disclosure of Invention
The invention aims to solve the technical problems described in the Background by providing a high-precision, low-delay method for calculating crowd distribution in large indoor stadiums. The method comprises the following steps:
step 1, scene gridding;
step 2, equipment deployment;
step 3, sample data acquisition and processing;
step 4, designing a recurrent neural network (RNN);
step 5, setting the calculation and training periods;
step 6, training the RNN;
step 7, calculating crowd distribution with the trained RNN.
The beneficial effects are that:
the method comprehensively applies WiFi positioning, image recognition and neural network technology to perform positioning calculation on stadium crowd distribution, has the characteristics of high precision, low delay and the like, can accurately sense the crowd density condition in the stadium in real time, and enables stadium operation management personnel to intervene and dredge in time, so that the safety operation level of a large stadium is greatly improved; meanwhile, the floor implementation of the method is to use gate, wiFi-AP and camera equipment which are deployed or marked in the large-scale stadium at present, so that the input cost of the system is greatly reduced, the multiplexing value of the equipment is improved, and the popularization and the promotion are facilitated.
Drawings
The foregoing and/or other advantages of the invention will become more apparent from the following detailed description when taken in conjunction with the accompanying drawings.
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is a schematic view of a scene.
Fig. 3 is a scene mesh rendering schematic.
Fig. 4 is a schematic view of scene meshing.
Fig. 5 is a diagram of an RNN network.
Fig. 6 is a schematic diagram of a camera taking a photograph and a covered scene grid.
Fig. 7 is a schematic diagram of actual distribution of people.
FIG. 8 is a schematic diagram of a population distribution calculated by the method of the present invention.
Fig. 9 is a schematic diagram of crowd distribution calculated by a wireless radio frequency calculation method based on WiFi.
Detailed Description
The invention provides a high-precision, low-delay method for calculating crowd distribution in large indoor stadiums. It comprehensively uses gate counting, WiFi positioning, image recognition and neural-network technology, effectively solving the real-time and precision problems of large-stadium crowd-distribution calculation, and the required input data can be collected with the gates, WiFi-AP devices and video-monitoring equipment already present in the scene, so the method is highly economical. The workflow is shown in figure 1.

Step 1, scene gridding
As shown in fig. 2, the scene in which indoor crowd positioning is to be performed is rasterized in the following steps.

Step 1-1, determining a coordinate system

Take due north as the positive y-axis direction, let the x-axis coordinate of the westernmost edge or vertex of the scene be 0 and the y-axis coordinate of the southernmost edge or vertex be 0, and thereby determine a plane coordinate system. The maxima of the scene in the y-axis and x-axis directions can then be obtained; as shown in figure 2, denote them y_cj_max and x_cj_max respectively.
Step 1-2, determining the size of the grid
Square grids are chosen to grid the scene; the side length of a grid is denoted a, in meters. The value of a is determined by the required precision of the crowd-distribution calculation: the higher the precision requirement, the smaller a. For public-building scenes, a generally ranges from 1 to 10 meters;
step 1-3, drawing grids
In the coordinate system determined above, draw the rectangle with vertices (0, 0), (x_cj_max, 0), (0, y_cj_max) and (x_cj_max, y_cj_max). Inside this rectangle draw the straight lines y = i·a and x = j·a (where y and x denote the line equations), the coefficients i and j taking the values i ∈ {1, 2, …, ⌈y_cj_max/a⌉} and j ∈ {1, 2, …, ⌈x_cj_max/a⌉}. These lines divide the rectangle into ⌈y_cj_max/a⌉ × ⌈x_cj_max/a⌉ squares, which are the grids; this completes the drawing, with the result shown in fig. 3;
step 1-4, filtering the grid
Remove the grids that do not intersect the scene (generally by hand); the result is shown in fig. 4. Suppose Nwg grids remain, where Nwg is the number of grids after filtering. For each grid record the y-axis coordinates of its upper and lower edges and the x-axis coordinates of its left and right edges: for the wg-th grid, denote the upper-edge y-coordinate YU(wg), the lower-edge y-coordinate YD(wg), the left-edge x-coordinate XL(wg) and the right-edge x-coordinate XR(wg), wg ∈ {1, 2, 3, …, Nwg}. The region covered by the wg-th grid is then given by formula (1):

XL(wg) ≤ x ≤ XR(wg), YD(wg) ≤ y ≤ YU(wg)   (1)

wherein:
x represents the x-axis coordinate value of a point in the wg-th grid;
y represents the y-axis coordinate value of a point in the wg-th grid;
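As a concrete illustration of formula (1), the sketch below (Python; the array names XL, XR, YD, YU and the helper name point_to_grid are illustrative, not from the patent) maps a point to the number of the grid that contains it.

```python
# A minimal sketch of the grid lookup implied by formula (1).
# XL, XR, YD, YU hold the recorded edge coordinates of the Nwg retained grids.
def point_to_grid(x, y, XL, XR, YD, YU):
    """Return the number wg (1-based) of the grid containing (x, y),
    or None if the point lies in no retained grid."""
    for idx in range(len(XL)):          # grids are numbered 1..Nwg
        if XL[idx] <= x <= XR[idx] and YD[idx] <= y <= YU[idx]:
            return idx + 1
    return None
```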
step 2, equipment deployment
Step 2-1, deploying gates capable of recording the number of people entering and exiting and communicating in real time at a scene entrance and setting Z gates, wherein the numbers of the gates are zj, zj epsilon {1,2,3, …, Z };
step 2-2, deploying W WiFi-AP (wireless access node of WiFi) equipment in the scene, wherein the deployment density is based on the fact that the WIFI signals can completely cover the whole scene, if the positioning accuracy of the WIFI is to be improved, the deployment density can be improved, and the number of each WIFI-AP equipment is wf, wf is {1,2,3, …, W };
2-3, disposing S video monitoring cameras in the scene, setting a monitoring angle, and disposing the monitoring area in a density based on the condition that the monitoring area can completely cover the scene, wherein if the accuracy of crowd distribution identification is to be improved, the top mounting is best carried out, the numbers of the cameras are expressed by sx, and sx is {1,2, …, S };
Step 3, sample data acquisition and processing
Step 3 comprises the following steps:
step 3-1, gate data acquisition and processing, specifically comprising the following steps:
step 3-1-1, collecting gate data: as people enter and exit the scene through the gates, each gate records the counts; from the start of collection, the exit and entry counts of each gate are denoted Cout(zj, k) and Cin(zj, k), where Cout(zj, k) is the number of people who exited through the gate numbered zj in the k-th collection period and Cin(zj, k) is the number of people who entered through that gate in the k-th collection period;
step 3-1-2, gate data statistics: subtracting the total exits from the total entries collected by all gates gives the total number of people currently in the scene, denoted Nzj(k):

Nzj(k) = Σ_{zj=1}^{Z} [Cin(zj, k) − Cout(zj, k)]   (2)
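A one-line sketch of the statistic in formula (2), assuming Cin and Cout are dictionaries keyed by (gate number, period); the layout is illustrative:

```python
# Sketch of formula (2): people currently in the scene equals total
# entries minus total exits over all Z gates (Cin/Cout layout assumed).
def people_in_scene(Cin, Cout, k, Z):
    return sum(Cin[(zj, k)] - Cout[(zj, k)] for zj in range(1, Z + 1))
```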
step 3-2, WiFi data acquisition and processing, specifically comprising the following steps:
step 3-2-1, WiFi data acquisition: when a person carrying a WiFi device such as a mobile phone enters the scene with the device's WiFi switch on, nearby WiFi-AP devices detect the device's MAC address together with the strength, received at the device's position, of the WiFi signal the AP transmits; from the start of data acquisition these records are collected into a set, denoted APdata(k):

APdata(k) = {(wf, MAC(i), RSSI(wf, MAC(i))) | wf ∈ {1, 2, 3, …, W}, i ∈ {1, 2, 3, …, Num(wf)}}   (3)

wherein:
APdata(k): the set of WiFi-device data collected by the WiFi-AP devices in the k-th acquisition period;
W: the number of WiFi-AP devices deployed in the scene;
wf: the number of a WiFi-AP device, wf ∈ {1, 2, 3, …, W};
MAC(i): the network-card address of a WiFi device, i ∈ {1, 2, 3, …, Num(wf)};
RSSI(wf, MAC(i)): the strength, measured at the WiFi device with network-card address MAC(i), of the WiFi signal transmitted by the WiFi-AP device numbered wf;
Num(wf): the number of WiFi terminals detected in this acquisition period by the WiFi-AP device numbered wf;
step 3-2-2, WiFi data processing: group the elements of APdata(k) by WiFi-device network-card address, elements with the same address forming one group; suppose there are Ng groups. Take each group's network-card address as its set label, and form a new set whose elements are the numbers of the WiFi-AP devices that detected the device together with the measured strengths of the signals those APs transmit. Denote the set of the g-th group, with network-card address MAC(g), by Te(MAC(g))(k), g ∈ {1, 2, …, Ng}, k being the acquisition period:

Te(MAC(g))(k) = {(NoAp(i), RSSI(NoAp(i), MAC(g))) | i ∈ {1, 2, …, NumAp(MAC(g))}}   (4)

wherein:
NumAp(MAC(g)): the number of WiFi-AP devices that detected the WiFi device with network-card address MAC(g) in the k-th acquisition period;
NoAp(i): the number of the i-th WiFi-AP device that detected the WiFi device with network-card address MAC(g), NoAp(i) ∈ {1, 2, …, W};
RSSI(NoAp(i), MAC(g)): the strength, measured at the WiFi device with network-card address MAC(g), of the WiFi signal transmitted by the AP device numbered NoAp(i);
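The regrouping in step 3-2-2 amounts to inverting AP-centric records into device-centric ones. A sketch, assuming each APdata(k) record is a (wf, mac, rssi) tuple (the record layout is an assumption):

```python
# Sketch of step 3-2-2: regroup APdata(k) by device MAC address to form
# the sets Te(MAC(g))(k); the record layout (wf, mac, rssi) is assumed.
from collections import defaultdict

def group_by_device(ap_data):
    te = defaultdict(list)              # MAC(g) -> [(NoAp(i), RSSI), ...]
    for wf, mac, rssi in ap_data:
        te[mac].append((wf, rssi))
    return te
```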
step 3-2-3, WiFi device positioning: using Te(MAC(g))(k), calculate the position coordinates Po(MAC(g))(k) of each WiFi device in the k-th acquisition period, with abscissa x(MAC(g)) and ordinate y(MAC(g));
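The patent does not fix the positioning algorithm for step 3-2-3. One common choice is an RSSI-weighted centroid of the detecting APs, sketched below; the AP coordinate table ap_pos and the dBm-to-weight offset are assumptions of this sketch, not the patent's method.

```python
# Illustrative positioning for step 3-2-3: estimate Po(MAC(g))(k) as the
# RSSI-weighted centroid of the APs that heard the device. ap_pos maps an
# AP number to its known (x, y) coordinates.
def locate_device(te_entry, ap_pos):
    weighted = []
    for no_ap, rssi in te_entry:        # rssi is a negative dBm value
        w = max(rssi + 100.0, 1.0)      # crude positive weight (assumed)
        weighted.append((ap_pos[no_ap], w))
    total = sum(w for _, w in weighted)
    x = sum(p[0] * w for p, w in weighted) / total
    y = sum(p[1] * w for p, w in weighted) / total
    return x, y
```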
step 3-2-4, counting WiFi devices in the scene grids: substitute the coordinates Po(MAC(g))(k) of each WiFi device into formula (1) and traverse the grid numbers to find the number for which the inequalities hold; that number identifies the grid containing the coordinates. Count the WiFi-device coordinates inside each numbered grid, denoted Nwfg(wg)(k), where wg is the grid number and k the acquisition period, and collect these counts into a set Nwf(k):

Nwf(k) = {Nwfg(1)(k), Nwfg(2)(k), …, Nwfg(wg)(k), …, Nwfg(Nwg)(k)}

wherein:
Nwfg(wg)(k): the number of WiFi devices, obtained from WiFi positioning, in the grid numbered wg in the k-th acquisition period;
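Counting step 3-2-4 then reduces to binning each estimated position with the point_to_grid helper sketched under formula (1):

```python
# Sketch of step 3-2-4: bin located devices into grids to obtain Nwf(k),
# reusing the illustrative point_to_grid helper from the formula (1) sketch.
def count_devices_per_grid(positions, XL, XR, YD, YU):
    counts = [0] * len(XL)              # counts[wg - 1] = Nwfg(wg)(k)
    for x, y in positions:
        wg = point_to_grid(x, y, XL, XR, YD, YU)
        if wg is not None:
            counts[wg - 1] += 1
    return counts
```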
step 3-3, video data acquisition and processing, specifically comprising the following steps:
step 3-3-1, as shown in fig. 6, setting the coordinate regions of the grids on the photographs: start all deployed cameras and, in the photograph taken by each camera, outline the regions of the scene grids it covers. For each grid, record the coordinates of its 4 vertices in the photograph, in pixels, denoted WGI_wg, where wg is the grid number:

WGI_wg = {(x_wg_sx_1, y_wg_sx_1), (x_wg_sx_2, y_wg_sx_2), (x_wg_sx_3, y_wg_sx_3), (x_wg_sx_4, y_wg_sx_4)}   (5)

wherein:
x_wg_sx_1: the x-coordinate of the 1st vertex of the grid numbered wg in the photograph taken by the camera numbered sx;
y_wg_sx_1: the y-coordinate of the 1st vertex of the grid numbered wg in the photograph taken by the camera numbered sx;
x_wg_sx_2: the x-coordinate of the 2nd vertex of the grid numbered wg in the photograph taken by the camera numbered sx;
y_wg_sx_2: the y-coordinate of the 2nd vertex of the grid numbered wg in the photograph taken by the camera numbered sx;
x_wg_sx_3: the x-coordinate of the 3rd vertex of the grid numbered wg in the photograph taken by the camera numbered sx;
y_wg_sx_3: the y-coordinate of the 3rd vertex of the grid numbered wg in the photograph taken by the camera numbered sx;
x_wg_sx_4: the x-coordinate of the 4th vertex of the grid numbered wg in the photograph taken by the camera numbered sx;
y_wg_sx_4: the y-coordinate of the 4th vertex of the grid numbered wg in the photograph taken by the camera numbered sx;
step 3-3-2, image acquisition and identification: all cameras shoot simultaneously; after shooting, each photograph is named with its camera number and training period. Combining the grid regions in each photograph from step 3-3-1 with an image-recognition method, the number of people in each grid in the training period is identified, and these counts form a new set Nps(k), where k denotes the k-th acquisition period:

Nps(k) = {Npsg(1)(k), Npsg(2)(k), …, Npsg(wg)(k), …, Npsg(Nwg)(k)}

wherein:
Npsg(wg)(k): the number of people, obtained by image-recognition techniques, in the grid numbered wg in the k-th acquisition period.
For the image-recognition method, reference may be made to Liu Hui, Zhu Chuang, Zhang Tianyong, Chen Yu, "A head-feature-based human head detection method", Optoelectronic Technology, 2014, 34(1): 21-25; Gu Dejun, Wu Tiejun, "A people-counting method based on head features", Machine Building & Automation, 2010, 39(4): 134-138; or other image-recognition methods. The higher the recognition accuracy the better; preferably the recognition error is kept within 10%.
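Whatever head detector is used, the per-grid tally of step 3-3-2 is a point-in-polygon count over the quadrilaterals WGI_wg recorded in step 3-3-1. A sketch, assuming the detector returns head centers as pixel points (the detection itself is outside this sketch):

```python
# Sketch of the per-grid tally in step 3-3-2: count detected head centers
# falling inside each grid quadrilateral WGI_wg; detection is assumed done.
from matplotlib.path import Path

def count_heads_per_grid(head_points, wgi):
    """head_points: [(x, y), ...] pixel coordinates of detected heads;
    wgi: {wg: [(x1, y1), ..., (x4, y4)]} from step 3-3-1."""
    polys = {wg: Path(verts) for wg, verts in wgi.items()}
    counts = {wg: 0 for wg in wgi}      # counts[wg] = Npsg(wg)(k)
    for pt in head_points:
        for wg, poly in polys.items():
            if poly.contains_point(pt):
                counts[wg] += 1
                break                   # a head belongs to one grid only
    return counts
```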
Step 4, designing the recurrent neural network RNN (Recurrent Neural Network)

As shown in fig. 5, the method uses an RNN to calculate the crowd distribution. The number of input-layer nodes is set to Nwg+1, the input values being denoted IN(k); the number of output-layer nodes is set to Nwg, the output values being denoted OUT(k); the number of hidden-layer nodes is set to Nwg+1, the node values being denoted H(k). The activation function of both the hidden layer and the output layer is the ReLU function f(x) = max(0, x), where max takes the larger of 0 and x, and x is the argument of the function. The output values of the layers of the RNN are expressed as follows:

IN(k)=[in(1)(k),in(2)(k),in(3)(k),…,in(Nwg+1)(k)]^T   (6)
H(k) = ReLU(U·IN(k) + A + Q·H(k−1))   (7)
OUT(k) = ReLU(V·H(k) + B)   (8)
wherein:
k: the calculation period of the RNN;
in(wg)(k): the input value of the input-layer node numbered wg in the k-th acquisition period;
U: the (Nwg+1) × (Nwg+1) input weight matrix, whose values are obtained by the subsequent RNN training;
Q: the (Nwg+1) × (Nwg+1) feedback weight matrix, whose values are obtained by the subsequent RNN training;
A: the (Nwg+1) × 1 input offset coefficient matrix, whose values are obtained by the subsequent RNN training;
V: the Nwg × (Nwg+1) output weight matrix, whose values are obtained by the subsequent RNN training;
B: the Nwg × 1 output offset coefficient matrix, whose values are obtained by the subsequent RNN training;
ReLU: the activation function, f(x) = max(0, x), where max takes the larger of 0 and x and x is the argument of the function;
H(0): the value of H(k−1) when the acquisition period k is 1, the all-zero matrix;
out(Nwg)(k): the output value of the output-layer node numbered Nwg in the k-th acquisition period;
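A NumPy sketch of one forward step of formulas (6) to (8), under the layer sizes given above (input and hidden width Nwg+1, output width Nwg); the weights are those produced by the training of step 6, and the function names are illustrative.

```python
# Minimal sketch of one forward step of the RNN in formulas (6)-(8).
import numpy as np

def relu(x):
    return np.maximum(0.0, x)                   # f(x) = max(0, x)

def rnn_step(IN_k, H_prev, U, Q, V, A, B):
    """IN_k: (Nwg+1,) input vector; H_prev: (Nwg+1,) previous hidden state."""
    H_k = relu(U @ IN_k + A + Q @ H_prev)       # formula (7)
    OUT_k = relu(V @ H_k + B)                   # formula (8)
    return H_k, OUT_k
```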
Step 5, setting the calculation and training periods

The calculation period is the period with which crowd distribution is calculated by the RNN; the training period is the period with which the RNN is retrained. Set the RNN-based crowd-distribution calculation period to T and the RNN training period to Txl, where Txl is X times T (Txl = X·T):
T can be set by the real-time requirement of the crowd-distribution calculation: the higher the requirement, the shorter T, and conversely the longer; here T is set to 10 seconds;
Txl is generally an integer multiple of T, set to X times T; the value of X is determined by the time consumed by image processing and model training in the actual project;
wherein:
Nyb: the number of sample-data groups for RNN training, determined by the required prediction precision of the RNN; the higher the precision requirement, the larger the value, and conversely the smaller; here 100 is taken;
T_pc: the time consumed by the video monitoring to capture, transmit and recognize the images of the Nyb groups of sample data;
T_mt: the time consumed by RNN training, obtained from actual engineering tests;
T_a: margin time, generally taken as about 5 to 10 percent of (T_pc + T_mt);
Step 6, training the RNN: when the current time falls in an RNN training period, data are collected cyclically according to step 3 with period T, and Nyb groups of sample data are calculated, denoted YB_Nzj(yb), YB_Nwf(yb) and YB_Mps(yb), yb ∈ {1, 2, …, Nyb}, where yb is the sample-period number:

wherein:
YB_Nzj(yb): the number of people in the scene in the yb-th group of sample data, obtained from gate-data statistics according to step 3;
Cin(zj, yb): the number of entries counted by the gate numbered zj in the yb-th group of sample data, collected according to step 3;
Cout(zj, yb): the number of exits counted by the gate numbered zj in the yb-th group of sample data, collected according to step 3;
YB_Nwf(yb): the number of WiFi devices in each grid of the scene in the yb-th group of sample data, obtained by WiFi positioning according to step 3;
Nwfg(wg)(yb): the number of WiFi devices in the grid numbered wg in the yb-th group of sample data, obtained by WiFi positioning according to step 3;
YB_Mps(yb): the number of people in each grid of the scene in the yb-th group of sample data, obtained by the image-recognition technique according to step 3;
Npsg(wg)(yb): the number of people in the wg-th grid in the yb-th group of sample data, obtained by the image-recognition technique according to step 3;
With these sample data, proceed as follows:
step 6-1, initializing data, specifically:
step 6-1-1, assign the value of IN(yb) according to formula (10):

IN(yb) = [YB_Nzj(yb), Nwfg(1)(yb), Nwfg(2)(yb), …, Nwfg(Nwg)(yb)]^T   (10)

step 6-1-2, initialize all elements of the matrices U, Q, V, A, B of the RNN to random numbers between 0 and 1;
step 6-1-3, set the elements of H(0) and OUT(yb) of the RNN to 0, yb ∈ {1, 2, …, Nyb};
step 6-1-4, set the target variance EE of the RNN training; EE is a constant representing the network precision: the smaller the value, the higher the calculation precision, and conversely the lower; it can be set according to engineering requirements;
step 6-2, calculate the error matrix Err(yb), taking YB_Mps(yb) as the column vector [Npsg(1)(yb), …, Npsg(Nwg)(yb)]^T:

Err(yb) = YB_Mps(yb) − OUT(yb)   (11)

step 6-3, train on Err(yb) with the BPTT algorithm, the training step set to 0.001, and update the U, Q, V values after training;
step 6-4, substitute IN(yb), H(0), A and the updated U, Q values into formula (7) to obtain H(yb):

H(yb) = ReLU(U·IN(yb) + A + Q·H(yb−1))   (12)

step 6-5, substitute H(yb), B and the updated V values into formula (8) in order to obtain OUT(yb):

OUT(yb) = ReLU(V·H(yb) + B)   (13)

step 6-6, calculate the network variance δ(k)²:

δ(k)² = (1/Nyb) · Σ_{yb=1}^{Nyb} ‖Err(yb)‖²   (14)

step 6-7, if δ(k)² ≤ EE, the RNN training is finished, and the U, Q, V values updated in step 6-3 together with the A, B values generated in step 6-1 are substituted into formulas (7) and (8); otherwise jump to step 6-2 and continue the training steps;
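For concreteness, the sketch below trains the network of formulas (7) and (8) by backpropagation through time with step 0.001 under a squared-error loss; the mean-squared stopping criterion standing in for formula (14) and the gradient bookkeeping are assumptions of this sketch, not the patent's prescribed procedure.

```python
# A minimal BPTT training sketch for the RNN of formulas (7)-(8); names
# and details are illustrative.
import numpy as np

def train_rnn(IN, TGT, Nwg, EE, lr=0.001, max_iter=10000, seed=0):
    """IN: (Nyb, Nwg+1) sample inputs; TGT: (Nyb, Nwg) image-recognition counts."""
    rng = np.random.default_rng(seed)
    Nyb, n_in = IN.shape
    U = rng.random((n_in, n_in)); Q = rng.random((n_in, n_in))  # step 6-1-2
    V = rng.random((Nwg, n_in))
    A = rng.random(n_in); B = rng.random(Nwg)
    relu = lambda z: np.maximum(0.0, z)
    for _ in range(max_iter):
        H = np.zeros((Nyb + 1, n_in))                    # H[0] = 0, step 6-1-3
        OUT = np.zeros((Nyb, Nwg))
        for t in range(Nyb):                             # forward pass
            H[t + 1] = relu(U @ IN[t] + A + Q @ H[t])    # formula (7)
            OUT[t] = relu(V @ H[t + 1] + B)              # formula (8)
        Err = TGT - OUT                                  # error matrix, step 6-2
        if np.mean(np.sum(Err ** 2, axis=1)) <= EE:      # variance test, step 6-7
            break
        dU = np.zeros_like(U); dQ = np.zeros_like(Q); dV = np.zeros_like(V)
        dA = np.zeros_like(A); dB = np.zeros_like(B)
        carry = np.zeros(n_in)                           # gradient from step t+1
        for t in range(Nyb - 1, -1, -1):                 # backward through time
            g_out = -2.0 * Err[t] * (OUT[t] > 0)         # through output ReLU
            dV += np.outer(g_out, H[t + 1]); dB += g_out
            g_h = (V.T @ g_out + carry) * (H[t + 1] > 0)
            dU += np.outer(g_h, IN[t]); dA += g_h
            dQ += np.outer(g_h, H[t]); carry = Q.T @ g_h
        for W, dW in ((U, dU), (Q, dQ), (V, dV), (A, dA), (B, dB)):
            W -= lr * dW                                 # training step 0.001
    return U, Q, V, A, B
```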
Step 7, calculating crowd distribution with the RNN:
if the current time is the start of an RNN-based crowd-distribution calculation period, the crowd distribution is calculated as follows:
step 7-1, obtain through step 3 the total number of people in the scene for the current calculation period and the number of WiFi devices in each numbered grid of the scene from WiFi positioning, denoted JS_Nzj(js) and JS_Nwf(js), where js is the calculation-period number:

wherein:
JS_Nzj(js): the number of people currently in the scene in the js-th calculation period, obtained from gate-data statistics according to step 3;
Cin(zj, js): the number of entries counted by the gate numbered zj in the js-th calculation period;
Cout(zj, js): the number of exits counted by the gate numbered zj in the js-th calculation period;
JS_Nwf(js): the number of WiFi devices in each grid of the current scene in the js-th calculation period, calculated by WiFi positioning according to step 3;
Nwfg(wg)(js): the current number of WiFi devices in the grid numbered wg in the js-th calculation period, calculated by WiFi positioning according to step 3;
step 7-2, set the value of IN(js) as in formula (15):
IN(js)=[JS_Nzj(js),Nwfg(1)(js),Nwfg(2)(js),…,Nwfg(Nwg)(js)] T (15)
step 7-3, substitute IN(js) into formulas (7) and (8) in order to obtain OUT(js), as shown in formulas (16) and (17):

H(js) = ReLU(U·IN(js) + A + Q·H(js−1))   (16)
OUT(js) = ReLU(V·H(js) + B)   (17)

Note: if js is 1, the elements of the H(0) matrix are all 0;
step 7-4, calculate the number of people in each grid of the scene from the network output, as shown in the following formula:

wherein:
N(wg)(js): the number of people, in the js-th calculation period, in the grid numbered wg, wg ∈ {1, 2, …, Nwg};
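Step 7 then reduces to one forward step per calculation period. A sketch, reusing the rnn_step helper from the step 4 sketch; rounding the outputs to whole people is an illustrative choice, since the final formula for N(wg)(js) is not reproduced in the text above.

```python
# Sketch of step 7: one crowd-distribution computation per period js.
import numpy as np

def crowd_distribution(js_nzj, nwfg, H_prev, U, Q, V, A, B):
    """js_nzj: JS_Nzj(js); nwfg: length-Nwg array of Nwfg(wg)(js)."""
    IN_js = np.concatenate(([js_nzj], nwfg))               # formula (15)
    H_js, OUT_js = rnn_step(IN_js, H_prev, U, Q, V, A, B)  # formulas (16)-(17)
    N = np.rint(OUT_js).astype(int)                        # N(wg)(js), rounding assumed
    return H_js, N
```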
Examples

The method of the invention and a WiFi-based radio-frequency calculation method were each applied in a 4,000-square-meter stadium, with 2 m × 2 m square grids for the crowd-distribution calculation and a calculation period of 2 minutes. The results of 100 consecutive periods were selected for analysis, and the calculation error of each method in each grid was obtained, detailed in Table 1. The data in Table 1 show that the method of the invention is more accurate and performs stably, while the WiFi-based radio-frequency method has larger errors and larger fluctuation from area to area. The results of each method in one randomly selected period, together with the real crowd-distribution data, are shown as spatial distribution diagrams in figs. 7, 8 and 9: the crowd distribution calculated by the method of the invention is very close to the real distribution, while the result of the WiFi-based radio-frequency method is clearly larger than the actual one, mainly because of the influence of pseudo MAC addresses of WiFi devices, and its crowd-distribution regions are also inaccurate.
TABLE 1

Contrast parameter   | Method of the invention | WiFi-based radio-frequency method
Grid average error   | 0.36                    | 5.31
Grid maximum error   | 7                       | 31
Grid minimum error   | 0                       | 0
The invention provides a high-precision, low-delay method for calculating crowd distribution in large indoor stadiums, and there are many specific ways to implement the technical scheme; the above is only a preferred embodiment of the invention. It should be pointed out that those skilled in the art can make a number of improvements and refinements without departing from the principle of the invention, and these are also regarded as falling within the protection scope of the invention. Components not explicitly described in this embodiment can be implemented with the prior art.

Claims (3)

1. A high-precision low-delay large-scale indoor stadium crowd distribution calculation method, characterized by comprising the following steps:
step 1, scene gridding;
step 2, equipment deployment;
step 3, sample data acquisition and processing;
step 4, designing a recurrent neural network (RNN);
step 5, setting the calculation and training periods;
step 6, training the RNN;
step 7, calculating crowd distribution with the trained RNN;
the step 1 comprises the following steps:
step 1-1, determining a coordinate system;
step 1-2, determining the size of the grid;
step 1-3, drawing grids;
step 1-4, filtering the grids;
step 1-1 includes: taking due north as the positive y-axis direction, letting the x-axis coordinate of the westernmost edge or vertex of the scene be 0 and the y-axis coordinate of the southernmost edge or vertex be 0, thereby determining a plane coordinate system; obtaining the maximum of the scene in the y-axis direction and the maximum in the x-axis direction, denoted y_cj_max and x_cj_max respectively;
step 1-2 includes: choosing square grids to grid the scene, the side length of a grid being denoted a;
step 1-3 includes: in the plane coordinate system determined in step 1-1, drawing the rectangle with vertices (0, 0), (x_cj_max, 0), (0, y_cj_max) and (x_cj_max, y_cj_max); inside the rectangle drawing the straight lines y = i·a and x = j·a, where the coefficients i and j take the values i ∈ {1, 2, …, ⌈y_cj_max/a⌉} and j ∈ {1, 2, …, ⌈x_cj_max/a⌉}; these lines divide the rectangle into ⌈y_cj_max/a⌉ × ⌈x_cj_max/a⌉ squares, which are the grids, completing the drawing of the grids;
step 1-4 includes: removing the grids that have no intersection with the scene; supposing Nwg grids remain, recording for each grid the y-axis coordinates of its upper and lower edges and the x-axis coordinates of its left and right edges, the upper-edge y-coordinate of the wg-th grid being denoted YU(wg), the lower-edge y-coordinate YD(wg), the left-edge x-coordinate XL(wg) and the right-edge x-coordinate XR(wg), wg ∈ {1, 2, 3, …, Nwg}; the region covered by the wg-th grid is given by formula (1):

XL(wg) ≤ x ≤ XR(wg), YD(wg) ≤ y ≤ YU(wg)   (1)

wherein:
x represents the x-axis coordinate value of a point in the wg-th grid;
y represents the y-axis coordinate value of a point in the wg-th grid;
the step 2 comprises the following steps:
step 2-1, deploying, at the scene entrances, gates capable of recording the numbers of people entering and exiting and of communicating in real time; supposing Z gates are deployed, each numbered zj, zj ∈ {1, 2, 3, …, Z};
step 2-2, deploying W WiFi-AP devices in the scene, each WiFi-AP device numbered wf, wf ∈ {1, 2, 3, …, W};
step 2-3, deploying S video-monitoring cameras in the scene, each camera numbered sx, sx ∈ {1, 2, …, S};
the step 3 comprises the following steps:
step 3-1, gate data acquisition and processing, specifically comprising the following steps:
step 3-1-1, collecting gate data: as people enter and exit the scene through the gates, each gate records the counts; from the start of collection, the exit and entry counts of each gate are denoted Cout(zj, k) and Cin(zj, k), where Cout(zj, k) is the number of people who exited through the gate numbered zj in the k-th collection period and Cin(zj, k) is the number of people who entered through that gate in the k-th collection period;
step 3-1-2, gate data statistics: subtracting the total exits from the total entries collected by all gates gives the total number of people currently in the scene, denoted Nzj(k):

Nzj(k) = Σ_{zj=1}^{Z} [Cin(zj, k) − Cout(zj, k)]   (2)
step 3-2, wiFi data acquisition and processing, which specifically comprises the following steps:
step 3-2-1, WiFi data acquisition: when a person carrying a WiFi device such as a mobile phone enters the scene with the device's WiFi switch on, nearby WiFi-AP devices detect the device's MAC address together with the strength, received at the device's position, of the WiFi signal the AP transmits; from the start of data acquisition these records are collected into a set, denoted APdata(k):

APdata(k) = {(wf, MAC(i), RSSI(wf, MAC(i))) | wf ∈ {1, 2, 3, …, W}, i ∈ {1, 2, 3, …, Num(wf)}}   (3)

wherein:
APdata(k): the set of WiFi-device data collected by the WiFi-AP devices in the k-th acquisition period;
W: the number of WiFi-AP devices deployed in the scene;
wf: the number of a WiFi-AP device, wf ∈ {1, 2, 3, …, W};
MAC(i): the network-card address of a WiFi device, i ∈ {1, 2, 3, …, Num(wf)};
RSSI(wf, MAC(i)): the strength, measured at the WiFi device with network-card address MAC(i), of the WiFi signal transmitted by the WiFi-AP device numbered wf;
Num(wf): the number of WiFi terminals detected in this acquisition period by the WiFi-AP device numbered wf;
step 3-2-2, WiFi data processing: grouping the elements of APdata(k) by WiFi-device network-card address, elements with the same address forming one group, supposing there are Ng groups; taking each group's network-card address as its set label and forming a new set whose elements are the numbers of the WiFi-AP devices that detected the device together with the measured strengths of the signals those APs transmit; denoting the set of the g-th group, with network-card address MAC(g), by Te(MAC(g))(k), g ∈ {1, 2, …, Ng}, k being the acquisition period:

Te(MAC(g))(k) = {(NoAp(i), RSSI(NoAp(i), MAC(g))) | i ∈ {1, 2, …, NumAp(MAC(g))}}   (4)

wherein:
NumAp(MAC(g)): the number of WiFi-AP devices that detected the WiFi device with network-card address MAC(g) in the k-th acquisition period;
NoAp(i): the number of the i-th WiFi-AP device that detected the WiFi device with network-card address MAC(g), NoAp(i) ∈ {1, 2, …, W};
RSSI(NoAp(i), MAC(g)): the strength, measured at the WiFi device with network-card address MAC(g), of the WiFi signal transmitted by the AP device numbered NoAp(i);
step 3-2-3, WiFi device positioning: using Te(MAC(g))(k), calculating the position coordinates Po(MAC(g))(k) of each WiFi device in the k-th acquisition period, with abscissa x(MAC(g)) and ordinate y(MAC(g));
step 3-2-4, counting WiFi devices in the scene grids: substituting the coordinates Po(MAC(g))(k) of each WiFi device into formula (1) and traversing the grid numbers to find the number for which the inequalities hold, which identifies the grid containing the coordinates; counting the WiFi-device coordinates inside each numbered grid, denoted Nwfg(wg)(k), where wg is the grid number and k the acquisition period, and collecting these counts into a set Nwf(k):

Nwf(k) = {Nwfg(1)(k), Nwfg(2)(k), …, Nwfg(wg)(k), …, Nwfg(Nwg)(k)}

wherein:
Nwfg(wg)(k): the number of WiFi devices, obtained from WiFi positioning, in the grid numbered wg in the k-th acquisition period;
step 3-3, video data acquisition and processing, specifically comprising the following steps:
step 3-3-1, setting the coordinate regions of the grids on the photographs: starting all deployed cameras and, in the photograph taken by each camera, outlining the regions of the scene grids it covers; recording, for each grid, the coordinates of its 4 vertices in the photograph, in pixels, denoted WGI_wg, where wg is the grid number:

WGI_wg = {(x_wg_sx_1, y_wg_sx_1), (x_wg_sx_2, y_wg_sx_2), (x_wg_sx_3, y_wg_sx_3), (x_wg_sx_4, y_wg_sx_4)}   (5)

wherein:
x_wg_sx_1: the x-coordinate of the 1st vertex of the grid numbered wg in the photograph taken by the camera numbered sx;
y_wg_sx_1: the y-coordinate of the 1st vertex of the grid numbered wg in the photograph taken by the camera numbered sx;
x_wg_sx_2: the x-coordinate of the 2nd vertex of the grid numbered wg in the photograph taken by the camera numbered sx;
y_wg_sx_2: the y-coordinate of the 2nd vertex of the grid numbered wg in the photograph taken by the camera numbered sx;
x_wg_sx_3: the x-coordinate of the 3rd vertex of the grid numbered wg in the photograph taken by the camera numbered sx;
y_wg_sx_3: the y-coordinate of the 3rd vertex of the grid numbered wg in the photograph taken by the camera numbered sx;
x_wg_sx_4: the x-coordinate of the 4th vertex of the grid numbered wg in the photograph taken by the camera numbered sx;
y_wg_sx_4: the y-coordinate of the 4th vertex of the grid numbered wg in the photograph taken by the camera numbered sx;
step 3-3-2, image acquisition and identification: all cameras shoot simultaneously; after shooting, each photograph is named with its camera number and training period; combining the grid regions in each photograph from step 3-3-1 with an image-recognition method, the number of people in each grid in the training period is identified, and these counts form a new set Nps(k), where k denotes the k-th acquisition period:

Nps(k) = {Npsg(1)(k), Npsg(2)(k), …, Npsg(wg)(k), …, Npsg(Nwg)(k)}, wg ∈ {1, 2, …, Nwg};

wherein:
Npsg(wg)(k): the number of people, obtained by image-recognition techniques, in the grid numbered wg in the k-th acquisition period.
2. The method of claim 1, wherein step 4 comprises: the number of nodes of the RNN input layer is set to Nwg+1, expressed by IN(k); the number of output-layer nodes is set to Nwg, expressed by OUT(k); the number of hidden-layer nodes is set to Nwg+1, each node value expressed by H(k); the activation functions of the hidden layer and the output layer are both the ReLU function; the output values of the layers of the RNN are respectively expressed as follows:

IN(k)=[in(1)(k),in(2)(k),in(3)(k),…,in(Nwg+1)(k)]^T   (6)
H(k) = ReLU(U·IN(k) + A + Q·H(k−1))   (7)
OUT(k) = ReLU(V·H(k) + B)   (8)
wherein:
k: the calculation period of the RNN;
in(wg)(k): the input value of the input-layer node numbered wg in the k-th acquisition period;
U: the (Nwg+1) × (Nwg+1) input weight matrix;
Q: the (Nwg+1) × (Nwg+1) feedback weight matrix;
V: the Nwg × (Nwg+1) output weight matrix;
A: the (Nwg+1) × 1 input offset coefficient matrix;
B: the Nwg × 1 output offset coefficient matrix;
H(0): the value of H(k−1) when the acquisition period k is 1, the all-zero matrix;
ReLU: the activation function, f(x) = max(0, x), where max takes the larger of 0 and x, x being the argument of the function;
out(Nwg)(k): the output value of the output-layer node numbered Nwg of the RNN in the k-th acquisition period.
3. The method of claim 2, wherein step 5 comprises: setting the RNN-based crowd-distribution calculation period to T and the RNN training period to Txl, Txl being X times T (Txl = X·T):
wherein: Nyb: the number of sample-data groups for RNN training;
T_pc: the time consumed by the video monitoring to capture, transmit and recognize the images of the Nyb groups of sample data;
T_mt: the time consumed by RNN training;
T_a: margin time;
the step 6 comprises the following steps: when the current time is the training period of the cyclic neural network RNN, data is continuously and circularly acquired according to the step 3 by taking T as the period, and Nyb groups of sample data are obtained by calculation, and the sample data are respectively set as Yb_nzj (YB), yb_nwf (YB), yb_mps (YB), YB epsilon {1,2, …, nyb }, wherein YB is a sample period sequence number, and the values of the sample period sequence numbers are respectively expressed as follows:
wherein: yb ε {1,2, …, nyb }
Yb_nzj (YB): representing the number of people in the scene obtained by statistics of gate data according to the step 3 in yb sample data;
cin (zj, yb): representing the number of entering personnel counted by a gate with the number zj acquired according to the step 3 in yb sample data;
cout (zj, yb): representing the number of the personnel out of the field obtained by counting the gate with the number zj acquired according to the step 3 in the yb sample data;
yb_nwf (YB): representing the number of WiFi devices in each grid in the scene obtained by WiFi positioning calculation according to the step 3 in yb sample data;
nwfg (wg) (yb): the number of WiFi devices in the grid with the grid number wg obtained by the WiFi positioning meter in the step 3 is represented in the yb sample data;
yb_mps (YB): representing the number of people in each grid in the scene obtained by the image recognition technology according to the step 3 in the yb sample data;
npsg (wg) (yb): representing the number of people in the wg grid obtained by the image recognition technology according to the step 3 in the yb sample data;
the following steps are then performed:
step 6-1, initializing data, which specifically comprises the following steps:
step 6-1-1, assigning the value of IN (yb) according to the formula (10):
step 6-1-2, initializing all elements of each matrix U, Q, V, A, B of the cyclic neural network RNN to be random numbers between 0 and 1;
step 6-1-3, setting the values of the elements H (0) and OUT (yb) of the cyclic neural network RNN to 0;
step 6-1-4, setting a training target variance value EE of the cyclic neural network RNN;
step 6-2, calculating the value of the error matrix Err (yb), wherein the calculation formula is as follows:
step 6-3, applying Err (yb) to train by using a BTPP algorithm, and updating U, Q, V value after training is completed;
step 6-4, substituting IN (yb), H (0), A and updated U, Q values into formula (7) to obtain H (yb):
step 6-5, substituting H (yb), B and updated V values into formula (8) in sequence to obtain the value of OUT (yb):
step 6-6, calculating the network variance using delta (k) 2 The values are calculated as follows:
step 6-7 if delta (k) 2 If the value of (a) is less than or equal to EE, the RNN training of the recurrent neural network is finished, the U, Q, V updated in the step 6-3 and the A, B value generated in the step 6-1 are updated into formulas (7) and (8), otherwise, the training step is continuously executed by jumping to the step 6-2;
the step 7 comprises the following steps:
if the current time is the starting time of the population distribution calculation period based on the cyclic neural network RNN, the population distribution calculation is performed according to the following steps:
step 7-1, obtaining the total number of people in the scene of the current calculation period and the number of WiFi devices in each numbered grid in the scene obtained based on WiFi positioning calculation through step 3, and setting the total number of people in the scene and the number of WiFi devices in each numbered grid as JS_Nzj (JS), JS_nwf (JS), wherein JS is the sequence number of the calculation period, and the value of JS is expressed as follows:
wherein:
js_nzj (JS): the js calculation period is represented, and the number of people in the current scene is obtained through gate data statistics according to the step 3;
cin (zj, js): representing the number of entering personnel counted by a gate with the number zj acquired in the js calculation period;
cout (zj, js): representing the number of the personnel on the scene obtained by statistics of the gate with the number zj acquired in the js calculation period;
js_nwf (JS): the js calculation period is represented, and the number of WiFi devices in each grid in the current scene is calculated through WiFi positioning according to the step 3;
nwfg (wg) (js): the js calculation period is represented, and the number of current WiFi devices in the grid with the number wg is obtained through WiFi positioning calculation according to the step 3;
step 7-2, setting the value of IN(js) as in formula (15):

IN(js)=[JS_Nzj(js),Nwfg(1)(js),Nwfg(2)(js),…,Nwfg(Nwg)(js)]^T   (15)

step 7-3, substituting IN(js) into formulas (7) and (8) in order to obtain OUT(js), as shown in formulas (16) and (17):

H(js) = ReLU(U·IN(js) + A + Q·H(js−1))   (16)
OUT(js) = ReLU(V·H(js) + B)   (17)

step 7-4, calculating the number of people in each grid of the scene from the network output, as shown in the following formula:
where N(wg)(js) represents the number of people, in the js-th calculation period, in the grid numbered wg, wg ∈ {1, 2, …, Nwg}.
CN202110822900.4A 2021-07-21 2021-07-21 High-precision low-delay large-scale indoor stadium crowd distribution calculation method Active CN113420720B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110822900.4A CN113420720B (en) 2021-07-21 2021-07-21 High-precision low-delay large-scale indoor stadium crowd distribution calculation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110822900.4A CN113420720B (en) 2021-07-21 2021-07-21 High-precision low-delay large-scale indoor stadium crowd distribution calculation method

Publications (2)

Publication Number Publication Date
CN113420720A (en) 2021-09-21
CN113420720B (en) 2024-01-09

Family

ID=77721397

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110822900.4A Active CN113420720B (en) 2021-07-21 2021-07-21 High-precision low-delay large-scale indoor stadium crowd distribution calculation method

Country Status (1)

Country Link
CN (1) CN113420720B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114758421B (en) * 2022-06-14 2022-09-02 南京金科院大学科技园管理有限公司 University science and technology garden wisdom management system based on distributing type and big data

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106326937A (en) * 2016-08-31 2017-01-11 郑州金惠计算机系统工程有限公司 Convolutional neural network based crowd density distribution estimation method
CN107944327A (en) * 2016-10-10 2018-04-20 杭州海康威视数字技术股份有限公司 A kind of demographic method and device
WO2018187632A1 (en) * 2017-04-05 2018-10-11 Carnegie Mellon University Deep learning methods for estimating density and/or flow of objects, and related methods and software
CN109389044A (en) * 2018-09-10 2019-02-26 中国人民解放军陆军工程大学 Multi-scene crowd density estimation method based on convolutional network and multi-task learning
CN109508583A (en) * 2017-09-15 2019-03-22 杭州海康威视数字技术股份有限公司 A kind of acquisition methods and device of distribution trend
CN110084173A (en) * 2019-04-23 2019-08-02 精伦电子股份有限公司 Number of people detection method and device
CN110097109A (en) * 2019-04-25 2019-08-06 湖北工业大学 A kind of road environment obstacle detection system and method based on deep learning
CN110401978A (en) * 2019-07-19 2019-11-01 中国电子科技集团公司第五十四研究所 Indoor orientation method based on neural network and particle filter multi-source fusion
CN110598669A (en) * 2019-09-20 2019-12-20 郑州大学 Method and system for detecting crowd density in complex scene
WO2020023399A1 (en) * 2018-07-23 2020-01-30 Magic Leap, Inc. Deep predictor recurrent neural network for head pose prediction
CN111209892A (en) * 2020-01-19 2020-05-29 浙江中创天成科技有限公司 Crowd density and quantity estimation method based on convolutional neural network
CN111611749A (en) * 2020-05-25 2020-09-01 山东师范大学 RNN-based indoor crowd evacuation automatic guiding simulation method and system
CN112396218A (en) * 2020-11-06 2021-02-23 南京航空航天大学 Crowd flow prediction method based on urban area multi-mode fusion
CN113111778A (en) * 2021-04-12 2021-07-13 内蒙古大学 Large-scale crowd analysis method with video and wireless integration

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10423861B2 (en) * 2017-10-16 2019-09-24 Illumina, Inc. Deep learning-based techniques for training deep convolutional neural networks

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106326937A (en) * 2016-08-31 2017-01-11 郑州金惠计算机系统工程有限公司 Convolutional neural network based crowd density distribution estimation method
CN107944327A (en) * 2016-10-10 2018-04-20 杭州海康威视数字技术股份有限公司 A kind of demographic method and device
WO2018187632A1 (en) * 2017-04-05 2018-10-11 Carnegie Mellon University Deep learning methods for estimating density and/or flow of objects, and related methods and software
CN109508583A (en) * 2017-09-15 2019-03-22 杭州海康威视数字技术股份有限公司 A kind of acquisition methods and device of distribution trend
WO2020023399A1 (en) * 2018-07-23 2020-01-30 Magic Leap, Inc. Deep predictor recurrent neural network for head pose prediction
CN109389044A (en) * 2018-09-10 2019-02-26 中国人民解放军陆军工程大学 Multi-scene crowd density estimation method based on convolutional network and multi-task learning
CN110084173A (en) * 2019-04-23 2019-08-02 精伦电子股份有限公司 Number of people detection method and device
CN110097109A (en) * 2019-04-25 2019-08-06 湖北工业大学 A kind of road environment obstacle detection system and method based on deep learning
CN110401978A (en) * 2019-07-19 2019-11-01 中国电子科技集团公司第五十四研究所 Indoor orientation method based on neural network and particle filter multi-source fusion
CN110598669A (en) * 2019-09-20 2019-12-20 郑州大学 Method and system for detecting crowd density in complex scene
CN111209892A (en) * 2020-01-19 2020-05-29 浙江中创天成科技有限公司 Crowd density and quantity estimation method based on convolutional neural network
CN111611749A (en) * 2020-05-25 2020-09-01 山东师范大学 RNN-based indoor crowd evacuation automatic guiding simulation method and system
CN112396218A (en) * 2020-11-06 2021-02-23 南京航空航天大学 Crowd flow prediction method based on urban area multi-mode fusion
CN113111778A (en) * 2021-04-12 2021-07-13 内蒙古大学 Large-scale crowd analysis method with video and wireless integration

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Junbo Zhang et al., "Predicting citywide crowd flows using deep spatio-temporal residual networks", Artificial Intelligence; entire document *
徐洋 et al., "A regional crowd trajectory model based on WiFi positioning", Journal of Shandong University (Natural Science); entire document *
郭继昌 et al., "A people-counting method based on convolutional neural networks and density-distribution features", Journal of University of Electronic Science and Technology of China; entire document *
马骞, "A crowd-flow estimation scheme for dense scenes based on convolutional neural networks", Electronic Design Engineering; entire document *

Also Published As

Publication number Publication date
CN113420720A (en) 2021-09-21

Similar Documents

Publication Publication Date Title
CN109697435B (en) People flow monitoring method and device, storage medium and equipment
CN106651916B (en) A kind of positioning and tracing method and device of target
CN103688186B (en) The location-based modification of alignment system, method and computing device application program
Kouyoumdjieva et al. Survey of non-image-based approaches for counting people
CN106248107B (en) A kind of track deduction calibration method and device based on indoor earth magnetism path matching
CN106682592B (en) Image automatic identification system and method based on neural network method
CN108120436A (en) Real scene navigation method in a kind of iBeacon auxiliary earth magnetism room
CN109190508A (en) A kind of multi-cam data fusion method based on space coordinates
CN109540144A (en) A kind of indoor orientation method and device
CN111160243A (en) Passenger flow volume statistical method and related product
CN103679674A (en) Method and system for splicing images of unmanned aircrafts in real time
CN102054166B (en) A kind of scene recognition method for Outdoor Augmented Reality System newly
CN109410330A (en) One kind being based on BIM technology unmanned plane modeling method
CN106844492A (en) A kind of method of recognition of face, client, server and system
CN109357679B (en) Indoor positioning method based on significance characteristic recognition
CN113420720B (en) High-precision low-delay large-scale indoor stadium crowd distribution calculation method
CN109655790A (en) Multi-target detection and identification system and method based on indoor LED light source
CN108413966A (en) Localization method based on a variety of sensing ranging technology indoor locating systems
CN107607110A (en) A kind of localization method and system based on image and inertial navigation technique
CN110049441B (en) WiFi indoor positioning method based on deep ensemble learning
CN111929672A (en) Method and device for determining movement track, storage medium and electronic device
CN114972645A (en) Three-dimensional reconstruction method and device, computer equipment and storage medium
CN114155489A (en) Multi-device cooperative unmanned aerial vehicle flyer detection method, device and storage medium
CN105960011B (en) Indoor objects localization method based on Sensor Network and bayes method
CN103412963A (en) Optical remote sensing satellite data obtaining method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant