CN106203302A - Pedestrian detection and counting method combining vision and wireless sensing - Google Patents

Pedestrian detection and counting method combining vision and wireless sensing

Info

Publication number
CN106203302A
CN106203302A CN201610511828.2A CN201610511828A
Authority
CN
China
Prior art keywords
wireless
target
detection
pedestrian
vision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610511828.2A
Other languages
Chinese (zh)
Other versions
CN106203302B (en)
Inventor
屈桢深
李文瑞
周纪强
彭靖林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201610511828.2A priority Critical patent/CN106203302B/en
Publication of CN106203302A publication Critical patent/CN106203302A/en
Application granted granted Critical
Publication of CN106203302B publication Critical patent/CN106203302B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/255 Detecting or recognising potential candidate objects based on visual cues, e.g. shapes

Abstract

The invention relates to a pedestrian detection and counting method combining vision and wireless sensing, comprising the following steps: (1) monitoring image acquisition; (2) image information processing: obtaining the number of pedestrians in each frame and the coordinates of the bounding boxes marking the pedestrians' positions in the image; (3) wireless signal acquisition: using libtins to capture probe request frames and to parse each device's MAC address and signal strength; (4) wireless signal processing: obtaining a spatial model of the distance between the wireless detection device and the calibrated device; (5) coarse target matching: matching the detection targets of the camera with the detection targets of the wireless detection device; (6) fine target matching: evaluating the degree to which a camera detection target and a wireless detection target match from position information and motion vectors; (7) summarizing the data of the targets in the detection region, removing duplicate targets, merging the ID numbers of the camera detection targets and the wireless detection targets, and storing the unique ID number and real-time position information in a linked list.

Description

Pedestrian detection and counting method combining vision and wireless sensing
Technical field
The invention relates to a detection and counting method, and more particularly to a pedestrian detection and counting method that combines vision with wireless sensing.
Background art
Vision-based pedestrian detection is currently the mainstream approach in the field of object detection. Since the HOG feature detection method was proposed in 2005, machine vision has been applied ever more widely, and a large number of excellent algorithms and leading practitioners have emerged. Part-based object detection methods are the mainstream algorithms with the higher detection accuracy at present, and they have many advantages:
1. They use a rich feature model, divided into a root model and part models, and use a spring-deformation principle to compute the final score of a detection target, so they are robust to the shooting angle and to deformation of the human body.
2. They use a HOG pyramid to match the detection image layer by layer at multiple scales, which greatly improves the recall rate for pedestrians of different sizes.
3. They are insensitive to the occlusions that occur in practice and achieve good results in real applications.
With the continual improvement of living standards, more and more people carry mobile terminal devices such as mobile phones and tablet computers when travelling. The wireless signals emitted by these mobile terminal devices can therefore be collected to detect pedestrians indirectly, which complements vision-based detection, compensates for the camera's blind spots, and reduces the missed detections and detection failures caused by deliberate occlusion of targets in the image, large-scale deformation and uneven illumination.
Summary of the invention
The invention provides a pedestrian detection and counting method combining vision and wireless sensing, which collects the wireless signals emitted by mobile terminal devices to detect pedestrians indirectly. This complements vision-based detection, compensates for the camera's blind spots, and reduces the missed detections and detection failures caused by deliberate occlusion of targets in the image, large-scale deformation and uneven illumination.
To solve the above technical problem, the pedestrian detection and counting method combining vision and wireless sensing according to the invention is characterized by comprising the following steps:
(1) Monitoring image acquisition: a camera is used to capture video surveillance images, and the captured images are resized;
(2) Image information processing: the DPM method is applied to the image obtained in step (1) to extract the ROI in the image; color channel conversion is performed, the feature pyramid is computed, the root model and the part models are matched, the part scores are computed, and the target regions whose scores exceed a set threshold are screened and marked, obtaining the number of pedestrians in each frame and the coordinates of the bounding boxes marking the pedestrians' positions in the image;
(3) Wireless signal acquisition: the wireless detection devices arranged in the detection region search for probe request frames from the calibrated devices with wireless transmit/receive capability carried by pedestrians; libtins is used to capture the probe request frames and to parse each device's MAC address and signal strength;
(4) Wireless signal processing: according to the signal attenuation formula Los = 32.44 + 20 lg d + 20 lg f,
where Los is the propagation loss in dB, d is the distance in km and f is the operating frequency in MHz,
the signal strength in the detection region is modeled, obtaining a spatial model of the distance between the wireless detection device and the calibrated device;
(5) Coarse target matching: a spatial coordinate system relating the camera to the pedestrians' distances is established, the monitoring image is divided into corresponding regions in space, and the detection targets of the camera are matched with the detection targets of the wireless detection device, obtaining the correspondence between the sequence numbers of the same target in the two data domains;
(6) Fine target matching: according to the motion relation of a target between consecutive frames, Kalman estimation of the position at the next time step is performed simultaneously for the detection target of the camera and the detection target of the wireless detection device to obtain their displacement vectors; the degree to which the camera detection target and the wireless detection target match is evaluated from the position information and the motion vectors, and a matching value indicating whether the two targets are the same target is produced;
(7) The data of the targets in the detection region are summarized, duplicate targets are removed, the ID numbers of the camera detection targets and the wireless detection targets are merged, and the unique ID number and real-time position information are stored in a linked list.
As a further refinement of this scheme, in the pedestrian detection and counting method combining vision and wireless sensing according to the invention, the image in step (1) is resized to between VGA and QVGA.
As a further refinement of this scheme, in the pedestrian detection and counting method combining vision and wireless sensing according to the invention, the color channel conversion converts the RGB three-channel information into single-channel data; the number of root models in step (2) is 2, the number of part models is 5, the feature pyramid has 10 to 48 iteration layers, and the score threshold is set in the range [-1.8f, 1.0f].
As a further refinement of this scheme, in the pedestrian detection and counting method combining vision and wireless sensing according to the invention, the calibrated device in step (3) is a wireless network card or a wireless router.
As a further refinement of this scheme, in the pedestrian detection and counting method combining vision and wireless sensing according to the invention, the number of calibrated devices is not less than 2.
As a further refinement of this scheme, in the pedestrian detection and counting method combining vision and wireless sensing according to the invention, the spatial model in step (4) should be built with respect to the position of the wireless detection device, and the image is divided at an angle in the range [45°, 180°].
As a further refinement of this scheme, in the pedestrian detection and counting method combining vision and wireless sensing according to the invention, the allowable error of the matching value in step (6) should not be greater than 3 m.
The beneficial effects of the pedestrian detection and counting method combining vision and wireless sensing according to the invention are as follows: the wireless signals emitted by mobile terminal devices are collected to detect pedestrians indirectly, which complements vision-based detection, compensates for the camera's blind spots, and reduces the missed detections and detection failures caused by deliberate occlusion of targets in the image, large-scale deformation and uneven illumination.
Brief description of the drawings
The invention is described in further detail below with reference to the accompanying drawings and a specific embodiment.
Fig. 1 is a flow chart of the pedestrian detection and counting method combining vision and wireless sensing according to the invention.
Fig. 2 is a flow chart of the vision detection part of the invention.
Fig. 3 is a flow chart of the signal detection part of the invention.
Detailed description of the invention
The pedestrian detection and counting method combining vision and wireless sensing according to the invention is described with reference to Figs. 1, 2 and 3 and comprises the following steps:
(1) Monitoring image acquisition: a camera is used to capture video surveillance images, and the resize function of OpenCV is used to resize the captured images; the image is resized to between VGA and QVGA.
(2) Image information processing: the DPM method is applied to the image obtained in step (1) to extract the ROI in the image, and color channel conversion is performed using the conversion formula Grey = 0.30*red + 0.59*green + 0.11*blue, converting the RGB three-channel information into single-channel data; in the conversion formula, grey denotes the grey value, red the value of the R channel, green the value of the G channel and blue the value of the B channel (a minimal code sketch of this stage is given below).
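As an illustration of the resizing and channel-conversion stage just described, the following is a minimal OpenCV (C++) sketch; the input file name and the target size are assumptions made for the example, not values prescribed by the invention.

// Minimal sketch: resize a captured frame and convert it to a single grey channel.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat frame = cv::imread("frame.jpg");            // in practice the frame comes from the camera stream
    if (frame.empty()) return 1;

    cv::Mat resized;
    cv::resize(frame, resized, cv::Size(640, 480));      // between VGA (640x480) and QVGA (320x240)

    cv::Mat grey;
    cv::cvtColor(resized, grey, cv::COLOR_BGR2GRAY);     // OpenCV's weights (0.299, 0.587, 0.114) match the formula above
    return 0;
}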
To compute the feature pyramid, channel feature scaling is performed first.
The relation between the channel features at different scales is:
C_s = Ω(R(I, s))
where Ω is any translation-invariant feature mapping function, I is the given input image, and the image feature is C = Ω(I), called the channel feature; C_s is the channel feature of layer s, and R is the sampling function: the image I is first scaled to scale s and then the feature of I_s is computed.
The fast approximation
C_s^Ω ≈ R(C^Ω, s) · s^(-λ_Ω)
is then used, where s takes the values [1, 1/2, 1/4, ...]; the feature pyramid is computed and the root model and the part models are matched.
The part scores are computed:
score(x_0, y_0, l_0) = R_{0, l_0}(x_0, y_0) + Σ_{i=1}^{n} D_{i, l_0-λ}(2(x_0, y_0) + v_i) + b
Score denotes the total score of the target and involves three variables l_0, x_0, y_0: (x_0, y_0) is the coordinate of the top-left vertex of the root filter in the root feature map, and l_0 is the layer at which the root model lies. R_{0, l_0}(x_0, y_0) is the score of the root model and D_{i, l_0-λ}(2(x_0, y_0) + v_i) is the score of a part model. The parameter b is an offset introduced to align with the root model. 2(x_0, y_0) + v_i is the coordinate to which the i-th part filter is mapped in the part feature map; since the resolution of the part feature map is twice that of the root feature map, the coordinate must be multiplied by two. v_i denotes the offset relative to the top-left vertex of the root filter.
The score of a part filter is as follows:
D_{i,l}(x, y) = max_{dx, dy} ( R_{i,l}(x + dx, y + dy) - d_i · Φ_d(dx, dy) )
D_{i,l}(x, y) denotes the score of the part filter, i is the part index, d_i is the deformation coefficient vector, and Φ_d(dx, dy) = (dx, dy, dx², dy²) is the deformation feature of the offset; for example, if d_i = (0, 0, 1, 1) then d_i · Φ_d(dx, dy) is the common squared Euclidean distance. This step is called the distance transform.
Around the ideal position (x, y) of the part filter, i.e. within a certain range of the part's anchor position, the position with the best combination of matching score and deformation cost is found.
A threshold m is set and the scores of the targets in the detection result vector are examined in a loop; if a score is below m, the target is removed from the result, thereby screening and marking the target regions whose scores exceed the set threshold.
The size of the resulting detection vector is read to obtain the number of targets. The target coordinates are determined using non-maximum suppression; since the coordinates of a target are generally determined by several part detection boxes, box is defined as an m*n matrix, where m is the number of boxes and the first four columns of n hold the coordinates of each box in the form (x1, y1, x2, y2). This yields the number of pedestrians in each frame and the coordinates of the bounding boxes marking the pedestrians' positions in the image. The number of root models is 2, the number of part models is 5, the feature pyramid has 10 to 48 iteration layers, and the score threshold is set in the range [-1.8f, 1.0f].
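A minimal sketch of this screening and non-maximum suppression step is given below, assuming the detector has already produced a list of candidate boxes with scores; the Detection struct, the overlap criterion and the threshold value are illustrative assumptions, not the patent's actual data structures.

// Illustrative sketch: filter detections by score threshold and apply a simple
// greedy non-maximum suppression. Struct name and threshold are assumptions
// (the threshold is chosen inside the patent's stated range [-1.8f, 1.0f]).
#include <algorithm>
#include <vector>

struct Detection {
    float x1, y1, x2, y2;   // bounding box corners (x1, y1, x2, y2)
    float score;            // detection score
};

static float iou(const Detection& a, const Detection& b) {
    float ix = std::max(0.f, std::min(a.x2, b.x2) - std::max(a.x1, b.x1));
    float iy = std::max(0.f, std::min(a.y2, b.y2) - std::max(a.y1, b.y1));
    float inter = ix * iy;
    float uni = (a.x2 - a.x1) * (a.y2 - a.y1) + (b.x2 - b.x1) * (b.y2 - b.y1) - inter;
    return uni > 0.f ? inter / uni : 0.f;
}

std::vector<Detection> screen(std::vector<Detection> dets, float m = -0.5f) {
    // 1. Remove targets whose score is below the threshold m.
    dets.erase(std::remove_if(dets.begin(), dets.end(),
                              [m](const Detection& d) { return d.score < m; }),
               dets.end());
    // 2. Non-maximum suppression: keep the highest-scoring box among heavily overlapping ones.
    std::sort(dets.begin(), dets.end(),
              [](const Detection& a, const Detection& b) { return a.score > b.score; });
    std::vector<Detection> kept;
    for (const Detection& d : dets) {
        bool suppressed = false;
        for (const Detection& k : kept)
            if (iou(d, k) > 0.5f) { suppressed = true; break; }
        if (!suppressed) kept.push_back(d);
    }
    return kept;   // kept.size() is the pedestrian count for the frame
}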
(3) Wireless signal acquisition: the wireless detection devices arranged in the detection region search for probe request frames from the calibrated devices with wireless transmit/receive capability carried by pedestrians; libtins is used to capture the probe request frames and to parse each device's MAC address and signal strength. The calibrated device is a wireless network card or a wireless router, and the number of calibrated devices is not less than 2.
The names of the local network interfaces are obtained, the network card is set to monitor mode, and an analysis function is registered to run every time a frame is captured; each captured probe request frame is checked, the RadioTap header (which stores the signal strength) is extracted, the MAC address is extracted, and a MAC address that has not been seen before is stored in a set. The core of the capture code, using libtins (assuming using namespace Tins), is as follows:
// Obtain the names of the local network interfaces and save them in a set.
std::set<std::string> iface_set = Utils::network_interfaces();
// Open the chosen interface in promiscuous mode, with rfmon = true for monitor mode.
Sniffer sniffer(iface, Sniffer::PROMISC, "", true);
// Register the analysis function to be run every time a frame is captured.
sniffer.sniff_loop(make_sniffer_handler(this, &ProbeSniffer::callback));

// Inside ProbeSniffer::callback(PDU& pdu):
// Look for a ProbeRequest frame in the captured data.
const Dot11ProbeRequest* probe = pdu.find_pdu<Dot11ProbeRequest>();
// If a probe request frame has been captured:
if (probe && probe->to_ds() == 0 && probe->from_ds() == 0) {
    // Extract the RadioTap header; this header stores the signal strength.
    const RadioTap& radio = pdu.rfind_pdu<RadioTap>();
    // Extract the MAC address from the probe request.
    Dot11::address_type addr = probe->addr2();
    // Check whether this MAC address has been seen before.
    AddrSet_type::iterator it = addr_set.find(addr);
    // If the MAC address has not been seen before, store it in the set.
    if (it == addr_set.end())
        addr_set.insert(addr);
}
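For completeness, the fragments above could be wired together roughly as in the following self-contained sketch; the class layout, the interface name wlan0, the use of RadioTap::dbm_signal() to read the signal strength, and the older libtins Sniffer constructor mirrored from the text are assumptions of this sketch rather than the patent's actual implementation.

// Hypothetical wiring of the libtins capture fragments above (assumed class layout).
#include <tins/tins.h>
#include <iostream>
#include <set>
#include <string>

class ProbeSniffer {
public:
    explicit ProbeSniffer(const std::string& iface)
        : sniffer_(iface, Tins::Sniffer::PROMISC, "", true) {}   // rfmon = true: monitor mode

    void run() {
        sniffer_.sniff_loop(Tins::make_sniffer_handler(this, &ProbeSniffer::callback));
    }

private:
    bool callback(Tins::PDU& pdu) {
        const Tins::Dot11ProbeRequest* probe = pdu.find_pdu<Tins::Dot11ProbeRequest>();
        if (probe && probe->to_ds() == 0 && probe->from_ds() == 0) {
            const Tins::RadioTap& radio = pdu.rfind_pdu<Tins::RadioTap>();
            Tins::Dot11::address_type addr = probe->addr2();
            if (addr_set_.insert(addr).second) {
                // New device: report its MAC address and received signal strength (dBm).
                std::cout << addr << " " << static_cast<int>(radio.dbm_signal()) << std::endl;
            }
        }
        return true;   // keep sniffing
    }

    Tins::Sniffer sniffer_;
    std::set<Tins::Dot11::address_type> addr_set_;
};

int main() {
    ProbeSniffer sniffer("wlan0");   // assumed monitor-mode interface name
    sniffer.run();
    return 0;
}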
(4) Wireless signal processing: according to the signal attenuation formula Los = 32.44 + 20 lg d + 20 lg f,
where Los is the propagation loss in dB, d is the distance in km and f is the operating frequency in MHz,
the signal strength in the detection region is modeled to obtain a spatial model of the distance between the wireless detection device and the calibrated device; the spatial model should be built with respect to the position of the wireless detection device, and the image is divided at an angle in the range [45°, 180°].
The spatial model is established as follows:
a. Taking the ground as the reference plane, the environment to be monitored is evenly divided into an m by n grid of cells; the cell size n is typically 1 m to 2 m.
b. The position of the wireless detection device in the model is determined and stored in the format (x, y), where x and y are the grid coordinates in the plane and serve as the reference point.
c. The signal strength collected in step (3) is processed as in step (4) and stored in the following data format: (int x, int y, struct MAC mac).
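The following is a minimal sketch of how the attenuation formula above can be inverted to estimate distance from a measured path loss; the transmit power, the measured RSSI and the 2.4 GHz channel frequency are assumptions made for the example.

// Estimate distance from the free-space path-loss model
// Los = 32.44 + 20*lg(d) + 20*lg(f), with d in km and f in MHz.
#include <cmath>
#include <cstdio>

// Invert the formula: d = 10^((Los - 32.44 - 20*lg(f)) / 20).
double distance_km(double los_db, double freq_mhz) {
    return std::pow(10.0, (los_db - 32.44 - 20.0 * std::log10(freq_mhz)) / 20.0);
}

int main() {
    // Assumed example: transmit power 20 dBm, measured RSSI -60 dBm -> path loss 80 dB at 2412 MHz.
    double los = 20.0 - (-60.0);
    double d_m = distance_km(los, 2412.0) * 1000.0;   // convert km to m
    std::printf("estimated distance: %.1f m\n", d_m);
    return 0;
}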
(5) Coarse target matching: a spatial coordinate system relating the camera to the pedestrians' distances is established, the monitoring image is divided into corresponding regions in space, and the detection targets of the camera are matched with the detection targets of the wireless detection device, obtaining the correspondence between the sequence numbers of the same target in the two data domains.
The corresponding coordinates of the camera are determined as follows:
a. Handling distortion: distortion is caused by the shape of the camera's optical imaging elements such as the lens, and mainly comprises tangential distortion and radial distortion.
The distortion correction formula is:
\begin{bmatrix} x_p \\ y_p \end{bmatrix} = (1 + k_1 r^2 + k_2 r^4 + k_3 r^6) \begin{bmatrix} x_d \\ y_d \end{bmatrix} + \begin{bmatrix} 2 p_1 x_d y_d + p_2 (r^2 + 2 x_d^2) \\ p_1 (r^2 + 2 y_d^2) + 2 p_2 x_d y_d \end{bmatrix}
where (x_p, y_p) is the normalized coordinate of the ideal image point, (x_d, y_d) is the normalized coordinate of the distorted image point, r is the radius of the current point from the lens center, and k_1, k_2, k_3 determine the degree of radial distortion, their values being the first three terms of the Taylor series expansion.
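A minimal sketch of this correction using OpenCV is shown below; the camera matrix and the distortion coefficients are placeholder values that would in practice come from calibration.

// Undistort detected image points with OpenCV; the intrinsics and the
// coefficients (k1, k2, p1, p2, k3) below are placeholder calibration values.
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main() {
    cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 320,
                                           0, 800, 240,
                                           0,   0,   1);               // fx, fy, cx, cy
    cv::Mat dist = (cv::Mat_<double>(1, 5) << -0.25, 0.08, 0.001, 0.0005, 0.0);

    std::vector<cv::Point2f> distorted = { {100.f, 150.f}, {400.f, 300.f} };
    std::vector<cv::Point2f> corrected;
    // undistortPoints applies the inverse of the radial/tangential model above;
    // passing K as P returns the result in pixel coordinates rather than normalized ones.
    cv::undistortPoints(distorted, corrected, K, dist, cv::noArray(), K);
    for (const cv::Point2f& p : corrected)
        std::printf("%.1f %.1f\n", p.x, p.y);
    return 0;
}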
b. Stereo calibration: the scene captured by the camera is calibrated in space.
According to the camera imaging model:
x = f X_c / Z_c
y = f Y_c / Z_c
where x, y are the coordinates of a point in the image, X_c, Y_c, Z_c are the coordinates of the corresponding point in space, and f is the focal length.
The correspondence between the image (pixel) coordinate system and the physical coordinate system is as follows:
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
where u, v are the coordinates of the point in the image coordinate system, x, y are the coordinates in the physical coordinate system, and (u_0, v_0) is the origin of the image coordinate system.
The relation between the camera coordinate system and the world coordinate system is as follows:
\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
where the subscript c denotes a point in the camera coordinate system and the subscript w a point in the world coordinate system; R is the rotation matrix of the camera and t is the translation vector of the camera.
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 & 0 \\ 0 & f_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = M_1 M_2 X_W = M X_W
where M is the projection matrix of the image.
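A minimal sketch of applying this projection in OpenCV matrix types is shown below; the intrinsic parameters and the camera pose are placeholder values used only to illustrate the world-to-pixel mapping.

// Project a world point onto the image plane using s * [u v 1]^T = M * [Xw Yw Zw 1]^T,
// written here as Xc = R*Xw + t followed by the intrinsic matrix.
#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
    cv::Matx33d Kintr(800, 0, 320,
                      0, 800, 240,
                      0,   0,   1);                  // placeholder fx, fy, u0, v0
    cv::Matx33d R = cv::Matx33d::eye();              // camera aligned with the world axes (placeholder pose)
    cv::Vec3d t(0.0, 0.0, 5.0);                      // camera 5 m from the world origin

    cv::Vec3d Xw(1.0, 0.5, 0.0);                     // a point on the ground plane (Zw = 0)
    cv::Vec3d Xc = R * Xw + t;                       // world -> camera coordinates
    cv::Vec3d uvw = Kintr * Xc;                      // camera -> homogeneous image coordinates
    double u = uvw[0] / uvw[2], v = uvw[1] / uvw[2];
    std::printf("pixel: (%.1f, %.1f)\n", u, v);
    return 0;
}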
c. Delimiting the ground grid in the image.
A grid is drawn in the image so that its correspondence is consistent with the grid coordinate system established for the spatial model above, in the representation (x, y), where x and y denote the position of the ground cell in the spatial model. This determines the correspondence between positions in the two-dimensional image and in the three-dimensional model.
d. The image data are stored: each target number detected by the camera has a spatial coordinate value (x, y), and the targets of the wireless detection device at the same moment whose x and y values are close are looked up.
They must satisfy:
|x_i - x_j| < 2
|y_i - y_j| < 2
where x_i, y_i denote the target position obtained by the wireless detection device, with i = 1, 2, 3, ...,
and x_j, y_j denote the target position obtained by the camera, with j = 1, 2, 3, ...
A target matching matrix M is defined to characterize the correspondence between targets: if the two inequalities above are satisfied, the matrix element M_{i,j} = 1; otherwise it is 0.
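A minimal sketch of building the coarse matching matrix M from the two position lists is shown below; the struct and function names are illustrative, not the patent's own identifiers.

// Build the coarse matching matrix M: M[i][j] = 1 when wireless target i and
// camera target j fall within 2 grid cells of each other in both x and y.
#include <cstdlib>
#include <vector>

struct GridPos { int x; int y; };

std::vector<std::vector<int>> coarse_match(const std::vector<GridPos>& wireless,
                                           const std::vector<GridPos>& camera) {
    std::vector<std::vector<int>> M(wireless.size(), std::vector<int>(camera.size(), 0));
    for (size_t i = 0; i < wireless.size(); ++i)
        for (size_t j = 0; j < camera.size(); ++j)
            if (std::abs(wireless[i].x - camera[j].x) < 2 &&
                std::abs(wireless[i].y - camera[j].y) < 2)
                M[i][j] = 1;
    return M;
}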
(6) Fine target matching: according to the motion relation of a target between consecutive frames, Kalman estimation of the position at the next time step is performed simultaneously for the detection target of the camera and the detection target of the wireless detection device to obtain their displacement vectors; the degree to which the camera detection target and the wireless detection target match is evaluated from the position information and the motion vectors, and a matching value indicating whether the two targets are the same target is produced; the allowable error of the matching value should not be greater than 3 m.
a. Kalman filtering is used to determine the target motion. The state equation is:
X(k+1) = A(k+1, k) X(k) + w(k)
The observation equation is:
Z(k) = H(k) X(k) + v(k)
where X(k) is the state vector, Z(k) is the observation vector, A(k+1, k) is the state transition matrix, H(k) is the observation matrix, w(k) is the system noise vector and v(k) is the observation noise vector.
w(k) and v(k) are usually assumed to be mutually orthogonal zero-mean white Gaussian noise vectors, and their covariance matrices are:
E[w(k) w^T(i)] = Q(k) for i = k, and 0 for i ≠ k
E[v(k) v^T(i)] = R(k) for i = k, and 0 for i ≠ k
and for all k and i,
E[w(k) v^T(i)] = 0
The prediction equations are:
X'(k+1|k) = A(k+1, k) X'(k|k)
P(k+1|k) = A(k+1, k) P(k|k) A^T(k+1, k) + Q(k)
The update equations are:
K(k+1) = P(k+1|k) H^T(k+1) [H(k+1) P(k+1|k) H^T(k+1) + R(k+1)]^(-1)
X'(k+1|k+1) = X'(k+1|k) + K(k+1) [Z(k+1) - H(k+1) X'(k+1|k)]
P(k+1|k+1) = [I - K(k+1) H(k+1)] P(k+1|k)
The prediction equations continuously predict the position at which the target will appear at the next time step; a minimal numerical sketch is given below.
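The following is a minimal 2-D constant-velocity Kalman prediction/update sketch corresponding to the equations above, using OpenCV's KalmanFilter; the state layout (x, y, vx, vy), the time step, the noise covariances and the example positions are assumptions, not values fixed by the patent.

// 2-D constant-velocity Kalman filter: state (x, y, vx, vy), measurement (x, y).
#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
    cv::KalmanFilter kf(4, 2, 0, CV_32F);
    float dt = 1.0f;                                                    // assumed time step (one frame)
    kf.transitionMatrix = (cv::Mat_<float>(4, 4) << 1, 0, dt, 0,
                                                    0, 1, 0, dt,
                                                    0, 0, 1,  0,
                                                    0, 0, 0,  1);       // A(k+1, k)
    cv::setIdentity(kf.measurementMatrix);                              // H(k): observe (x, y)
    cv::setIdentity(kf.processNoiseCov, cv::Scalar::all(1e-2));         // Q(k)
    cv::setIdentity(kf.measurementNoiseCov, cv::Scalar::all(1e-1));     // R(k)
    cv::setIdentity(kf.errorCovPost, cv::Scalar::all(1.0));             // P(0|0)

    float track[3][2] = { {10.f, 5.f}, {11.f, 5.5f}, {12.f, 6.f} };     // example observed positions
    for (auto& z : track) {
        cv::Mat prediction = kf.predict();                              // X'(k+1|k), P(k+1|k)
        cv::Mat measurement = (cv::Mat_<float>(2, 1) << z[0], z[1]);
        kf.correct(measurement);                                        // X'(k+1|k+1), P(k+1|k+1)
        std::printf("predicted (%.2f, %.2f)\n",
                    prediction.at<float>(0), prediction.at<float>(1));
    }
    return 0;
}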
b. A vector is established from the current position (x1, y1) and the next-time-step position (x2, y2):
K1 = [(x2 - x1), (y2 - y1)]
and is compared with the vector predicted for the wireless detection device:
K2 = [(x'2 - x'1), (y'2 - y'1)]
The judgment formula is:
S = [(x2 - x1)^2 - (x'2 - x'1)^2] + [(y2 - y1)^2 - (y'2 - y'1)^2]
A threshold m is set; when S < m the two targets can be merged, and m needs to be set according to the scene.
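A minimal sketch of this fine-matching test is shown below; the score S is computed exactly as in the formula above, and the default threshold is an assumption since the patent leaves m scene-dependent.

// Compare the displacement vector of a camera track with the displacement vector
// predicted for a wireless track; merge when the score S is below the threshold m.
struct Vec2 { double x; double y; };

// S = [(x2-x1)^2 - (x'2-x'1)^2] + [(y2-y1)^2 - (y'2-y'1)^2],
// where k1 and k2 hold the two displacement vectors K1 and K2.
double fine_match_score(const Vec2& k1, const Vec2& k2) {
    return (k1.x * k1.x - k2.x * k2.x) + (k1.y * k1.y - k2.y * k2.y);
}

bool same_target(const Vec2& k1, const Vec2& k2, double m = 1.0 /* assumed, scene-dependent */) {
    return fine_match_score(k1, k2) < m;   // the patent's criterion: merge when S < m
}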
(7) The data of the targets in the detection region are summarized, duplicate targets are removed, the ID numbers of the detection targets of the camera and of the wireless detection device are merged, and the unique ID number and real-time position information are stored in a linked list. The format of a merged target is:
(struct ID ID, int x, int y, struct MAC mac, point2f image)
for example: A0007, 21, 15, F0:25:B7:4C:D5:D8, 215.4, 107.5
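A minimal sketch of the merged-target record and the linked list is shown below; std::string and the small Point2f struct are illustrative stand-ins for the patent's struct ID, struct MAC and point2f types.

// Merged target record stored in a linked list, mirroring the format
// (struct ID ID, int x, int y, struct MAC mac, point2f image).
#include <list>
#include <string>

struct Point2f { float x; float y; };

struct MergedTarget {
    std::string id;       // unique ID number, e.g. "A0007"
    int x;                // grid x coordinate
    int y;                // grid y coordinate
    std::string mac;      // MAC address, e.g. "F0:25:B7:4C:D5:D8"
    Point2f image;        // pixel position in the image
};

int main() {
    std::list<MergedTarget> targets;   // the linked list of merged targets
    targets.push_back({"A0007", 21, 15, "F0:25:B7:4C:D5:D8", {215.4f, 107.5f}});
    return 0;
}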
Although the invention is disclosed above with a preferred embodiment, this does not limit the invention. Anyone familiar with the art may make various changes and modifications without departing from the spirit and scope of the invention, and the scope of protection of the invention shall therefore be that defined by the claims.

Claims (7)

1. A pedestrian detection and counting method combining vision and wireless sensing, characterized by comprising the following steps:
(1) Monitoring image acquisition: a camera is used to capture video surveillance images, and the captured images are resized;
(2) Image information processing: the DPM method is applied to the image obtained in step (1) to extract the ROI in the image; color channel conversion is performed, the feature pyramid is computed, the root model and the part models are matched, the part scores are computed, and the target regions whose scores exceed a set threshold are screened and marked, obtaining the number of pedestrians in each frame and the coordinates of the bounding boxes marking the pedestrians' positions in the image;
(3) Wireless signal acquisition: the wireless detection devices arranged in the detection region search for probe request frames from the calibrated devices with wireless transmit/receive capability carried by pedestrians; libtins is used to capture the probe request frames and to parse each device's MAC address and signal strength;
(4) Wireless signal processing: according to the signal attenuation formula Los = 32.44 + 20 lg d + 20 lg f,
where Los is the propagation loss in dB, d is the distance in km and f is the operating frequency in MHz,
the signal strength in the detection region is modeled, obtaining a spatial model of the distance between the wireless detection device and the calibrated device;
(5) Coarse target matching: a spatial coordinate system relating the camera to the pedestrians' distances is established, the monitoring image is divided into corresponding regions in space, and the detection targets of the camera are matched with the detection targets of the wireless detection device, obtaining the correspondence between the sequence numbers of the same target in the two data domains;
(6) Fine target matching: according to the motion relation of a target between consecutive frames, Kalman estimation of the position at the next time step is performed simultaneously for the detection target of the camera and the detection target of the wireless detection device to obtain their displacement vectors; the degree to which the camera detection target and the wireless detection target match is evaluated from the position information and the motion vectors, and a matching value indicating whether the two targets are the same target is produced;
(7) The data of the targets in the detection region are summarized, duplicate targets are removed, the ID numbers of the camera detection targets and the wireless detection targets are merged, and the unique ID number and real-time position information are stored in a linked list.
2. The pedestrian detection and counting method combining vision and wireless sensing according to claim 1, characterized in that the image in step (1) is resized to between VGA and QVGA.
3. The pedestrian detection and counting method combining vision and wireless sensing according to claim 1, characterized in that the color channel conversion in step (2) converts the RGB three-channel information into single-channel data, the number of root models in step (2) is 2, the number of part models is 5, the feature pyramid has 10 to 48 iteration layers, and the score threshold is set in the range [-1.8f, 1.0f].
4. The pedestrian detection and counting method combining vision and wireless sensing according to claim 1, characterized in that the calibrated device in step (3) is a wireless network card or a wireless router.
5. The pedestrian detection and counting method combining vision and wireless sensing according to claim 1 or 4, characterized in that the number of calibrated devices is not less than 2.
6. The pedestrian detection and counting method combining vision and wireless sensing according to claim 1, characterized in that in step (4) the spatial model should be built with respect to the position of the wireless detection device and the image is divided at an angle in the range [45°, 180°].
7. The pedestrian detection and counting method combining vision and wireless sensing according to claim 1, characterized in that the allowable error of the matching value in step (6) does not exceed 3 m.
CN201610511828.2A 2016-07-01 2016-07-01 Pedestrian detection and counting method combining vision and wireless sensing Active CN106203302B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610511828.2A CN106203302B (en) 2016-07-01 2016-07-01 Pedestrian detection and counting method combining vision and wireless sensing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610511828.2A CN106203302B (en) 2016-07-01 2016-07-01 Pedestrian detection and counting method combining vision and wireless sensing

Publications (2)

Publication Number Publication Date
CN106203302A true CN106203302A (en) 2016-12-07
CN106203302B CN106203302B (en) 2019-05-21

Family

ID=57464029

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610511828.2A Active CN106203302B (en) 2016-07-01 2016-07-01 Pedestrian detection and counting method combining vision and wireless sensing

Country Status (1)

Country Link
CN (1) CN106203302B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160010086A1 (en) * 2009-02-06 2016-01-14 Isis Pharmaceuticals, Inc. Oligomeric compounds and excipients
CN105357480A (en) * 2015-11-10 2016-02-24 杭州敦崇科技股份有限公司 Public place wireless internet access security management system and operation method thereof

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106603747A (en) * 2016-12-30 2017-04-26 广东创我科技发展有限公司 Camera and system having MAC address acquisition function, and data processing method of system
CN109101893A (en) * 2018-07-17 2018-12-28 贵州大学 A pedestrian flow detection method based on vision and WiFi
CN109005390A (en) * 2018-08-31 2018-12-14 山东建筑大学 Personnel's distributed model method for building up and system based on signal strength and video
CN109005390B (en) * 2018-08-31 2020-12-04 山东建筑大学 Method and system for establishing personnel distribution model based on signal intensity and video
CN110516648A (en) * 2019-09-02 2019-11-29 湖南农业大学 Ramie strain number recognition methods based on unmanned aerial vehicle remote sensing and pattern-recognition
CN110516648B (en) * 2019-09-02 2022-04-19 湖南农业大学 Ramie plant number identification method based on unmanned aerial vehicle remote sensing and pattern identification
CN113115341A (en) * 2021-04-15 2021-07-13 成都极米科技股份有限公司 Method, device, equipment and storage medium for negotiating wireless sensing process
CN113115341B (en) * 2021-04-15 2022-06-21 成都极米科技股份有限公司 Method, device, equipment and storage medium for negotiating wireless sensing process

Also Published As

Publication number Publication date
CN106203302B (en) 2019-05-21

Similar Documents

Publication Publication Date Title
US9646212B2 (en) Methods, devices and systems for detecting objects in a video
CN106203302A (en) Pedestrian detection that view-based access control model and wireless aware combine and statistical method
CN110378931A (en) A kind of pedestrian target motion track acquisition methods and system based on multi-cam
Carr et al. Monocular object detection using 3d geometric primitives
CN103325112B (en) Moving target method for quick in dynamic scene
WO2018024030A1 (en) Saliency-based method for extracting road target from night vision infrared image
CN101950426B (en) Vehicle relay tracking method in multi-camera scene
CN107767400B (en) Remote sensing image sequence moving target detection method based on hierarchical significance analysis
CN107560592B (en) Precise distance measurement method for photoelectric tracker linkage target
WO2018023916A1 (en) Shadow removing method for color image and application
CN111462128B (en) Pixel-level image segmentation system and method based on multi-mode spectrum image
CN112102409B (en) Target detection method, device, equipment and storage medium
CN105930822A (en) Human face snapshot method and system
WO2018076392A1 (en) Pedestrian statistical method and apparatus based on recognition of parietal region of human body
CN113192646B (en) Target detection model construction method and device for monitoring distance between different targets
CN105321189A (en) Complex environment target tracking method based on continuous adaptive mean shift multi-feature fusion
CN105160649A (en) Multi-target tracking method and system based on kernel function unsupervised clustering
CN107403451B (en) Self-adaptive binary characteristic monocular vision odometer method, computer and robot
CN107103299B (en) People counting method in monitoring video
CN110113560A (en) The method and server of video intelligent linkage
CN110503637A (en) A kind of crack on road automatic testing method based on convolutional neural networks
CN106778822B (en) Image straight line detection method based on funnel transformation
CN111062384B (en) Vehicle window accurate positioning method based on deep learning
CN110827375B (en) Infrared image true color coloring method and system based on low-light-level image
CN105138979A (en) Method for detecting the head of moving human body based on stereo visual sense

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant