CN111460924A - Gate ticket-evading behavior detection method based on target detection - Google Patents

Gate ticket-evading behavior detection method based on target detection

Info

Publication number
CN111460924A
CN111460924A (application CN202010182708.9A; granted as CN111460924B)
Authority
CN
China
Prior art keywords
gate
detection
ith
grid
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010182708.9A
Other languages
Chinese (zh)
Other versions
CN111460924B (en)
Inventor
苏颖
盛馨心
王斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Normal University
Original Assignee
Shanghai Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Normal University filed Critical Shanghai Normal University
Priority to CN202010182708.9A priority Critical patent/CN111460924B/en
Publication of CN111460924A publication Critical patent/CN111460924A/en
Application granted granted Critical
Publication of CN111460924B publication Critical patent/CN111460924B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53Recognition of crowd images, e.g. recognition of crowd congestion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07BTICKET-ISSUING APPARATUS; FARE-REGISTERING APPARATUS; FRANKING APPARATUS
    • G07B11/00Apparatus for validating or cancelling issued tickets
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a gate ticket-evading behavior detection method based on target detection, used to judge fare-evasion behavior of pedestrians passing through the gate, wherein a camera is arranged above the gate. The method comprises the following steps: S1: triggering the camera with the pulse signal generated when a pedestrian swipes a card at the gate, and photographing the gate area; S2: based on the images captured by the camera, performing anomaly detection on the number of people entering the gate using a pre-trained target detection network; S3: judging fare-evasion behavior at the gate based on pre-established gate abnormal-behavior judgment rules. Compared with the prior art, the method automatically analyzes and judges whether a pedestrian is evading the fare, reduces labor cost, and offers high detection accuracy and good real-time performance.

Description

Gate ticket-evading behavior detection method based on target detection
Technical Field
The invention relates to the technical field of computer vision, and in particular to a gate ticket-evading behavior detection method based on target detection.
Background
With the rapid economic development of China, urbanization is accelerating: rail transit such as subways and high-speed rail is increasingly popular, the number of people at the entrances and exits of busy scenic spots keeps growing, and more and more gate systems are used for automatic ticket checking to keep travel fast and orderly. This has, however, been accompanied by a large number of fare-evasion incidents. To reduce fare evasion, a staff member can be assigned to each gate, but this consumes substantial labor cost, so it is highly necessary to improve the monitoring system so that fare-evasion events, and the resulting loss of revenue, are avoided. A method based on target detection can detect the moment a pedestrian enters the gate and judge fare evasion from the number of people entering at that instant, so an intelligent algorithm can provide a sound theoretical basis for solving this practical engineering problem.
Since existing gates do not address the detection of fare-evasion behavior during ticket checking, a method for monitoring, detecting, and judging fare evasion at the gate is necessary.
Disclosure of Invention
The invention aims to overcome the above defects of the prior art and to provide a gate ticket-evading behavior detection method based on target detection.
The purpose of the invention can be realized by the following technical scheme:
A gate ticket-evading behavior detection method based on target detection, used to judge fare-evasion behavior of a pedestrian passing through the gate, a camera being arranged above the gate, comprises the following steps:
S1: triggering the camera with the pulse signal generated when a pedestrian swipes a card at the gate, and photographing the gate area;
S2: based on the images captured by the camera, performing anomaly detection on the number of people entering the gate using a pre-trained target detection network;
S3: judging fare-evasion behavior at the gate based on pre-established gate abnormal-behavior judgment rules.
Further, in step S1, driving the camera with the pulse signal generated when a pedestrian swipes a card at the gate specifically means that after the pedestrian swipes the card, the gate detects the card-swiping information and transmits a door-opening signal to the main board; the main board sends a control signal to open the gate and allow the pedestrian to pass, and at the same time sends a signal to drive the camera above the gate to shoot.
Further, step S1 also includes capturing images of the gate with the camera at regular intervals when the camera receives no pulse signal.
Further, the target detection network is a Yolo V3 network, whose output includes three scales: y1, for detecting tall people; y2, for detecting people of medium stature; and y3, for detecting children.
Further, the network is trained before target detection is performed, so that optimal parameters are obtained for the target detection algorithm software and injected into the camera front-end system; when detecting pedestrians, these parameters can be used directly so that pedestrians passing through the gate are detected accurately and quickly. In step S2, the pre-training process of the target detection model includes the following steps:
s201: acquiring crowd images to form a training data set;
s202: marking people in each crowd image in the training data set, and generating a marking file, wherein the marking file comprises a central coordinate point of a marking frame and the width and the height of the marking frame;
s203: clustering the training data set based on the width and the height of the labeling frame to obtain preset frames of three output scales of the Yolo V3 network;
s204: and detecting the crowd images in the training data set by using a preset frame, and optimizing the Yolo V3 network by using a loss function and a gradient descent training parameter.
Further, the step S203 specifically includes the following steps:
1) randomly selecting K clustering centers;
2) calculating the degree of similarity between each labeling frame and each clustering center, and assigning each labeling frame to the clustering center for which the degree of similarity is smallest;
3) recalculating the clustering center of each type of labeling frame, and if the difference between the recalculated clustering center and the corresponding clustering center in the step 1) is greater than a preset first value, replacing the corresponding clustering center in the step 1) with the recalculated clustering center, and executing the step 2) and the step 3) again in sequence; otherwise, clustering is completed.
Further, in step 2), the calculation expression of the degree of similarity is:
d = 1 − IOU[(w_j, h_j), (W_i, H_i)], j ∈ {1, 2, ..., N}, i ∈ {1, 2, ..., 9}
IOU[(w_j, h_j), (W_i, H_i)] = [min(w_j, W_i) · min(h_j, H_i)] / [w_j·h_j + W_i·H_i − min(w_j, W_i) · min(h_j, H_i)]
where d is the degree of similarity, w_j and h_j are the width and height of the j-th labeling frame, and W_i and H_i are the width and height of the i-th cluster center.
Further, in step 3), the calculation expression for recalculating the cluster center of each class of labeling frames is:
W_i′ = (1/N_i) Σ_{j ∈ class i} w_j,  H_i′ = (1/N_i) Σ_{j ∈ class i} h_j
where W_i′ is the recalculated width of the i-th cluster center, H_i′ is the recalculated height of the i-th cluster center, N_i is the number of labeling frames assigned to the i-th cluster center, and w_j and h_j are the width and height of the j-th labeling frame.
Further, the computational expression of the loss function is:
loss = λ_coord Σ_{i=1}^{S×S} Σ_{j=1}^{M} I_ij^obj [(x_i^j − x̂_i^j)² + (y_i^j − ŷ_i^j)² + (w_i^j − ŵ_i^j)² + (h_i^j − ĥ_i^j)²]
     + Σ_{i=1}^{S×S} Σ_{j=1}^{M} I_ij^obj (C_i^j − Ĉ_i^j)²
     + λ_noobj Σ_{i=1}^{S×S} Σ_{j=1}^{M} I_ij^noobj (C_i^j − Ĉ_i^j)²
     + Σ_{i=1}^{S×S} I_i^obj Σ_{c ∈ classes} (P_i(c) − P̂_i(c))²
where S × S is the grid of S rows and S columns into which the crowd image is divided; x_i^j and y_i^j are the horizontal and vertical coordinate offsets of the j-th preset frame in the i-th grid, and w_i^j and h_i^j are its width and height scaling ratios; x̂_i^j and ŷ_i^j are the horizontal and vertical coordinate offsets of the real target center point of the i-th grid relative to the upper left corner of the grid where that point is located, and ŵ_i^j and ĥ_i^j are the corresponding width and height scaling ratios; I_ij^obj is the first flag bit, equal to 1 when the j-th preset frame in the i-th grid is responsible for the detected target and 0 otherwise; C_i^j is the confidence of the j-th preset frame in the i-th grid, and Ĉ_i^j is its true value, i.e. 1 if a preset frame of the i-th grid detects the target and 0 otherwise; I_ij^noobj is the second flag bit, equal to 1 when the j-th preset frame in the i-th grid is not responsible for the detected target and 0 otherwise; P_i is the prediction class probability of the i-th grid, and P̂_i is the true probability that the i-th grid belongs to a certain class; classes is the set of prediction classes and c is a prediction class; λ_noobj and λ_coord are the error-correction weights for the no-target confidence terms and the coordinate terms, respectively; and each grid contains M preset frames.
Most of the content of an image contains no target, so the no-target terms would contribute more to the loss than the target terms and the network would tend to predict that no target is present. A weight is therefore set to reduce the contribution of the no-target terms, i.e. λ_noobj = 0.5, while λ_coord = 5 is used to correct the coordinate error.
When the j-th preset frame of the i-th grid is responsible for a real target, the center-coordinate offset error, the width-height scaling error, the confidence error, and the classification error are all computed from the loss function; when a preset frame is not responsible for a real target, only the confidence error is required.
Further, the detection result of the Yolo V3 network includes a detection frame for each person. In step S3, the gate abnormal-behavior judgment rule is specifically: when only one person is detected passing through the gate, no alarm is issued; when more than one person is detected passing through the gate, the overlap between the detection frames and their agreement with the preset frame sizes are judged; when one detection frame lies inside another and its size is close to the preset frame sizes output by y3, no alarm is issued; when the two detection frames do not overlap and their sizes are close to the preset frame sizes output by y1 or y2, an alarm is issued.
Compared with the prior art, the invention has the following advantages:
(1) The invention uses a Yolo V3 network to detect pedestrians entering the gate and judges whether fare evasion has occurred through accurate detection of pedestrians of different sizes, improving accuracy while satisfying real-time requirements; the detection process needs no manual intervention, so fare evasion is analyzed and judged automatically and labor cost is reduced.
(2) The loss function used to train the Yolo V3 network covers coordinate prediction loss, confidence loss for detected targets, confidence loss where no target is present, and class prediction loss; this comprehensive coverage helps improve the accuracy of the Yolo V3 detection results.
(3) The method not only detects fare evasion when a pedestrian swipes a card to enter the gate, but also takes snapshots at regular intervals to detect pedestrians who pass through the gate without swiping a card, which improves the reliability of the method.
(4) The method recognizes that an adult passing through the gate with a child is normal, so the number of people alone cannot determine fare evasion; it therefore judges fare evasion by combining the number of detected people with the sizes of the detection frames, taking the whole situation into account. A Yolo V3 network with three output layers classifies detection-frame sizes to distinguish tall people, people of medium stature, and children; the approach is novel, accurate, and reliable.
Drawings
FIG. 1 is a schematic flow chart of the gate ticket-evading behavior detection method based on target detection according to the present invention;
FIG. 2 is a schematic diagram of a Yolo V3 network structure employed in the present invention;
FIG. 3 is a schematic diagram of performing lattice segmentation on an input image during network training according to the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
Example 1
As shown in fig. 1, this embodiment provides a gate ticket-evading behavior detection method based on target detection, which determines whether fare evasion has occurred from the size and number of the detection frames of pedestrians and, if so, sends alarm information to the staff. The method includes the following steps:
Step 1: after a pedestrian swipes a card, the gate system detects the card-swiping information and transmits a door-opening signal to the main board; the main board sends a control signal to open the gate and allow the pedestrian to pass, and at the same time sends another signal to drive the camera mounted diagonally above the gate to photograph the pedestrian entering the gate.
Step 2: based on the captured images, perform anomaly detection on the number of people entering the gate using the trained target detection algorithm (Yolo V3) software. Before target detection is performed, the network must be trained so that optimal parameters are obtained for the target detection algorithm software and injected into the camera front-end system; when detecting pedestrians, these parameters can then be used directly so that pedestrians passing through the gate are detected accurately and quickly.
The structure of the Yolo V3 network in this embodiment is shown in fig. 2. The meaning of each parameter in the figure is prior art and is not described in detail here.
Specifically, the training process of the network comprises the following steps:
Step 2.1: photograph crowds with a camera in several scenes, or select crowd images from the Internet, to form a training data set.
Step 2.2: label the people in each image of the data set with LabelImg (the class is person), generating for each labeled image a file containing the position and class of every labeling frame. Each line of such a file contains the class c of a labeling frame, the center coordinate point (x_j, y_j) of the frame, and the frame's width and height (w_j, h_j), j ∈ {1, 2, ..., N}. The data set consisting of the position information of all labeling frames is then clustered.
Specifically, clustering comprises the following steps:
Step 2.2.1: randomly select K = 9 cluster centers (W_i, H_i), i ∈ {1, 2, ..., 9}, where W_i and H_i are the width and height of a cluster center;
Step 2.2.2: calculate the degree of similarity d between each labeling frame and each cluster center according to Equation 1, and assign each labeling frame to the cluster center with the smallest d:
d = 1 − IOU[(w_j, h_j), (W_i, H_i)], j ∈ {1, 2, ..., N}, i ∈ {1, 2, ..., 9}   (1)
IOU[(w_j, h_j), (W_i, H_i)] = [min(w_j, W_i) · min(h_j, H_i)] / [w_j·h_j + W_i·H_i − min(w_j, W_i) · min(h_j, H_i)]   (2)
Step 2.2.3: after all labeling frames have been assigned, recalculate the cluster center (W_i′, H_i′) of each class according to Equation 3. If (W_i′, H_i′) differs greatly from (W_i, H_i), replace (W_i, H_i) with (W_i′, H_i′) as the new cluster center, return to Step 2.2.2, and cluster again; if the difference is small or zero, clustering is complete:
W_i′ = (1/N_i) Σ_{j ∈ class i} w_j,  H_i′ = (1/N_i) Σ_{j ∈ class i} h_j   (3)
where N_i is the number of labeling frames in the i-th class.
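For concreteness, steps 2.2.1 to 2.2.3 can be written out as a short k-means routine over labeling-frame widths and heights. The NumPy sketch below is an illustrative reading of Equations 1 to 3, under the common assumption that a labeling frame and a cluster center are compared as if aligned at a shared corner; it is not the patented implementation.

import numpy as np

def iou_wh(boxes, centers):
    # IOU of width/height pairs aligned at a shared corner (Equation 2).
    # boxes: (N, 2) labeling-frame sizes; centers: (K, 2) cluster centers.
    inter = (np.minimum(boxes[:, None, 0], centers[None, :, 0]) *
             np.minimum(boxes[:, None, 1], centers[None, :, 1]))
    union = (boxes[:, 0] * boxes[:, 1])[:, None] \
          + (centers[:, 0] * centers[:, 1])[None, :] - inter
    return inter / union                          # shape (N, K)

def kmeans_preset_frames(boxes, k=9, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    centers = boxes[rng.choice(len(boxes), size=k, replace=False)]
    while True:
        d = 1.0 - iou_wh(boxes, centers)          # Equation 1
        assign = d.argmin(axis=1)                 # nearest center wins
        new_centers = np.array([
            boxes[assign == i].mean(axis=0)       # Equation 3
            if np.any(assign == i) else centers[i]
            for i in range(k)])
        if np.abs(new_centers - centers).max() < tol:
            return new_centers                    # centers stable: done
        centers = new_centers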
After K-means clustering in this embodiment, the sizes of the feature maps of the three output layers and the size of each feature map preset box are shown in table 1.
TABLE 1 Preset frame sizes
Feature map layer   Feature size (S × S)   Preset frame sizes
y1                  13 × 13                37×101, 26×94, 25×59
y2                  26 × 26                23×68, 19×45, 14×70
y3                  52 × 52                15×31, 11×45, 9×25
Step 2.3: detect the people in the data set using the preset frames, and optimize the network with the loss function and gradient-descent training; Yolo V3 is selected as the target detection network.
Specifically, step 2.3 includes the following steps:
Step 2.3.1: as shown in fig. 3, the input image is divided into S × S grids. If the center of a person falls within a grid, that grid is responsible for detecting the person. Each grid has M prediction frames (M = 3), and each prediction frame contains 5 predicted values: the center coordinates (b_x, b_y) of the preset frame, the width and height (b_w, b_h) of the preset frame, and a confidence c, calculated as follows:
b_x = σ(x) + c_x,  b_y = σ(y) + c_y,  b_w = p_w·e^w,  b_h = p_h·e^h
where c_x and c_y are the coordinate offsets of the grid, and p_w and p_h are the width and height of the preset frame.
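As a worked example of these formulas, the sketch below decodes one raw prediction (x, y, w, h) into frame parameters in grid units; the function and its argument names are illustrative assumptions rather than the network's actual post-processing code.

import math

def decode_prediction(x, y, w, h, cx, cy, pw, ph):
    # b_x = sigma(x) + c_x, b_y = sigma(y) + c_y,
    # b_w = p_w * e^w,      b_h = p_h * e^h   (all in grid units)
    sigmoid = lambda t: 1.0 / (1.0 + math.exp(-t))
    bx = sigmoid(x) + cx          # center stays inside its grid cell
    by = sigmoid(y) + cy
    bw = pw * math.exp(w)         # preset frame scaled by e^w
    bh = ph * math.exp(h)
    return bx, by, bw, bh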
The output of Yolo V3 comprises three scales, y1, y2 and y3, each of which is allocated three preset frames for detection, so that different people are detected at different scales: y1 divides the image into 13 × 13 grids to detect tall people, y2 divides it into 26 × 26 grids to detect people of medium stature, and y3 divides it into 52 × 52 grids to detect children.
Step 2.3.2: the network is optimized through the loss function and gradient-descent training.
The formula for calculating the loss function is shown in equation 4:
loss = λ_coord Σ_{i=1}^{S×S} Σ_{j=1}^{M} I_ij^obj [(x_i^j − x̂_i^j)² + (y_i^j − ŷ_i^j)² + (w_i^j − ŵ_i^j)² + (h_i^j − ĥ_i^j)²]
     + Σ_{i=1}^{S×S} Σ_{j=1}^{M} I_ij^obj (C_i^j − Ĉ_i^j)²
     + λ_noobj Σ_{i=1}^{S×S} Σ_{j=1}^{M} I_ij^noobj (C_i^j − Ĉ_i^j)²
     + Σ_{i=1}^{S×S} I_i^obj Σ_{c ∈ classes} (P_i(c) − P̂_i(c))²   (4)
where S × S is the grid of S rows and S columns into which the crowd image is divided; x_i^j and y_i^j are the horizontal and vertical coordinate offsets of the j-th preset frame in the i-th grid, and w_i^j and h_i^j are its width and height scaling ratios; x̂_i^j and ŷ_i^j are the horizontal and vertical coordinate offsets of the real target center point of the i-th grid relative to the upper left corner of the grid where that point is located, and ŵ_i^j and ĥ_i^j are the corresponding width and height scaling ratios; I_ij^obj is the first flag bit, equal to 1 when the j-th preset frame in the i-th grid is responsible for the detected target and 0 otherwise; C_i^j is the confidence of the j-th preset frame in the i-th grid, and Ĉ_i^j is its true value, i.e. 1 if a preset frame of the i-th grid detects the target and 0 otherwise; I_ij^noobj is the second flag bit, equal to 1 when the j-th preset frame in the i-th grid is not responsible for the detected target and 0 otherwise; P_i is the prediction class probability of the i-th grid, and P̂_i is the true probability that the i-th grid belongs to a certain class; classes is the set of prediction classes and c is a prediction class; λ_noobj and λ_coord are the error-correction weights for the no-target confidence terms and the coordinate terms, respectively; and each grid contains M preset frames.
Most of the content of an image contains no target, which would cause the no-target terms to contribute more to the loss than the target terms, so the network would tend to predict that no target is present; a weight is therefore set to reduce the contribution of the no-target terms, i.e. λ_noobj = 0.5, while λ_coord = 5 is used to correct the coordinate error.
When the j-th preset frame of the i-th grid is responsible for a real target, the center-coordinate offset error, the width-height scaling error, the confidence error, and the classification error are all computed from the loss function; when a preset frame is not responsible for a real target, only the confidence error is required.
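To make Equation 4 concrete, the sketch below evaluates it with NumPy over arrays indexed by grid and preset frame. The array layout and names are assumptions made for illustration, with the two flag bits represented as 0/1 masks; it is a reading of the formula, not the training code of the patent.

import numpy as np

def yolo_loss(pred, truth, obj, noobj, lam_coord=5.0, lam_noobj=0.5):
    # pred/truth: dicts of (S*S, M) arrays 'x', 'y', 'w', 'h', 'conf',
    # plus an (S*S, C) array 'p' of class probabilities; obj/noobj:
    # 0/1 masks of shape (S*S, M) for the first and second flag bits.
    coord = lam_coord * np.sum(obj * (
        (pred['x'] - truth['x']) ** 2 + (pred['y'] - truth['y']) ** 2 +
        (pred['w'] - truth['w']) ** 2 + (pred['h'] - truth['h']) ** 2))
    conf_obj = np.sum(obj * (pred['conf'] - truth['conf']) ** 2)
    conf_noobj = lam_noobj * np.sum(noobj * (pred['conf'] - truth['conf']) ** 2)
    has_obj = obj.max(axis=1)             # grid-level object indicator
    cls = np.sum(has_obj[:, None] * (pred['p'] - truth['p']) ** 2)
    return coord + conf_obj + conf_noobj + cls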
Step 2.4: after network training is finished, the network with the optimal parameters is injected into the camera front-end system; when a pedestrian passes the gate, detection is performed by the camera running the Yolo V3 system, in the following steps:
Step 2.4.1: photograph the pedestrian entering the gate with the camera;
Step 2.4.2: divide the picture into S × S grids; if a person's center lies in a grid, that grid is responsible for detecting the person, and each grid has M prediction frames;
Step 2.4.3: fine-tune the detection frame of each pedestrian according to the preset frame sizes, so that the detection frames better fit the true sizes of the people.
Step 3: judge fare-evasion behavior at the gate based on the abnormal-behavior detection and judgment rules for entering the gate. Specifically, after the pedestrian has passed through the gate, the main board sends a control signal to the camera indicating that the pedestrian has passed, and whether the number of people who passed the gate indicates fare evasion is judged as follows:
Step 3.1: when only one person is detected passing through the gate, the situation is normal and no alarm is issued;
Step 3.2: when more than one person is detected passing through the gate, the overlap between the detection frames and their agreement with the preset frame sizes are judged. When a smaller detection frame lies inside a larger one and its size is close to the preset frame sizes output by y3, an adult is carrying a child; this is normal and no alarm is issued. When the two detection frames do not overlap and their sizes are close to the preset frame sizes output by y1 or y2, two people have entered the gate at the same time, which indicates fare evasion, and an alarm is issued through the voice alarm module.
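Steps 3.1 and 3.2 amount to a small decision function. The sketch below assumes each detection carries its frame (x, y, w, h, with x, y the top-left corner) and the output scale ('y1', 'y2' or 'y3') whose preset frames its size most closely matches; these inputs and helper names are illustrative assumptions, not the disclosed software.

def judge_evasion(detections):
    # Judgment rule of steps 3.1-3.2. Each detection is a dict with
    # 'box' = (x, y, w, h) and 'scale' in {'y1', 'y2', 'y3'}.
    if len(detections) <= 1:
        return False                  # step 3.1: one person is normal
    a, b = detections[:2]
    small, large = sorted((a, b), key=lambda d: d['box'][2] * d['box'][3])
    if contains(large['box'], small['box']) and small['scale'] == 'y3':
        return False                  # step 3.2: adult carrying a child
    if not overlaps(a['box'], b['box']) and \
       a['scale'] in ('y1', 'y2') and b['scale'] in ('y1', 'y2'):
        return True                   # two adults entered together: alarm
    return False

def contains(outer, inner):
    ox, oy, ow, oh = outer
    ix, iy, iw, ih = inner
    return ox <= ix and oy <= iy and ix + iw <= ox + ow and iy + ih <= oy + oh

def overlaps(b1, b2):
    x1, y1, w1, h1 = b1
    x2, y2, w2, h2 = b2
    return x1 < x2 + w2 and x2 < x1 + w1 and y1 < y2 + h2 and y2 < y1 + h1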
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (10)

1. A gate ticket-evading behavior detection method based on target detection, used to judge fare-evasion behavior of a pedestrian passing through the gate, a camera being arranged above the gate, characterized by comprising the following steps:
S1: triggering the camera with the pulse signal generated when a pedestrian swipes a card at the gate, and photographing the gate area;
S2: based on the images captured by the camera, performing anomaly detection on the number of people entering the gate using a pre-trained target detection network;
S3: judging fare-evasion behavior at the gate based on pre-established gate abnormal-behavior judgment rules.
2. The gate ticket-evading behavior detection method based on target detection according to claim 1, wherein in step S1, driving the camera with the pulse signal generated when the pedestrian swipes the card specifically means that after the pedestrian swipes the card, the gate detects the card-swiping information and transmits a door-opening signal to the main board; the main board sends a control signal to open the gate and allow the pedestrian to pass, and at the same time sends a signal to drive the camera above the gate to shoot.
3. The gate ticket-evading behavior detection method based on target detection according to claim 1, wherein step S1 further comprises capturing images of the gate with the camera at regular intervals when the camera does not receive the pulse signal.
4. The gate ticket-evading behavior detection method based on target detection according to claim 1, wherein the target detection network is a Yolo V3 network whose output includes three scales: y1, for detecting tall people; y2, for detecting people of medium stature; and y3, for detecting children.
5. The gate ticket-evading behavior detection method based on target detection according to claim 4, wherein in step S2, the pre-training process of the target detection model comprises the following steps:
s201: acquiring crowd images to form a training data set;
s202: marking people in each crowd image in the training data set, and generating a marking file, wherein the marking file comprises a central coordinate point of a marking frame and the width and the height of the marking frame;
s203: clustering the training data set based on the width and the height of the labeling frame to obtain preset frames of three output scales of the Yolo V3 network;
s204: and detecting the crowd images in the training data set by using a preset frame, and optimizing the Yolo V3 network by using a loss function and a gradient descent training parameter.
6. The gate ticket-evading behavior detection method based on target detection according to claim 5, wherein the step S203 specifically comprises the following steps:
1) randomly selecting K clustering centers;
2) calculating the degree of similarity between each labeling frame and each clustering center, and assigning each labeling frame to the clustering center for which the degree of similarity is smallest;
3) recalculating the clustering center of each type of labeling frame, and if the difference between the recalculated clustering center and the corresponding clustering center in the step 1) is greater than a preset first value, replacing the corresponding clustering center in the step 1) with the recalculated clustering center, and executing the step 2) and the step 3) again in sequence; otherwise, clustering is completed.
7. The gate ticket-evading behavior detection method based on target detection according to claim 6, wherein in step 2), the calculation expression of the degree of similarity is:
d = 1 − IOU[(w_j, h_j), (W_i, H_i)], j ∈ {1, 2, ..., N}, i ∈ {1, 2, ..., 9}
IOU[(w_j, h_j), (W_i, H_i)] = [min(w_j, W_i) · min(h_j, H_i)] / [w_j·h_j + W_i·H_i − min(w_j, W_i) · min(h_j, H_i)]
wherein d is the degree of similarity, w_j and h_j are the width and height of the j-th labeling frame, and W_i and H_i are the width and height of the i-th cluster center.
8. The gate ticket-evading behavior detection method based on target detection according to claim 6, wherein in step 3), the calculation expression for recalculating the cluster center of each class of labeling frames is:
W_i′ = (1/N_i) Σ_{j ∈ class i} w_j,  H_i′ = (1/N_i) Σ_{j ∈ class i} h_j
wherein W_i′ is the recalculated width of the i-th cluster center, H_i′ is the recalculated height of the i-th cluster center, N_i is the number of labeling frames assigned to the i-th cluster center, and w_j and h_j are the width and height of the j-th labeling frame.
9. The gate ticket-evading behavior detection method based on target detection according to claim 5, wherein the calculation expression of the loss function is:
loss = λ_coord Σ_{i=1}^{S×S} Σ_{j=1}^{M} I_ij^obj [(x_i^j − x̂_i^j)² + (y_i^j − ŷ_i^j)² + (w_i^j − ŵ_i^j)² + (h_i^j − ĥ_i^j)²]
     + Σ_{i=1}^{S×S} Σ_{j=1}^{M} I_ij^obj (C_i^j − Ĉ_i^j)²
     + λ_noobj Σ_{i=1}^{S×S} Σ_{j=1}^{M} I_ij^noobj (C_i^j − Ĉ_i^j)²
     + Σ_{i=1}^{S×S} I_i^obj Σ_{c ∈ classes} (P_i(c) − P̂_i(c))²
wherein S × S is the grid of S rows and S columns into which the crowd image is divided; x_i^j and y_i^j are the horizontal and vertical coordinate offsets of the j-th preset frame in the i-th grid, and w_i^j and h_i^j are its width and height scaling ratios; x̂_i^j and ŷ_i^j are the horizontal and vertical coordinate offsets of the real target center point of the i-th grid relative to the upper left corner of the grid where that point is located, and ŵ_i^j and ĥ_i^j are the corresponding width and height scaling ratios; I_ij^obj is the first flag bit, equal to 1 when the j-th preset frame in the i-th grid is responsible for the detected target and 0 otherwise; C_i^j is the confidence of the j-th preset frame in the i-th grid, and Ĉ_i^j is its true value, i.e. 1 if a preset frame of the i-th grid detects the target and 0 otherwise; I_ij^noobj is the second flag bit, equal to 1 when the j-th preset frame in the i-th grid is not responsible for the detected target and 0 otherwise; P_i is the prediction class probability of the i-th grid, and P̂_i is the true probability that the i-th grid belongs to a certain class; classes is the set of prediction classes and c is a prediction class; λ_noobj and λ_coord are the error-correction weights for the no-target confidence terms and the coordinate terms, respectively; and each grid contains M preset frames.
10. The gate ticket-evading behavior detection method based on target detection according to claim 4, wherein the detection result of the Yolo V3 network includes a detection frame for each person; in step S3, the gate abnormal-behavior judgment rule is specifically: when only one person is detected passing through the gate, no alarm is issued; when more than one person is detected passing through the gate, the overlap between the detection frames and their agreement with the preset frame sizes is judged; when one detection frame lies inside another and its size is close to the preset frame sizes output by y3, no alarm is issued; when the two detection frames do not overlap and their sizes are close to the preset frame sizes output by y1 or y2, an alarm is issued.
CN202010182708.9A 2020-03-16 2020-03-16 Gate ticket-evading behavior detection method based on target detection Active CN111460924B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010182708.9A CN111460924B (en) 2020-03-16 2020-03-16 Gate ticket-evading behavior detection method based on target detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010182708.9A CN111460924B (en) 2020-03-16 2020-03-16 Gate ticket-evading behavior detection method based on target detection

Publications (2)

Publication Number Publication Date
CN111460924A true CN111460924A (en) 2020-07-28
CN111460924B CN111460924B (en) 2023-04-07

Family

ID=71685296

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010182708.9A Active CN111460924B (en) 2020-03-16 2020-03-16 Gate ticket-evading behavior detection method based on target detection

Country Status (1)

Country Link
CN (1) CN111460924B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111064925A (en) * 2019-12-04 2020-04-24 常州工业职业技术学院 Subway passenger ticket evasion behavior detection method and system
CN112200828A (en) * 2020-09-03 2021-01-08 浙江大华技术股份有限公司 Detection method and device for ticket evasion behavior and readable storage medium
CN113159009A (en) * 2021-06-25 2021-07-23 华东交通大学 Intelligent monitoring and identifying method and system for preventing ticket evasion at station
CN117392585A (en) * 2023-10-24 2024-01-12 广州广电运通智能科技有限公司 Gate traffic detection method and device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886085A (en) * 2019-01-03 2019-06-14 四川弘和通讯有限公司 People counting method based on deep learning target detection
CN110059554A (en) * 2019-03-13 2019-07-26 重庆邮电大学 A kind of multiple branch circuit object detection method based on traffic scene
WO2019144575A1 (en) * 2018-01-24 2019-08-01 中山大学 Fast pedestrian detection method and device
CN110532852A (en) * 2019-07-09 2019-12-03 长沙理工大学 Subway station pedestrian's accident detection method based on deep learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019144575A1 (en) * 2018-01-24 2019-08-01 中山大学 Fast pedestrian detection method and device
CN109886085A (en) * 2019-01-03 2019-06-14 四川弘和通讯有限公司 People counting method based on deep learning target detection
CN110059554A (en) * 2019-03-13 2019-07-26 重庆邮电大学 A kind of multiple branch circuit object detection method based on traffic scene
CN110532852A (en) * 2019-07-09 2019-12-03 长沙理工大学 Subway station pedestrian's accident detection method based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
牟总斌, "Gate passage logic based on human head-shoulder and gait detection" *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111064925A (en) * 2019-12-04 2020-04-24 常州工业职业技术学院 Subway passenger ticket evasion behavior detection method and system
CN111064925B (en) * 2019-12-04 2021-05-04 常州工业职业技术学院 Subway passenger ticket evasion behavior detection method and system
CN112200828A (en) * 2020-09-03 2021-01-08 浙江大华技术股份有限公司 Detection method and device for ticket evasion behavior and readable storage medium
CN113159009A (en) * 2021-06-25 2021-07-23 华东交通大学 Intelligent monitoring and identifying method and system for preventing ticket evasion at station
CN117392585A (en) * 2023-10-24 2024-01-12 广州广电运通智能科技有限公司 Gate traffic detection method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111460924B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN111460924B (en) Gate ticket-evading behavior detection method based on target detection
US10223597B2 (en) Method and system for calculating passenger crowdedness degree
CN108416250B (en) People counting method and device
CN102147851B (en) Device and method for judging specific object in multi-angles
CN102496001B (en) Method of video monitor object automatic detection and system thereof
CN101980245B (en) Adaptive template matching-based passenger flow statistical method
CN109508715A (en) A kind of License Plate and recognition methods based on deep learning
CN106682586A (en) Method for real-time lane line detection based on vision under complex lighting conditions
CN105787466B (en) A kind of fine recognition methods and system of type of vehicle
CN105260749B (en) Real-time target detection method based on direction gradient binary pattern and soft cascade SVM
CN103324955A (en) Pedestrian detection method based on video processing
CN104978567A (en) Vehicle detection method based on scenario classification
CN105303191A (en) Method and apparatus for counting pedestrians in foresight monitoring scene
CN104166841A (en) Rapid detection identification method for specified pedestrian or vehicle in video monitoring network
CN103208008A (en) Fast adaptation method for traffic video monitoring target detection based on machine vision
CN111553201A (en) Traffic light detection method based on YOLOv3 optimization algorithm
CN109241938A (en) Congestion in road detection method and terminal
CN111292432A (en) Vehicle charging type distinguishing method and device based on vehicle type recognition and wheel axle detection
CN101950448B (en) Detection method and system for masquerade and peep behaviors before ATM (Automatic Teller Machine)
CN103679214A (en) Vehicle detection method based on online area estimation and multi-feature decision fusion
CN103971100A (en) Video-based camouflage and peeping behavior detection method for automated teller machine
CN106384089A (en) Human body reliable detection method based on lifelong learning
CN113014870A (en) Subway gate passage ticket evasion identification method based on passenger posture rapid estimation
CN113177439A (en) Method for detecting pedestrian crossing road guardrail
CN116630989A (en) Visual fault detection method and system for intelligent ammeter, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant