CN114979558A - Tracking system and tracking method for risk figure tracking based on face recognition - Google Patents


Info

Publication number
CN114979558A
CN114979558A
Authority
CN
China
Prior art keywords: risk, camera, tracking, information, face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110304146.5A
Other languages
Chinese (zh)
Inventor
王吉星
道炜
沈泳龙
曹雪松
李杰明
肖建武
熊军
周琳滔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tisson Regaltec Communications Tech Co Ltd
Original Assignee
Tisson Regaltec Communications Tech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tisson Regaltec Communications Tech Co Ltd filed Critical Tisson Regaltec Communications Tech Co Ltd
Priority to CN202110304146.5A
Publication of CN114979558A
Legal status: Pending


Classifications

    • H04N 7/181 — Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • G06F 18/22 — Pattern recognition; matching criteria, e.g. proximity measures
    • G06T 7/246 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/292 — Multi-camera tracking
    • G07B 15/02 — Collecting fares, tolls or entrance fees taking into account a variable factor such as distance or time
    • G07C 9/37 — Individual registration on entry or exit, with an identity check using biometric data
    • G07C 9/38 — Individual registration on entry or exit, with central registration
    • H04W 64/00 — Locating users or terminals or network equipment for network management purposes
    • H04W 84/12 — WLAN [Wireless Local Area Networks]
    • G06T 2207/10016 — Video; image sequence
    • G06T 2207/30196 — Human being; person
    • G06T 2207/30201 — Face

Abstract

A tracking system and a tracking method for tracking risk persons based on face recognition are provided. The tracking system comprises a background system, a face recognition gate, a camera, and a face recognition library. The tracking method comprises the following steps: scanning a passenger's face at the face recognition gate and obtaining the passenger's identity and account information; opening the gate after the fare is successfully deducted; searching the passenger's face information against a face blacklist; judging whether the passenger is a risk person; when the passenger is judged to be a risk person, controlling a camera to capture image information of the risk person and executing the Camshift target tracking algorithm to track the risk person; and continuously transmitting the whereabouts of the risk person to a public security server.

Description

Tracking system and tracking method for risk figure tracking based on face recognition
Technical Field
The invention relates to tracking technology, and in particular to a tracking system and a tracking method for tracking risk persons based on face recognition.
Background
Under the current epidemic conditions, people must wear masks whenever they go out, which brings considerable difficulty and pressure to the police work of performing face recognition and tracking of risk persons.
However, the public transportation systems of some cities have introduced a face payment function: a passenger need only remove the mask to pay by face scan and enter the bus station, and city buses reach every corner of the city. This offers the public security authorities a convenient means of monitoring risk persons.
Therefore, how to identify a risk person in time when the person scans a face to enter a bus station, and how to keep tracking that person in real time after the mask is put back on, is a problem that this technical field still needs to solve.
In the prior art, although target tracking can be performed by a target tracking algorithm, the result is often inaccurate due to factors such as occlusion. In addition, because people wear masks during an epidemic, tracking risk persons by a target tracking algorithm becomes even more difficult. That is, the prior art has not explored in detail the target tracking needs of crowd-dense environments such as bus stations.
Disclosure of Invention
The main object of the present invention is to provide a tracking system and a tracking method for tracking risk persons based on face recognition, which can identify a risk person, track that person in real time by means of the public transportation system, and report the person's trajectory back to a public security server.
To achieve this object, the tracking system of the present invention includes:
a face recognition gate, fixedly installed at a bus station and provided with a face scanning area and a gate;
a first camera, fixedly installed inside the bus station;
a face recognition library storing a plurality of face pictures, each face picture corresponding to one piece of account information; and
a background system electrically connected to the face recognition gate, the first camera and the face recognition library, wherein the background system controls the face scanning area to obtain face information of a passenger, compares the face information with the plurality of face pictures in the face recognition library to obtain the passenger's account information, deducts the fare according to the account information, and controls the gate to open after the fare is successfully deducted;
wherein the background system searches for the face information in a face blacklist library to judge whether the passenger is a risk person, controls the first camera to capture image information including the risk person when the passenger is judged to be a risk person, executes the Camshift target tracking algorithm according to the image information to track the risk person, and sends the whereabouts of the risk person to a public security server.
In an embodiment, the tracking system further includes a database electrically connected to the background system and storing a criminal behavior model, wherein the criminal behavior model is trained by a neural network to record the image classifications corresponding to different criminal behaviors, and the background system compares the image information with the criminal behavior model to determine whether the risk person exhibits abnormal behavior.
In an embodiment, the tracking system further comprises:
a Wi-Fi information distribution device electrically connected to the background system, arranged in the bus station, and broadcasting a Wi-Fi signal; and
a database electrically connected to the background system and storing a fingerprint plan map, wherein the fingerprint plan map records a plan of the bus station, the positions of the plan covered by the camera pictures of the cameras in the bus station, the position on the plan corresponding to each pixel block in the camera picture of each camera, and the signal strength with which a mobile device receives the Wi-Fi signal at each position of the plan;
wherein, when the fare is successfully deducted, the background system obtains the MAC information of the mobile devices within an allowable range of the face recognition gate so as to locate the risk person, and tracks the risk person through the fingerprint plan map and the signal strength of the mobile device relative to the Wi-Fi information distribution device.
In an embodiment, the tracking system further includes a second camera electrically connected to the background system and installed closest to the first camera. When the risk person is about to leave the shooting range of the first camera, the background system submits the last position information of the risk person in the first camera's picture to the second camera, performs a similarity search for the risk person in the second camera's picture, and, combining the search result with the position information, locates the risk person within the shooting range of the second camera and continues tracking.
In an embodiment, the tracking system further comprises:
a database electrically connected to the background system and storing people-flow speed statistics, which record the moving speed ranges corresponding to various crowd densities in the bus station; and
a second camera electrically connected to the background system, wherein the second camera is the camera, among the plurality of cameras in the bus station, installed closest to the first camera;
wherein the background system calculates the current crowd density from the first camera's picture, queries the database for the corresponding moving speed range, and calculates the time required for the risk person to reach the shooting range of the second camera according to the moving speed range;
and wherein, when the required time has elapsed and the risk person has disappeared from the first camera's picture, the background system plans a circle of interest in the second camera's picture according to the required time and the moving speed range, performs a similarity search for the risk person within the circle of interest, and executes the Camshift target tracking algorithm, so as to locate the risk person in the second camera's picture and continue tracking.
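The density-to-speed lookup and the resulting circle of interest can be sketched as below. The density bands and speeds are illustrative assumptions (the patent only says such statistics are stored in the database), and the radius formula is one plausible way to turn the speed uncertainty into a search region:

```python
# Illustrative people-flow statistics: crowd-density ceilings (persons/m^2)
# mapped to (min, max) walking speed in m/s. Numbers are assumptions.
SPEED_BY_DENSITY = [
    (0.5, (1.1, 1.4)),   # sparse crowd
    (1.5, (0.7, 1.1)),   # moderate crowd
    (3.0, (0.3, 0.7)),   # dense crowd
]

def speed_range(density: float):
    """Look up the walking-speed range for the current crowd density."""
    for ceiling, rng in SPEED_BY_DENSITY:
        if density <= ceiling:
            return rng
    return (0.1, 0.3)  # near-standstill above the last band

def interest_circle(distance_m: float, density: float):
    """Return the (t_min, t_max) arrival-time window for the second camera
    and a search-circle radius (m) covering the timing uncertainty."""
    v_min, v_max = speed_range(density)
    t_min, t_max = distance_m / v_max, distance_m / v_min
    radius = (t_max - t_min) * v_max  # slack converted back to distance
    return t_min, t_max, radius
```

Restricting the similarity search to this circle, rather than the whole frame, is what lets the re-identification step stay cheap in a crowded station.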
To achieve the object, the tracking method of the present invention includes:
performing a face scan of a passenger through the face scanning area of a face recognition gate at a bus station, and obtaining face information of the passenger;
comparing the face information with a face recognition library and obtaining account information of the passenger, wherein the face recognition library records a plurality of face pictures, each corresponding to one piece of account information;
deducting the fare according to the account information, and controlling the gate of the face recognition gate to open after the fare is successfully deducted;
searching for the face information in a face blacklist library and judging whether the passenger is a risk person;
when the passenger is judged to be a risk person, controlling a first camera in the bus station to capture image information containing the risk person;
executing the Camshift target tracking algorithm according to the image information to track the risk person; and
sending the whereabouts of the risk person to a public security server.
In an embodiment, the tracking method further comprises the following steps:
when the passenger is judged to be a risk person, reading a database to obtain a criminal behavior model, wherein the criminal behavior model is trained by a neural network to record the image classifications corresponding to different criminal behaviors;
comparing the image information of the risk person with the criminal behavior model to judge whether the risk person exhibits abnormal behavior; and
sending the abnormal behavior of the risk person to a public security server.
In an embodiment, the step of comparing the image information of the risk person with the criminal behavior model further includes:
when the passenger is judged to be a risk person, obtaining from a public security server the criminal record associated with the risk person;
filtering the model content of the criminal behavior model according to the criminal record; and
comparing the image information with the filtered criminal behavior model to judge whether the risk person exhibits abnormal behavior.
In an embodiment, the step of deducting the fare according to the account information further includes obtaining, after the fare is successfully deducted, the MAC information of the mobile devices within an allowable range of the face scanning area so as to locate the passenger; and in the step of executing the Camshift target tracking algorithm according to the image information, the risk person is tracked through a fingerprint plan map recorded in a database and the signal strength of the mobile device relative to a Wi-Fi information distribution device in the bus station.
In an embodiment, the bus station has the Wi-Fi information distribution device for broadcasting a Wi-Fi signal, and the database stores the fingerprint plan map, wherein the fingerprint plan map records a plan of the bus station, the positions of the plan covered by the camera pictures of the cameras in the bus station, the position on the plan corresponding to each pixel block in the camera picture of each camera, and the signal strength with which a mobile device receives the Wi-Fi signal at each position of the plan.
In an embodiment, the tracking method further comprises the following steps:
judging whether the risk person is about to leave the shooting range of the first camera;
when the risk person is about to leave the shooting range of the first camera, submitting the last position information of the risk person in the first camera's picture to a second camera installed closest to the first camera;
performing a similarity search for the risk person in the camera picture of the second camera; and
combining the similarity search result with the position information, locating the risk person within the shooting range of the second camera and continuing to track the risk person.
In an embodiment, the tracking method further comprises the following steps:
calculating the current crowd density from the camera picture of the first camera;
querying people-flow speed statistics according to the crowd density to obtain the corresponding moving speed range, wherein the people-flow speed statistics record the moving speed ranges respectively corresponding to various crowd densities in the bus station;
calculating, according to the moving speed range, the time required for the risk person to reach the shooting range of a second camera installed closest to the first camera;
judging whether the required time has elapsed, and judging whether the risk person has disappeared from the camera picture of the first camera;
when the required time has elapsed and the risk person has disappeared from the camera picture of the first camera, planning a circle of interest in the camera picture of the second camera according to the required time and the moving speed range; and
performing a similarity search for the risk person within the circle of interest and executing the Camshift target tracking algorithm, so as to locate the risk person in the camera picture of the second camera and continue tracking the risk person.
In an embodiment, the tracking method further comprises the following steps:
judging whether the risk person has disappeared from the camera pictures of all the cameras in the bus station;
obtaining the bus station entrance position nearest to the risk person before the disappearance;
obtaining the time point at which the risk person disappeared;
querying bus schedule data according to the entrance position and the time point so as to identify the bus taken by the risk person;
querying the driving route and license plate number of that bus;
judging the alighting location of the risk person according to the driving route and the license plate number; and
controlling a camera at the alighting location to capture image information including the risk person, performing a similarity search for the risk person according to the image information, and executing the Camshift target tracking algorithm so as to locate the risk person at the alighting location and continue tracking the risk person.
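The schedule lookup in the steps above can be sketched as matching the disappearance time against departures from the nearest entrance. The schedule entries, route names and plate numbers below are hypothetical, and the five-minute matching window is an assumed tolerance:

```python
from datetime import datetime, timedelta

# Hypothetical bus schedule: (route, license plate, entrance served,
# departure time). All entries are made-up examples.
SCHEDULE = [
    ("Route 12", "A-1234", "north_entrance", datetime(2021, 3, 1, 8, 0)),
    ("Route 12", "A-1234", "north_entrance", datetime(2021, 3, 1, 8, 30)),
    ("Route 7",  "B-5678", "south_entrance", datetime(2021, 3, 1, 8, 5)),
]

def bus_taken(entrance, vanish_time, window=timedelta(minutes=5)):
    """Return the (route, plate) of the bus that departed the entrance
    nearest the person, closest in time to the disappearance; None if no
    departure falls within the window."""
    candidates = [(route, plate, abs(depart - vanish_time))
                  for route, plate, stop, depart in SCHEDULE
                  if stop == entrance and abs(depart - vanish_time) <= window]
    if not candidates:
        return None
    route, plate, _ = min(candidates, key=lambda c: c[2])
    return route, plate
```

The returned route then gives the set of downstream stops at which the alighting-location camera search is performed.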
In an embodiment, the tracking method further comprises the following steps:
obtaining from a public security server the criminal record associated with the risk person;
obtaining the current location of the risk person;
filtering the model content of the criminal behavior model according to the criminal record and the current location, wherein the criminal behavior model is trained by a neural network to record the image classifications corresponding to different criminal behaviors;
comparing the image information of the risk person with the filtered criminal behavior model;
judging whether the risk person exhibits abnormal behavior according to the comparison result; and
reporting to the public security server when the risk person is judged to exhibit abnormal behavior.
Compared with what the prior art can achieve, the present invention can track risk persons in real time, improve the accuracy of target tracking, learn the whereabouts of risk persons, and identify whether a risk person exhibits abnormal behavior.
Drawings
The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular description of preferred embodiments of the invention, as illustrated in the accompanying drawings. Like reference numerals refer to like parts throughout the drawings; the drawings are not drawn to scale, emphasis instead being placed upon illustrating the principles of the invention.
FIG. 1 is a block diagram of a first embodiment of the tracking system of the present invention.
FIG. 2 is a tracking flowchart of a first embodiment of the present invention.
FIG. 3 is a cross-camera tracking flowchart of a first embodiment of the present invention.
FIG. 4 is a cross-camera tracking flowchart of a second embodiment of the present invention.
FIG. 5 is a cross-camera tracking flowchart of a third embodiment of the present invention.
FIG. 6 is a tracking flowchart of a second embodiment of the present invention.
FIG. 7 is an abnormal behavior recognition flowchart of a first embodiment of the present invention.
Reference numerals:
1 … tracking system
11 … background system
12 … face recognition gate
13 … camera
14 … face recognition library
15 … database
151 … criminal behavior model
152 … fingerprint plan map
153 … people-flow speed statistics
16 … Wi-Fi information distribution device
2 … public security server
21 … face blacklist library
3 … mobile device
31 … MAC information
S10-S26, S80-S92 … tracking steps
S30-S36, S40-S52, S60-S74 … cross-camera tracking steps
S100-S110 … abnormal behavior recognition steps
Detailed Description
The following detailed description of the present invention is given with reference to the accompanying drawings and specific embodiments, so that those skilled in the art can better understand and practice the invention; the embodiments are not intended to limit the invention.
The invention discloses a tracking system for tracking risk persons based on face recognition. It combines face recognition, a face information library, a target tracking algorithm, a face blacklist library and related techniques: it identifies a passenger's identity when the passenger enters the station through the gate, finds out whether the passenger is a risk person listed in the face blacklist library, takes that person as the target, tracks the target within the public transportation system, identifies whether abnormal behavior occurs, and sends the recognition result to a public security server, thereby achieving a better security effect.
Fig. 1 discloses a first embodiment of a block diagram of the tracking system of the present invention.
The tracking system 1 is mainly installed in a bus station as part of the public transportation system. In other words, instances of the tracking system 1 may be installed at different bus stations in the same city, or in different cities, so that the invention can also achieve cross-station tracking of a target. Moreover, the tracking system 1 can connect to the Internet, go online with the public security server 2 through the Internet, and transmit the tracking progress to the public security server 2 in real time.
As shown in fig. 1, the tracking system 1 of the present invention mainly includes a background system 11, and a face recognition gate 12, a plurality of cameras 13 and a face recognition library 14 electrically connected to the background system 11. These may be existing components of an ordinary public transportation system, modified in hardware or software and applied to the tracking system 1 of the present invention.
The face recognition gate 12 is installed at an entrance of the bus station and is provided with a face scanning area and a gate. When a passenger wants to enter the bus station, the passenger needs to remove the mask, scan the face and pay in order to enter.
The cameras 13 are fixedly installed at various positions in the bus station, and their camera pictures respectively cover different positions in the bus station. The invention recognizes a target person by means of the image information captured by the plurality of cameras 13, and tracks the target person in real time in the camera picture of each camera 13.
The face recognition library 14 records a plurality of face pictures, each corresponding to the identity of a passenger and that passenger's account information in the public transportation system. The invention obtains the face information of a passenger who wants to enter the station by face scanning and searches for that face information in the face recognition library 14 to obtain the face picture with the highest similarity (for example, a similarity above a preset threshold); the tracking system can then obtain the passenger's identity and account information from that face picture.
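The similarity retrieval described above can be sketched as a cosine-similarity search over face embeddings with a threshold. The embedding vectors, account identifiers and the 0.8 threshold below are illustrative assumptions (the patent does not specify the similarity measure or threshold value):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical face recognition library: account id -> face embedding.
FACE_LIBRARY = {
    "acct_001": [0.9, 0.1, 0.2],
    "acct_002": [0.1, 0.8, 0.3],
}

def match_face(probe, threshold=0.8):
    """Return the account whose stored embedding is most similar to the
    probe, or None if no similarity exceeds the threshold."""
    best_id, best_sim = None, threshold
    for acct, emb in FACE_LIBRARY.items():
        sim = cosine_similarity(probe, emb)
        if sim > best_sim:
            best_id, best_sim = acct, sim
    return best_id
```

A probe that matches no stored face above the threshold yields None, in which case the gate would fall back to another payment method rather than charge the wrong account.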
The background system 11 is used to control various components in the tracking system 1 of the present invention, including the face recognition engine gate 12, the plurality of cameras 13, and the face recognition library 14.
When a passenger wants to enter the bus station, the background system 11 controls the face recognition gate 12 to scan the passenger's face through the face scanning area to obtain the passenger's face information, then submits the face information to the face recognition library 14 for retrieval, which returns at least one face picture with a similarity above the threshold. At this point, the background system 11 can obtain the passenger's identity and account information from the returned face picture and deduct the fare from the passenger's account according to the account information. After the fare is successfully deducted, the background system 11 controls the gate of the face recognition gate 12 to open so that the passenger can enter the bus station.
One of the technical features of the present invention is that, after the fare is successfully deducted, the background system 11 submits the passenger's face information to the face blacklist library 21 for retrieval, to determine whether the passenger is a risk person listed by the police. In the embodiment of fig. 1, the face blacklist library 21 is built into the public security server 2, and the background system 11 connects to the public security server 2 through the Internet to search the face blacklist library 21. In other embodiments, the face blacklist library 21 may be recorded directly in the tracking system 1 for ease of use.
Another technical feature of the present invention is that, after a passenger is determined to be a risk person, the background system 11 immediately controls at least one camera 13 in the bus station to capture image information including an image of the risk person, and executes a target tracking algorithm according to the image information to track the risk person. The background system 11 also continuously transmits the whereabouts of the risk person to the public security server 2 through the Internet. The form of the whereabouts is not limited; it may be, for example, in which camera 13's picture the risk person currently appears, at which platform of the bus station, on which floor of the bus station, or at which bus station.
Specifically, after the background system 11 finds that the passenger is a risk person, it first controls one or more cameras 13 capable of capturing the risk person to capture and store one or more pieces of image information of the risk person. The image information referred to here is a whole-body or half-body image of the risk person captured by each camera 13.
When target detection is performed (that is, judging whether the person in the image information is the targeted risk person), a single piece of image information is used; when target tracking is performed, the person is continuously tracked across many frames of image information over time. In general, target detection detects the positions and classes of objects in a digital image: a model is created, a digital image is input into the model, and the model outputs the positions and classes of all objects in the image, drawn on the digital image. In the present invention, the output of the model is the risk person (that is, the object class is "person") and the position of the risk person. The multi-angle whole-body or half-body images of the risk person acquired by target detection can then be used as the starting pictures for the tracking performed by each camera 13.
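Turning a generic detector's output into a tracking seed can be sketched as follows. The detection tuple layout, the 0.5 score threshold, and the idea of seeding from the detection nearest the gate are illustrative assumptions, not details fixed by the patent:

```python
def person_boxes(detections, min_score=0.5):
    """Keep only confident 'person' detections from a detector's output;
    each detection is (label, score, (x, y, w, h))."""
    return [box for label, score, box in detections
            if label == "person" and score >= min_score]

def seed_box(detections, gate_xy, min_score=0.5):
    """Choose the person detection whose box centre is closest to the gate
    position as the starting picture for tracking."""
    boxes = person_boxes(detections, min_score)
    if not boxes:
        return None
    return min(boxes,
               key=lambda b: (b[0] + b[2] / 2 - gate_xy[0]) ** 2
                           + (b[1] + b[3] / 2 - gate_xy[1]) ** 2)
```

The chosen box crops the whole-body or half-body starting picture from which each camera's tracker is initialized.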
In another embodiment, the tracking system 1 may also obtain from the public security server 2 the criminal record associated with the risk person when confirming that the passenger is a risk person. For example, the tracking system 1 searches for the passenger's face information in the face blacklist library 21 and finds that the passenger is a suspect released after serving a sentence for theft (that is, the criminal behavior is theft), so there is a need to pay attention to whether the suspect commits theft again. In this embodiment, the tracking system 1 of the present invention may further analyze and detect the motion of the risk person through the acquired image information to determine whether the risk person exhibits abnormal behavior (such as theft).
The above-mentioned criminal behavior also includes following the passenger in front through the gate while it is open, that is, fare evasion. By identifying the risk person's identity and judging the fare evasion behavior, the tracking system 1 of the present invention can also automatically deduct one fare from the passenger's account each time the passenger is judged to have evaded the fare, so as to recover the loss caused by the fare evasion.
As described above, the invention mainly scans a passenger's face and deducts the fare when the passenger enters the station, while simultaneously identifying whether the passenger is a risk person. When the passenger is determined to be a risk person, the background system 11 first controls one or more cameras 13 whose shooting ranges cover the position of the face recognition gate 12 (that is, the cameras 13 located close to the gate) to capture image information of the risk person, and then executes a target tracking algorithm using that image information.
Specifically, the present invention mainly adopts the Continuously Adaptive Mean Shift (CamShift) target tracking algorithm, which is an improvement on the Mean Shift algorithm. The CamShift algorithm automatically adjusts the size of its search window to fit the target, and can therefore track a target whose size changes over the course of a video.
The basic idea of the CamShift algorithm is to use the color information of the moving object in the video image as the tracking feature and to run the Mean Shift algorithm on each frame of the input image, taking the target center and search window size (kernel function bandwidth) found in the previous frame as the initial center and size of the Mean Shift search window in the next frame. Iterating in this way realizes tracking of the target.
Because the search window is initialized at the current center and size of the tracked target before each search, and the target is usually still near that area, the search time is shortened. In addition, since the color of the tracked target changes little as it moves, the CamShift algorithm has good robustness.
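The Mean Shift step at the heart of CamShift can be sketched in a few lines. The following is a minimal, self-contained illustration that shifts a search window toward the centroid of a precomputed back-projection map (the per-pixel probability of belonging to the target's color histogram); in a real system OpenCV's `cv2.CamShift` would be used instead, and all names and parameters here are illustrative:

```python
# Minimal Mean-Shift window update over a back-projection probability map.
# prob_map: 2D list of per-pixel target probabilities; window: (x, y, w, h).
def mean_shift(prob_map, window, max_iter=10, eps=1.0):
    x, y, w, h = window
    for _ in range(max_iter):
        # Zeroth and first image moments inside the current window.
        m00 = m10 = m01 = 0.0
        for j in range(y, min(y + h, len(prob_map))):
            for i in range(x, min(x + w, len(prob_map[0]))):
                p = prob_map[j][i]
                m00 += p
                m10 += p * i
                m01 += p * j
        if m00 == 0:
            break                                  # no target mass in window
        cx, cy = m10 / m00, m01 / m00              # centroid of the mass
        nx = int(round(cx - w / 2.0))
        ny = int(round(cy - h / 2.0))
        if abs(nx - x) < eps and abs(ny - y) < eps:
            break                                  # converged on the target
        x, y = max(nx, 0), max(ny, 0)
    return (x, y, w, h)
```

CamShift additionally re-estimates the window size from the second-order moments after each convergence, which is what lets it follow a target whose apparent size changes.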
As shown in fig. 1, the tracking system 1 of the present invention further includes a database 15 electrically connected to the backend system 11, which stores at least a pre-trained criminal behavior model 151. One of the technical features of the present invention is that, in addition to tracking the risk person through the image information acquired by the camera 13, the tracking system 1 can perform a behavior detection algorithm on that image information to determine whether the risk person exhibits abnormal behavior.
The criminal behavior model 151 is a model trained in advance with a Convolutional Neural Network (CNN), and records the image classifications corresponding to different criminal behaviors. The behavior detection algorithm may be implemented with a two-stream network (Two-Stream Convolutional Networks), which includes a spatial stream network (Spatial Stream Network) that takes still frame pictures as input and a temporal stream network (Temporal Stream Network) that takes dense optical flow sequences between successive frames as input.
Specifically, when performing behavior detection, the backend system 11 samples a certain number of frames from a given piece of video (i.e. a continuous sequence of image information), with equal time intervals between frames. Then, for each sampled frame, the backend system 11 performs random cropping (crop) and flipping (flip) to obtain multiple network inputs (for example, 10 inputs). For each network input, the backend system 11 takes the RGB image as the input of the spatial stream network, and extracts the dense optical flow sequence over the subsequent L consecutive frames as the input of the temporal stream network. Finally, the backend system 11 takes the average of the classification scores obtained for all the inputs as the classification score of the entire video.
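The sampling and score-fusion scheme above can be sketched as follows, with the two networks treated as black-box scoring functions; the function names are illustrative, not the patent's implementation:

```python
# Sketch of two-stream behavior detection bookkeeping: equal-interval frame
# sampling, then averaging per-input classification scores over the video.
def sample_frame_indices(n_frames, n_samples):
    """Pick n_samples frame indices at equal time intervals."""
    step = n_frames / float(n_samples)
    return [int(step * k) for k in range(n_samples)]

def fuse_video_scores(per_input_scores):
    """Average class scores over every sampled frame and crop/flip input.

    per_input_scores: one score vector per network input (spatial and
    temporal stream outputs already combined), one score per behavior class.
    """
    n = len(per_input_scores)
    n_classes = len(per_input_scores[0])
    return [sum(s[c] for s in per_input_scores) / n for c in range(n_classes)]
```

In the full pipeline, each sampled frame would contribute 10 crop/flip inputs, and each input's vector would itself be a fusion of the spatial-stream and temporal-stream softmax outputs.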
In the present invention, the backend system 11 captures continuous image information of the risk person through the camera 13 and compares it with the pre-trained criminal behavior model 151, so that it can judge whether the risk person exhibits an abnormal behavior (such as assault, theft, or ticket evasion), and raise an alarm or notify the public security server 2 when an abnormal behavior occurs.
As shown in fig. 1, the tracking system 1 of the present invention further includes a Wi-Fi information distribution device 16 electrically connected to the backend system 11. The Wi-Fi information distribution device 16 is arranged in the bus station and sends Wi-Fi signals outward; there may be one or more such devices. As shown in fig. 1, the database 15 further stores a fingerprint identification plan map 152. Here, "fingerprint" does not refer to a physiological fingerprint of the passenger, but to a fixed characteristic of the mobile device 3 carried by the passenger (e.g. its Media Access Control (MAC) information), which serves as an identification feature directly associated with the passenger; it is as unique as a fingerprint, hence the name.
In the present invention, the fingerprint identification plan map 152 corresponds to a bus station and records: a plan map of the bus station (which captures the structure of the entire station, such as the positions of the boarding points, road widths, and heights); the areas of the plan map covered by the camera images of the cameras 13 in the station; the plan-map position corresponding to each pixel coordinate of each camera image; and the signal strength with which the mobile device 3 receives the Wi-Fi signal of the Wi-Fi information distribution device 16 at each position on the plan map.
In order to track the risk person more accurately, the tracking system 1 of the present invention can also combine the CamShift target tracking algorithm with mobile device fingerprint positioning based on MAC information.
The technical principle of mobile device fingerprint positioning is to associate a location in the actual environment with a certain fingerprint of the mobile device 3 (e.g. its MAC information) to generate a unique location fingerprint (i.e. each location corresponds to a unique fingerprint). This fingerprint may have one or more dimensions, such as information being received or transmitted by the device to be positioned, and may consist of one or more characteristics of that information or signal (e.g. signal strength).
If the mobile device to be positioned transmits signals and some fixed receiving devices sense these signals or information and then locate it, the approach is usually called remote positioning or network positioning. If the mobile device receives signals or information from fixed transmitting devices and then estimates its own position from the detected characteristics, the approach is called self-positioning. The mobile device may also send the characteristics it detects to a server node in the network, which estimates the device's position using all the information it can obtain; this is known as hybrid positioning.
In the tracking system 1 of the present invention, the mobile device 3 carried by the risk person can be positioned through the Wi-Fi information distribution device 16 arranged in the bus station and the preset fingerprint identification plan map 152, thereby improving tracking accuracy.
Specifically, the face recognition gate 12 is fixedly arranged in the bus station and has a known position. In the present invention, when the identity of a passenger is recognized by the face recognition gate 12, the fee is deducted successfully, and the passenger is judged to be a risk person, the backend system 11 can directly obtain, using mobile device fingerprint positioning, the MAC information 31 of the mobile device 3 within an allowable range of the face recognition gate 12, so as to position the risk person; that is, the position of the mobile device 3 is regarded as the position of the risk person. It should be noted that the risk person does not necessarily carry a mobile device 3; if the backend system 11 cannot obtain the MAC information 31 in the above manner, the MAC address is left empty by default.
After the MAC information 31 of the mobile device 3 is successfully obtained, the backend system 11 can continuously track the risk person through the fingerprint identification plan map 152 and the signal strength measured at the mobile device 3. Specifically, the backend system 11 continuously transmits Wi-Fi signals through the Wi-Fi information distribution device 16, determines the strength with which the mobile device 3 receives them, and queries the fingerprint identification plan map 152 with that signal strength to infer the current location of the mobile device 3 (i.e. of the risk person), thereby continuously tracking the device.
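The fingerprint-map query above can be sketched as a nearest-neighbor lookup: each plan-map position stores the signal-strength vector expected from the Wi-Fi distribution device(s), and the position whose stored vector is closest to the currently measured vector is taken as the device location. The map contents below are illustrative, not from the patent:

```python
# Minimal Wi-Fi fingerprint localization: nearest stored RSSI vector wins.
FINGERPRINT_MAP = {
    # (x, y) on the plan map -> expected RSSI (dBm) from each Wi-Fi device
    (0, 0):  [-40, -75],
    (5, 0):  [-55, -60],
    (10, 0): [-72, -43],
}

def locate(measured_rssi):
    """Return the plan-map position whose fingerprint best matches."""
    def dist2(stored):
        # Squared Euclidean distance in signal-strength space.
        return sum((a - b) ** 2 for a, b in zip(stored, measured_rssi))
    return min(FINGERPRINT_MAP, key=lambda pos: dist2(FINGERPRINT_MAP[pos]))
```

In practice the map would be surveyed densely over the whole station and the measured vector smoothed over time, but the lookup principle is the same.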
It should be noted that the tracking system 1 of the present invention may further store a matching library (not shown) that pairs faces with MAC information 31, which is used for screening, so as to prevent the system from tracking the wrong person when another passenger is close to the gate.
Specifically, each time a passenger passes the gate by face scanning, the backend system 11 can synchronously acquire the MAC information 31 of the passenger's mobile device 3 and save it to the database 15. When the number of times a specific passenger and a specific piece of MAC information 31 appear together exceeds a preset count, the backend system 11 treats the pairing of that passenger's face information with that MAC information 31 as correct and writes it into the matching library. From then on, as soon as the face information is recognized, the backend system 11 automatically retrieves the MAC information 31 and tracks the mobile device 3, thereby avoiding tracking errors.
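The co-occurrence counting that builds the matching library can be sketched as follows; the threshold value and data structures are illustrative assumptions:

```python
# Sketch of the face/MAC pairing logic: count co-occurrences at the gate and
# promote a pair to the matching library once it passes a preset threshold.
from collections import Counter

PAIR_THRESHOLD = 3   # illustrative preset count

class FaceMacMatcher:
    def __init__(self):
        self._counts = Counter()
        self.matching_library = {}   # face_id -> confirmed MAC address

    def observe(self, face_id, mac):
        """Record one gate passage where face_id and mac appeared together."""
        self._counts[(face_id, mac)] += 1
        if self._counts[(face_id, mac)] >= PAIR_THRESHOLD:
            self.matching_library[face_id] = mac

    def mac_for(self, face_id):
        """Return the confirmed MAC for a face, or None if not yet paired."""
        return self.matching_library.get(face_id)
```

Requiring several co-occurrences before trusting a pair is what filters out a bystander's phone that merely happened to be near the gate once.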
Referring to fig. 2, fig. 2 discloses a first embodiment of the tracking flowchart of the present invention. The present invention further discloses a tracking method, applied in a bus station by the tracking system 1 shown in fig. 1.
First, the tracking system 1 performs a face scanning operation on the passenger through the face scanning area of the face recognition gate 12 provided at the entrance of the bus station, and obtains the face information of the passenger (step S10). Next, the face information is compared with the face recognition library 14 to obtain the identity and account information of the passenger (step S12). As described above, the face recognition library 14 records a plurality of face pictures, which respectively correspond to the identities and account information of different passengers.
Next, the backend system 11 deducts the fee from the passenger's account based on the obtained account information, and determines whether the deduction is successful (step S14). If the deduction fails, for example because the balance in the account is insufficient, the backend system 11 does not open the gate, and the passenger cannot enter the station.
When the deduction is successful, the backend system 11 controls the gate of the face recognition gate 12 to open (step S16). Meanwhile, the backend system 11 searches the face blacklist library 21 for the face information obtained by face scanning (step S18) to determine whether the passenger is a risk person (step S20). If the currently entering passenger is determined not to be a risk person (i.e. the passenger's face information is not in the face blacklist library 21), the backend system 11 does not track the passenger, so as to reduce the burden on the system.
If it is determined in step S20 that the currently entering passenger is a risk person (i.e. the face information is successfully matched in the face blacklist library 21), the backend system 11 controls at least one camera 13 in the bus station to capture image information of the risk person, and executes the CamShift target tracking algorithm on the image information to track the risk person in the images captured by the at least one camera 13 (step S22). Specifically, in step S22, the backend system 11 controls at least one camera 13 whose imaging range covers the gate, and that camera continuously captures image information of the risk person.
In other embodiments, the backend system 11 further determines whether the risk person exhibits abnormal behavior by using the continuous image information of the risk person (step S24). Finally, the backend system 11 transmits the whereabouts and abnormal behaviors of the risk person to the public security server 2 via the internet (step S26). By these technical means, the tracking system 1 of the present invention achieves a better security effect.
It should be noted that, when the backend system 11 finds in step S20 that the passenger is a risk person, it may synchronously read the database 15 to obtain the criminal behavior model 151, which is a model trained in advance with a neural network and records the image classifications corresponding to different criminal behaviors. In step S24, the backend system 11 compares the image information of the risk person captured by the camera 13 with the criminal behavior model 151 to determine whether the risk person exhibits abnormal behavior. In step S26, the backend system 11 sends the abnormal behavior to the public security server 2 when it occurs.
In one embodiment, after the backend system 11 finds in step S20 that the passenger is a risk person, in addition to reading the criminal behavior model 151 from the database 15, it can obtain from the public security server 2 the criminal acts related to the risk person, such as theft, assault, or ticket evasion. With these specific criminal acts, the backend system 11 can filter the criminal behavior model 151 stored in the database 15 and keep only the model contents relevant to the risk person. Then, in step S24, the backend system 11 compares the image information of the risk person with the filtered criminal behavior model 151 to determine whether the risk person exhibits abnormal behavior.
By filtering the criminal behaviors in this way, the tracking method can focus detection on the criminal acts a risk person is likely to commit and perform targeted recognition. With a smaller range of actions to recognize, unnecessary detection is avoided and the number of action classifications is reduced, thereby improving classification accuracy and system performance.
In order to correctly locate the risk person once a passenger has been identified as such, the backend system 11 can, after the fee is successfully deducted in step S14, use mobile device fingerprint positioning to obtain from the face recognition gate 12 the MAC information 31 of the mobile device 3 within an allowable range, so as to locate the risk person (i.e. the position of the qualifying mobile device 3 is taken as the position of the risk person). Then, in step S22, the backend system 11 may combine mobile device fingerprint positioning with the CamShift target tracking algorithm: while tracking the risk person through the image information, it simultaneously tracks the person through the fingerprint identification plan map 152 stored in the database 15 and the signal strength of the mobile device 3 relative to the Wi-Fi information distribution device 16 in the bus station.
The mobile device fingerprint positioning technology, the fingerprint identification plan map 152, and the Wi-Fi information distribution device 16 are as described above and are not repeated here.
The tracking method of the present invention combines the MAC information 31 of the mobile device 3 with the CamShift target tracking algorithm to improve positioning and tracking accuracy, but not every passenger carries a mobile device 3. Therefore, in addition to mobile device fingerprint positioning, the tracking method of the present invention also refers to the crowd density and crowd speed in the bus station to locate and track the target.
The crowd density within a bus station can be estimated with the CrowdNet algorithm. Specifically, the image information/face information in the bus station is acquired through the cameras 13, and inputting it into a preset CrowdNet model yields the estimated crowd count. The crowd speed can be obtained by measuring, in advance, the crowd speeds at various crowd densities in the same bus station, and the statistics are then stored in the database 15. With this pre-computed mapping between crowd density and crowd speed, the backend system 11 of the present invention can estimate the range of speeds at which the risk person is likely to be moving from the current crowd density, and thereby locate and track the risk person.
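The density-to-speed lookup can be sketched as a small statistics table; all numbers below are illustrative assumptions, not measured values from the patent:

```python
# Sketch of mapping crowd density (people per square metre, e.g. estimated by
# CrowdNet from a camera image) to a plausible walking-speed range, using a
# pre-computed per-station statistics table.
SPEED_STATS = [
    # (max density for this bucket, (min speed, max speed) in m/s)
    (0.5, (1.0, 1.6)),
    (1.5, (0.6, 1.2)),
    (3.0, (0.3, 0.8)),
]

def speed_range(density):
    """Look up the movement-speed range for the given crowd density."""
    for max_d, rng in SPEED_STATS:
        if density <= max_d:
            return rng
    return (0.1, 0.4)   # very dense crowd: shuffle speed

def reachable_radius(density, seconds):
    """Farthest distance the target could plausibly have moved."""
    return speed_range(density)[1] * seconds
```

The upper bound of the range is what bounds the region a vanished target could have reached, which is used later when planning the interest range ring.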
Referring to fig. 3, fig. 3 discloses a first embodiment of the cross-camera tracking flowchart of the present invention. As shown in fig. 3, when a passenger has been determined to be a risk person who is to be located and tracked, the backend system 11 determines whether the MAC information 31 of the risk person's mobile device 3 can be obtained by the above-described technical means (step S30).
If the backend system 11 successfully obtains the MAC information 31 of the risk person's mobile device 3, the tracking system 1 can use the aforementioned mobile device fingerprint positioning technology. In this case, the backend system 11 performs target tracking with the CamShift algorithm and continues tracking through mobile device fingerprint positioning when the risk person crosses from the range of one camera 13 to another (step S32). While tracking, the backend system 11 continuously transmits the whereabouts and abnormal behaviors of the risk person to the public security server 2 (step S36).
If the backend system 11 cannot obtain the MAC information 31 of the risk person's mobile device 3, the risk person may not be carrying a mobile device 3. In this case, the backend system 11 performs target tracking with the CamShift algorithm, calculates from the crowd speed the region the risk person may have reached, and, when the risk person crosses between the ranges of different cameras 13, re-acquires and continues tracking the person through the planning of a Region of Interest (ROI) (step S34). Likewise, the backend system 11 continuously transmits the whereabouts and abnormal behaviors of the risk person to the public security server 2 while tracking (step S36).
How the tracking method of the present invention realizes cross-camera tracking in combination with mobile device fingerprint positioning technology will be described below with reference to the drawings.
If there are multiple cameras 13 in the bus station (hereinafter a first camera and a second camera are taken as an example), then, when the risk person is about to leave the imaging range of the first camera and enter the range of the adjacent second camera, the backend system 11 can hand the risk person over across cameras using the camera position relationships in the plan map. Specifically, the backend system 11 determines from the plan map that the second camera is the one geographically closest to the first camera, and controls the second camera to start capturing image information.
In this embodiment, because the risk person moves across cameras, their appearance in the image information acquired by the second camera may change considerably; if tracking continued purely from the image information last acquired by the first camera, tracking errors would be likely. Therefore, the tracking method of the present invention combines mobile device fingerprint positioning to reduce the incidence of tracking errors.
Referring to fig. 4, fig. 4 discloses a second embodiment of the cross-camera tracking flow chart of the present invention.
As shown in fig. 4, after a risk person has been identified, the backend system 11 controls a first camera among the cameras 13 to continuously capture image information of the risk person, and executes the CamShift target tracking algorithm on that image information to track the risk person within the imaging range of the first camera (step S40). The backend system 11 then continuously determines whether the risk person is about to leave the imaging range of the first camera (step S42).
If the risk person is not leaving the imaging range of the first camera, the backend system 11 keeps controlling the first camera to track the risk person.
If the risk person is about to leave the imaging range of the first camera, the backend system 11 hands over the last position information of the risk person in the first camera's image to a second camera among the cameras 13 (step S44). Specifically, the position information is that determined by mobile device fingerprint positioning with the MAC information 31 of the mobile device 3 carried by the risk person. The second camera is, among the cameras 13 in the bus station, the one whose position on the plan map is closest to the first camera.
Next, the backend system 11 controls the second camera to capture image information and compares it with the face information of the risk person, performing a similarity search for the risk person within the imaging range of the second camera (step S46). The backend system 11 then maps the position information (i.e. the position of the risk person's mobile device 3 on the fingerprint identification plan map 152) to pixel coordinates in the second camera's image (step S48), and locates the risk person according to the similarity results, the position information, and their corresponding weights (step S50). After step S50, the tracking system 1 can continue to track the risk person within the imaging range of the second camera (step S52).
For example, after performing a similarity search with the image information captured by the second camera and the face information of the risk person, the backend system 11 may find three candidates suspected to be the risk person, whose face similarity scores against the risk person are 0.6, 0.7, and 0.8 respectively. The backend system 11 then takes the last position information of the risk person acquired via the first camera and determines that the current positions of the first, second, and third candidates are 0.5 m, 0.4 m, and 0.3 m from that position respectively. On this basis, the backend system 11 can determine that the third candidate is the risk person and continue tracking that candidate within the imaging range of the second camera.
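The weighted decision in the three-candidate example can be sketched as follows. The patent does not specify the weights or the distance normalization, so those are illustrative assumptions, but the sketch reproduces the outcome of the example above:

```python
# Sketch of fusing face similarity with distance to the last
# fingerprint-located position when re-acquiring a target across cameras.
def pick_candidate(candidates, w_sim=0.7, w_dist=0.3, max_dist=1.0):
    """candidates: list of (name, similarity, distance_in_metres).

    Distance is converted to a closeness score in [0, 1] so that both
    evidence sources can be combined with simple weights.
    """
    def score(c):
        _, sim, dist = c
        closeness = max(0.0, 1.0 - dist / max_dist)
        return w_sim * sim + w_dist * closeness
    return max(candidates, key=score)[0]
```

With the example's numbers the third candidate wins on both evidence sources, so any reasonable positive weights would select it; the weights only matter when the two sources disagree.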
As described above, if the MAC information 31 of the risk person's mobile device 3 is unavailable, the backend system 11 can still improve tracking accuracy by referring to the crowd speed. How the tracking method of the present invention realizes cross-camera tracking in combination with the crowd speed is described below with reference to the drawings.
Referring to fig. 5, fig. 5 discloses a third embodiment of the cross-camera tracking flowchart of the present invention. As shown in fig. 5, after a risk person has been identified, the backend system 11 controls the first camera among the cameras 13 to continuously capture image information of the risk person and tracks the person within the first camera's imaging range through the CamShift target tracking algorithm (step S60). Meanwhile, the backend system 11 calculates the current crowd density in the bus station from the image information captured by the first camera (step S62). The backend system 11 then queries the crowd speed statistics 153 stored in advance in the database 15 with the current crowd density to obtain the corresponding movement speed range (step S64).
The crowd speed statistics 153 record the movement speed ranges of people in the bus station under different crowd densities. In the present invention, the public transport operator can measure these speeds in advance for various crowd densities in the bus station and store the results as the crowd speed statistics 153 in the database 15.
Next, based on the obtained movement speed range, the backend system 11 calculates the time the risk person would need to move from the current position in the first camera's view to the imaging range of the second camera, the camera closest to the first (step S66). The backend system 11 then continuously checks whether this required time has elapsed (step S68) and whether the risk person has disappeared from the imaging range of the first camera (step S70). If the required time has not elapsed, or the risk person has not disappeared from the first camera's imaging range, the backend system 11 returns to step S60 and continues executing the CamShift target tracking algorithm on the image information captured by the first camera, so as to keep tracking the risk person within the first camera's range.
One of the technical features of the present invention is that the backend system 11 does not start the target tracking and monitoring action of the second camera until the required time has elapsed and the risk person has disappeared from the imaging range of the first camera, so as to reduce the computational burden on the system. Moreover, once the required time has passed and the risk person has disappeared from the first camera's range, the backend system 11 improves the accuracy of recognizing the risk person within the second camera's range by constructing an interest range ring.
Specifically, when the required time has elapsed and the risk person has disappeared from the imaging range of the first camera, the backend system 11 plans an interest range ring within the imaging range of the second camera according to the required time and the movement speed range (step S72), and performs the similarity search and target tracking of the risk person only within that ring, so as to locate and track the risk person within the second camera's imaging range based on the image information it captures (step S74).
The interest range ring is the plausible region in which the risk person is estimated to appear, derived from the person's movement speed and the elapsed time. In this embodiment, the backend system 11 does not perform the similarity search for the risk person in areas outside the interest range ring, which effectively reduces the computational burden of the tracking system 1. If the backend system 11 performs the similarity search within the ring and finds multiple candidates whose similarity exceeds the threshold, it regards the candidate with the highest similarity as the risk person.
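The interest range ring construction can be sketched directly from the quantities above: after the target has been out of view for a given time, it can plausibly be between `elapsed * v_min` and `elapsed * v_max` metres from its last known position, so the search is restricted to that annulus. Names and units below are illustrative:

```python
# Sketch of interest-range-ring planning: the annulus around the last known
# position that the vanished target could plausibly have reached.
import math

def interest_ring(elapsed, v_min, v_max):
    """Return (inner_radius, outer_radius) in metres for the ring."""
    return (elapsed * v_min, elapsed * v_max)

def in_ring(point, last_pos, ring):
    """True if `point` lies within the ring centred on `last_pos`."""
    inner, outer = ring
    d = math.hypot(point[0] - last_pos[0], point[1] - last_pos[1])
    return inner <= d <= outer
```

Only second-camera detections falling inside the ring would be passed to the similarity search, which is what saves computation relative to searching the whole frame.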
As described above, the tracking method of the present invention can effectively improve accuracy in cross-camera tracking by combining the CamShift target tracking algorithm with mobile device fingerprint positioning, with the calculation of crowd density and crowd speed, or with both at once.
It should be noted that, with the tracking system 1 and the tracking method of the present invention, a risk person can be tracked not only within the bus station but also after leaving it. The specific technical features are described below with reference to the figures.
Referring to fig. 6, fig. 6 is a tracking flowchart according to a second embodiment of the present invention. A risk person who enters the bus station presumably intends to take a bus, and once aboard, the person disappears from the images of all the cameras 13 in the station. Therefore, in the present invention, the backend system 11 continuously determines whether the risk person has disappeared from the captured images of all the cameras 13 in the bus station (step S80).
When the backend system 11 determines that the risk person has disappeared from the captured images of all the cameras 13, it further acquires the boarding position of the bus stop closest to the risk person's last known position before disappearing (step S82), as well as the time point of the disappearance (step S84). With this boarding position and time point, the backend system 11 can query the bus schedule data to determine which bus was stopped at that position at that time (step S86), i.e. to determine the bus the risk person boarded. The backend system 11 further obtains the travel route and license plate number of that bus (step S88).
Specifically, information such as the boarding position and time point at the bus stop, whether a bus stopped, the bus's travel route, and its license plate number can be obtained through the internet.
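The schedule query in steps S82-S88 can be sketched as a lookup for the bus whose dwell interval at the boarding position contains the moment the person vanished. The timetable rows below are illustrative assumptions:

```python
# Sketch of inferring the boarded bus from the boarding position and the
# timestamp at which the risk person vanished from all station cameras.
TIMETABLE = [
    # (stop_id, license_plate, route, arrival_ts, departure_ts)
    ("gate_A", "B-1234", "route 7",  1000, 1060),
    ("gate_A", "B-5678", "route 12", 1100, 1150),
    ("gate_B", "B-9999", "route 7",  1010, 1070),
]

def bus_at(stop_id, vanish_ts):
    """Return (license_plate, route) of the bus dwelling at stop_id when
    the person vanished, or None if no bus was there at that moment."""
    for stop, plate, route, arr, dep in TIMETABLE:
        if stop == stop_id and arr <= vanish_ts <= dep:
            return plate, route
    return None
```

The returned route and plate are what drive the alighting-point prediction in step S90.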
In addition, since cameras 13 are installed at the boarding and alighting positions of each bus stop, the tracking system 1 of the present invention can determine the alighting point of the risk person from the travel route and license plate number of the boarded bus, in combination with the cross-camera tracking scheme described above (step S90). The tracking system 1 can then control the camera 13 at the alighting point to capture image information of the risk person and perform the similarity search and CamShift target tracking algorithm on that image information, thereby locating the risk person at the alighting point and continuing the tracking (step S92).
Specifically, the cameras 13 in step S92 may include cameras in the public transportation system and Sharp Eyes Project (Xueliang Project) surveillance cameras near the alighting point. By activating these cameras, the tracking system 1 of the present invention can continuously track the risk person and detect abnormal behaviors by the technical scheme described above, and continuously transmit the whereabouts and abnormal behaviors of the risk person to the public security server 2.
In addition, in order to recognize the abnormal behavior of the risk person more accurately while saving computational resources, the tracking system 1 of the present invention may restrict detection to the behaviors most relevant to the risk person.
Referring to fig. 7, fig. 7 discloses a first embodiment of the abnormal behavior recognition flowchart of the present invention. In the present invention, the tracking system 1 can obtain the criminal acts related to the risk person from the public security server 2 after identifying the risk person (step S100), and confirm the current location of the risk person by the above technical means after the person leaves the bus station (step S102). The tracking system 1 can then filter the model contents of the criminal behavior model 151 according to the criminal acts and the location (step S104). In this embodiment, the criminal behavior model 151 need not be stored in the database 15 of the public transportation system.
After step S104, the tracking system 1 may compare the image information of the risk person obtained by the camera 13 with the filtered criminal behavior model 151 (step S106) to determine whether the risk person has abnormal behavior (step S108). The abnormal behavior here refers to the criminal behavior that the system speculates that the risk person may present.
If it is determined in step S108 that the risk person does not exhibit abnormal behavior, the tracking system 1 returns to step S106 to continue tracking the risk person through the image information and to continue determining whether the risk person exhibits abnormal behavior. If it is determined in step S108 that the risk person does exhibit abnormal behavior (that is, the comparison yields a match between the image information and the filtered criminal behavior model 151), the tracking system 1 regards the matched behavior content as the abnormal behavior of the risk person and notifies the public security server 2 (step S110).
For example, if the risk person has a history of violent behavior, the criminal behavior model 151 filtered in step S104 retains the model content corresponding to violent behavior. If the current location of the risk person is a remote area, robbery is more likely to occur there, and therefore the criminal behavior model 151 filtered in step S104 retains the model content corresponding to robbery. If the current location of the risk person is a busy district, theft is more likely to occur there, and therefore the criminal behavior model 151 filtered in step S104 retains the model content corresponding to theft, and so on.
By filtering the criminal behavior model 151 with the above information, the present invention can identify the behavior of the risk person in a targeted manner. A smaller recognition range reduces the number of candidate action classes, thereby improving both classification accuracy and system performance.
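Under these assumptions, the filtering of step S104 can be sketched as a rule lookup: a behavior class stays in the filtered model when the person's record includes it or the current zone makes it likely. The class names, record labels, and zone labels below are hypothetical, invented for the example.

```python
# Hypothetical mapping from behavior classes to the conditions under which
# they survive filtering: a matching prior record, or a zone type where
# that crime is considered likely. All labels are invented for the example.
BEHAVIOR_CLASSES = {
    "violence": {"record": "violent", "zones": set()},
    "robbery":  {"record": "robbery", "zones": {"remote"}},
    "theft":    {"record": "theft",   "zones": {"busy"}},
}

def filter_classes(records, zone):
    """Keep a class if the person's record includes it or the current zone
    makes it likely; the classifier then scores only these classes."""
    return [name for name, rule in BEHAVIOR_CLASSES.items()
            if rule["record"] in records or zone in rule["zones"]]

# A person with a violent record currently in a remote area:
print(filter_classes({"violent"}, "remote"))  # → ['violence', 'robbery']
```

This mirrors the worked example above: the violent record retains the violence class, and the remote location retains robbery, while theft is dropped.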
The present invention constructs a face recognition library in a public transportation system, requires passengers to remove their masks when entering the system so that complete face information is obtained, and searches the face information in a public security face blacklist library. When a passenger is determined to be a risk person, the invention combines a target tracking algorithm, mac fingerprint positioning, in-vehicle pedestrian flow density, multi-camera switching and other technical features to continue tracking the risk person, so that the tracking of the risk person within the public transportation system is more accurate.
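The blacklist search described above is typically performed by comparing face embedding vectors under a similarity measure. The sketch below assumes hypothetical 128-dimensional embeddings and a cosine-similarity threshold of 0.8; both are illustrative choices, not values from the patent.

```python
import numpy as np

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_blacklist(probe, blacklist, threshold=0.8):
    """Return (person_id, score) of the best blacklist match whose cosine
    similarity reaches the threshold, or None if no entry qualifies."""
    best_id, best = None, threshold
    for pid, emb in blacklist.items():
        score = cosine_sim(probe, emb)
        if score >= best:
            best_id, best = pid, score
    return (best_id, best) if best_id is not None else None

# Toy 128-d "embeddings"; the probe is a noisy capture of risk_001's face.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=128)
blacklist = {"risk_001": enrolled, "risk_002": rng.normal(size=128)}
probe = enrolled + rng.normal(scale=0.05, size=128)
match = search_blacklist(probe, blacklist)
print(match[0])  # → risk_001
```

In practice the embeddings would come from a trained face recognition network, and the threshold would be tuned on labeled match/non-match pairs.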
Moreover, because the public transportation system is used, the whereabouts of the risk person can be known, and in particular movements toward remote areas can be reported to the public security server in real time.
After the risk person is locked onto, the invention can also focus abnormal-action judgment by combining the mac information of the mobile device carried by the risk person with the criminal behaviors related to the risk person, instead of judging all image information and videos, thereby saving a large amount of computing resources of the public transportation system.
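The mac fingerprint positioning summarized above can be illustrated as nearest-neighbor matching of an observed RSSI vector against a stored fingerprint map of the station floor plan; the AP names, positions, and dBm values below are invented for the example.

```python
# Hypothetical fingerprint map: expected RSSI (dBm) from each Wi-Fi AP at
# sampled floor-plan positions. Positions, AP names, and dBm values are
# invented for this example.
FINGERPRINT_MAP = {
    (0, 0):  {"ap1": -40, "ap2": -70},
    (5, 0):  {"ap1": -55, "ap2": -60},
    (10, 0): {"ap1": -70, "ap2": -45},
}

def locate(observed):
    """Nearest-neighbor fingerprint match: pick the stored position whose
    expected RSSI vector is closest (squared error) to the observed one."""
    def dist(fingerprint):
        return sum((fingerprint[ap] - rssi) ** 2
                   for ap, rssi in observed.items())
    return min(FINGERPRINT_MAP, key=lambda pos: dist(FINGERPRINT_MAP[pos]))

print(locate({"ap1": -68, "ap2": -47}))  # → (10, 0)
```

Fused with the camera-coordinate mapping in the fingerprint plane map, such a position estimate can disambiguate which person in the frame carries the tracked device.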
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention. All possible combinations of the technical features in the above embodiments may not be described for the sake of brevity, but should be considered as being within the scope of the present disclosure as long as there is no contradiction between the combinations of the technical features.

Claims (10)

1. A tracking system for risk figure tracking based on face recognition, comprising:
the face recognition machine gate is fixedly arranged at the bus station and is provided with a face brushing area and a gate;
the first camera is fixedly arranged in the bus station;
the face recognition library is used for storing a plurality of face pictures, and each face picture corresponds to one account information;
the background system is electrically connected with the face recognition machine gate, the first camera and the face recognition library, controls the face brushing area to obtain face information of a passenger, compares the face information of the passenger with the plurality of face pictures in the face recognition library to obtain the account information of the passenger, deducts fees according to the account information, and controls the gate to be opened after the fee deduction is successful;
the background system searches the face information in the face blacklist library to judge whether the passenger is a risk person, controls the first camera to capture image information including the risk person when the passenger is judged to be the risk person, executes a target tracking Camshift algorithm according to the image information to track the risk person, and sends the track of the risk person to a public security server.
2. The tracking system of claim 1, further comprising a database electrically connected to the background system for storing a criminal behavior model, wherein the criminal behavior model is trained by a neural network to record image classifications corresponding to different criminal behaviors, and the background system compares the image information with the criminal behavior model to determine whether the risk person has abnormal behavior.
3. The tracking system of claim 1, further comprising:
the Wi-Fi information release equipment is electrically connected with the background system, is arranged in the bus station and sends Wi-Fi signals to the outside; and
the database is electrically connected with the background system and used for storing a fingerprint identification plane map, the fingerprint identification plane map records the plane map of the bus station, the positions of the plane map covered by the camera images of the cameras in the bus station, the positions of the plane map corresponding to the coordinates of each square pixel point in the camera images of the cameras, and the signal intensity of the mobile equipment for receiving the Wi-Fi signal at each position in the plane map;
the background system acquires mac information of the mobile device within an allowable range of the face recognition machine gate when the fee deduction succeeds, so as to locate the risk person, and tracks the risk person through the fingerprint identification plane map and the signal strength of the mobile device relative to the Wi-Fi information release equipment;
preferably, the tracking system further includes a second camera electrically connected to the backend system and disposed at a position closest to the first camera, and the backend system submits the last position information of the risk person in the first camera to the second camera when the risk person is about to exceed the shooting range of the first camera, performs similarity retrieval on the risk person in a shooting picture of the second camera, and locates the risk person in the shooting range of the second camera in combination with the position information and continues to track the risk person.
4. The tracking system of claim 1, further comprising:
the database is electrically connected with the background system and used for storing people flow speed statistical data which record the moving speed ranges corresponding to various people flow densities in the bus station; and
the second camera is electrically connected with the background system, wherein the second camera is a camera which is arranged in the plurality of cameras in the bus station and is closest to the first camera;
the background system calculates the current people flow density according to the camera shooting picture of the first camera, queries the database to obtain the corresponding moving speed range, and calculates the time required by the risk person to reach the camera shooting range of the second camera according to the moving speed range;
when the required time passes and the risk person disappears in the camera image of the first camera, the background system plans an interest range circle in the camera image of the second camera according to the required time and the moving speed range, and performs similarity retrieval of the risk person in the interest range circle and executes the target tracking Camshift algorithm so as to locate the risk person in the camera image of the second camera and continue to track the risk person.
5. A tracking method for risk figure tracking based on face recognition, applied to a bus station, the method comprising the following steps:
performing a face brushing action on a passenger through a face brushing area of a face recognition machine gate of the bus station, and obtaining face information of the passenger;
comparing the face information with a face recognition library, and acquiring account information of the passenger, wherein a plurality of face pictures are recorded in the face recognition library, and the plurality of face pictures respectively correspond to one account information;
deducting fee according to the account information, and controlling the gate of the face recognition machine gate to open after the fee deduction is successful;
searching the face information in a face blacklist library, and judging whether the passenger is a risk figure or not;
when the passenger is judged to be a risk figure, controlling a first camera in the bus station to capture image information containing the risk figure;
executing a target tracking camshift algorithm according to the image information to track the risk person; and
sending the track of the risk person to a public security server.
6. The tracking method according to claim 5, further comprising:
when the passenger is judged to be a risk figure, reading the database to obtain a criminal behavior model, wherein the criminal behavior model is trained by a neural network to record image classifications corresponding to different criminal behaviors;
comparing the image information of the risk figure with the criminal behavior model to judge whether the risk figure has abnormal behavior; and
sending the abnormal behavior of the risk figure to a public security server;
preferably, the step of comparing the image information of the risk person with the crime behavior model further comprises:
when the passenger is judged to be a risk person, acquiring criminal behaviors related to the risk person from a public security server;
filtering the model content of the criminal behavior model according to the criminal behavior; and
comparing the image information with the filtered criminal behavior model to judge whether the risk person has abnormal behavior.
7. The tracking method according to claim 5, wherein in the step of deducting the fee according to the account information, mac information of a mobile device within an allowable range of the face brushing area is obtained after the fee deduction succeeds so as to locate the passenger; and wherein in the step of executing the target tracking Camshift algorithm according to the image information, the risk person is also tracked through a fingerprint identification plan map recorded in a database and the signal strength of the mobile device relative to a Wi-Fi information distribution device in the bus station.
8. The tracking method according to claim 7, wherein the bus station has the Wi-Fi information distribution device for sending Wi-Fi signals to the outside, and the database stores the fingerprint identification plan, wherein the fingerprint identification plan records the plan of the bus station, the positions of the camera images of the cameras in the bus station covering the plan, the coordinates of each square pixel in the camera images of the cameras corresponding to the positions in the plan, and the signal strength of the Wi-Fi signals received by the mobile device at each position in the plan;
preferably, the tracking method further includes:
judging whether the risk figures are about to exceed the shooting range of the first camera or not;
when the risk figure is about to exceed the shooting range of the first camera, submitting the last position information of the risk figure in the first camera to a second camera with a setting position closest to the first camera;
searching similarity of the risk figures in a camera shooting picture of the second camera;
and combining the similarity retrieval result and the position information, positioning the risk person in the shooting range of the second camera and continuously tracking the risk person.
9. The tracking method according to claim 5, further comprising:
calculating the current people stream density according to the camera image of the first camera;
inquiring the people flow speed statistical data according to the people flow density to obtain the corresponding moving speed range, wherein the people flow speed statistical data records the moving speed ranges corresponding to various people flow densities in the bus station respectively;
calculating the time required for the risk figure to reach the shooting range of a second camera with the setting position closest to the first camera according to the moving speed range;
judging whether the required time passes or not, and judging whether the risk figures disappear in a camera shooting picture of the first camera or not;
when the required time passes and the risk person disappears from the camera picture of the first camera, planning an interest range circle in the camera picture of the second camera according to the required time and the moving speed range; and
performing similarity retrieval of the risk person within the interest range circle and executing the target tracking Camshift algorithm, so as to locate the risk person within the camera picture of the second camera and continue tracking the risk person.
10. The tracking method according to claim 5, further comprising:
judging whether the risk figures disappear from the camera pictures of all cameras in the bus station or not;
acquiring the bus boarding entrance position nearest to the risk person before the risk person disappeared;
acquiring the time point when the risk figure disappears;
inquiring bus time data according to the entrance position of the bus stop and the time point so as to confirm the bus taken by the risk figure;
inquiring the driving route and the license plate number of the bus;
judging the getting-off place of the risk figure according to the driving route and the license plate number; and
controlling a camera at the getting-off place to capture image information including the risk person, performing similarity retrieval of the risk person according to the image information, and executing the target tracking Camshift algorithm, so as to locate the risk person at the getting-off place and continue tracking the risk person;
preferably, the tracking method further includes:
obtaining the criminal behaviors related to the risk figures from a public security server;
obtaining the current location of the risk figure;
filtering the model content of the criminal behavior model according to the criminal behavior and the place where the criminal behavior model is located, wherein the criminal behavior model is trained through a neural network to record image classification corresponding to different criminal behaviors;
comparing the image information of the risk figure with the filtered criminal behavior model;
judging whether the risk figures have abnormal behaviors or not according to the comparison result; and
reporting to the public security server when the risk person is judged to have abnormal behavior.
CN202110304146.5A 2021-03-22 2021-03-22 Tracking system and tracking method for risk figure tracking based on face recognition Pending CN114979558A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110304146.5A CN114979558A (en) 2021-03-22 2021-03-22 Tracking system and tracking method for risk figure tracking based on face recognition


Publications (1)

Publication Number Publication Date
CN114979558A true CN114979558A (en) 2022-08-30

Family

ID=82972835

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110304146.5A Pending CN114979558A (en) 2021-03-22 2021-03-22 Tracking system and tracking method for risk figure tracking based on face recognition

Country Status (1)

Country Link
CN (1) CN114979558A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination