CN110516600A - A kind of bus passenger flow detection method based on face detection - Google Patents

A kind of bus passenger flow detection method based on face detection Download PDF

Info

Publication number
CN110516600A
CN110516600A (application CN201910799538.6A)
Authority
CN
China
Prior art keywords
face
passenger flow
face detection
bus
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910799538.6A
Other languages
Chinese (zh)
Inventor
金鹏
王杰
谢小军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Law Orange Electronic Technology Co Ltd
Original Assignee
Hangzhou Law Orange Electronic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Law Orange Electronic Technology Co Ltd filed Critical Hangzhou Law Orange Electronic Technology Co Ltd
Priority to CN201910799538.6A priority Critical patent/CN110516600A/en
Publication of CN110516600A publication Critical patent/CN110516600A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06M COUNTING MECHANISMS; COUNTING OF OBJECTS NOT OTHERWISE PROVIDED FOR
    • G06M1/00 Design features of general application
    • G06M1/27 Design features of general application for representing the result of count in the form of electric signals, e.g. by sensing markings on the counter drum
    • G06M1/272 Design features of general application for representing the result of count in the form of electric signals, e.g. by sensing markings on the counter drum, using photoelectric means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Geometry (AREA)
  • Biomedical Technology (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a bus passenger flow detection method based on face detection, comprising the following steps: S1, acquiring a door-opening signal; S2, video acquisition; S3, face detection; S4, target tracking; S5, passenger flow counting; S6, acquiring a door-closing signal. The method further includes uploading the passenger flow counts and the recognized face images to a cloud platform. On one hand, the invention can accurately identify the faces of bus passengers, upload the face images to the cloud platform, and compare them for facial similarity against an image library pre-stored on the platform, so that suspects and the like are detected automatically; public transport thereby gains a combined security and face-recognition function, improving its intelligence. On the other hand, boarding and alighting passenger flow is counted accurately and combined with bus positioning information, so that the section flow and the number of passengers boarding and alighting at each stop can be further calculated, providing a bus route monitoring function.

Description

Bus passenger flow detection method based on face detection
Technical Field
The invention relates to the technical field of bus passenger flow detection, in particular to a bus passenger flow detection method based on face detection.
Background
Traffic is an important foundation of social production and daily life and of urban economic development, and the public transportation system is an indispensable component of urban traffic. With the development of artificial intelligence in recent years, smart buses have drawn increasing attention from the public and from governments. The smart bus is a new product that fuses buses with smart devices and the mobile-internet concept; it can greatly improve travel efficiency and let people fully enjoy the intelligent city life brought by informatization. A bus passenger flow statistics system helps managers know the running condition of vehicles at any time and improves operation and maintenance efficiency, reducing cost and raising efficiency, while also providing passengers with in-vehicle congestion information so that they can choose vehicles reasonably and reduce blind waiting time. The application of passenger flow statistics technology to smart buses is therefore very necessary.
Currently, passenger flow statistics methods fall technically into two categories: sensor-based methods and image-recognition-based methods. Among the many sensor-based approaches, infrared counting and pressure-sensing counting are the mainstream. Sensor-based methods have a good statistical effect when pedestrian flow is sparse, but incur large errors when crowds are dense and passengers occlude one another heavily, so the flow cannot be judged accurately. Traditional image-recognition methods are mainly head-detection-based: they readily produce false detections on spherical decoys such as basketballs, and missed detections on confusable targets such as hats and headscarves, so head counting cannot reach an accurate result; moreover, they obtain no face information, which hinders subsequent intelligent upgrades of the public transport system. Existing passenger flow detection technology also suffers from a low degree of intelligence, and keeping the detection system running at all times wastes the power consumption of the whole system.
Disclosure of Invention
The invention aims to solve the above problems and provides a bus passenger flow detection method based on face detection that has high statistical accuracy, saves power consumption, and accurately obtains bus passenger flow information.
The technical scheme adopted by the invention for solving the technical problems is as follows:
A bus passenger flow detection method based on face detection comprises the following steps:
S1, acquiring a door-opening signal: a signal acquisition module acquires the door-opening signal and transmits it to a main control computer arranged on the bus;
S2, video acquisition: the main control computer, according to the door-opening signal, controls the cameras to acquire video signals of the front and rear doors of the bus and converts the video signals into image signals;
S3, face detection: deep-learning-based face detection is performed on the image signals of step S2 to obtain face detection results;
S4, target tracking: targets are tracked according to the face detection results of step S3 to obtain face motion trajectories;
S5, passenger flow counting: passengers' boarding and alighting are judged from the face motion trajectories of step S4 and the passenger flow is counted;
S6, acquiring a door-closing signal: the signal acquisition module of step S1 acquires the door-closing signal and transmits it to the main control computer of step S1, and the main control computer ends step S2 according to the door-closing signal; if the signal acquisition module does not acquire a door-closing signal, the process continues with step S2.
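The door-driven control flow of steps S1 through S6 can be sketched as a minimal loop. This is an illustrative sketch only: `DoorSensor`, `detect` and `track_and_count` are hypothetical stand-ins, not interfaces described in the patent.

```python
class DoorSensor:
    """Stand-in door sensor driven by a scripted signal sequence
    (hypothetical interface, for illustration only)."""
    def __init__(self, signals):
        self._signals = iter(signals)
        self._state = "closed"

    def poll(self):
        # Advance to the next scripted signal; hold the last state at the end.
        self._state = next(self._signals, self._state)
        return self._state


def run_cycle(sensor, frames, detect, track_and_count):
    """Process frames only while the door is open (steps S1-S6)."""
    counted = 0
    frame_iter = iter(frames)
    if sensor.poll() != "open":            # S1: wait for a door-opening signal
        return counted
    while sensor.poll() == "open":         # S6: stop on the door-closing signal
        frame = next(frame_iter, None)     # S2: video acquisition -> image signal
        if frame is None:
            break
        faces = detect(frame)              # S3: face detection on the image
        counted += track_and_count(faces)  # S4 + S5: track faces and count flow
    return counted
```

Because the loop is gated on the door state, detection work (and its power draw) stops as soon as the door closes, which is the power-saving behaviour the method claims.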
In this technical scheme, the front and rear doors of the bus are each provided with a signal acquisition module for detecting door opening and closing. When a module acquires a door-opening signal it transmits the signal to the main control computer, which then starts step S2; when a module acquires a door-closing signal, the main control computer ends step S2. Passenger flow detection thus starts and stops automatically with the bus doors, detection is suspended while the doors are closed, and the power consumption of the whole detection system is reduced. Furthermore, cameras for acquiring video of the front and rear doors are arranged at the respective doors: the front-door camera is installed above the driver's seat and uses a 6 mm focal length with a field of view of about 60 degrees, shooting toward the boarding area at the front door; the rear-door camera is installed directly above the rear door and uses a 2.8 mm wide-angle lens with a field of view of about 120 degrees, angled inward toward the alighting area, so that boarding and alighting passengers can be captured clearly. In step S2 the cameras capture video of the front and rear doors and the acquired video is converted into image signals. In step S4 the face detection results are tracked by a target tracker to obtain the face motion trajectory of each passenger while boarding or alighting.
The method prunes the GOTURN model according to the requirements of tracking faces in the bus scene, so that the target tracker can run on the embedded device installed on the bus. The target tracker makes the detection results more stable and accurate.
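The tracking-then-counting idea can be illustrated with a deliberately simplified association scheme. This is not the patent's pruned GOTURN tracker; greedy nearest-centroid matching and the `door_y` crossing line are illustrative stand-ins for "obtain face motion trajectories, then judge boarding from their direction".

```python
import math

def associate(tracks, detections, max_dist=80.0):
    """Greedy nearest-centroid data association (simplified stand-in for a
    learned tracker). `tracks` maps track id -> list of (x, y) face centroids;
    each detection extends the nearest live trajectory or starts a new one."""
    for det in detections:
        best_id, best_d = None, max_dist
        for tid, path in tracks.items():
            d = math.dist(path[-1], det)
            if d < best_d:
                best_id, best_d = tid, d
        if best_id is None:
            tracks[len(tracks)] = [det]   # no track nearby: start a new trajectory
        else:
            tracks[best_id].append(det)   # extend the closest trajectory
    return tracks


def count_boardings(tracks, door_y=100.0):
    """Count a boarding when a trajectory crosses the door line moving inward
    (y increasing) - a crude version of step S5's direction judgement."""
    return sum(1 for path in tracks.values()
               if path[0][1] < door_y <= path[-1][1])
```

A trajectory that starts above the door line and ends below it is counted once, however many frames it spans, which is what makes tracking-based counting robust against a face being detected in many consecutive frames.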
Preferably, the method for detecting bus passenger flow based on face detection further includes uploading the passenger flow count of step S5 to a cloud platform. In this technical scheme, relevant staff can view the passenger flow counts intuitively on the cloud platform. The main control computer also acquires bus positioning information through a positioning module built into it, and uploads this positioning information to the cloud platform through a communication module built into it; the cloud platform derives the section flow and the number of passengers boarding and alighting at each stop from the passenger flow counts and the positioning information, providing a bus route monitoring function, so that staff can set bus headways, the distribution of bus stops and the like according to the monitored route conditions. The positioning module comprises a GPS module, and the communication module comprises a 4G module.
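The derivation of section flow from per-stop counts can be shown in a few lines. This is a sketch of the arithmetic the cloud platform would perform once counts are aligned to stops via GPS; the function name and inputs are illustrative, not taken from the patent.

```python
def section_loads(boardings, alightings):
    """Onboard load on each road section between consecutive stops, derived
    from per-stop boarding/alighting counts: the load after stop i is the
    running sum of boardings minus alightings up to and including stop i."""
    load, loads = 0, []
    for on, off in zip(boardings, alightings):
        load += on - off
        loads.append(load)
    return loads
```

For example, with 10 passengers boarding at the first stop, 5 boarding and 3 alighting at the second, and 8 alighting at the third, the loads on the three sections are 10, 12 and 4.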
Preferably, in step S4, while target tracking is performed according to the face detection results of step S3, the camera captures a face image and uploads it to the cloud platform through the main control computer. In this technical scheme, as a passenger boards or alights, the camera captures the face image corresponding to that passenger and the main control computer uploads it to the cloud platform through the 4G communication module. The cloud platform performs face recognition on each received face image, converts it into a face feature vector, and compares it for facial similarity against the image library pre-stored on the platform, which includes pictures of persons from criminal investigation records and images of criminal suspects, so that suspects and the like are detected automatically; public transport is thereby combined with security and face recognition, improving its intelligence. Furthermore, similarity comparison is performed separately on the face images from the front and rear doors, and duplicates among the images obtained at the two doors are removed so that only one face image is retained for each boarding or alighting passenger; that single image per passenger is then compared for similarity against the picture library on the cloud platform.
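The feature-vector comparison and front/rear-door deduplication described above can be sketched with cosine similarity. The patent does not specify the similarity measure or threshold; cosine similarity and the 0.9 threshold here are common choices and should be read as assumptions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def deduplicate(embeddings, threshold=0.9):
    """Keep one embedding per passenger: drop any vector whose similarity to
    an already-kept one exceeds the threshold (front/rear-door duplicate
    removal; the threshold value is illustrative)."""
    kept = []
    for e in embeddings:
        if all(cosine_similarity(e, k) <= threshold for k in kept):
            kept.append(e)
    return kept
```

The surviving single embedding per passenger would then be compared, with the same similarity measure, against the feature vectors of the pre-stored image library on the cloud platform.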
Preferably, in step S3 the face detection is implemented by a face detection model to obtain face detection results; the model is trained, and thereby fitted, through face classification, bounding box regression and facial feature point localization. In this technical scheme the face detection model is an improved model based on the MTCNN model, a multi-task convolutional neural network, improved as follows. First, each convolution layer with stride 1 followed by a pooling layer with stride 2 in MTCNN is merged into a single convolution layer with stride 2, and the model is retrained, yielding a first model with a smaller size and higher running speed. Second, the number of filters in the network layers of the first model is pruned based on the characteristics of bus passenger face data, making it better suited to this scene and reducing the computation of the model. Specifically, in the P-net of the first model, the number of filters in the first convolution layer is reduced from 10 to 8, while the second and third convolution layers are unchanged; in the R-net, the first convolution layer is reduced from 28 to 16 filters, the second from 48 to 32, and the third is unchanged; in the O-net, the second convolution layer is reduced from 64 to 32 filters and the other convolution layers are unchanged. The resulting improved model is better suited to bus passenger flow detection and has a smaller size and faster running speed.
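The pruning described above can be laid out as a configuration table. The pruned counts follow the text; the original counts for the "unchanged" layers are taken from the published MTCNN architecture, which is an assumption on my part since the patent does not restate them.

```python
# Filter counts per convolution layer as (original, pruned). Pruned values
# follow the patent text; unchanged originals assume the standard MTCNN
# architecture (P-net 10/16/32, R-net 28/48/64, O-net 32/64/64/128).
PRUNED_FILTERS = {
    "P-net": {"conv1": (10, 8), "conv2": (16, 16), "conv3": (32, 32)},
    "R-net": {"conv1": (28, 16), "conv2": (48, 32), "conv3": (64, 64)},
    "O-net": {"conv1": (32, 32), "conv2": (64, 32),
              "conv3": (64, 64), "conv4": (128, 128)},
}


def filters_removed_fraction(cfg):
    """Overall fraction of filters removed across the listed layers."""
    orig = sum(o for net in cfg.values() for o, _ in net.values())
    new = sum(n for net in cfg.values() for _, n in net.values())
    return 1 - new / orig
```

Under these assumed originals, the pruning removes roughly 13% of the filters in the listed layers, concentrated in the early R-net layers that dominate per-candidate cost.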
Further, the invention trains the improved model with three tasks, face classification, bounding box regression and facial feature point localization, making the face detection more accurate.
Preferably, in the face classification, the main control computer controls the camera to collect sample images, the sample label is defined as a binary classification variable, and whether a collected sample image is a face signal is judged according to a cross-entropy loss function, so as to fit the face detection model. The cross-entropy loss function is L_i^det = −(y_i^det · log(p_i) + (1 − y_i^det) · log(1 − p_i)), where p_i is the probability that the acquired sample image is a face signal and y_i^det ∈ {0, 1} is the label of the sample image, a binary face/non-face classification variable.
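The face-classification loss is the standard binary cross-entropy; a direct transcription (with the usual numerical clamp, which is an implementation detail, not from the patent):

```python
import math

def face_cls_loss(p, y):
    """Binary cross-entropy for the face/non-face task:
    L = -(y*log(p) + (1 - y)*log(1 - p)),
    with p the predicted face probability and y in {0, 1} the label."""
    eps = 1e-12                       # clamp p away from 0 and 1 to avoid log(0)
    p = min(max(p, eps), 1 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))
```

The loss is minimal when the predicted probability matches the label: a confident correct prediction (p = 0.9, y = 1) costs far less than an uncertain one (p = 0.5, y = 1).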
Preferably, in the bounding box regression, the face detection model is further fitted according to a first Euclidean distance loss function, L_i^box = ‖ŷ_i^box − y_i^box‖₂², where L_i^box is the loss value of the bounding box regression, ŷ_i^box denotes the bounding box coordinates obtained after deep-learning processing of the face image collected by the camera, and y_i^box denotes the correctly labelled bounding box coordinates. In this technical scheme, for the candidate frame corresponding to each bounding box, the Euclidean distance between the candidate frame and the real frame is computed first, and the Euclidean distance formula is used as the loss function, i.e. the first Euclidean distance loss function, to further fit the face detection model. The correctly labelled bounding box coordinates are labelled manually.
Preferably, the bounding box coordinates comprise coordinates in four dimensions: the abscissa of the top-left corner of the bounding box, the ordinate of the top-left corner, the height, and the width. With these four dimensional coordinates the bounding box can be located accurately, so the regression loss value of the bounding box is obtained accurately and the accuracy of face detection is further improved.
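The first Euclidean distance loss over the four box dimensions is simply the squared distance between the two 4-vectors:

```python
def bbox_loss(pred, gt):
    """Squared Euclidean distance between the predicted and the manually
    labelled bounding box, each given as (x_topleft, y_topleft, height, width)."""
    return sum((p - g) ** 2 for p, g in zip(pred, gt))
```

For instance, a prediction off by 2 pixels in the corner ordinate and 4 pixels in width incurs a loss of 2² + 4² = 20.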
Preferably, in the facial feature localization, the facial feature points are located according to a second Euclidean distance loss function, so as to further fit the face detection model; the second Euclidean distance loss function is L_i^landmark = ‖ŷ_i^landmark − y_i^landmark‖₂², where L_i^landmark is the loss value of locating the facial feature points, ŷ_i^landmark denotes the feature point coordinates obtained after deep-learning processing of the face image collected by the camera, and y_i^landmark denotes the correctly labelled feature point coordinates. In this technical scheme the facial feature localization is defined as a regression problem: similar to the bounding box regression, the Euclidean distance formula is used as the loss function, i.e. the second Euclidean distance loss function, and the feature points are located by minimizing it, further fitting the model and obtaining a more accurate face detection model. The correctly labelled feature point coordinates are labelled manually.
Preferably, the feature point coordinates comprise the left eye position, right eye position, nose position, left mouth corner position and right mouth corner position of the face image. Identifying feature points at these characteristic positions of the face further refines the face detection model and improves the accuracy of bus passenger flow detection.
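The landmark loss has the same squared-Euclidean form over the ten flattened landmark coordinates, and the three task losses are combined by a weighted sum. The weights shown are those used in the original MTCNN paper for P-net/R-net; the patent does not state its weights, so treat them as an assumption.

```python
def landmark_loss(pred, gt):
    """Squared Euclidean distance over the five landmarks (left eye, right
    eye, nose, left and right mouth corner), flattened to 10 coordinates."""
    return sum((p - g) ** 2 for p, g in zip(pred, gt))


def total_loss(cls_loss, box_loss, lm_loss, w_cls=1.0, w_box=0.5, w_lm=0.5):
    """Weighted sum of the three task losses. Default weights follow the
    original MTCNN paper for P-net/R-net (assumed; not stated in the patent)."""
    return w_cls * cls_loss + w_box * box_loss + w_lm * lm_loss
```

Minimizing the weighted sum fits one shared backbone against all three tasks at once, which is what makes the multi-task training sharpen detection accuracy.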
The invention has the beneficial effects that:
According to the bus passenger flow detection method based on face detection, on one hand the faces of bus passengers can be identified accurately, the face images uploaded to the cloud platform and compared for facial similarity against an image library pre-stored on the platform, so that criminal suspects and the like are detected automatically, public transport is combined with security and face recognition, and the intelligence of public transport is improved; on the other hand, the boarding and alighting passenger flow is counted accurately and, combined with the bus positioning information, the section flow and the number of passengers boarding and alighting at each stop are further calculated, providing a bus route monitoring function.
Drawings
FIG. 1 is a flow chart of steps of a bus passenger flow statistics method of the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and embodiments.
The bus passenger flow detection method based on face detection comprises the following steps: S1, acquiring a door-opening signal: a signal acquisition module acquires the door-opening signal and transmits it to a main control computer arranged on the bus; S2, video acquisition: the main control computer, according to the door-opening signal, controls the cameras to acquire video signals of the front and rear doors of the bus and converts the video signals into image signals; S3, face detection: deep-learning-based face detection is performed on the image signals of step S2 to obtain face detection results; S4, target tracking: targets are tracked according to the face detection results of step S3 to obtain face motion trajectories; S5, passenger flow counting: passengers' boarding and alighting are judged from the face motion trajectories of step S4 and the passenger flow is counted; S6, acquiring a door-closing signal: the signal acquisition module of step S1 acquires the door-closing signal and transmits it to the main control computer of step S1, and the main control computer ends step S2 according to the door-closing signal; if the signal acquisition module does not acquire a door-closing signal, the process continues with step S2. As shown in FIG. 1.
In this embodiment, the method for detecting bus passenger flow based on face detection further includes uploading the passenger flow count of step S5 to a cloud platform. As shown in FIG. 1.
In this embodiment, in step S4, while target tracking is performed according to the face detection results of step S3, the camera captures a face image and uploads it to the cloud platform through the main control computer. As shown in FIG. 1.
In this embodiment, in step S3 the face detection is implemented by the face detection model to obtain face detection results; the model is trained, and thereby fitted, through face classification, bounding box regression and facial feature point localization.
In this embodiment, in the face classification, the main control computer controls the camera to collect sample images, the sample label is defined as a binary classification variable, and whether a collected sample image is a face signal is judged according to the cross-entropy loss function L_i^det = −(y_i^det · log(p_i) + (1 − y_i^det) · log(1 − p_i)), so as to fit the face detection model, where p_i is the probability that the acquired sample image is a face signal and y_i^det ∈ {0, 1} is the label of the sample image, a binary face/non-face classification variable.
In this embodiment, in the bounding box regression, the face detection model is further fitted according to the first Euclidean distance loss function L_i^box = ‖ŷ_i^box − y_i^box‖₂², where L_i^box is the loss value of the bounding box regression, ŷ_i^box denotes the bounding box coordinates obtained after deep-learning processing of the face image collected by the camera, and y_i^box denotes the correctly labelled bounding box coordinates.
In this embodiment, the coordinates of the bounding box include coordinates of four dimensions, i.e., an abscissa of the position of the upper left corner of the bounding box, an ordinate of the position of the upper left corner, a height, and a width.
In this embodiment, in the facial feature localization, the facial feature points are located according to the second Euclidean distance loss function L_i^landmark = ‖ŷ_i^landmark − y_i^landmark‖₂², so as to further fit the face detection model, where L_i^landmark is the loss value of locating the facial feature points, ŷ_i^landmark denotes the feature point coordinates obtained after deep-learning processing of the face image collected by the camera, and y_i^landmark denotes the correctly labelled feature point coordinates.
In this embodiment, the feature point coordinates include a left eye position coordinate, a right eye position coordinate, a nose position coordinate, a left mouth angle position coordinate, and a right mouth angle position coordinate of the face image.
The embodiments are described in a progressive manner in the specification, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A bus passenger flow detection method based on face detection, characterized by comprising the following steps:
S1, acquiring a door-opening signal: a signal acquisition module acquires the door-opening signal and transmits it to a main control computer arranged on the bus;
S2, video acquisition: the main control computer, according to the door-opening signal, controls the cameras to acquire video signals of the front and rear doors of the bus and converts the video signals into image signals;
S3, face detection: deep-learning-based face detection is performed on the image signals of step S2 to obtain face detection results;
S4, target tracking: targets are tracked according to the face detection results of step S3 to obtain face motion trajectories;
S5, passenger flow counting: passengers' boarding and alighting are judged from the face motion trajectories of step S4 and the passenger flow is counted;
S6, acquiring a door-closing signal: the signal acquisition module of step S1 acquires the door-closing signal and transmits it to the main control computer of step S1, and the main control computer ends step S2 according to the door-closing signal; if the signal acquisition module does not acquire a door-closing signal, the process continues with step S2.
2. The method for detecting bus passenger flow based on human face detection as claimed in claim 1, further comprising uploading the result of passenger flow counting in step S5 to a cloud platform.
3. The method for detecting bus passenger flow based on face detection according to claim 1 or 2, characterized in that in step S4, while target tracking is performed according to the face detection results of step S3, a camera captures a face image and uploads it to a cloud platform through the main control computer.
4. The method for detecting bus passenger flow based on face detection as claimed in claim 3, wherein in step S3, face detection is implemented through a face detection model to obtain a face detection result; model training is performed through face classification, bounding box regression and facial feature point localization, fitting the face detection model.
5. The method for detecting bus passenger flow based on face detection according to claim 4, characterized in that in the face classification, the main control computer controls the camera to collect sample images, the sample label is defined as a binary classification variable, and whether a collected sample image is a face signal is judged according to a cross-entropy loss function, so as to fit the face detection model, the cross-entropy loss function being L_i^det = −(y_i^det · log(p_i) + (1 − y_i^det) · log(1 − p_i)), where p_i is the probability that the acquired sample image is a face signal and y_i^det ∈ {0, 1} is the label of the sample image, a binary face/non-face classification variable.
6. The method as claimed in claim 4 or 5, wherein in the bounding box regression, the face detection model is further fitted according to a first Euclidean distance loss function, L_i^box = ‖ŷ_i^box − y_i^box‖₂², where L_i^box is the loss value of the bounding box regression, ŷ_i^box denotes the bounding box coordinates obtained after deep-learning processing of the face image collected by the camera, and y_i^box denotes the correctly labelled bounding box coordinates.
7. The method as claimed in claim 6, wherein the coordinates of the bounding box include coordinates of four dimensions, namely, an abscissa of the position of the upper left corner of the bounding box, an ordinate of the position of the upper left corner, a height and a width.
8. The method as claimed in claim 4, 5 or 7, wherein in the facial feature localization, the facial feature points are located according to a second Euclidean distance loss function, so as to further fit the face detection model, the second Euclidean distance loss function being L_i^landmark = ‖ŷ_i^landmark − y_i^landmark‖₂², where L_i^landmark is the loss value of locating the facial feature points, ŷ_i^landmark denotes the feature point coordinates obtained after deep-learning processing of the face image collected by the camera, and y_i^landmark denotes the correctly labelled feature point coordinates.
9. The method as claimed in claim 8, wherein the feature point coordinates include left eye position coordinates, right eye position coordinates, nose position coordinates, left mouth corner position coordinates, and right mouth corner position coordinates of the face image.
CN201910799538.6A 2019-08-28 2019-08-28 A kind of bus passenger flow detection method based on face detection Pending CN110516600A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910799538.6A CN110516600A (en) 2019-08-28 2019-08-28 A kind of bus passenger flow detection method based on face detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910799538.6A CN110516600A (en) 2019-08-28 2019-08-28 A kind of bus passenger flow detection method based on face detection

Publications (1)

Publication Number Publication Date
CN110516600A 2019-11-29

Family

ID=68627254

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910799538.6A Pending CN110516600A (en) 2019-08-28 2019-08-28 Bus passenger flow detection method based on face detection

Country Status (1)

Country Link
CN (1) CN110516600A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111290899A (en) * 2020-03-12 2020-06-16 杭州律橙电子科技有限公司 Debugging method of bus passenger flow detection terminal
CN111680638A (en) * 2020-06-11 2020-09-18 深圳北斗应用技术研究院有限公司 Passenger path identification method and passenger flow clearing method based on same
CN111950499A (en) * 2020-08-21 2020-11-17 湖北民族大学 Method for detecting vehicle-mounted personnel statistical information
CN113743205A (en) * 2021-07-30 2021-12-03 郑州天迈科技股份有限公司 Method for acquiring travel origin-destination information of bus passengers

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105512720A (en) * 2015-12-15 2016-04-20 广州通达汽车电气股份有限公司 Public transport vehicle passenger flow statistical method and system
CN107609512A * 2017-09-12 2018-01-19 上海敏识网络科技有限公司 Video face capture method based on a neural network
CN108564052A * 2018-04-24 2018-09-21 南京邮电大学 MTCNN-based multi-camera dynamic face recognition system and method


Similar Documents

Publication Publication Date Title
CN108062349B (en) Video monitoring method and system based on video structured data and deep learning
CN108053427B (en) Improved multi-target tracking method, system and device based on KCF and Kalman
CN108009473B (en) Video structuralization processing method, system and storage device based on target behavior attribute
CN108052859B (en) Abnormal behavior detection method, system and device based on clustering optical flow characteristics
WO2021170030A1 (en) Method, device, and system for target tracking
CN110516600A (en) Bus passenger flow detection method based on face detection
CN104303193B (en) Target classification based on cluster
CN101465033B (en) Automatic tracking recognition system and method
CN106541968B Recognition method of a real-time subway carriage prompting system based on visual analysis
CN201278180Y (en) Automatic tracking recognition system
CN105931467B Method and device for tracking a target
Kumar et al. Study of robust and intelligent surveillance in visible and multi-modal framework
CN106778655B Tailgating entry detection method at entrances based on the human body skeleton
CN103986910A (en) Method and system for passenger flow statistics based on cameras with intelligent analysis function
WO2004042673A2 (en) Automatic, real time and complete identification of vehicles
CN102945603A (en) Method for detecting traffic event and electronic police device
Chang et al. Video analytics in smart transportation for the AIC'18 challenge
CN102170563A (en) Intelligent person capture system and person monitoring management method
CN105844659A (en) Moving part tracking method and device
CN106570490A (en) Pedestrian real-time tracking method based on fast clustering
CN107315993A Peephole (door viewer) system based on face recognition and face identification method thereof
CN109508659A Face recognition system and method for a crossing
CN113362374A High-altitude thrown-object detection method and system based on a target tracking network
CN111814510A Detection method and device for abandoned objects
CN105227918B Intelligent control method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191129