CN114332184B - Passenger statistical identification method and device based on monocular depth estimation

Passenger statistical identification method and device based on monocular depth estimation

Info

Publication number
CN114332184B
Authority
CN
China
Prior art keywords
passenger
target
depth estimation
statistical
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111437485.7A
Other languages
Chinese (zh)
Other versions
CN114332184A (en)
Inventor
朱旭光
周金明
赵丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Xingzheyi Intelligent Transportation Technology Co ltd
Original Assignee
Nanjing Xingzheyi Intelligent Transportation Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Xingzheyi Intelligent Transportation Technology Co ltd
Priority to CN202111437485.7A
Publication of CN114332184A
Application granted
Publication of CN114332184B
Legal status: Active

Links

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a passenger statistical identification method and device based on monocular depth estimation. The method comprises: first, acquiring RGB image frames from a monocular camera and processing each acquired frame in real time; second, performing passenger target detection on each RGB frame, and also performing depth estimation on each RGB frame and detecting targets on the basis of the depth estimation; third, fusing the detection results of each frame to obtain the final target detection result for that frame; fourth, constructing passenger target motion trajectories while counting passenger movement directions and numbers; fifth, extracting passenger trajectory features and performing door-entry/exit or boarding/alighting matching to obtain the identification results. By performing depth estimation on a monocular camera and fusing the estimated 3D depth information with the original 2D information, the method improves the performance of a passenger statistical identification system built on a monocular camera.

Description

Passenger statistical identification method and device based on monocular depth estimation
Technical Field
The invention relates to the fields of intelligent transportation and computer vision research, and in particular to a passenger statistical identification method and device based on monocular depth estimation.
Background
As machine vision technology advances, cameras are increasingly used for object recognition and related behavior-recognition applications. Cameras fall into ordinary cameras (2D) and depth cameras (also called 3D cameras). The ordinary camera, called a monocular camera when contrasted with 3D cameras, is the most common computer vision sensor; by its physical principle, a monocular camera does not capture spatial depth information. Passenger statistics and identification is a class of technology applied in the public transportation field (buses, coach transport, and the like) to identify and count passenger numbers, characteristics, and travel patterns. Current approaches to passenger statistical identification include: (1) passenger counting systems built on non-visual sensors such as infrared sensors; (2) passenger statistical identification systems built on visual sensors such as monocular cameras; (3) passenger statistical identification systems built on visual sensors such as binocular or other 3D cameras.
In the process of implementing the present invention, the inventors found at least the following problems in the prior art. In terms of identification performance: binocular or 3D camera methods > monocular camera methods > non-visual methods such as infrared. In terms of cost, the ordering is the same: binocular or 3D camera methods > monocular cameras > non-visual methods such as infrared. Passenger statistics and identification systems exist to capture passenger travel patterns so that public transportation resources can be allocated more effectively; their accuracy and implementation cost directly determine the scale and precision at which travel patterns can be obtained. Large-scale industrial application therefore calls for a method that is more effective and more cost-efficient: close to 3D camera systems in performance, yet comparable to monocular camera systems in price.
Disclosure of Invention
To overcome the shortcomings of the prior art, the invention provides a passenger statistical identification method and device based on monocular depth estimation. The technical solution is as follows:
the invention provides a passenger statistical identification method based on monocular depth estimation, which mainly comprises the following steps:
first, RGB image frames are acquired on a monocular camera, and each acquired frame of image is processed in real time.
Secondly, target detection;
Passenger target detection is performed on each frame of the RGB image:
Each RGB image frame is scaled from the original resolution to a suitable resolution and then fed to the passenger target detector. The detector outputs, on the scaled frame, a target position for each passenger and for each non-passenger object, and restores these positions, according to the ratio to the original resolution, to the passenger target positions R2 and the non-passenger target positions relative to the original resolution.
Depth estimation is performed on each frame of the RGB image, and target detection is performed on the basis of the depth estimation:
Each RGB image frame is scaled to a suitable resolution and fed to a depth estimation model, which outputs a depth map corresponding to the scaled frame. Passenger target detection is then performed on each frame's depth map: the depth map is filtered by a scene-specific depth threshold to remove invalid content, passenger target positions are identified as the regions around local gray-value maxima, and the identified positions are restored, according to the ratio to the original resolution, to the target positions R3 relative to the original resolution.
Thirdly, fusion calculation is performed on the detection results of each frame to obtain the final target detection result of that frame;
The passenger target positions R2 and R3 from the second step are merged: positions whose overlap ratio meets the set threshold are recorded as a single target position, yielding the merged passenger target positions R23. The overlap ratio is then computed between each target position in R23 and each non-passenger target position from the second step; any position in R23 whose overlap meets the set threshold is removed from R23, yielding the final passenger target detection result R23'.
Fourth, passenger target motion trajectories are constructed while passenger movement directions and numbers are counted;
Suppose N target results are detected in the third step and M motion trajectories currently exist.
For the N target results and the tail target results of the M motion trajectories, M×N matching nodes are constructed; each matching node records the index of the corresponding target result, the index of the motion trajectory, and the OVERLAP value between the target result and the tail target result.
The M×N matching nodes are inserted into a list in descending order of OVERLAP value. For each matching node in the list: if its OVERLAP value is greater than a preset threshold, the target result and motion trajectory corresponding to that node are successfully matched, and the nodes in the list sharing an index with it are deleted; if its OVERLAP value is smaller than the preset threshold, the node is deleted. This loops until the list is empty.
For each unmatched target result, a new motion trajectory is created; for each unmatched motion trajectory, whether a termination condition has been triggered is checked.
Fifthly, passenger trajectory features are extracted, and door-entry/exit or boarding/alighting matching is performed to obtain the identification results;
Feature values are extracted by a feature extraction model from the target results in a passenger's motion trajectory, and the mean of all feature values along the trajectory is taken as the trajectory feature value of that motion trajectory. If the motion trajectory is in the door-entry or boarding direction, its trajectory feature value is added to a matching queue. If the motion trajectory is in the door-exit or alighting direction, the distance between its trajectory feature value and the trajectory feature value of each entry or boarding trajectory in the matching queue is computed; if the distance value is larger than the set threshold, the match succeeds, and the matched entry or boarding feature value is removed from the queue.
Preferably, in the first step the monocular camera captures RGB image frames from a top-down, vertically downward viewing angle.
Preferably, the method further comprises: sixth, storing and reporting the statistical identification results;
For each passenger target, the statistical identification result associates its door-entry or boarding time with its matched door-exit or alighting time, and the result is stored and reported through the system's existing storage and communication mechanisms.
Compared with the prior art, the technical solution has the following beneficial effects. The invention provides a passenger statistical identification method based on monocular depth estimation: depth estimation is performed on a monocular camera, and the depth estimation result is fused with the RGB detection result, i.e., the estimated 3D depth information is fused with the original 2D information. The depth estimation result compensates for the missed detections that RGB detection is prone to, while RGB detection of non-passenger classes effectively eliminates the depth estimation result's inability to distinguish targets from non-target classes. Meanwhile, the movement direction of each passenger target is matched against door entry/exit or boarding/alighting, so that passenger numbers are counted and travel patterns are identified. In passenger identification scenarios the performance approaches that of 3D camera solutions, yielding a much better cost-performance ratio and making large-scale deployment of the system feasible.
Detailed Description
To clarify the technical solution and working principle of the present invention, the embodiments of the present disclosure are described in further detail below. Any combination of the optional solutions above may form an optional embodiment of the present disclosure, which is not repeated here.
The terms "first step", "second step", "third step", and the like in the description and claims are used to distinguish between similar objects and do not necessarily describe a particular order or sequence. It should be understood that steps so labeled may be interchanged where appropriate, so that the embodiments of the present application described herein can be implemented in sequences other than those described.
Monocular in this invention refers to an ordinary RGB camera (camera or video camera) that can acquire RGB color images at a certain frame rate.
First aspect: the embodiments of the disclosure provide a passenger statistical identification method based on monocular depth estimation, which mainly comprises the following steps:
firstly, RGB image frames are acquired from a monocular camera, and each acquired frame is processed in real time;
Preferably, to reduce mutual occlusion between passenger targets in the acquired information and to avoid capturing private information such as passengers' faces, the monocular camera is mounted to look vertically downward when acquiring RGB image frames.
Secondly, target detection;
Passenger target detection is performed on each frame of the RGB image:
Each RGB image frame is scaled from the original resolution to a suitable resolution and then fed to the passenger target detector. The detector outputs, on the scaled frame, a target position for each passenger and for each non-passenger object, and these positions are restored, according to the ratio to the original resolution, to the passenger target positions R2 and the non-passenger target positions relative to the original resolution.
Depth estimation is performed on each frame of the RGB image, and target detection is performed on the basis of the depth estimation:
Each RGB image frame is scaled to a suitable resolution and fed to a depth estimation model, which outputs a depth map corresponding to the scaled frame. Passenger target detection is performed on each frame's depth map. First, the depth map is filtered by a scene-specific depth threshold to remove invalid content. Passenger target positions are then identified as the regions around local gray-value maxima. Finally, the identified positions are restored, according to the ratio to the original resolution, to the target positions R3 relative to the original resolution.
Thirdly, fusion calculation is performed on the detection results of each frame to obtain the final target detection result of that frame;
The passenger target positions R2 and R3 from the second step are merged: positions whose overlap ratio meets the set threshold are recorded as a single target position, yielding the merged passenger target positions R23. The overlap ratio is then computed between each target position in R23 and each non-passenger target position from the second step; any position in R23 whose overlap meets the set threshold is removed from R23, yielding the final passenger target detection result R23'.
Fourth, passenger target motion trajectories are constructed while passenger movement directions and numbers are counted;
Suppose N target results are detected in the third step and M motion trajectories currently exist.
For the N target results and the tail target results of the M motion trajectories, M×N matching nodes are constructed. Each matching node records the index of the corresponding target result, the index of the motion trajectory, and the OVERLAP value between the target result and the tail target result.
The M×N matching nodes are inserted into a list in descending order of OVERLAP value. For each matching node in the list: if its OVERLAP value is greater than a preset threshold, the target result and motion trajectory corresponding to that node are successfully matched, and the nodes in the list sharing an index with it are deleted; if its OVERLAP value is smaller than the preset threshold, the node is deleted. This loops until the list is empty.
For each unmatched target result, a new motion trajectory is created; for each unmatched motion trajectory, whether a termination condition has been triggered is checked.
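The matching loop can be sketched as a greedy assignment, reusing the iou helper from the fusion sketch above as the OVERLAP measure; sorting once in descending order and skipping already-used indices is equivalent to the list-deletion procedure described. The threshold value is an illustrative assumption.

    def greedy_match(targets, track_tails, overlap_thr=0.3):
        """Greedily associate N target results with M track tails by
        descending OVERLAP; returns matches and the unmatched leftovers."""
        nodes = [(iou(t, tail), i, j)
                 for i, t in enumerate(targets)
                 for j, tail in enumerate(track_tails)]
        nodes.sort(reverse=True)             # descending OVERLAP value
        used_t, used_tr, matches = set(), set(), []
        for ov, i, j in nodes:
            if ov <= overlap_thr:
                break                        # all remaining nodes fall below the threshold
            if i in used_t or j in used_tr:
                continue                     # same-index nodes were "deleted"
            matches.append((i, j))
            used_t.add(i)
            used_tr.add(j)
        unmatched_targets = [i for i in range(len(targets)) if i not in used_t]
        unmatched_tracks = [j for j in range(len(track_tails)) if j not in used_tr]
        return matches, unmatched_targets, unmatched_tracks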
Fifthly, passenger trajectory features are extracted, and door-entry/exit or boarding/alighting matching is performed to obtain the identification results;
Feature values are extracted by a feature extraction model from the target results in a passenger's motion trajectory, and the mean of all feature values along the trajectory is taken as the trajectory feature value of that motion trajectory. If the motion trajectory is in the door-entry or boarding direction, its trajectory feature value is added to a matching queue. If the motion trajectory is in the door-exit or alighting direction, the distance between its trajectory feature value and the trajectory feature value of each entry or boarding trajectory in the matching queue is computed; if the distance value is larger than the set threshold, the match succeeds, and the matched entry or boarding feature value is removed from the queue.
Preferably, the method further comprises: sixth, storing and reporting the statistical identification results;
For each passenger target, the statistical identification result associates its door-entry or boarding time with its matched door-exit or alighting time, and is stored and reported through the system's existing storage and communication mechanisms. Of course, some entries and exits (or boardings and alightings) cannot be matched; in those cases the corresponding entry in the statistical result is left empty.
In a second aspect, embodiments of the present disclosure provide an apparatus for passenger statistical identification based on monocular depth estimation.
Based on the same technical concept, the apparatus can implement or execute the method of passenger statistical identification based on monocular depth estimation in any of the possible implementations.
Preferably, the apparatus comprises an acquisition unit, a detection unit, a fusion unit, a first statistics unit, and a second statistics unit;
the acquisition unit is used for executing the first step of the method of passenger statistical identification based on monocular depth estimation in any of the possible implementations;
the detection unit is used for executing the second step of the method of passenger statistical identification based on monocular depth estimation in any of the possible implementations;
the fusion unit is used for executing the third step of the method of passenger statistical identification based on monocular depth estimation in any of the possible implementations;
the first statistics unit is used for executing the fourth step of the method of passenger statistical identification based on monocular depth estimation in any of the possible implementations;
the second statistics unit is used for executing the fifth step of the method of passenger statistical identification based on monocular depth estimation in any of the possible implementations.
Preferably, the apparatus further comprises a reporting unit, used for executing the sixth step of the method of passenger statistical identification based on monocular depth estimation in any of the possible implementations.
It should be noted that when the apparatus for passenger statistical identification based on monocular depth estimation provided in the foregoing embodiment performs the method, the division into the functional modules above is only an illustration; in practical applications, the functions may be allocated to different functional modules as required, i.e., the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus embodiment and the method embodiment for passenger statistical identification based on monocular depth estimation provided above belong to the same concept; for the details of the apparatus's implementation, see the method embodiment, which is not repeated here.
While the invention has been described above by way of example, the invention is evidently not limited to the particular embodiments described. Various insubstantial modifications made using the method concepts and technical solutions of the invention, and direct applications of those concepts and solutions to other occasions without improvement, all fall within the protection scope of the invention.

Claims (6)

1. A method of passenger statistical identification based on monocular depth estimation, the method comprising the steps of:
firstly, acquiring RGB image frames from a monocular camera, and processing each acquired frame in real time;
secondly, performing target detection;
performing passenger target detection on each frame of the RGB image:
scaling each RGB image frame from the original resolution to a suitable resolution and inputting it to a passenger target detector, the passenger target detector outputting, on the scaled frame, a target position for each passenger and for each non-passenger object, and restoring the positions, according to the ratio to the original resolution, to the passenger target positions R2 and the non-passenger target positions relative to the original resolution;
performing depth estimation on each frame of the RGB image, and performing target detection on the basis of the depth estimation:
scaling each RGB image frame to a suitable resolution and inputting it to a depth estimation model, the depth estimation model outputting a depth estimation map corresponding to the scaled frame; performing passenger target detection on each frame's depth estimation map by filtering the depth estimation map according to a scene-specific depth threshold to remove invalid content, identifying passenger target positions as the regions around local gray-value maxima, and then restoring the identified passenger target positions, according to the ratio to the original resolution, to the target positions R3 relative to the original resolution;
thirdly, performing fusion calculation on the detection results of each frame to obtain the final target detection result of that frame;
merging the passenger target positions R2 and R3 from the second step, wherein positions whose overlap ratio meets the set threshold are recorded as a single target position, yielding the merged passenger target positions R23; computing the overlap ratio between each target position in R23 and each non-passenger target position from the second step, and removing from R23 any target position whose overlap meets the set threshold, yielding the final passenger target detection result R23';
fourthly, constructing passenger target motion trajectories while counting passenger movement directions and numbers;
supposing that N target results are detected in the third step and M motion trajectories currently exist;
constructing M×N matching nodes for the N target results and the tail target results of the M motion trajectories, each matching node recording the index of the corresponding target result, the index of the motion trajectory, and the OVERLAP value between the target result and the tail target result;
inserting the M×N matching nodes into a list in descending order of OVERLAP value; for each matching node in the list, if its OVERLAP value is greater than a preset threshold, successfully matching the target result and motion trajectory corresponding to that node and deleting the nodes in the list that share an index with it; if its OVERLAP value is smaller than the preset threshold, deleting the node; and cycling until the list is empty;
creating a new motion trajectory for each unmatched target result, and checking, for each unmatched motion trajectory, whether a termination condition has been triggered;
fifthly, extracting passenger trajectory features, and performing door-entry/exit or boarding/alighting matching to obtain the identification results;
extracting feature values by a feature extraction model from the target results in a passenger target motion trajectory, taking the mean of all feature values along the trajectory as the trajectory feature value of that motion trajectory, adding the trajectory feature value to a matching queue if the motion trajectory is in the door-entry or boarding direction, and, if the motion trajectory is in the door-exit or alighting direction, computing the distance between its trajectory feature value and the trajectory feature value of each entry or boarding trajectory in the matching queue, and removing the matched entry or boarding feature value from the queue if the distance value is larger than the set threshold.
2. The method of passenger statistical identification based on monocular depth estimation according to claim 1, wherein in the first step the viewing angle at which the monocular camera captures RGB image frames is a top-down, vertically downward viewing angle.
3. The method of passenger statistical identification based on monocular depth estimation according to claim 1 or 2, further comprising: sixthly, storing and reporting the statistical identification results;
for each passenger target, associating in the statistical identification result the door-entry or boarding time with the matched door-exit or alighting time, and storing and reporting the result through the system's existing storage and communication mechanisms.
4. An apparatus for passenger statistical identification based on monocular depth estimation, characterized in that it implements or executes the method of passenger statistical identification based on monocular depth estimation according to any one of claims 1-3.
5. The apparatus for passenger statistical identification based on monocular depth estimation according to claim 4, wherein the apparatus comprises an acquisition unit, a detection unit, a fusion unit, a first statistics unit, and a second statistics unit;
the acquisition unit is configured to perform the first step of the method of passenger statistical identification based on monocular depth estimation according to any one of claims 1-3;
the detection unit is configured to perform the second step of the method of passenger statistical identification based on monocular depth estimation according to any one of claims 1-3;
the fusion unit is configured to perform the third step of the method of passenger statistical identification based on monocular depth estimation according to any one of claims 1-3;
the first statistics unit is configured to perform the fourth step of the method of passenger statistical identification based on monocular depth estimation according to any one of claims 1-3;
the second statistics unit is configured to perform the fifth step of the method of passenger statistical identification based on monocular depth estimation according to any one of claims 1-3.
6. The apparatus for passenger statistical identification based on monocular depth estimation according to claim 5, further comprising a reporting unit configured to perform the sixth step of the method of passenger statistical identification based on monocular depth estimation according to claim 3.
CN202111437485.7A 2021-11-30 2021-11-30 Passenger statistical identification method and device based on monocular depth estimation Active CN114332184B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111437485.7A CN114332184B (en) 2021-11-30 2021-11-30 Passenger statistical identification method and device based on monocular depth estimation


Publications (2)

Publication Number Publication Date
CN114332184A CN114332184A (en) 2022-04-12
CN114332184B (en) 2023-05-02

Family

ID=81047440

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111437485.7A Active CN114332184B (en) 2021-11-30 2021-11-30 Passenger statistical identification method and device based on monocular depth estimation

Country Status (1)

Country Link
CN (1) CN114332184B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108241844A (en) * 2016-12-27 2018-07-03 北京文安智能技术股份有限公司 A kind of public traffice passenger flow statistical method, device and electronic equipment
CN108446611A (en) * 2018-03-06 2018-08-24 深圳市图敏智能视频股份有限公司 A kind of associated binocular image bus passenger flow computational methods of vehicle door status
CN110516602A (en) * 2019-08-28 2019-11-29 杭州律橙电子科技有限公司 A kind of public traffice passenger flow statistical method based on monocular camera and depth learning technology

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101231755B (en) * 2007-01-25 2013-03-06 上海遥薇(集团)有限公司 Moving target tracking and quantity statistics method
US11138751B2 (en) * 2019-07-06 2021-10-05 Toyota Research Institute, Inc. Systems and methods for semi-supervised training using reprojected distance loss
CN110633671A (en) * 2019-09-16 2019-12-31 天津通卡智能网络科技股份有限公司 Bus passenger flow real-time statistical method based on depth image
CN112541374B (en) * 2019-09-20 2024-04-30 南京行者易智能交通科技有限公司 Deep learning-based passenger attribute acquisition method, device and model training method
CN110969131B (en) * 2019-12-04 2022-10-04 大连理工大学 Subway people flow counting method based on scene flow
CN111368829B (en) * 2020-02-28 2023-06-30 北京理工大学 Visual semantic relation detection method based on RGB-D image
CN111882586B (en) * 2020-06-23 2022-09-13 浙江工商大学 Multi-actor target tracking method oriented to theater environment
CN112613370A (en) * 2020-12-15 2021-04-06 浙江大华技术股份有限公司 Target defect detection method, device and computer storage medium


Also Published As

Publication number Publication date
CN114332184A (en) 2022-04-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant