CN113674329A - Vehicle driving behavior detection method and system - Google Patents


Info

Publication number
CN113674329A
Authority
CN
China
Prior art keywords
vehicle
detection
lane
detection module
driving behavior
Prior art date
Legal status
Pending
Application number
CN202110934856.6A
Other languages
Chinese (zh)
Inventor
杨大为
宋世唯
周强
陈强
侍淳博
刘子涵
Current Assignee
Shanghai Thermosphere Information Technology Co ltd
Original Assignee
Shanghai Stratosphere Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Stratosphere Intelligent Technology Co ltd filed Critical Shanghai Stratosphere Intelligent Technology Co ltd
Priority to CN202110934856.6A
Publication of CN113674329A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/277Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a vehicle driving behavior detection method and system. The system comprises: a camera device for capturing road images; an end processor for reading the road images captured by the camera device and detecting vehicle behavior in them; a 4G router for relaying signals between the end processor and a background server; and the background server for receiving and storing the detection data uploaded by the end processor for query and analysis. The vehicle behavior detection and analysis covers vehicle speed, driving lane, license plate, and lamp state. The method and system reduce network traffic consumption, lower application cost, provide more accurate analysis results, and support better planning, management, and operational scheduling of vehicles.

Description

Vehicle driving behavior detection method and system
Technical Field
The application belongs to the technical field of traffic management, and particularly relates to a vehicle driving behavior detection method and a vehicle driving behavior detection system.
Background
In recent years, advances in computer image processing, particularly in neural-network-based deep learning, have driven rapid development of video detection technology, and enterprises and research institutions at home and abroad have studied traffic video detection extensively. In the prior art, one scheme customizes camera devices for security and traffic applications, with built-in algorithms that detect line crossing, speeding, failure to yield to pedestrians, and the like. Such camera devices are single-function, one machine per use, and far more expensive than ordinary security cameras. Another scheme provides a customized platform that ingests the camera video and performs unified computation on a cloud or back-end server. This scheme requires a server with very strong computing power, which is expensive to purchase and to maintain; it also requires real-time video access, which consumes a large amount of network traffic and increases operating cost. How to develop a new type of vehicle driving behavior detection system that overcomes these problems of similar products is therefore a direction that those skilled in the art need to study.
Disclosure of Invention
The application aims to provide a vehicle driving behavior detection method that reduces network traffic consumption, lowers application cost, provides more accurate analysis results, and optimizes the planning, management, and operational scheduling of vehicles.
The technical scheme is as follows:
a vehicle driving behavior detection method, comprising the steps of:
step 1: collecting a road image;
step 2: reading the road image, detecting vehicles in the road image and generating detection data;
Step 3: uploading the detection data to a background server for storage, for subsequent query and analysis.
By adopting this technical scheme, the equipment that performs detection is deployed at each local node, and only the final detection result is returned to the cloud platform, which saves network bandwidth significantly. The cloud platform merely stores the returned data by category and needs no strong computing power, which saves cost. Users can query the data in the background and analyze it, providing a more scientific and effective basis for planning, management, and operational scheduling.
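The division of labor above can be sketched as follows. The record format, field names, and device id below are illustrative assumptions, not taken from the patent; the point is that only a compact result, not the video stream, leaves the node:

```python
import json
import time

def build_detection_record(plate, lane, speed_kmh, lamp_state, device_id="node-01"):
    """Assemble the compact result record an end processor would upload.

    Only the final detection result crosses the network; the raw video
    stays on the local node, which is what saves the bandwidth.
    All field names here are illustrative, not from the patent.
    """
    return {
        "device_id": device_id,
        "timestamp": time.time(),
        "plate": plate,
        "lane": lane,
        "speed_kmh": speed_kmh,
        "lamp_state": lamp_state,
    }

record = build_detection_record("沪A12345", lane=2, speed_kmh=20.16, lamp_state="on")
payload = json.dumps(record, ensure_ascii=False)  # a few hundred bytes vs. a video stream
```

In a real deployment this payload would be posted to the background server through the 4G router; the endpoint and transport are outside the patent's scope.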
Preferably, in the vehicle driving behavior detection method, the step 2 includes:
Step 21: acquiring the lane lines in each frame of road image;
Step 22: acquiring a vehicle detection frame for each vehicle in each frame of road image based on a YOLOv3 convolutional neural network;
Step 23: matching the vehicle detection frames with tracking frames based on the Hungarian algorithm, tracking the motion state of each vehicle based on a Kalman filtering algorithm, and acquiring the moving track of each vehicle;
Step 24: calculating in real time the relative position between the moving track of the vehicle and each lane line, and determining the lane the vehicle occupies and whether it changes lanes;
Step 25: calculating the moving speed of the vehicle;
Step 26: detecting the lamps of each vehicle;
Step 27: identifying the license plate information of each vehicle.
By adopting this technical scheme, the detection of vehicle information, moving track, speed, lane lines, and lane changes is performed on the end processor, so most detection requirements are met on the local device.
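The matching in step 23 can be illustrated as follows. This sketch scores every track-detection pairing by IoU and brute-forces the best assignment; a production tracker would run the Hungarian algorithm on the same cost matrix, and the box values are made up for illustration:

```python
from itertools import permutations

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def match(tracks, detections):
    """Assign each track to at most one detection, maximizing total IoU.

    Brute force over permutations for clarity (assumes at least as many
    detections as tracks); the Hungarian algorithm solves the same
    assignment problem in polynomial time.
    """
    best, best_score = {}, -1.0
    for perm in permutations(range(len(detections)), len(tracks)):
        score = sum(iou(tracks[t], detections[d]) for t, d in enumerate(perm))
        if score > best_score:
            best, best_score = dict(enumerate(perm)), score
    return best

# Two tracks, two detections: the pairing with larger total overlap wins.
assignment = match([(0, 0, 10, 10), (20, 20, 30, 30)],
                   [(21, 19, 31, 31), (1, 0, 11, 10)])  # → {0: 1, 1: 0}
```

The matched detection then becomes the Kalman filter's measurement update for that track; unmatched detections would start new tracks.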
More preferably, in the above vehicle driving behavior detection method, the step 21 includes:
Step 211: marking an interest area in the road image and extracting edges within the interest area;
Step 212: extracting left and right edges of potential lane lines in the road image based on gray values;
Step 213: filtering out invalid left and right edges based on a preset lane width and a lane line gray threshold;
Step 214: combining continuous lane line segments based on a DFS search.
By adopting this technical scheme, curved lane lines can be detected, improving on traditional Hough line detection and extending the accuracy of lane line detection. Specifically: Hough line detection is a straight-line detection algorithm that assumes a linear equation and fits its parameters. The present method makes no such assumption and fits no equation; instead, it combines discrete points into continuous lines.
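A minimal sketch of the point-combining idea, assuming the lane-line evidence has already been reduced to discrete points and using an illustrative distance threshold (the patent does not give one):

```python
def merge_segments(points, max_gap=5.0):
    """Group discrete lane-line points into continuous lines with a DFS.

    No straight-line equation is fitted (unlike Hough detection), so a
    curved lane simply becomes one long chain of nearby points.
    `max_gap` is an illustrative threshold, not a value from the patent.
    """
    def close(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= max_gap ** 2

    seen, lines = set(), []
    for i in range(len(points)):
        if i in seen:
            continue
        stack, component = [i], []
        seen.add(i)
        while stack:  # iterative DFS over the proximity graph
            j = stack.pop()
            component.append(points[j])
            for k in range(len(points)):
                if k not in seen and close(points[j], points[k]):
                    seen.add(k)
                    stack.append(k)
        lines.append(component)
    return lines

# A gently curving chain of points and a distant second cluster
# merge into exactly two lane lines.
pts = [(0, 0), (3, 1), (6, 3), (9, 6), (100, 100), (103, 102)]
lines = merge_segments(pts)  # → two components, sizes 4 and 2
```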
More preferably, in the above vehicle driving behavior detection method, the step 26 includes:
Step 261: selecting the vehicle detection frame corresponding to the vehicle to be detected;
Step 262: intercepting the lower half of the vehicle detection frame obtained in step 261 as the interest area;
Step 263: classifying each vehicle lamp in the interest area as on or off based on a MobileNet-SSD model.
By adopting this technical scheme, since the lamps sit at the two corners of the vehicle front, restricting the interest area to the lower half of the detection frame means the lamps are detected only there, which reduces the computing requirement and improves detection accuracy.
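The interest-area reduction of step 262 is a one-line crop. Boxes are assumed to be pixel tuples in (x1, y1, x2, y2) form with y growing downward, which is the usual image convention:

```python
def lamp_roi(box):
    """Return the lower half of a vehicle detection box (x1, y1, x2, y2).

    Headlamps sit at the two lower corners of the vehicle front, so
    restricting the classifier to this sub-region cuts compute and
    removes distractors from the upper half of the box. Image
    coordinates grow downward, so the lower half has the larger y.
    """
    x1, y1, x2, y2 = box
    return (x1, (y1 + y2) // 2, x2, y2)

roi = lamp_roi((10, 20, 110, 120))  # → (10, 70, 110, 120)
```

The cropped region, not the full frame, is what would be fed to the MobileNet-SSD classifier.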
More preferably, in the above vehicle driving behavior detection method, the step 27 includes:
Step 271: selecting the vehicle detection frame corresponding to the vehicle to be detected as the interest area;
Step 272: detecting the position and type of the license plate through a convolutional neural network;
Step 273: applying an affine transformation to the license plate through a spatial transformer network (STN) to obtain a frontal image of the plate;
Step 274: recognizing the frontal plate image with LPRNet and outputting an 18×68 result vector;
Step 275: decoding and filtering the result vector.
By adopting this technical scheme: after the license plate position is obtained, the plate's angle relative to the camera device is uncertain because the scene is not fixed, so the plate is affine-transformed to a frontal view through the STN, which improves recognition accuracy. Since LPRNet outputs a fixed-length array of per-character probabilities, the result must be decoded. Here the length is set to 18 and the character set to 68 symbols, comprising province abbreviations, digits, letters, and a blank symbol. During decoding, invalid results are filtered directly using license plate numbering rules (for example, the first character of a plate must be a province abbreviation), and only the probabilities of valid plates are accumulated. Combining recognition with this filtering of misreadings raises the probability of outputting the correct plate.
More preferably, in the vehicle driving behavior detection method, the step 275 includes:
Step 2751: calculating the probabilities of the first character, storing the five most probable characters as candidate strings, and recording their probabilities;
Step 2752: calculating the character probabilities of the next position, and appending each of the five most probable characters to each candidate string to form new candidate strings;
Step 2753: multiplying the probability of the original string by the probability of the appended character to obtain the probability of the new string;
Step 2754: repeating steps 2752-2753 until every position in the result vector has been traversed;
Step 2755: merging identical strings and filtering out invalid strings;
Step 2756: outputting the string with the highest probability as the license plate recognition result.
More preferably, in the above vehicle driving behavior detection method, the step 25 includes:
Step 251: marking two preset lines in the road image;
Step 252: obtaining the time difference between the vehicle passing the first preset line and the second;
Step 253: obtaining the moving speed of the vehicle from the distance between the two preset lines and that time difference.
In order to realize the above vehicle driving behavior detection method, the application also discloses a vehicle driving behavior detection system with the following technical scheme:
a vehicle driving behavior detection system, characterized by comprising:
the camera device is used for acquiring a road image;
the end processor is used for reading the road image acquired by the camera device and detecting the vehicle behavior in the road image;
the 4G router is used for realizing signal transmission between the end processor and the background server;
and the background server is used for storing the detection data uploaded by the end processor.
Preferably, in the vehicle driving behavior detection system:
the end processor includes: a lane detection module, a vehicle detection module, a tracking detection module, a vehicle speed detection module, a lane change detection module, a vehicle lamp detection module, and a license plate detection module;
the lane detection module is used for acquiring the lane lines in each frame of road image;
the vehicle detection module is used for acquiring a vehicle detection frame for each vehicle in each frame of road image based on a YOLOv3 convolutional neural network;
the tracking detection module is used for matching the vehicle detection frames with tracking frames based on the Hungarian algorithm, tracking the motion state of each vehicle based on a Kalman filtering algorithm, and acquiring the moving track of each vehicle;
the lane change detection module is used for calculating the position relation between the moving track of the vehicle and a lane line in real time and judging whether the vehicle changes lanes or not based on the calculation result;
the vehicle speed detection module is used for calculating the moving speed of the vehicle;
the car lamp detection module is used for detecting the car lamps of all the vehicles;
the license plate detection module is used for identifying license plate information of each vehicle.
More preferably, in the vehicle driving behavior detection system: the camera device is a rotatable camera.
Compared with the prior art, the technical scheme of the application distributes the computing load across multiple end devices. This saves a large amount of network bandwidth and makes it convenient to add different detection functions in the future. It also lets multiple end devices access the device management platform and send detection results, detection records, device status, and similar information to the platform periodically or on request, reducing the platform integration workload.
Drawings
The present application will now be described in further detail with reference to the following detailed description and accompanying drawings:
FIG. 1 is a schematic structural view of the present invention;
FIG. 2 is a flow chart of the present invention.
The correspondence between each reference numeral and the part name is as follows:
1. a camera device; 2. an end processor; 3. a 4G router; 4. and a background server.
Detailed Description
In order to more clearly illustrate the technical solutions of the present application, the following will be further described with reference to various embodiments.
As shown in fig. 1-2:
Embodiment 1: a vehicle driving behavior detection system, comprising a camera device 1, an end processor 2, a 4G router 3, and a background server 4.
The camera device 1 is used for collecting road images; the end processor 2 is used for reading the road images collected by the camera device 1 and detecting vehicle behavior in them; the 4G router 3 is used for realizing signal transmission between the end processor 2 and the background server 4; and the background server 4 is used for storing the detection data uploaded by the end processor 2.
The end processor 2 comprises: a lane detection module, a vehicle detection module, a tracking detection module, a vehicle speed detection module, a lane change detection module, a vehicle lamp detection module, and a license plate detection module. The lane detection module is used for acquiring the lane lines in each frame of road image; the vehicle detection module is used for acquiring a vehicle detection frame for each vehicle in each frame of road image based on a YOLOv3 convolutional neural network; the tracking detection module is used for matching the vehicle detection frames with tracking frames based on the Hungarian algorithm, tracking the motion state of each vehicle based on a Kalman filtering algorithm, and acquiring the moving track of each vehicle; the lane change detection module is used for calculating in real time the positional relation between the moving track of a vehicle and the lane lines and determining from the result whether the vehicle changes lanes; the vehicle speed detection module is used for calculating the moving speed of each vehicle; the vehicle lamp detection module is used for detecting the lamps of each vehicle; and the license plate detection module is used for identifying the license plate information of each vehicle. The camera device 1 is a rotatable camera.
In practice, the working process is as follows:
step 1: the camera device 1 collects road images;
step 2: the end processor 2 is connected with the camera device 1 through the 4G router 3, reads the road image output by the camera device 1, detects vehicles in the road image and generates detection data; the step 2 specifically comprises:
Step 21: acquiring the lane lines in each frame of road image, wherein the step 21 specifically comprises:
Step 211: marking an interest area in the road image and extracting edges within the interest area;
Step 212: extracting left and right edges of potential lane lines in the road image based on gray values;
Step 213: filtering out invalid left and right edges based on a preset lane width and a lane line gray threshold;
Step 214: combining continuous lane line segments based on a DFS search.
Step 22: acquiring a vehicle detection frame for each vehicle in each frame of road image based on a YOLOv3 convolutional neural network;
Step 23: matching the vehicle detection frames with tracking frames based on the Hungarian algorithm, tracking the motion state of each vehicle based on a Kalman filtering algorithm, and acquiring the moving track of each vehicle;
Step 24: calculating in real time the relative position between the moving track of the vehicle and each lane line, and determining the lane the vehicle occupies and whether it changes lanes. Specifically, when the moving track of the vehicle extends from its current lane into an adjacent lane, the vehicle is judged to be changing lanes at that moment.
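A minimal sketch of this lane-assignment criterion, assuming each lane line is given as two endpoints and lanes are indexed between consecutive lines; the representation is an assumption, the patent only states the criterion:

```python
def x_at(line, y):
    """x-coordinate of a lane line (two endpoints) at height y, by linear interpolation."""
    (x1, y1), (x2, y2) = line
    return x1 if y1 == y2 else x1 + (x2 - x1) * (y - y1) / (y2 - y1)

def lane_of(point, lane_lines):
    """Index of the lane containing `point`; lane i lies between lane
    lines i and i+1, ordered left to right. Returns -1 when the point
    falls outside every lane."""
    px, py = point
    xs = [x_at(line, py) for line in lane_lines]
    for i in range(len(xs) - 1):
        if xs[i] <= px < xs[i + 1]:
            return i
    return -1

def lane_changes(trajectory, lane_lines):
    """Frames at which the tracked vehicle's lane index changes, i.e.
    its trajectory has crossed from the current lane into an adjacent one."""
    lanes = [lane_of(p, lane_lines) for p in trajectory]
    return [t for t in range(1, len(lanes)) if lanes[t] != lanes[t - 1]]

# Three vertical lane lines bound two lanes; the track drifts from
# lane 0 into lane 1 at the third frame.
lines = [((0, 0), (0, 100)), ((10, 0), (10, 100)), ((20, 0), (20, 100))]
traj = [(5, 10), (5, 20), (12, 30), (15, 40)]
changes = lane_changes(traj, lines)  # → [2]
```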
Step 25: the moving speed of the vehicle is calculated. Specifically, step 25 includes:
Step 251: marking two preset lines in the road image, with the area between them being the speed measurement zone; in this embodiment the distance between the two lines is measured as 14 meters;
Step 252: obtaining the time difference between the vehicle passing the two preset lines; here the time difference is 2.5 seconds;
Step 253: obtaining the average speed of the vehicle in the measurement zone from the distance between the two preset lines and the time difference, namely 14 m / 2.5 s = 5.6 m/s ≈ 20.16 km/h.
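The arithmetic of steps 251-253 reduces to a distance over a time difference with a unit conversion:

```python
def speed_kmh(distance_m, dt_s):
    """Average speed over the measurement zone, converted from m/s to km/h.

    `distance_m` is the calibrated distance between the two preset
    lines; `dt_s` is the time difference between the vehicle crossing
    them. One m/s equals 3.6 km/h.
    """
    return distance_m / dt_s * 3.6

v = speed_kmh(14.0, 2.5)  # the embodiment's worked example: ≈ 20.16 km/h
```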
Step 26: detecting the lamps of each vehicle, wherein the step 26 specifically comprises:
step 261: selecting a vehicle detection frame corresponding to a vehicle to be detected;
step 262: intercepting the lower half area of the vehicle detection frame obtained in the step 261 as an interest area;
Step 263: inputting the intercepted image into the MobileNet-SSD model network, judging from the position of the lamp frame within the vehicle frame whether it is the left or the right lamp, and judging from the output lamp type whether the lamp is on.
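The left/right judgment from the lamp frame's position within the vehicle frame can be sketched as a comparison of box centres (the box format is an assumption):

```python
def lamp_side(lamp_box, vehicle_box):
    """Label a detected lamp 'left' or 'right' by comparing the lamp
    box centre to the vehicle box midline, as the embodiment describes.
    Boxes are (x1, y1, x2, y2) in image coordinates."""
    lamp_cx = (lamp_box[0] + lamp_box[2]) / 2
    vehicle_cx = (vehicle_box[0] + vehicle_box[2]) / 2
    return "left" if lamp_cx < vehicle_cx else "right"

side = lamp_side((10, 80, 30, 95), (0, 0, 100, 100))  # → "left"
```

The on/off state itself comes from the MobileNet-SSD class output; this helper only localizes which lamp the state belongs to.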
Step 27: and identifying license plate information of each vehicle. Specifically, step 271 includes:
step 271: selecting a vehicle detection frame corresponding to a vehicle to be detected as an interest area;
step 272: detecting the position and the type of the license plate through a convolutional neural network;
step 273: affine transformation is carried out on the license plate through a space conversion network to form a license plate right image;
step 274: identifying the license plate facing image by using LPRNet and outputting a result vector of 18x 68;
step 275: and decoding and filtering the result vector.
Step 2751: calculating the probability of the first character, storing five characters with the maximum probability as character strings and respectively recording the probabilities;
step 2752: calculating the character probability of the next bit, and taking five characters with the maximum probability as newly-added characters to be respectively added to each character string to form five new character strings;
step 2753: multiplying the probability of the original character string and the probability of the newly added character as the probability of the new character string;
step 2754: steps 2752-2753 are performed in a loop until each bit in the result vector is traversed;
step 2755: combining the same character strings and filtering invalid character strings;
step 2755: and taking the character string with the highest probability as a license plate recognition result to be output.
Step 3: uploading the detection data to the background server 4 for storage, for subsequent query and analysis.
The above description is only for the specific embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application are intended to be covered by the scope of the present application. The protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A vehicle driving behavior detection method characterized by comprising the steps of:
step 1: collecting a road image;
step 2: reading the road image, detecting vehicles in the road image and generating detection data;
step 3: uploading the detection data to a background server for storage.
2. The vehicle driving behavior detection method according to claim 1, characterized in that the step 2 includes:
step 21: acquiring the lane lines in each frame of road image;
step 22: acquiring a vehicle detection frame for each vehicle in each frame of road image based on a YOLOv3 convolutional neural network;
step 23: matching the vehicle detection frame with the tracking frame based on a Hungarian algorithm; tracking the motion state of the vehicle based on a Kalman filtering algorithm and acquiring the moving track of the vehicle;
step 24: calculating the relative position relation between the moving track of the vehicle and each lane line in real time, and judging whether the lane where the vehicle is located and the vehicle change the lane or not;
step 25: calculating the moving speed of the vehicle;
step 26: detecting a lamp of each vehicle;
step 27: and identifying license plate information of each vehicle.
3. The vehicle driving behavior detection method according to claim 2, characterized in that the step 21 includes:
step 211: marking interest areas in the road images and extracting interest area edges in the interest areas;
step 212: extracting left and right edges of the potential lane in the road image based on the gray values;
step 213: filtering invalid left edges and right edges based on preset lane width and lane line gray threshold values;
step 214: and combining the continuous lane lines based on DFS search.
4. The vehicle driving behavior detection method according to claim 2, characterized in that the step 26 includes:
step 261: selecting a vehicle detection frame corresponding to a vehicle to be detected;
step 262: intercepting the lower half area of the vehicle detection frame obtained in the step 261 as an interest area;
step 263: classifying each vehicle lamp in the interest area as on or off based on a MobileNet-SSD model.
5. The vehicle driving behavior detection method according to claim 2, characterized in that the step 27 includes:
step 271: selecting a vehicle detection frame corresponding to a vehicle to be detected as an interest area;
step 272: detecting the position and the type of the license plate through a convolutional neural network;
step 273: applying an affine transformation to the license plate through a spatial transformer network to obtain a frontal image of the plate;
step 274: recognizing the frontal plate image with LPRNet and outputting an 18×68 result vector;
step 275: and decoding and filtering the result vector.
6. The vehicle driving behavior detection method of claim 5, wherein the step 275 comprises:
step 2751: calculating the probability of the first character, storing five characters with the maximum probability as character strings and respectively recording the probabilities;
step 2752: calculating the character probability of the next bit, and taking five characters with the maximum probability as newly-added characters to be respectively added to each character string to form five new character strings;
step 2753: multiplying the probability of the original character string and the probability of the newly added character as the probability of the new character string;
step 2754: steps 2752-2753 are performed in a loop until each bit in the result vector is traversed;
step 2755: merging identical character strings and filtering out invalid character strings;
step 2756: outputting the character string with the highest probability as the license plate recognition result.
7. The vehicle driving behavior detection method according to claim 2, characterized in that the step 25 includes:
step 251: marking two preset lines in the road image;
step 252: obtaining the time difference of the vehicle passing through the two preset lines;
step 253: and obtaining the moving speed of the vehicle based on the distance between the two preset lines and the time difference of the vehicle passing through the two preset lines.
8. A vehicle driving behavior detection system, characterized by comprising:
the camera device is used for acquiring a road image;
the end processor is used for reading the road image acquired by the camera device and detecting the vehicle behavior in the road image;
the 4G router is used for realizing signal transmission between the end processor and the background server;
and the background server is used for storing the detection data uploaded by the end processor.
9. The vehicle driving behavior detection system according to claim 8, characterized in that:
the end processor includes: a lane detection module, a vehicle detection module, a tracking detection module, a vehicle speed detection module, a lane change detection module, a vehicle lamp detection module and a license plate detection module;
the lane detection module is used for acquiring the lane lines in each frame of road image;
the vehicle detection module is used for acquiring a vehicle detection frame for each vehicle in each frame of road image based on a YOLOv3 convolutional neural network;
the tracking detection module is used for matching the vehicle detection frames with tracking frames based on the Hungarian algorithm, tracking the motion state of each vehicle based on a Kalman filtering algorithm, and acquiring the moving track of each vehicle;
the lane change detection module is used for calculating the position relation between the moving track of the vehicle and a lane line in real time and judging whether the vehicle changes lanes or not based on the calculation result;
the vehicle speed detection module is used for calculating the moving speed of the vehicle;
the car lamp detection module is used for detecting the car lamps of all the vehicles;
the license plate detection module is used for identifying license plate information of each vehicle.
10. The vehicle driving behavior detection system according to claim 8, characterized in that: the camera device is a rotatable camera.
CN202110934856.6A 2021-08-13 2021-08-13 Vehicle driving behavior detection method and system Pending CN113674329A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110934856.6A CN113674329A (en) 2021-08-13 2021-08-13 Vehicle driving behavior detection method and system


Publications (1)

Publication Number Publication Date
CN113674329A (en) 2021-11-19

Family

ID=78542874

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110934856.6A Pending CN113674329A (en) 2021-08-13 2021-08-13 Vehicle driving behavior detection method and system

Country Status (1)

Country Link
CN (1) CN113674329A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115240435A (en) * 2022-09-21 2022-10-25 广州市德赛西威智慧交通技术有限公司 AI technology-based vehicle illegal driving detection method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2132515A1 (en) * 1992-03-20 1993-09-21 Glen William Auty An object monitoring system
CN104239309A (en) * 2013-06-08 2014-12-24 华为技术有限公司 Video analysis retrieval service side, system and method
CN111145545A (en) * 2019-12-25 2020-05-12 西安交通大学 Road traffic behavior unmanned aerial vehicle monitoring system and method based on deep learning
CN112215233A (en) * 2020-10-10 2021-01-12 深圳市华付信息技术有限公司 Method for detecting and identifying license plate and handheld terminal
CN112455445A (en) * 2020-12-04 2021-03-09 苏州挚途科技有限公司 Automatic driving lane change decision method and device and vehicle
CN112699781A (en) * 2020-12-29 2021-04-23 上海眼控科技股份有限公司 Vehicle lamp state detection method and device, computer equipment and readable storage medium
CN112712703A (en) * 2020-12-09 2021-04-27 上海眼控科技股份有限公司 Vehicle video processing method and device, computer equipment and storage medium


Similar Documents

Publication Publication Date Title
CN111540201B (en) Vehicle queuing length real-time estimation method and system based on roadside laser radar
Huttunen et al. Car type recognition with deep neural networks
Cai et al. Deep learning-based video system for accurate and real-time parking measurement
CN100573618C (en) A kind of traffic intersection four-phase vehicle flow detection method
CN102867417B (en) Taxi anti-forgery system and taxi anti-forgery method
US20120166080A1 (en) Method, system and computer-readable medium for reconstructing moving path of vehicle
Rabbouch et al. Unsupervised video summarization using cluster analysis for automatic vehicles counting and recognizing
WO2023109099A1 (en) Charging load probability prediction system and method based on non-intrusive detection
CN113160575A (en) Traffic violation detection method and system for non-motor vehicles and drivers
CN105513349A (en) Double-perspective learning-based mountainous area highway vehicle event detection method
CN104318327A (en) Predictive parsing method for track of vehicle
CN112562330A (en) Method and device for evaluating road operation index, electronic equipment and storage medium
Liu et al. A large-scale benchmark for vehicle logo recognition
CN113674329A (en) Vehicle driving behavior detection method and system
Gothankar et al. Circular hough transform assisted cnn based vehicle axle detection and classification
Han et al. A novel loop closure detection method with the combination of points and lines based on information entropy
Hou et al. Reidentification of trucks in highway corridors using convolutional neural networks to link truck weights to bridge responses
Ali et al. IRUVD: a new still-image based dataset for automatic vehicle detection
CN105335758A (en) Model identification method based on video Fisher vector descriptors
Annirudh et al. IoT based intelligent parking management system
CN115909241A (en) Lane line detection method, system, electronic device and storage medium
CN112347953B (en) Recognition device for road condition irregular obstacles of unmanned vehicle
Yang Novel traffic sensing using multi-camera car tracking and re-identification (MCCTRI)
CN112434601A (en) Vehicle law violation detection method, device, equipment and medium based on driving video
Wu et al. Real-time vehicle detection system for intelligent transportation using machine learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240228

Address after: 5 / F, 277 Huqingping Road, Minhang District, Shanghai, 201100

Applicant after: Shanghai Thermosphere Information Technology Co.,Ltd.

Country or region after: China

Address before: Building C, No.888, Huanhu West 2nd Road, Lingang New District, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai

Applicant before: Shanghai stratosphere Intelligent Technology Co.,Ltd.

Country or region before: China