CN111291722A - Vehicle re-identification system based on V2I technology - Google Patents
- Publication number
- CN111291722A (application CN202010162040.1A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- local
- vehicles
- features
- rsu
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Abstract
The invention discloses a vehicle re-identification system based on V2I technology, in the field of vehicle identification. An RSU device fuses local features of a first vehicle extracted from the viewpoints of surrounding second vehicles with local features of the first vehicle extracted by an intersection camera, yielding a more comprehensive global feature representation of the first vehicle; a plurality of third vehicles then perform vehicle re-identification against this global feature. This effectively compensates for the shortcomings of re-identification from a single intersection viewpoint, greatly reduces the probability of misjudgment under a single viewpoint, and improves the accuracy of vehicle re-identification.
Description
Technical Field
The invention relates to the technical field of vehicle identification, and in particular to a vehicle re-identification system based on V2I (vehicle-to-infrastructure) technology.
Background
Vehicle re-identification refers to identifying a target vehicle across image sequences captured by non-overlapping camera views. As vehicle ownership grows year by year, the incidence of accidents and traffic violations rises, and hit-and-run escapes are frequent, so traffic police departments must spend large amounts of time searching intersection video surveillance for the offending vehicle; such manual searching is time-consuming, inefficient, and ill-suited to the demands of intelligent transportation. Vehicle re-identification technology can quickly locate a target vehicle in the images under examination, which is of great practical significance for public safety and criminal investigation.
Most vehicle re-identification techniques in current traffic scenarios rely only on the cameras installed at intersections. Because cameras at different intersections may capture a vehicle from different orientations, the same vehicle can look quite different under different cameras; the features extracted by different cameras are then inconsistent, which raises the probability of misjudgment during re-identification and degrades performance.
Disclosure of Invention
To address the above problems and technical requirements, the inventor proposes a vehicle re-identification system based on V2I technology. The technical solution of the invention is as follows:
a vehicle re-identification system based on V2I technology comprises a plurality of vehicles, intersection cameras installed at intersections, and RSU (roadside unit) devices installed at the roadside; vehicle-mounted cameras are installed around each vehicle, each vehicle carries an OBU (on-board unit) device through which it establishes a communication connection with any RSU device in communication range, and the intersection cameras are likewise communicatively connected to the RSU devices;
the RSU device of the current road segment receives a first local feature of a first vehicle sent by an intersection camera, the first local feature being obtained by the intersection camera capturing an image of the first vehicle and extracting image features;
the RSU device of the current road segment acquires second local features of the first vehicle sent by each second vehicle, a second vehicle being a vehicle within a predetermined range of the first vehicle, each second local feature being obtained by the second vehicle capturing an image of the first vehicle with the vehicle-mounted camera at the corresponding position and extracting image features;
the RSU device of the current road segment combines the first local feature with each second local feature to form a global feature of the first vehicle and sends the global feature to the RSU device of the next road segment, which broadcasts it to all third vehicles in its communication range;
and each third vehicle captures images of surrounding vehicles with its vehicle-mounted cameras, extracts image features to obtain local features, and identifies vehicles according to the similarity between the extracted local features and the global feature of the first vehicle; the RSU device of the next road segment then tracks the first vehicle according to the identification results of the plurality of third vehicles.
In a further technical solution, the second vehicle performs feature extraction on the captured image of the first vehicle using a convolutional neural network to obtain CNN features;
the second vehicle also extracts HOG (histogram of oriented gradients) features from the captured image of the first vehicle;
and the second vehicle inputs the CNN features and the HOG features into a fully connected layer for feature fusion, obtaining the second local feature of the first vehicle.
In a further technical solution, the RSU device of the current road segment combining the first local feature and each second local feature into the global feature of the first vehicle comprises:
inputting the first local feature and each second local feature into corresponding kernel functions respectively;
and combining the kernel functions into a mixed kernel function to obtain the global feature of the first vehicle.
In a further technical solution, identifying vehicles according to the similarity between the extracted local features and the global feature of the first vehicle, and the RSU device of the next road segment tracking the first vehicle according to the identification results of the plurality of third vehicles, comprises:
each third vehicle calculating the Euclidean distance between its extracted local feature and the global feature of the first vehicle, and, if the calculated distance is smaller than a preset distance threshold, determining that a suspected vehicle has been identified and feeding the identification result back to the RSU device of the next road segment;
and the RSU device of the next road segment marking the suspected vehicles identified by the third vehicles according to the received identification results, and, when the same suspected vehicle has been marked more times than a preset threshold, determining that the suspected vehicle is the first vehicle and tracking it.
In a further technical solution, the intersection camera acquires driving images and performs abnormal-event detection on them to determine the position of the first vehicle; according to that position it determines, in the local map corresponding to the intersection camera, the second-vehicle information for the first vehicle and sends it to the RSU device of the current road segment, the second-vehicle information comprising the second vehicles within the predetermined range of the first vehicle and the relative position of each second vehicle to the first vehicle;
the RSU device of the current road segment acquiring the second local feature of the first vehicle sent by each second vehicle comprises:
each vehicle, while driving, capturing images of surrounding vehicles with its vehicle-mounted cameras, extracting image features to obtain local features, and sending them to the RSU device of the current road segment;
and the RSU device of the current road segment screening out, from all received local features and according to the second-vehicle information, the local features each second vehicle extracted from images captured by the vehicle-mounted camera at the corresponding position; these are the second local features, the camera at the corresponding position being the one matching the relative position of the second vehicle to the first vehicle.
The beneficial technical effects of the invention are as follows:
the application discloses a vehicle re-identification system based on V2I technology. The system fuses the local features of a vehicle extracted from the viewpoints of surrounding vehicles with the local features extracted by the intersection camera to obtain a more comprehensive global feature representation of the vehicle, which effectively remedies the shortcomings of re-identification from a single intersection viewpoint and improves the accuracy of vehicle re-identification.
Drawings
Fig. 1 is a flowchart of vehicle re-identification in the system disclosed in the present application.
Detailed Description
The following further describes the embodiments of the present invention with reference to the drawings.
The application discloses a vehicle re-identification system based on V2I technology. The system comprises a plurality of vehicles, intersection cameras installed at intersections, and RSU devices installed at the roadside; the intersection cameras establish communication connections with the RSU devices. Each vehicle carries an OBU device through which it establishes a communication connection with any RSU device in communication range. The system performs vehicle re-identification as follows (see the flowchart in fig. 1):
First, extraction of features of the first vehicle from different angles.
1. The intersection camera acquires driving images of vehicles at the intersection in real time and performs abnormal-event detection to determine the position of the first vehicle. Abnormal events can be configured in advance, for example running a red light or a traffic accident. After detecting the first vehicle involved in an abnormal event, the intersection camera determines the position of the first vehicle in its local map (for example, a certain position in a certain lane), determines each second vehicle within a predetermined range around the first vehicle in that map, and at the same time determines the relative position of each second vehicle to the first vehicle; the predetermined range can likewise be preconfigured. For example, the second vehicle may be determined to be the vehicle behind the first vehicle in the same lane. This step is common practice in existing road-condition-aware abnormal-event detection, so its implementation is not detailed in this application.
2. After determining the first vehicle and the corresponding second-vehicle information, the intersection camera sends the second-vehicle information to the RSU device of the current road segment. At the same time, the intersection camera captures an image of the first vehicle, extracts image features to obtain the first local feature, and sends it to the RSU device of the current road segment.
To improve the robustness of the extracted local features, local feature extraction fuses a low-level visual feature with a high-level visual feature, reducing the interference of illumination and vehicle pose. First, the intersection camera extracts features from the captured image of the first vehicle with a convolutional neural network, obtaining CNN features; it then extracts HOG features from the same image, capturing properties such as color and texture; finally, the CNN features and the HOG features are input into a fully connected layer for feature fusion, yielding the first local feature of the first vehicle.
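The CNN-plus-HOG fusion described above can be sketched as follows. This is a minimal numpy illustration rather than the patent's implementation: `cnn_features` stands in for a real trained backbone using a fixed random projection, the HOG computation is reduced to a single global orientation histogram (a real HOG uses per-cell histograms with block normalization), and the fully connected fusion layer is an untrained random projection. All dimensions and function names here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def cnn_features(image):
    # Stand-in for a trained CNN backbone: a fixed random projection
    # of the flattened image followed by a ReLU non-linearity.
    W = rng.standard_normal((128, image.size))
    return np.maximum(W @ image.ravel(), 0.0)

def hog_features(image, n_bins=9):
    # Simplified HOG: one magnitude-weighted orientation histogram
    # over the whole image, L2-normalized.
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-8)

def fuse(cnn_f, hog_f, out_dim=64):
    # "Fully connected layer" fusion: concatenate both feature vectors,
    # then project with an (untrained) weight matrix.
    x = np.concatenate([cnn_f, hog_f])
    W = rng.standard_normal((out_dim, x.size))
    return np.tanh(W @ x)

image = rng.random((64, 64))  # toy grayscale crop of the first vehicle
local_feature = fuse(cnn_features(image), hog_features(image))
print(local_feature.shape)  # (64,)
```

In the system, this local feature is what each camera or OBU would send to the RSU device of the current road segment.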
3. While driving, every vehicle captures images of surrounding vehicles with its vehicle-mounted cameras, extracts image features to obtain local features, and sends them to the RSU device of the current road segment through its OBU device. As above, each vehicle extracts CNN features and HOG features from the image and fuses them to obtain the local features, improving their robustness.
The RSU device of the current road segment receives the local features extracted from the images captured by all vehicle-mounted cameras of all vehicles. According to the second-vehicle information, it then screens out the local features each second vehicle extracted from the images captured by the vehicle-mounted camera at the corresponding position; these are the second local features, and each is therefore a fusion of CNN and HOG features. The camera at the corresponding position is the one matching the relative position of the second vehicle to the first vehicle: for example, if the second vehicle is behind the first vehicle, its front camera is the camera at the corresponding position. The screening method is likewise common practice in existing road-condition-aware abnormal-event detection, so its implementation is not detailed in this application.
Second, fusion of the first vehicle's features from different angles.
The RSU device of the current road segment combines the first local feature with each second local feature to form the global feature of the first vehicle. Local feature fusion uses a multi-kernel learning method: the first local feature and each second local feature are input into corresponding kernel functions, and the kernel functions are then combined into a mixed kernel function to obtain the global feature of the first vehicle. The resulting global feature is a comprehensive, multi-angle representation of the first vehicle and effectively mitigates the poor re-identification caused by the shooting angle of a single intersection camera, varying vehicle poses, and similar factors.
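The multi-kernel combination can be sketched as follows, a minimal numpy illustration under assumptions the patent does not fix: each view's local feature is paired with an RBF kernel (two assumed bandwidths), and the "mixed kernel" is a convex combination of the per-view kernels, whose value against a probe feature serves as the similarity. The class name, the weights, and the bandwidths are all hypothetical.

```python
import numpy as np

def rbf_kernel(gamma):
    # Gaussian RBF kernel with bandwidth parameter gamma.
    def k(x, y):
        return float(np.exp(-gamma * np.sum((x - y) ** 2)))
    return k

class MixedKernelGlobalFeature:
    """Global representation of the first vehicle: one local feature per
    viewing angle, each paired with its own base kernel, combined with
    convex weights (a simple multi-kernel combination)."""

    def __init__(self, local_features, kernels, weights):
        assert len(local_features) == len(kernels) == len(weights)
        assert abs(sum(weights) - 1.0) < 1e-9  # convex combination
        self.local_features = local_features
        self.kernels = kernels
        self.weights = weights

    def similarity(self, probe):
        # Mixed kernel: weighted sum of per-view kernel evaluations.
        return sum(w * k(f, probe)
                   for f, k, w in zip(self.local_features, self.kernels, self.weights))

rng = np.random.default_rng(1)
first_local = rng.random(8)                                # intersection-camera view
second_locals = [first_local + 0.05 * rng.standard_normal(8) for _ in range(3)]

views = [first_local] + second_locals
global_feat = MixedKernelGlobalFeature(
    views,
    kernels=[rbf_kernel(0.5 if i % 2 == 0 else 1.0) for i in range(len(views))],
    weights=[1.0 / len(views)] * len(views),
)

same_vehicle = first_local + 0.05 * rng.standard_normal(8)  # near every view
other_vehicle = first_local + 2.0                           # far from every view
print(global_feat.similarity(same_vehicle) > global_feat.similarity(other_vehicle))  # True
```

A real multi-kernel learning system would learn the weights from data; fixed uniform weights are used here only to keep the sketch self-contained.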
Third, re-identification of the first vehicle.
After obtaining the global feature of the first vehicle, the RSU device of the current road segment sends it to the RSU device of the next road segment, which broadcasts the global feature of the first vehicle to all third vehicles in its communication range.
Similarly, the third vehicle acquires images of surrounding vehicles through the vehicle-mounted camera during driving, performs image feature extraction to obtain local features, and then performs vehicle identification according to the similarity between the extracted local features and the global features of the first vehicle.
The RSU device of the next road segment receives the identification results in which third vehicles report suspected vehicles, marks the suspected vehicle identified by each third vehicle, and, when the same suspected vehicle has been marked more times than a preset threshold, determines that it is the first vehicle and tracks it.
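The distance test and vote-based confirmation above can be sketched as follows. The distance threshold, the vote threshold, and the idea of keying reports by an identifier such as a plate reading are assumptions for illustration; the patent leaves these values configurable.

```python
import numpy as np
from collections import Counter

DIST_THRESHOLD = 0.8   # assumed preset distance threshold
VOTE_THRESHOLD = 2     # assumed preset marking threshold

def third_vehicle_report(local_feature, global_feature, threshold=DIST_THRESHOLD):
    # A third vehicle compares its extracted local feature with the
    # broadcast global feature; a Euclidean distance under the threshold
    # yields a suspected-vehicle report.
    dist = float(np.linalg.norm(local_feature - global_feature))
    return dist < threshold

def rsu_track(reports, vote_threshold=VOTE_THRESHOLD):
    # RSU of the next road segment: tally suspected-vehicle IDs reported
    # by third vehicles and confirm an ID once it has been marked more
    # times than the vote threshold.
    counts = Counter(vid for vid, suspected in reports if suspected)
    return [vid for vid, n in counts.items() if n > vote_threshold]

global_feat = np.array([0.2, 0.4, 0.6, 0.8])
# (hypothetical vehicle ID, local feature seen by a third vehicle)
observations = [
    ("plate_A", np.array([0.21, 0.41, 0.59, 0.79])),  # close -> suspected
    ("plate_A", np.array([0.25, 0.38, 0.62, 0.80])),
    ("plate_A", np.array([0.18, 0.42, 0.60, 0.77])),
    ("plate_B", np.array([0.90, 0.10, 0.30, 0.20])),  # far -> not suspected
]
reports = [(vid, third_vehicle_report(f, global_feat)) for vid, f in observations]
print(rsu_track(reports))  # ['plate_A']
```

Requiring agreement among several third vehicles is what lets the system tolerate an individual false match from any single viewpoint.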
What has been described above is only a preferred embodiment of the present application, and the invention is not limited to this embodiment. Other modifications and variations directly derivable or suggested to those skilled in the art without departing from the spirit and concept of the invention are to be considered within its scope.
Claims (5)
1. A vehicle re-identification system based on V2I technology, characterized by comprising a plurality of vehicles, intersection cameras installed at intersections, and RSU devices installed at the roadside, wherein vehicle-mounted cameras are installed around each vehicle, each vehicle is provided with an OBU device and establishes a communication connection with the RSU device in communication range through the OBU device, and the intersection cameras are also communicatively connected with the RSU devices;
the RSU device of the current road segment receives a first local feature of the first vehicle sent by the intersection camera, the first local feature being obtained by the intersection camera capturing an image of the first vehicle and extracting image features;
the RSU device of the current road segment acquires second local features of the first vehicle sent by each second vehicle, a second vehicle being a vehicle within a predetermined range of the first vehicle, each second local feature being obtained by the second vehicle capturing an image of the first vehicle with the vehicle-mounted camera at the corresponding position and extracting image features;
the RSU device of the current road segment combines the first local feature with each second local feature to form a global feature of the first vehicle and sends the global feature to the RSU device of the next road segment, which broadcasts it to all third vehicles in its communication range;
and each third vehicle captures images of surrounding vehicles with its vehicle-mounted cameras, extracts image features to obtain local features, and identifies vehicles according to the similarity between the extracted local features and the global feature of the first vehicle; the RSU device of the next road segment tracks the first vehicle according to the identification results of a plurality of third vehicles.
2. The system of claim 1, wherein:
the second vehicle performs feature extraction on the captured image of the first vehicle using a convolutional neural network to obtain CNN features;
the second vehicle extracts HOG features from the captured image of the first vehicle;
and the second vehicle inputs the CNN features and the HOG features into a fully connected layer for feature fusion to obtain the second local feature of the first vehicle.
3. The system of claim 1, wherein the RSU device of the current road segment combining the first local feature and each second local feature into the global feature of the first vehicle comprises:
inputting the first local feature and each second local feature into corresponding kernel functions respectively;
and combining the kernel functions into a mixed kernel function to obtain the global feature of the first vehicle.
4. The system of claim 1, wherein identifying vehicles according to the similarity between the extracted local features and the global feature of the first vehicle, and the RSU device of the next road segment tracking the first vehicle according to the identification results of a plurality of third vehicles, comprises:
each third vehicle calculating the Euclidean distance between its extracted local feature and the global feature of the first vehicle, and, if the calculated distance is smaller than a preset distance threshold, determining that a suspected vehicle has been identified and feeding the identification result back to the RSU device of the next road segment;
and the RSU device of the next road segment marking the suspected vehicles identified by the third vehicles according to the received identification results, and, when the same suspected vehicle has been marked more times than a preset threshold, determining that the suspected vehicle is the first vehicle and tracking it.
5. The system according to any one of claims 1 to 4, wherein:
the intersection camera acquires driving images, performs abnormal-event detection to determine the position of the first vehicle, determines the second-vehicle information corresponding to the first vehicle in the local map of the intersection camera according to that position, and sends the second-vehicle information to the RSU device of the current road segment, the second-vehicle information comprising the second vehicles within the predetermined range of the first vehicle and the relative position of each second vehicle to the first vehicle;
and the RSU device of the current road segment acquiring the second local feature of the first vehicle sent by each second vehicle comprises:
each vehicle, while driving, capturing images of surrounding vehicles with its vehicle-mounted cameras, extracting image features to obtain local features, and sending them to the RSU device of the current road segment;
and the RSU device of the current road segment screening out, from all received local features and according to the second-vehicle information, the local features each second vehicle extracted from images captured by the vehicle-mounted camera at the corresponding position, the camera at the corresponding position being the one matching the relative position of the second vehicle to the first vehicle.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010162040.1A CN111291722A (en) | 2020-03-10 | 2020-03-10 | Vehicle re-identification system based on V2I technology |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111291722A (en) | 2020-06-16 |
Family
ID=71024968
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010162040.1A Pending CN111291722A (en) | 2020-03-10 | 2020-03-10 | Vehicle re-identification system based on V2I technology |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111291722A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102087786A (en) * | 2010-02-09 | 2011-06-08 | 陈秋和 | Information fusion-based intelligent traffic information processing method and system for people, vehicle and road |
CN107729818A (en) * | 2017-09-21 | 2018-02-23 | 北京航空航天大学 | A kind of multiple features fusion vehicle recognition methods again based on deep learning |
CN109063768A (en) * | 2018-08-01 | 2018-12-21 | 北京旷视科技有限公司 | Vehicle recognition methods, apparatus and system again |
US20190132709A1 (en) * | 2018-12-27 | 2019-05-02 | Ralf Graefe | Sensor network enhancement mechanisms |
CN110070716A (en) * | 2019-04-29 | 2019-07-30 | 深圳成谷科技有限公司 | A kind of two visitors, one danger vehicle early warning method and system based on bus or train route coordination technique |
CN110446168A (en) * | 2019-08-14 | 2019-11-12 | 中国联合网络通信集团有限公司 | A kind of target vehicle method for tracing and system |
CN110659599A (en) * | 2019-09-19 | 2020-01-07 | 安徽七天教育科技有限公司 | Scanning test paper-based offline handwriting authentication system and using method thereof |
CN110866441A (en) * | 2019-09-29 | 2020-03-06 | 京东数字科技控股有限公司 | Vehicle identification and continuation tracking method and device and road side system |
Non-Patent Citations (1)
Title |
---|
Fan Hengfei: "Research and Implementation of Vehicle Trajectory Tracking Technology Based on the Internet of Vehicles", China Masters' Theses Full-text Database, Engineering Science and Technology II *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111881321A (en) * | 2020-07-27 | 2020-11-03 | 广元量知汇科技有限公司 | Smart city safety monitoring method based on artificial intelligence |
CN111881321B (en) * | 2020-07-27 | 2021-04-20 | 东来智慧交通科技(深圳)有限公司 | Smart city safety monitoring method based on artificial intelligence |
CN117012032A (en) * | 2023-09-28 | 2023-11-07 | 深圳市新城市规划建筑设计股份有限公司 | Intelligent traffic management system and method based on big data |
CN117012032B (en) * | 2023-09-28 | 2023-12-19 | 深圳市新城市规划建筑设计股份有限公司 | Intelligent traffic management system and method based on big data |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6442474B1 (en) | Vision-based method and apparatus for monitoring vehicular traffic events | |
US11734783B2 (en) | System and method for detecting on-street parking violations | |
US20170011625A1 (en) | Roadway sensing systems | |
US11836985B2 (en) | Identifying suspicious entities using autonomous vehicles | |
CN110619279B (en) | Road traffic sign instance segmentation method based on tracking | |
CN110738857B (en) | Vehicle violation evidence obtaining method, device and equipment | |
KR101742490B1 (en) | System for inspecting vehicle in violation by intervention and the method thereof | |
CN108932849B (en) | Method and device for recording low-speed running illegal behaviors of multiple motor vehicles | |
KR102089298B1 (en) | System and method for recognizing multinational license plate through generalized character sequence detection | |
US20190215491A1 (en) | Movement or topology prediction for a camera network | |
CN105046966A (en) | System and method for automatically detecting illegal parking behaviors in drop-off areas | |
CN113055823B (en) | Method and device for managing shared bicycle based on road side parking | |
CN113851017A (en) | Pedestrian and vehicle identification and early warning multifunctional system based on road side RSU | |
CN112381014A (en) | Illegal parking vehicle detection and management method and system based on urban road | |
CN111291722A (en) | Vehicle re-identification system based on V2I technology | |
CN113903008A (en) | Ramp exit vehicle violation identification method based on deep learning and trajectory tracking | |
CN114648748A (en) | Motor vehicle illegal parking intelligent identification method and system based on deep learning | |
CN106530739A (en) | License plate recognition method, device and system thereof based on multiple camera device | |
CN113870551A (en) | Roadside monitoring system capable of identifying dangerous and non-dangerous driving behaviors | |
Malinovskiy et al. | Model‐free video detection and tracking of pedestrians and bicyclists | |
KR102484789B1 (en) | Intelligent crossroad integration management system with unmanned control and traffic information collection function | |
CN110659534B (en) | Shared bicycle detection method and device | |
CN113223276B (en) | Pedestrian hurdling behavior alarm method and device based on video identification | |
CN115394089A (en) | Vehicle information fusion display method, sensorless passing system and storage medium | |
CN114693722A (en) | Vehicle driving behavior detection method, detection device and detection equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20200616 |