CN113887420A - AI-based intelligent detection and identification system for urban public parking spaces - Google Patents
Info
- Publication number
- CN113887420A (application CN202111161989.0A)
- Authority
- CN
- China
- Prior art keywords
- frame
- training
- motor vehicle
- video
- artificial intelligence
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000001514 detection method Methods 0.000 title claims abstract description 49
- 238000012549 training Methods 0.000 claims abstract description 44
- 238000004458 analytical method Methods 0.000 claims abstract description 28
- 238000013473 artificial intelligence Methods 0.000 claims abstract description 24
- 238000012544 monitoring process Methods 0.000 claims abstract description 6
- 238000013527 convolutional neural network Methods 0.000 claims abstract description 4
- 238000012360 testing method Methods 0.000 claims description 21
- 238000000034 method Methods 0.000 claims description 19
- 238000012795 verification Methods 0.000 claims description 9
- 230000006870 function Effects 0.000 claims description 6
- 238000004364 calculation method Methods 0.000 claims description 5
- 238000010276 construction Methods 0.000 claims description 4
- 238000011156 evaluation Methods 0.000 claims description 3
- 238000001914 filtration Methods 0.000 claims description 3
- 238000003064 k means clustering Methods 0.000 claims description 3
- 238000012216 screening Methods 0.000 claims description 3
- 238000004088 simulation Methods 0.000 claims description 3
- 230000009466 transformation Effects 0.000 claims description 3
- 230000002238 attenuated effect Effects 0.000 claims description 2
- 238000005516 engineering process Methods 0.000 abstract description 5
- 238000012545 processing Methods 0.000 description 3
- 230000009286 beneficial effect Effects 0.000 description 2
- 238000004422 calculation algorithm Methods 0.000 description 2
- 238000004891 communication Methods 0.000 description 1
- 239000002131 composite material Substances 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 230000007812 deficiency Effects 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 230000010354 integration Effects 0.000 description 1
- 238000010801 machine learning Methods 0.000 description 1
- 239000000203 mixture Substances 0.000 description 1
- 238000004806 packaging method and process Methods 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Molecular Biology (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an AI-based intelligent detection and identification system for urban public parking spaces, which comprises a video access module, an artificial intelligence analysis module and an early warning output module. The video access module is responsible for accessing the monitoring video of a traffic intersection, converting the video stream into single-frame video images, recording the acquisition time of each frame, and sending the frames to the artificial intelligence analysis module. The artificial intelligence analysis module is responsible for analyzing the single-frame video images with a convolutional neural network and comparing them to determine whether early warning information exists. The early warning output module synthesizes evidence data from the analysis result of the artificial intelligence analysis module and the video data of the video access module, captures pictures of vehicles entering and exiting the public parking spaces for evidence retention, and uploads them to an upper-layer server in time to notify the relevant functional departments. The invention uses artificial intelligence technology and is highly efficient; it can be further trained on user data to improve performance.
Description
Technical Field
The invention relates to the technical field of intelligent transportation, in particular to an AI-based intelligent detection and identification system for urban public parking spaces.
Background
With the rapid growth in the number of motor vehicles and drivers, urban parking has gradually become a hot issue in China's large and medium-sized cities. While society-wide motorization brings convenience to travel, it also exposes traffic problems such as difficulty in parking on the road, traffic jams and disordered traffic order.
At present, one of the important factors hindering the smooth operation of urban traffic is the disordered parking order, which mainly manifests as difficulty in finding parking, fee evasion, chaotic toll collection, and illegal or disorderly on-street parking, all of which are common in cities. Aiming at the problem of urban road parking management, and on the basis of research into the road parking problem, an AI-based intelligent detection and identification system for urban public parking spaces is proposed. Large-scale deployment of intelligent parking can greatly improve urban parking management capability, reduce labor costs, prevent fee evasion, improve the efficiency and experience of public travel, remove a bottleneck of intelligent transportation, solve the problem of urban parking difficulty quickly and at low cost, ensure smooth and orderly road traffic, set an example of supply-side reform, and contribute to the image of a civilized city. Combining the Internet, intelligent transportation and intelligent parking helps relieve urban congestion, standardize parking management, save energy, reduce emissions and improve citizens' quality of life.
In the vehicle identification process, some vehicles often cannot be identified, or are difficult to identify, for objective reasons such as a dark environment or direct sunlight. Traditional machine learning methods suffer from low recognition rates due to the inherent limitations of their algorithms, while the deep learning methods available at the present stage build models from massive data and therefore require a large amount of computation, consume considerable computing resources, place high demands on the operating environment, and are inconvenient for integration and miniaturization.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides an AI-based intelligent detection and identification system for urban public parking spaces, which is reasonable in design.
The technical scheme of the invention is as follows:
an AI-based intelligent detection and identification system for urban public parking spaces is characterized by comprising a video access module, an artificial intelligence analysis module and an early warning output module;
the video access module is responsible for accessing a monitoring video of a traffic intersection, converting a video stream into a single-frame video image, and sending the single-frame video image to the artificial intelligence analysis module after acquiring the acquisition time of each frame of image;
the artificial intelligence analysis module is responsible for analyzing the single-frame video images with a convolutional neural network and comparing them to determine whether early warning information exists;
and the early warning output module synthesizes evidence data according to the analysis result of the artificial intelligence analysis module and the video data of the video access module, captures pictures of vehicles entering the public parking spaces and exiting the public parking spaces for evidence keeping, and uploads the pictures to the upper-layer server in time to inform relevant functional departments.
The AI-based urban public parking space intelligent detection and identification system is characterized in that the processing process of the artificial intelligence analysis module is as follows:
1) marking a public parking space area in a single-frame video image as a parking area, recording area coordinates of the parking area, setting the position information of the camera as a preset position, and judging whether a motor vehicle detection sample exists in the parking area;
2) acquiring the position information of the current camera and judging whether it is the preset position; if so, carrying out motor vehicle detection: taking the motor vehicle in the video image as the object to be detected, performing target detection regression on the motor vehicle with the two network models SSD and YOLOv3 under the PyTorch framework, and averaging the regression results of the two models to obtain the height h, the width w and the center point coordinates (x, y) of the motor vehicle;
3) further extracting the color, shape, size and speed information of the motor vehicle, and tracking the speed of the motor vehicle and the height h, width w and center point coordinates (x, y) of the regression frame obtained in step 2) with a Kalman filter state prediction equation, so as to determine whether detections belong to the same vehicle; the motor vehicle is judged to have entered the parking area when the center coordinates (x, y) of the vehicle frame lie within the parking area or the overlap ratio of the detection frame with the parking area is greater than 1/2, the overlap ratio being calculated as:
IoU=(ca∩pa)/ca
where ca denotes the area of the motor vehicle detection frame and pa denotes the parking area;
recording the license plate of the vehicle and its entry time; when the motor vehicle exits the parking area, calculating the vehicle dwell time from the entry and exit times; and if the dwell time exceeds the time limit set for the parking area, judging that early warning information needs to be output.
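By way of illustration only, the following is a minimal sketch of the kind of constant-velocity Kalman filter referred to in step 3), predicting the next center point of a detected vehicle frame and deciding whether a new detection belongs to the same vehicle. The state layout, noise covariances and matching gate are assumptions of the sketch, not values given in this disclosure.

```python
# Minimal constant-velocity Kalman tracker for a detected vehicle frame centre.
import numpy as np

class BoxTracker:
    def __init__(self, x, y, dt=1.0):
        self.s = np.array([x, y, 0.0, 0.0], dtype=float)   # state: [x, y, vx, vy]
        self.P = np.eye(4) * 10.0                           # state covariance (assumed)
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)      # constant-velocity model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)      # we observe (x, y) only
        self.Q = np.eye(4) * 0.01                           # process noise (assumed)
        self.R = np.eye(2) * 1.0                            # measurement noise (assumed)

    def predict(self):
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.s[:2]                                   # predicted centre point

    def update(self, zx, zy):
        z = np.array([zx, zy], dtype=float)
        y = z - self.H @ self.s                             # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)            # Kalman gain
        self.s = self.s + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

    def same_vehicle(self, zx, zy, gate=50.0):
        # A new detection is treated as the same vehicle if its centre lies
        # close to the centre predicted for the next frame (pixel gate assumed).
        px, py = (self.F @ self.s)[:2]
        return np.hypot(zx - px, zy - py) < gate
```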
The AI-based intelligent detection and identification system for urban public parking spaces is characterized in that, in step 2) of the artificial intelligence analysis module, the specific process of performing target detection regression on the motor vehicle with the two network models, SSD and YOLOv3, under the PyTorch framework is as follows:
2.1) constructing a vehicle detection data set, wherein the data set consists of a training set, a testing set and a verification set, the training set is mainly used for training SSD and YOLOv3 network models, the testing set is mainly used for testing the precision of the model obtained by training, and the verification set is mainly used for evaluating the generalization capability of the model obtained by training, and the construction process is as follows:
2.1.1) collecting driving videos under different weather conditions and traffic scenes, splitting the videos into frames, adding an open-source vehicle detection data set, applying HSL transformation, angle rotation or white-noise simulation of foggy weather to the images, manually screening out images with clear vehicle outlines so as to construct a new training set and increase sample diversity, and dividing all images into a training set, a testing set and a verification set in proportion;
2.1.2) using labeling software to annotate the automobile targets in the images and obtain their position information, including the center point coordinates (x, y), the bounding-box width and height (w, h) and the target class; only the automobile class is detected, and the annotation files are saved in txt format;
2.2) defining a loss function, generating anchor boxes by the K-means clustering method (an illustrative clustering sketch is given after step 2.3), and training the SSD- and YOLOv3-based vehicle detection models, the specific training process being as follows:
using Darknet model parameters pre-trained on the ImageNet data set as the initialization weights of YOLOv3 to reduce training time; when training the vehicle detection model, setting the maximum number of iterations to N, saving a weight file of the model every n iterations during the first N/5 iterations and every a×n iterations thereafter, where a > 1 is an integer, until training is finished; when training reaches 8×N/10 and 9×N/10 iterations, the learning rate is attenuated by a factor of 10 from its previous value.
2.3) testing the model: testing the trained model with the prepared test pictures or videos, and judging the quality of the model by evaluation indexes.
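As an illustration of the anchor-box generation named in step 2.2), the sketch below clusters the labeled box widths and heights with k-means under a 1 - IoU distance, as is commonly done for YOLOv3; the number of anchors, the seed and the stopping rule are assumptions of the sketch.

```python
# Illustrative k-means anchor-box clustering on labeled (width, height) pairs.
import numpy as np

def wh_iou(box, clusters):
    """IoU between one (w, h) box and k cluster (w, h) boxes, origin-aligned."""
    inter = np.minimum(box[0], clusters[:, 0]) * np.minimum(box[1], clusters[:, 1])
    union = box[0] * box[1] + clusters[:, 0] * clusters[:, 1] - inter
    return inter / union

def kmeans_anchors(wh, k=9, iters=100, seed=0):
    wh = np.asarray(wh, dtype=float)                 # (N, 2) widths and heights
    rng = np.random.default_rng(seed)
    clusters = wh[rng.choice(len(wh), size=k, replace=False)]
    last = None
    for _ in range(iters):
        # assign every labeled box to the nearest cluster under 1 - IoU
        assign = np.array([np.argmax(wh_iou(b, clusters)) for b in wh])
        if last is not None and np.array_equal(assign, last):
            break                                    # assignments stable: done
        for c in range(k):
            if np.any(assign == c):
                clusters[c] = wh[assign == c].mean(axis=0)   # move centroid
        last = assign
    return clusters[np.argsort(clusters[:, 0] * clusters[:, 1])]  # sorted by area
```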
The invention has the following beneficial effects:
1) Artificial intelligence technology: the system uses artificial intelligence technology and is highly efficient; it can be further trained on user data to improve performance.
2) Composite application: the system is compatible with all video analysis and detection access methods and can use existing public monitoring videos without installing additional cameras; artificial intelligence technology analyzes the calibrated monitoring area in the video, detects whether early warning information is present by comparing frames before and after, records it as required, uploads it to the upper-layer server, and pushes it to the relevant functional departments in real time.
3) Powerful system functions: the analysis results are compared and verified with artificial intelligence technology, changes in the monitoring area are detected, information such as site and time is recorded as required, and the information is output to a specified information platform.
4) Excellent product compatibility: the method of the invention can adopt national-standard communication protocols and video decoding algorithms, which improves product compatibility and keeps the system compatible with all domestic mainstream video snapshot systems and videos.
Drawings
FIG. 1 is a system diagram of the present invention;
FIG. 2 is a flow chart of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
As shown in fig. 1-2, the system for intelligently detecting and identifying an urban public parking space based on AI comprises a video access module, an artificial intelligence analysis module and an early warning output module.
A video access module: it packages standard protocols such as ONVIF and GB28181 together with the SDK access components publicly provided by the various camera manufacturers; the accessed streams are sampled and transcoded, then fed to a GPU image analysis module for analysis, and faults are detected and handled by the sampling software. The module is responsible for accessing the monitoring video of the parking area, converting the video stream into single-frame video images, recording the acquisition time of each frame, and sending the frames to the artificial intelligence analysis module.
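A minimal sketch of this frame-extraction step is given below, assuming an OpenCV-readable source (for example an RTSP URL proxied by the ONVIF/GB28181 access layer) and a simple queue as the hand-off to the analysis module; the sampling interval and queue interface are assumptions, not part of the disclosure.

```python
# Pull frames from a camera stream, time-stamp them and pass them on.
import queue
import time

import cv2

def access_video(source, out_q: queue.Queue, sample_every=5):
    cap = cv2.VideoCapture(source)            # e.g. "rtsp://..." from the access layer
    idx = 0
    while cap.isOpened():
        ok, frame = cap.read()                # one single-frame video image
        if not ok:
            break
        if idx % sample_every == 0:           # simple sampling to limit GPU load
            acquired_at = time.time()         # acquisition time of this frame
            out_q.put((acquired_at, frame))   # handed to the AI analysis module
        idx += 1
    cap.release()
```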
An artificial intelligence analysis module: it is responsible for analyzing the video images with a convolutional neural network, comparing them to determine whether early warning information exists, and processing that information. It comprises the following steps:
1) marking a public parking space area in the video image as a parking area, recording area coordinates of the parking area, setting the position information of the camera as a preset position, and judging whether a motor vehicle detection sample exists in the parking area;
2) acquiring the position information of the current camera and judging whether it is the preset position; if so, detecting the motor vehicle: taking the motor vehicle in the video image as the object to be detected, performing target detection regression on the motor vehicle with the two network models SSD and YOLOv3 under the PyTorch framework, and averaging the regression results of SSD and YOLOv3 to obtain the height h, the width w and the center point coordinates (x, y) of the motor vehicle;
The specific process of performing target detection regression on the motor vehicle with the SSD and YOLOv3 network models under the PyTorch framework is as follows:
2.1) constructing a vehicle detection data set, wherein the data set consists of a training set, a testing set and a verification set, the training set is mainly used for training SSD and YOLOv3 network models, the testing set is mainly used for testing the precision of the model obtained by training, and the verification set is mainly used for evaluating the generalization capability of the model obtained by training, and the construction process is as follows:
2.1.1) collecting driving videos under different weather conditions and traffic scenes, splitting the videos into frames, adding an open-source vehicle detection data set, applying HSL transformation, angle rotation or white-noise simulation of foggy weather to the images, manually screening out images with clear vehicle outlines so as to construct a new training set and increase sample diversity, and dividing all images into a training set, a testing set and a verification set in proportion;
2.1.2) using labeling software to annotate the automobile targets in the images and obtain their position information, including the center point coordinates (x, y), the bounding-box width and height (w, h) and the target class; only the automobile class is detected, and the annotation files are saved in txt format;
2.2) defining a loss function, generating anchor boxes by the K-means clustering method, and training the SSD- and YOLOv3-based vehicle detection models, the specific training process being as follows:
Darknet model parameters pre-trained on the ImageNet data set are adopted as the initialization weights of YOLOv3 to reduce training time. When training the vehicle detection model, the maximum number of iterations is set to 10000; a weight file of the model is saved every 200 iterations during the first 2000 iterations and every 1000 iterations thereafter, until training is finished. When training reaches 8000 and 9000 iterations, the learning rate is attenuated by a factor of 10 from its previous value (a schedule sketch is given after step 2.3).
2.3) Testing the model: the trained model is tested with the prepared test pictures or videos, and the quality of the model is judged by evaluation indexes.
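The checkpoint and learning-rate schedule of step 2.2) can be sketched as follows under PyTorch. Only the iteration counts (10000, 2000, 200, 1000) and the decay points (8000, 9000) come from the description above; the optimizer choice, learning rate, model, data iterator and loss function are placeholders assumed for the sketch.

```python
# Sketch of the iteration-based checkpoint and learning-rate schedule.
import torch

def train(model, data_iter, loss_fn, lr=1e-3, max_iter=10000):
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    # divide the learning rate by 10 at 8/10 and 9/10 of the schedule
    sched = torch.optim.lr_scheduler.MultiStepLR(
        opt, milestones=[8 * max_iter // 10, 9 * max_iter // 10], gamma=0.1)
    for it in range(1, max_iter + 1):
        images, targets = next(data_iter)
        loss = loss_fn(model(images), targets)
        opt.zero_grad()
        loss.backward()
        opt.step()
        sched.step()
        # save every 200 iterations for the first N/5 iterations, then every 1000
        save_every = 200 if it <= max_iter // 5 else 1000
        if it % save_every == 0 or it == max_iter:
            torch.save(model.state_dict(), f"weights_iter_{it}.pt")
```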
3) Further extracting information such as the color, shape, size and speed of the motor vehicle, and tracking the target with a Kalman filter state prediction equation, specifically tracking the speed of the motor vehicle and the height h, width w and center point coordinates (x, y) of the regression frame obtained in step 2), to determine whether detections belong to the same vehicle. The motor vehicle is considered to have entered the parking area when the center coordinates (x, y) of the vehicle frame lie within the parking area or the overlap ratio of the detection frame with the parking area exceeds one half; the overlap ratio is calculated as:
IoU=(ca∩pa)/ca
where ca denotes the area of the motor vehicle detection frame and pa denotes the parking area.
The license plate of the vehicle and its entry time are recorded. When the motor vehicle exits the parking area, the vehicle dwell time is calculated from the entry and exit times. If the dwell time exceeds the set limit (configured according to customer requirements, for example 20 minutes), it is judged that early warning information needs to be output.
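A minimal sketch of the parking check and dwell-time warning of step 3) is given below. The overlap ratio is computed as the intersection of the vehicle detection frame with the parking area divided by the frame area, matching the formula IoU = (ca ∩ pa)/ca above; treating the parking area as an axis-aligned rectangle is a simplification assumed for the sketch, while the 20-minute figure follows the example in the text.

```python
# Overlap ratio between the detection frame and the parking area, plus dwell check.
def overlap_ratio(box, park):
    """box and park are (x1, y1, x2, y2) rectangles in image coordinates."""
    iw = max(0.0, min(box[2], park[2]) - max(box[0], park[0]))
    ih = max(0.0, min(box[3], park[3]) - max(box[1], park[1]))
    box_area = (box[2] - box[0]) * (box[3] - box[1])
    return (iw * ih) / box_area if box_area > 0 else 0.0   # (ca ∩ pa) / ca

def entered_parking_area(cx, cy, box, park):
    # entered if the frame centre lies inside the area, or the overlap ratio > 1/2
    inside = park[0] <= cx <= park[2] and park[1] <= cy <= park[3]
    return inside or overlap_ratio(box, park) > 0.5

def needs_warning(enter_time, exit_time, limit_minutes=20):
    # dwell time is the difference between the recorded entry and exit times
    return (exit_time - enter_time) / 60.0 > limit_minutes
```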
And the early warning output module synthesizes evidence data according to the analysis result of the artificial intelligence analysis module and the video data of the video access module, captures pictures of vehicles entering the public parking spaces and exiting the public parking spaces for evidence keeping, and uploads the pictures to the upper-layer server in time to inform relevant functional departments. Meanwhile, the functions of mobile phone APP client alarm, WeChat push alarm, mobile phone short message alarm, alarm picture composition and the like are expanded.
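For illustration, the evidence-keeping and upload step might be sketched as follows; the server endpoint, field names and file naming here are hypothetical and not part of this disclosure.

```python
# Save a snapshot of the vehicle event and post it, with metadata, to the server.
import json

import cv2
import requests

def report_event(frame, plate, event, when,
                 url="http://upper-server.example/api/warning"):   # hypothetical endpoint
    path = f"evidence_{plate}_{event}_{int(when)}.jpg"
    cv2.imwrite(path, frame)                    # keep the captured picture as evidence
    meta = {"plate": plate, "event": event, "time": when}           # hypothetical fields
    with open(path, "rb") as img:
        # multipart upload: JSON metadata plus the snapshot image itself
        requests.post(url, data={"meta": json.dumps(meta)},
                      files={"image": img}, timeout=10)
```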
Claims (3)
1. An AI-based intelligent detection and identification system for urban public parking spaces is characterized by comprising a video access module, an artificial intelligence analysis module and an early warning output module;
the video access module is responsible for accessing a monitoring video of a traffic intersection, converting a video stream into a single-frame video image, and sending the single-frame video image to the artificial intelligence analysis module after acquiring the acquisition time of each frame of image;
the artificial intelligence analysis module is responsible for analyzing the single-frame video images with a convolutional neural network and comparing them to determine whether early warning information exists;
and the early warning output module synthesizes evidence data according to the analysis result of the artificial intelligence analysis module and the video data of the video access module, captures pictures of vehicles entering the public parking spaces and exiting the public parking spaces for evidence keeping, and uploads the pictures to the upper-layer server in time to inform relevant functional departments.
2. The AI-based intelligent detection and identification system for urban public parking spaces according to claim 1, wherein the process of the artificial intelligence analysis module is as follows:
1) marking a public parking space area in a single-frame video image as a parking area, recording the area coordinate of the parking area, setting the position information of the camera as a preset position, and judging whether a motor vehicle detection sample exists in the parking area;
2) acquiring the position information of the current camera and judging whether it is the preset position; if so, carrying out motor vehicle detection: taking the motor vehicle in the video image as the object to be detected, performing target detection regression on the motor vehicle with the two network models SSD and YOLOv3 under the PyTorch framework, and averaging the regression results of the two models to obtain the height h, the width w and the center point coordinates (x, y) of the motor vehicle;
3) further extracting the color, shape, size and speed information of the motor vehicle, and tracking the speed of the motor vehicle and the height h, width w and center point coordinates (x, y) of the regression frame obtained in step 2) with a Kalman filter state prediction equation, so as to determine whether detections belong to the same vehicle; the motor vehicle is judged to have entered the parking area when the center coordinates (x, y) of the vehicle frame lie within the parking area or the overlap ratio of the detection frame with the parking area is greater than 1/2, the overlap ratio being calculated as:
IoU=(ca∩pa)/ca
where ca denotes the area of the motor vehicle detection frame and pa denotes the parking area;
recording the license plate of the vehicle and its entry time; when the motor vehicle exits the parking area, calculating the vehicle dwell time from the entry and exit times; and if the dwell time exceeds the time limit set for the parking area, judging that early warning information needs to be output.
3. The AI-based intelligent detection and identification system for urban public parking spaces according to claim 2, wherein in step 2) of the artificial intelligence analysis module, the specific process of performing target detection regression on the motor vehicle with the two network models, SSD and YOLOv3, under the PyTorch framework is as follows:
2.1) constructing a vehicle detection data set, wherein the data set consists of a training set, a testing set and a verification set, the training set is mainly used for training SSD and YOLOv3 network models, the testing set is mainly used for testing the precision of the model obtained by training, and the verification set is mainly used for evaluating the generalization capability of the model obtained by training, and the construction process is as follows:
2.1.1) collecting driving videos under different weather conditions and traffic scenes, splitting the videos into frames, adding an open-source vehicle detection data set, applying HSL transformation, angle rotation or white-noise simulation of foggy weather to the images, manually screening out images with clear vehicle outlines so as to construct a new training set and increase sample diversity, and dividing all images into a training set, a testing set and a verification set in proportion;
2.1.2) using labeling software to annotate the automobile targets in the images and obtain their position information, including the center point coordinates (x, y), the bounding-box width and height (w, h) and the target class; only the automobile class is detected, and the annotation files are saved in txt format;
2.2) defining a loss function, generating anchor boxes by the K-means clustering method, and training the SSD- and YOLOv3-based vehicle detection models, the specific training process being as follows:
using Darknet model parameters pre-trained on the ImageNet data set as the initialization weights of YOLOv3 to reduce training time; when training the vehicle detection model, setting the maximum number of iterations to N, saving a weight file of the model every n iterations during the first N/5 iterations and every a×n iterations thereafter, where a > 1 is an integer, until training is finished; when training reaches 8×N/10 and 9×N/10 iterations, the learning rate is attenuated by a factor of 10 from its previous value;
2.3) testing the model: testing the trained model with the prepared test pictures or videos, and judging the quality of the model by evaluation indexes.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111161989.0A CN113887420A (en) | 2021-09-30 | 2021-09-30 | AI-based intelligent detection and identification system for urban public parking spaces |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111161989.0A CN113887420A (en) | 2021-09-30 | 2021-09-30 | AI-based intelligent detection and identification system for urban public parking spaces |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113887420A true CN113887420A (en) | 2022-01-04 |
Family
ID=79004965
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111161989.0A Pending CN113887420A (en) | 2021-09-30 | 2021-09-30 | AI-based intelligent detection and identification system for urban public parking spaces |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113887420A (en) |
- 2021-09-30: CN application CN202111161989.0A filed; published as CN113887420A, status Pending
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130266188A1 (en) * | 2012-04-06 | 2013-10-10 | Xerox Corporation | Video-based method for detecting parking boundary violations |
US20130266190A1 (en) * | 2012-04-06 | 2013-10-10 | Xerox Corporation | System and method for street-parking-vehicle identification through license plate capturing |
US20170256165A1 (en) * | 2016-03-04 | 2017-09-07 | Xerox Corporation | Mobile on-street parking occupancy detection |
CN106935035A (en) * | 2017-04-07 | 2017-07-07 | 西安电子科技大学 | Parking offense vehicle real-time detection method based on SSD neutral nets |
WO2020042489A1 (en) * | 2018-08-30 | 2020-03-05 | 平安科技(深圳)有限公司 | Authentication method and apparatus for illegal parking case, and computer device |
CN109326124A (en) * | 2018-10-17 | 2019-02-12 | 江西洪都航空工业集团有限责任公司 | A kind of urban environment based on machine vision parks cars Activity recognition system |
KR102007140B1 (en) * | 2019-01-30 | 2019-08-02 | 장승현 | Integrated traffic information management system for smart city |
CN110796168A (en) * | 2019-09-26 | 2020-02-14 | 江苏大学 | Improved YOLOv 3-based vehicle detection method |
KR102162130B1 (en) * | 2020-03-04 | 2020-10-06 | (주)드림테크 | Enforcement system of illegal parking using single camera |
KR102174556B1 (en) * | 2020-05-21 | 2020-11-05 | 박상보 | Apparatus for monitoring image to control traffic information employing Artificial Intelligence and vehicle number |
CN113205028A (en) * | 2021-04-26 | 2021-08-03 | 河海大学 | Pedestrian detection method and system based on improved YOLOv3 model |
CN113255486A (en) * | 2021-05-13 | 2021-08-13 | 华设设计集团股份有限公司 | Parking space occupation detection method based on high-level video monitoring |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114267180A (en) * | 2022-03-03 | 2022-04-01 | 科大天工智能装备技术(天津)有限公司 | Parking management method and system based on computer vision |
CN114267180B (en) * | 2022-03-03 | 2022-05-31 | 科大天工智能装备技术(天津)有限公司 | Parking management method and system based on computer vision |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111368687B (en) | Sidewalk vehicle illegal parking detection method based on target detection and semantic segmentation | |
CN109766769A (en) | A kind of road target detection recognition method based on monocular vision and deep learning | |
Yang et al. | iParking–a real-time parking space monitoring and guiding system | |
CN112512020B (en) | Traffic state weak signal perception studying and judging method based on multi-source data fusion | |
CN114898297B (en) | Target detection and target tracking-based non-motor vehicle illegal behavior judgment method | |
CN109993138A (en) | A kind of car plate detection and recognition methods and device | |
CN112668375B (en) | Tourist distribution analysis system and method in scenic spot | |
CN103679214B (en) | Vehicle checking method based on online Class area estimation and multiple features Decision fusion | |
CN113255552B (en) | Method and device for analyzing OD (optical density) of bus-mounted video passengers and storage medium | |
CN110674887A (en) | End-to-end road congestion detection algorithm based on video classification | |
CN111626382A (en) | Rapid intelligent identification method and system for cleanliness of vehicle on construction site | |
Tung et al. | Large-scale object detection of images from network cameras in variable ambient lighting conditions | |
CN112712707A (en) | Vehicle carbon emission monitoring system and method | |
CN115662113A (en) | Signalized intersection people-vehicle game conflict risk assessment and early warning method | |
CN113487877A (en) | Road vehicle illegal parking monitoring method | |
CN110796580A (en) | Intelligent traffic system management method and related products | |
CN113887420A (en) | AI-based intelligent detection and identification system for urban public parking spaces | |
Khan et al. | Cyber physical system for vehicle counting and emission monitoring | |
CN117456482B (en) | Abnormal event identification method and system for traffic monitoring scene | |
CN110428617A (en) | A kind of traffic object recognition methods based on 5G Portable intelligent terminal and MEC | |
Fei et al. | Adapting public annotated data sets and low-quality dash cameras for spatiotemporal estimation of traffic-related air pollution: A transfer-learning approach | |
CN110765900A (en) | DSSD-based automatic illegal building detection method and system | |
CN112633163B (en) | Detection method for realizing illegal operation vehicle detection based on machine learning algorithm | |
CN115857685A (en) | Perception algorithm data closed-loop method and related device | |
CN113158852B (en) | Traffic gate monitoring system based on face and non-motor vehicle cooperative identification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |