CN109784306B - Intelligent parking management method and system based on deep learning - Google Patents
- Publication number: CN109784306B (application CN201910089082.4A)
- Authority: CN (China)
- Legal status: Active (assumed; not a legal conclusion)
Description
Technical Field
The present invention relates to the technical field of intelligent parking management, and in particular to a deep-learning-based intelligent parking management method and system.
Background Art
In recent years, with the growth of private car ownership, cities of all sizes have faced a shortage of parking spaces relative to vehicles, a problem made steadily worse by the backward state of parking-resource management. Traditional parking-resource management is carried out manually, which is not only inefficient but also extremely labor-intensive.
Summary of the Invention
The object of the present invention is to provide a deep-learning-based intelligent parking management method and system that applies deep learning to parking-resource management, so as to solve the inefficiency and heavy labor cost of the traditional manual management of parking resources.
To achieve the above object, the present invention provides the following scheme:
A deep-learning-based intelligent parking management method, the method comprising:
acquiring a video frame captured by a parking-lot camera;
loading a trained convolutional neural network model;
inputting the video frame into the trained convolutional neural network model and outputting vehicle-position prediction frame information, where the vehicle-position prediction frame information comprises the center coordinates, width, and height of the vehicle-position prediction frame;
acquiring parking-space frame information, where the parking-space frame information comprises the upper-left and lower-right corner coordinates of the parking-space frame;
determining the current parking-space state according to the vehicle-position prediction frame information and the parking-space frame information;
determining the parking fee according to the current parking-space state.
Optionally, before loading the trained convolutional neural network model, the method further comprises:
establishing a convolutional neural network model based on the Darknet framework and the YOLO algorithm, the convolutional neural network model comprising a feature extractor and a detector;
training and tuning the convolutional neural network model on the VOC dataset to generate the trained convolutional neural network model.
Optionally, determining the current parking-space state according to the vehicle-position prediction frame information and the parking-space frame information specifically comprises:
calculating the upper-left and lower-right corner coordinates of the vehicle-position prediction frame from the vehicle-position prediction frame information;
calculating the position of the overlapping region between the vehicle-position prediction frame and the parking-space frame from the upper-left and lower-right corner coordinates of the two frames, the position of the overlapping region comprising its own upper-left and lower-right corner coordinates;
calculating the area of the overlapping region from the position of the overlapping region;
calculating the area of the parking-space frame from the position of the parking-space frame;
taking the ratio of the area of the overlapping region to the area of the parking-space frame as the vehicle confidence;
judging whether the vehicle confidence is greater than or equal to a confidence threshold to obtain a first judgment result;
if the first judgment result is that the vehicle confidence is greater than or equal to the confidence threshold, determining that the current parking-space state is occupied;
if the vehicle confidence is less than the confidence threshold, determining that the current parking-space state is empty.
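The corner-coordinate overlap computation above can be sketched as follows. The function names, the (x1, y1, x2, y2) corner convention, and the 0.5 default threshold are illustrative assumptions, not values fixed by the invention:

```python
def overlap_ratio(pred, slot):
    """Ratio of the overlap area between the vehicle-position prediction
    frame and the parking-space frame to the parking-space frame's area.
    Each frame is (x1, y1, x2, y2): upper-left and lower-right corners."""
    x1, y1 = max(pred[0], slot[0]), max(pred[1], slot[1])
    x2, y2 = min(pred[2], slot[2]), min(pred[3], slot[3])
    overlap_area = max(0, x2 - x1) * max(0, y2 - y1)  # zero if the frames are disjoint
    slot_area = (slot[2] - slot[0]) * (slot[3] - slot[1])
    return overlap_area / slot_area

def slot_state(pred, slot, threshold=0.5):
    # Occupied when the vehicle confidence reaches the confidence threshold.
    return "occupied" if overlap_ratio(pred, slot) >= threshold else "empty"
```

Note that the denominator is the parking-space area rather than the usual IoU union, which matches the ratio defined in the steps above.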
Optionally, determining the parking fee according to the current parking-space state specifically comprises:
acquiring the parking start time at which the current parking-space state changes from empty to occupied;
acquiring the parking end time at which the current parking-space state changes from occupied to empty;
calculating the vehicle's parking duration from the parking start time and the parking end time;
determining the parking fee according to the parking duration.
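A minimal sketch of the fee step, assuming an hourly tariff billed in whole hours rounded up; the rate, the rounding policy, and the function name are illustrative, since the invention does not fix a tariff:

```python
from datetime import datetime

def parking_fee(start_time, end_time, rate_per_hour=5.0):
    """Compute the fee from the recorded start and end parking times:
    the duration is billed in whole hours, rounded up, minimum one hour."""
    seconds = int((end_time - start_time).total_seconds())
    hours = max(1, -(-seconds // 3600))  # ceiling division
    return hours * rate_per_hour
```

For example, a stay from 09:00 to 10:30 spans 1.5 hours and is billed as 2 hours under this policy.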
A deep-learning-based intelligent parking management system, the system comprising:
a video acquisition module for acquiring video frames captured by parking-lot cameras;
a model loading module for loading a trained convolutional neural network model;
a vehicle-position prediction module for inputting the video frame into the trained convolutional neural network model and outputting vehicle-position prediction frame information, the vehicle-position prediction frame information comprising the center coordinates, width, and height of the vehicle-position prediction frame;
a parking-space frame acquisition module for acquiring parking-space frame information, the parking-space frame information comprising the upper-left and lower-right corner coordinates of the parking-space frame;
a current parking-space state judgment module for determining the current parking-space state according to the vehicle-position prediction frame information and the parking-space frame information;
a parking fee determination module for determining the parking fee according to the current parking-space state.
Optionally, the system further comprises a model establishment module, which specifically comprises:
a model establishment unit for establishing a convolutional neural network model based on the Darknet framework and the YOLO algorithm, the convolutional neural network model comprising a feature extractor and a detector;
a model training unit for training and tuning the convolutional neural network model on the VOC dataset to generate the trained convolutional neural network model.
Optionally, the current parking-space state judgment module specifically comprises:
a vehicle-position calculation unit for calculating the upper-left and lower-right corner coordinates of the vehicle-position prediction frame from the vehicle-position prediction frame information;
an overlapping-region position calculation unit for calculating the position of the overlapping region between the vehicle-position prediction frame and the parking-space frame from the upper-left and lower-right corner coordinates of the two frames, the position of the overlapping region comprising its own upper-left and lower-right corner coordinates;
an overlapping-region area calculation unit for calculating the area of the overlapping region from the position of the overlapping region;
a parking-space frame area calculation unit for calculating the area of the parking-space frame from the position of the parking-space frame;
a confidence calculation unit for taking the ratio of the area of the overlapping region to the area of the parking-space frame as the vehicle confidence;
a confidence judgment unit for judging whether the vehicle confidence is greater than or equal to a confidence threshold to obtain a first judgment result;
an occupied-state judgment unit for determining that the current parking-space state is occupied if the first judgment result is that the vehicle confidence is greater than or equal to the confidence threshold;
an empty-state judgment unit for determining that the current parking-space state is empty if the vehicle confidence is less than the confidence threshold.
Optionally, the parking fee determination module specifically comprises:
a parking start time recording unit for acquiring the parking start time at which the current parking-space state changes from empty to occupied;
a parking end time recording unit for acquiring the parking end time at which the current parking-space state changes from occupied to empty;
a parking duration calculation unit for calculating the vehicle's parking duration from the parking start time and the parking end time;
a parking fee calculation unit for determining the parking fee according to the parking duration.
According to the specific embodiments provided herein, the present invention discloses the following technical effects:
The present invention provides a deep-learning-based intelligent parking management method and system. To meet the needs of intelligent parking-lot management, a convolutional neural network model designed on the Darknet framework locates vehicles; by computing the degree of overlap between the predicted vehicle position and each parking space, the parking state of every space is monitored in real time, and parking fees are computed from each space's state history. By applying deep learning to parking-resource management, the invention achieves intelligent, automated monitoring of parking states and parking fees, which not only relieves urban traffic congestion and regulates the use of parking resources, but also makes their management more convenient and intelligent, freeing up manpower.
Brief Description of the Drawings
To explain the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings required by the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of the deep-learning-based intelligent parking management method provided by the present invention;
Fig. 2 is a data flow diagram of the ARM side provided by the present invention;
Fig. 3 is a data processing flowchart of the server side provided by the present invention;
Fig. 4 is a network structure diagram of the feature extractor and detector in the convolutional neural network model provided by the present invention;
Fig. 5 is a structural diagram of the deep-learning-based intelligent parking management system provided by the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The present invention involves deep learning, computer vision, the Darknet framework, real-time communication, image and video transmission, multi-threaded parallelism, ARM development, C++ graphical-user-interface application development, and related technical fields, and in particular the design and training of deep neural networks for object classification, localization, and detection, and the fusion of parking-space information with detected objects. Its purpose is to provide a deep-learning-based intelligent parking management method and system that applies deep learning to parking-resource management. The application of deep learning to computer vision has brought machine vision close to human vision, and in some respects beyond it; using it to track and locate vehicles in real time allows vehicle entries and exits to be counted continuously, solving the inefficiency and heavy labor cost of the traditional manual management of parking resources.
To make the above objects, features, and advantages of the present invention easier to understand, the present invention is described in further detail below with reference to the drawings and specific embodiments.
The key problems addressed by the method and system of the present invention include:
1) analyzing the positioning of the system and its composition;
2) how to transmit video images;
3) how to judge the parking-space state;
4) how to locate the target;
5) how to extract parking-space information;
6) computing according to the method of 3) to monitor the parking-space state in real time;
7) how to integrate the system.
Of these seven problems, 1) and 7) concern the overall design, while 2), 4), and 5) are the core. The present invention solves them as follows:
1. The method and system target the unified management of parking resources that are currently hard to manage. A single camera cannot cover a whole parking lot, so one camera is used for every 3 to 5 parking spaces. The system consists of a front-end ARM board that controls video capture and forwarding for its cameras, and a back-end server that processes the data received from the different cameras. Since multiple cameras each monitor several spaces, their processing results must be reflected on separate live views, which requires multiple threads to deliver each camera's data to its own view.
2. Given the reliability of the TCP protocol, the present invention uses TCP for video transmission in the communication part of the system. The front end connects to the back end through a 4G module, so the two can be linked over the Internet. Several cameras are connected to one ARM embedded development board, which manages one area; every frame captured by a camera is tagged with that camera's number. The ARM side runs multiple threads, adds the area and camera information, integrates everything into a data packet, and sends it to the server.
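The packing step on the ARM side can be sketched as below. The exact header layout (area id and camera id as big-endian unsigned shorts, followed by the payload length) is an assumption for illustration, not a format fixed by the invention:

```python
import struct

HEADER = ">HHI"  # area id, camera id (unsigned shorts), payload length (unsigned int)

def pack_frame(area_id, camera_id, frame_bytes):
    """Prefix one captured frame with its area and camera numbers so the
    back-end server can demultiplex the TCP stream."""
    return struct.pack(HEADER, area_id, camera_id, len(frame_bytes)) + frame_bytes

def unpack_frame(packet):
    """Recover the area id, camera id, and frame payload from one packet."""
    area_id, camera_id, length = struct.unpack_from(HEADER, packet)
    offset = struct.calcsize(HEADER)
    return area_id, camera_id, packet[offset:offset + length]
```

An explicit length field lets the receiver split packets back out of TCP's byte stream, since TCP itself preserves no message boundaries.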
3. The parking-space state is determined from the detected vehicle position in the video and the position of the parking space: the overlap between the predicted vehicle frame and the parking-space frame is computed, and the ratio of that overlap to the parking-space area decides the state of the space.
4. Vehicle detection uses current deep-learning techniques. Based on the Darknet framework, the ideas of the YOLO algorithm, and a 23-layer pre-trained model, a detector is designed on top (its design is given in Fig. 4), then trained and tuned on the VOC dataset until the optimization goal is met, yielding the trained convolutional neural network model. After a video frame passes through the model, it yields the prediction frame of each target vehicle in the picture, and each prediction frame is represented by four key parameters.
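The four key parameters of a YOLO-style prediction frame (center x, center y, width, height) convert to the corner form used in the overlap computation as follows; the function name is illustrative:

```python
def center_to_corners(cx, cy, w, h):
    """Convert a prediction frame given by its center coordinates, width,
    and height into upper-left and lower-right corner coordinates."""
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)
```

This is the "calculate the upper-left and lower-right corner coordinates of the vehicle-position prediction frame" step of the method.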
5. Using the parking-space acquisition method provided by the invention, a UI for parameter adjustment is implemented in code, so that an administrator can install and debug the system by following the manual.
6. The parking-space state is detected in real time, and the views of multiple cameras are displayed in real time by multiple threads.
7. System integration: the designs of items 1 to 6 above are integrated on one system platform. After all program modules pass unit testing, the whole system is tested and debugged, and this process is repeated until the whole system runs stably.
Based on the above inventive concept, the present invention proposes a deep-learning-based intelligent parking management method and system.
Fig. 1 is a flowchart of the deep-learning-based intelligent parking management method provided by the present invention. Referring to Fig. 1, the method specifically comprises:
Step 101: acquire the video frames captured by the parking-lot cameras.
Because the parking lot must track vehicle entries and exits in real time so that parking-space statistics stay current, the video collected by the parking-lot cameras has to reach the back-end server promptly.
Since the transmission of the captured video must be stable and reliable, the present invention adopts the TCP (Transmission Control Protocol); to meet the timing requirement, the captured video frames are transmitted over 4G, which combines 3G and WLAN and can quickly carry high-quality video, audio, and image data.
Parking lots generally use multiple cameras to cover multiple spaces; in the present invention each camera monitors 4 spaces, so each camera needs its own display view, updated in real time. Every camera must therefore be monitored to judge parking-space states and timing, and the captured video, the state, and the timing information are shown on the corresponding live view by a dedicated thread.
The video captured by the cameras is first grabbed by the ARM (Advanced RISC Machines) board, numbered, and forwarded to the back-end server over TCP. Because memory on the ARM side is limited, the program must be memory-optimized during development to keep the ARM side from overflowing and crashing while processing grabbed images. By numbering and monitoring the cameras, the frames grabbed by each camera are tagged and forwarded to the back-end server.
Fig. 2 is the data flow diagram of the ARM side provided by the present invention. Since one camera monitors only a limited area, a parking lot needs several cameras for its different areas. The cameras transmit the captured frames to the ARM side, which processes them. Before receiving data, the ARM side checks whether the space it has reserved for the task is sufficient; it accepts the frame only if so, and otherwise discards it. After encoding, the frame is forwarded to the back-end server.
The final monitoring display for the parking spaces is handled by a management interface developed with Qt. Qt is a cross-platform graphical-user-interface application development framework; in an object-oriented style it uses special code-generation extensions and macros, supports component programming, and greatly improves development efficiency.
The server side receives the video data forwarded from the ARM side and feeds each frame to the network for prediction. The back-end server runs the returned video through the trained convolutional neural network model, which detects vehicles in real time; the server then computes the ratio of the overlap between the parking space and the vehicle position to the parking-space area, and compares this ratio against the configured threshold to decide whether the space is empty, accumulating the occupied time until the space is empty again. When the space changes from empty to occupied, timing starts; when it changes from occupied back to empty, timing stops and the parking fee due is computed.
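The empty/occupied timing described above can be sketched as a small per-space state machine; the class and method names are assumptions, and the time value is passed in rather than read from a clock so the example stays self-contained:

```python
class SpaceTimer:
    """Track one parking space's empty/occupied transitions and report the
    billable duration when the space becomes empty again."""

    def __init__(self):
        self.occupied_since = None   # None while the space is empty

    def update(self, occupied, now):
        """Feed the latest detected state; returns the completed parking
        duration on an occupied -> empty transition, else None."""
        if occupied and self.occupied_since is None:
            self.occupied_since = now             # empty -> occupied: start timing
            return None
        if not occupied and self.occupied_since is not None:
            duration = now - self.occupied_since  # occupied -> empty: stop timing
            self.occupied_since = None
            return duration
        return None
```

Repeated detections of the same state leave the timer untouched, so a vehicle that stays parked across many frames is billed for one continuous interval.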
Whether each space is empty must be displayed in real time, as must the parking time. So besides detecting each space, the detection state must be shown live: the spaces covered by all cameras have to be analyzed and the results reported. A thread is created for each camera; the video from all cameras is collected centrally, each camera and its video are numbered, and the results are output in order to the management view of the corresponding camera in real time.
However, although every data frame is numbered on the ARM side, so that the back-end server knows which camera a frame came from, the network model consumes only the raw video frame and not the camera tag, so when the model finishes a frame it does not know which camera the image belongs to. Therefore the frames captured by the cameras are first placed in a buffer queue, and a flag[n] array is kept to record which camera's frame is currently being processed. All elements of flag are initialized to 0; when the network processes the j-th camera's data, flag[j] is set to 1, and once the prediction is obtained, the flag identifies which camera the frame belongs to. Since the flag can represent only one camera at a time, the flag must be locked, and the model's prediction pass must not be disturbed either, so it is locked as well. After the network finishes the current camera's frame, it unlocks itself and then unlocks the flag, so that the next queued frame can be detected and its result shown on the corresponding camera's view.
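The flag-and-lock routing just described can be sketched as follows; the class name is an assumption and the detection model is a stand-in callable, so this is an illustrative sketch of the locking order rather than the actual implementation:

```python
import threading

class ModelDispatcher:
    """Serialize frames from several cameras through one detection model,
    using a flag array and two locks to route each result back to the
    camera it came from."""

    def __init__(self, n_cameras, model):
        self.flag = [0] * n_cameras
        self.flag_lock = threading.Lock()    # protects the flag array
        self.model_lock = threading.Lock()   # the model handles one frame at a time
        self.model = model

    def predict(self, camera_id, frame):
        with self.flag_lock:
            self.flag[camera_id] = 1          # record which camera owns this pass
            with self.model_lock:
                result = self.model(frame)
            source = self.flag.index(1)       # recover the owning camera
            self.flag[source] = 0             # release the flag for the next frame
        return source, result
```

Because flag can mark only one camera at a time, the flag lock is taken first and released last, exactly the lock/unlock order given in the text.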
图3为本发明提供的服务器端的数据处理流程图。服务器端接收ARM端传来的数据之前,首先对自身为接收视频帧所划的空间进行检查,如可以存放当前的视频帧,则存入,否则丢弃当前的帧。为了减少对机器设备的性能依赖,使单帧进入单个网络进行计算,计算结果即为车辆位置预测框,本发明后台服务器对车的位置框和车位框进行处理并更新车位状态。FIG. 3 is a flow chart of data processing on the server side provided by the present invention. Before the server side receives the data from the ARM side, it first checks the space it has reserved for receiving video frames. If the current video frame can be stored, it will be stored, otherwise the current frame will be discarded. In order to reduce the performance dependence on machine equipment, a single frame enters a single network for calculation, and the calculation result is the vehicle position prediction frame. The background server of the present invention processes the vehicle position frame and the parking space frame and updates the parking space status.
由于采用多个摄像头监测车位,每个摄像头都会将自己的车位状态独立地输出到对应的实时画面上,而且,对每个摄像头所对应的车位都会有一个车位的位置的初始化。所以,得到车的预测位置后,找到第j个摄像头所对应的初始化信息,进行后续处理。为了使每个摄像头的画面能够实时独立地显示在各自的画面中,本发明采用多个线程对不同的摄像头的信息进行处理,从而达到对多个摄像头采集画面稳定可靠的实时监控。Since multiple cameras are used to monitor the parking spaces, each camera independently outputs its own parking-space status onto its corresponding real-time view, and the positions of the parking spaces covered by each camera are initialized beforehand. Therefore, after the predicted position of a car is obtained, the initialization information of the j-th camera is looked up for subsequent processing. To allow each camera's feed to be displayed independently in its own view in real time, the present invention uses multiple threads to process the information of the different cameras, achieving stable and reliable real-time monitoring of the frames captured by multiple cameras.
步骤102:加载训练好的卷积神经网络模型。Step 102: Load the trained convolutional neural network model.
本发明车辆的识别和检测使用的卷积神经网络模型的设计是基于Darknet框架实现的。Darknet是一个轻型的深度学习框架,采用C语言实现,支持CPU(Central Processing Unit,中央处理器)和GPU(Graphics Processing Unit,图形处理器),其没有强大的API(Application Programming Interface,应用程序编程接口),但是正是由于其轻巧的实现,使其更容易进行使用。除了Darknet的环境搭建不是那么容易之外,用其进行网络设计和训练极其方便。由于opencv(Open Source Computer Vision Library,开源计算机视觉库)从3.3.1开始就已经正式支持Darknet框架,所以经过GPU训练后得到的模型可以在CPU上进行使用,降低了对使用机器的要求。The design of the convolutional neural network model used for vehicle recognition and detection in the present invention is implemented on the Darknet framework. Darknet is a lightweight deep learning framework implemented in C that supports both the CPU (Central Processing Unit) and the GPU (Graphics Processing Unit). It does not offer a powerful API (Application Programming Interface), but precisely because of its lightweight implementation it is easy to use. Apart from the fact that the Darknet environment is not trivial to set up, designing and training networks with it is extremely convenient. Since OpenCV (Open Source Computer Vision Library) has officially supported the Darknet framework from version 3.3.1 onward, a model trained on the GPU can then be used on the CPU, lowering the hardware requirements of the deployment machine.
本发明建立的卷积神经网络模型包括特征提取器和检测器,基于Darknet框架和YOLO(You Only Look Once)等算法中的目标检测思想,在Darknet19特征提取器的基础上,进行检测器的设计。检测器主要用于检测视频中车辆的位置,采用自顶向下的方式,在不同尺度的特征上对目标对象进行识别、定位、检测,从而使得对目标的识别更加精确。The convolutional neural network model established by the present invention includes a feature extractor and a detector. Based on the Darknet framework and the object-detection ideas of algorithms such as YOLO (You Only Look Once), the detector is designed on top of the Darknet19 feature extractor. The detector is mainly used to detect the position of vehicles in the video; it adopts a top-down approach and identifies, locates and detects the target object on features of different scales, making the recognition of the target more accurate.
本发明卷积神经网络模型中特征提取器和检测器的详细网络结构设计如图4和下表1所示:The detailed network structure design of the feature extractor and the detector in the convolutional neural network model of the present invention is shown in Figure 4 and Table 1 below:
表1卷积神经网络模型网络结构Table 1 Convolutional Neural Network Model Network Structure
表1中Type表示层类型,Filters表示卷积核个数(即输出的通道数),Size/Stride表示Filter的尺寸/步幅,Output表示输出,表1和图4中的Convolutional或conv表示卷积,Maxpool或maxpool表示最大池化,YOLO或yolo出自文章《You Only Look Once:Unified,Real-Time Object Detection》提出的方法,没有合适的中文表述,route表示路由,upsample表示向上采样,Concate表示级联。In Table 1, Type denotes the layer type, Filters denotes the number of convolution kernels (i.e., the number of output channels), Size/Stride denotes the filter size/stride, and Output denotes the output. In Table 1 and Figure 4, Convolutional or conv denotes convolution, Maxpool or maxpool denotes max pooling, YOLO or yolo refers to the method proposed in the paper "You Only Look Once: Unified, Real-Time Object Detection" (for which there is no established Chinese term), route denotes routing, upsample denotes upsampling, and Concate denotes concatenation.
参见表1和图4,本发明建立的卷积神经网络模型网络设计主要分为两个部分,特征提取部分和检测部分。特征提取部分采用的是Darknet19特征提取模型,该模型负责检测视频帧中的特征,为后续的检测部分提供检测基础;检测部分主要负责对特征提取部分提供的特征图进行检测,最终得到的是所检测到的对象的预测框,用于表示对象的位置。Referring to Table 1 and Figure 4, the network design of the convolutional neural network model established by the present invention is divided into two main parts: a feature extraction part and a detection part. The feature extraction part adopts the Darknet19 feature extraction model, which is responsible for detecting features in the video frame and provides the basis for the subsequent detection part; the detection part is mainly responsible for detection on the feature maps provided by the feature extraction part, and its final output is the prediction box of each detected object, which represents the object's position.
具体的,所述卷积神经网络模型中的特征提取器采用Darknet19,Darknet19共有23层,其中18个卷积层,5个最大池化层。检测器设计成在两个尺度上进行预测,共有12层,其中两个Yolo层作为输出层,分别在13x13和26x26的尺度基础上进行检测。在13x13的尺度上有三个卷积层,一个yolo层;由yolo层输出13x13尺度上的预测结果。之后将第23层route到第28层进行一次卷积,在卷积后的特征图上进行向上采样,然后再与第16层的26x26的特征图进行级联,即此过程中一个route层,一个卷积层,一个向上采样层,最后再是一个route层进行级联;级联后进行三次卷积,由yolo层输出26x26尺度上的预测结果。所述卷积神经网络模型输入的是视频帧,输出的是13x13和26x26两个尺度上的预测结果,该预测结果为一个三维的张量(tensor),该三维张量表示视频中的对象的边界预测框(即车辆位置预测框)、该对象的置信度及对象类型,每一个边界预测框由一个向量组成,该向量表示的是检测的车辆位置预测框的中心坐标及宽、高。Specifically, the feature extractor in the convolutional neural network model adopts Darknet19, which has 23 layers in total: 18 convolutional layers and 5 max-pooling layers. The detector is designed to make predictions at two scales and has 12 layers in total, of which two yolo layers serve as output layers, detecting on the 13x13 and 26x26 scales respectively. On the 13x13 scale there are three convolutional layers and one yolo layer; the yolo layer outputs the prediction results on the 13x13 scale. The 23rd layer is then routed to the 28th layer for one convolution, the convolved feature map is upsampled, and the result is concatenated with the 26x26 feature map of the 16th layer; that is, this stage consists of a route layer, a convolutional layer, an upsampling layer, and finally another route layer performing the concatenation. After concatenation, three convolutions are performed, and a yolo layer outputs the prediction results on the 26x26 scale. The input of the convolutional neural network model is a video frame, and the output is the prediction results on the two scales 13x13 and 26x26. Each prediction result is a three-dimensional tensor that represents the boundary prediction boxes of the objects in the video (i.e., the vehicle position prediction boxes), the confidence of each object, and the object type; each boundary prediction box is described by a vector giving the center coordinates, width and height of the detected vehicle position prediction box.
除了route层之外,其他的相邻层都是前面层的输出作为后面层的输入,后面的层处理的是相邻前面层输出的结果,每层的输出结果都为特征图。Except for the route layers, each layer takes the output of the immediately preceding layer as its input and processes that layer's result; the output of every layer is a feature map.
在所述卷积神经网络模型的网络设计完成之后,逐层编写网络,之后在VOC(Visual Object Class,视觉对象类)数据集上进行训练、调参,直到满足收敛目标,得到训练好的卷积神经网络模型。所述卷积神经网络模型训练过程中采用的损失函数如下:After the network design of the convolutional neural network model is completed, the network is written layer by layer and then trained and tuned on the VOC (Visual Object Classes) data set until the convergence target is met, yielding the trained convolutional neural network model. The loss function used in training the convolutional neural network model is as follows:
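The formula referenced here survives in the original only as an image. As a reference point, the loss published in the YOLO paper cited above, which uses exactly the symbols defined in the next paragraph (S, B, λcoord, λnoobj, primed ground-truth values), reads as follows; this is the published YOLO formulation and may differ in detail from the patent's exact variant:

```latex
\begin{aligned}
L ={}& \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}
      \Big[(x_i-x_i')^2+(y_i-y_i')^2\Big] \\
   &+ \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}
      \Big[\big(\sqrt{w_i}-\sqrt{w_i'}\big)^2+\big(\sqrt{h_i}-\sqrt{h_i'}\big)^2\Big] \\
   &+ \sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\,(c_i-c_i')^2
    + \lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{noobj}\,(c_i-c_i')^2 \\
   &+ \sum_{i=0}^{S^2}\mathbb{1}_{i}^{obj}\sum_{c\in\mathrm{classes}}\big(p_i(c)-p_i'(c)\big)^2
\end{aligned}
```

Here \(\mathbb{1}_{ij}^{obj}\) is 1 when the j-th prediction box in grid cell i is responsible for an object (\(\mathbb{1}_{ij}^{noobj}\) is its complement), matching the indicator described in the following paragraph.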
其中x、y表示预测到的车辆位置框(即车辆位置预测框)的中心坐标,w、h分别表示预测到的车辆位置框的宽与高,λ为惩罚系数,所述损失函数中分别对有对象(obj)时和无对象(noobj)时进行了惩罚,S表示送入网络的图像的划分参数,c为检测到对象的置信度,p(c)为检测到的对象属于c类的置信度。公式中λcoord和λnoobj是指惩罚系数,前者指的是车辆位置预测框中有对象时的惩罚系数,后者表示车辆位置预测框中无对象时的惩罚系数。B表示边界预测框数目。指示项表示第j个边界预测框在网格i中。参数x、y、w、h、c、p中,不带上标参数的表示该参数的预测结果(例如参数xi、yi表示车辆位置预测框中心坐标的预测值),带上标的表示的是该参数在ground truth(正确标记的数据)中对应的项(例如参数xi'、yi'表示车辆位置预测框中心坐标在ground truth中对应的项)。where x and y denote the center coordinates of the predicted vehicle position box (i.e., the vehicle position prediction box), w and h denote its width and height, and λ is a penalty coefficient; the loss function penalizes both the case where an object is present (obj) and the case where no object is present (noobj). S denotes the grid-division parameter of the image fed into the network, c is the confidence that an object is detected, and p(c) is the confidence that the detected object belongs to class c. In the formula, λcoord and λnoobj are penalty coefficients: the former applies when there is an object in the vehicle position prediction box, the latter when there is none. B denotes the number of boundary prediction boxes. The indicator term denotes that the j-th boundary prediction box lies in grid cell i. Among the parameters x, y, w, h, c and p, an unprimed parameter denotes the predicted value (for example, xi and yi denote the predicted center coordinates of the vehicle position prediction box), while a primed parameter denotes the corresponding item in the ground truth (correctly labeled data; for example, xi' and yi' denote the ground-truth values of the box's center coordinates).
训练好的卷积神经网络模型用于检测车辆位置框(即车辆边界预测框,本发明中也称为车辆位置预测框)的中心坐标、宽、高、车辆的置信度(即是车概率,用于与算法中的置信度阈值比较)以及车辆类型(由于采用one_hot(独热码)方式表示,如果检测对象为车的话,表示车辆的那一项为1)。The trained convolutional neural network model is used to detect the center coordinates, width and height of the vehicle position box (i.e., the vehicle boundary prediction box, also called the vehicle position prediction box in the present invention), the confidence of the vehicle (i.e., the probability that it is a car, compared against the confidence threshold in the algorithm), and the vehicle type (represented in one_hot (one-hot) encoding: if the detected object is a car, the element representing the vehicle is 1).
生成训练好的卷积神经网络模型后,在进行停车场车辆检测时可以直接进行加载、调用。After the trained convolutional neural network model is generated, it can be directly loaded and called when detecting vehicles in the parking lot.
步骤103:将所述视频帧输入所述训练好的卷积神经网络模型,输出车辆位置预测框信息。所述车辆位置预测框信息包括车辆位置预测框的中心坐标以及宽、高。Step 103: Input the video frame into the trained convolutional neural network model, and output the vehicle position prediction frame information. The vehicle position prediction frame information includes the center coordinates, width and height of the vehicle position prediction frame.
所述卷积神经网络模型输入的是视频帧,输出的是13x13和26x26两个尺度上的预测结果,该预测结果为一个三维的张量(tensor),该三维张量表示视频中的对象的边界预测框(即车辆位置预测框)、该对象的置信度及对象类型,每一个边界预测框由一个向量组成,该向量表示的是检测的车辆位置预测框的中心坐标及宽、高。将摄像头采集到的当前视频帧输入所述训练好的卷积神经网络模型中,可以直接输出车辆位置预测框的中心坐标以及宽、高。The input of the convolutional neural network model is a video frame, and the output is a prediction result on two scales of 13x13 and 26x26. The prediction result is a three-dimensional tensor (tensor), and the three-dimensional tensor represents the object in the video. Boundary prediction box (that is, the vehicle position prediction box), the confidence level of the object and the object type, each boundary prediction box is composed of a vector, and the vector represents the center coordinates, width and height of the detected vehicle position prediction box. The current video frame collected by the camera is input into the trained convolutional neural network model, and the center coordinates, width and height of the vehicle position prediction frame can be directly output.
步骤104:获取车位框信息。所述车位框信息包括车位框的左上角坐标和右下角坐标。Step 104: Acquire parking space frame information. The parking space frame information includes the coordinates of the upper left corner and the lower right corner of the parking space frame.
在本发明方法及系统运行之初,需要对车位进行提取或标记,以获得车位信息。由于车位存在于视频帧背景之中,而且特征不明显,很难通过神经网络进行提取,因此本发明采用参数化调整的方式来进行车位信息获取,详细获取方式如下:At the beginning of the operation of the method and system of the present invention, the parking space needs to be extracted or marked to obtain the parking space information. Since the parking space exists in the background of the video frame, and the features are not obvious, it is difficult to extract through the neural network, so the present invention adopts the method of parameterized adjustment to obtain the parking space information. The detailed acquisition method is as follows:
获取车位的宽高比信息,设车位的宽为w,车位的高为h,则车位宽高比ε=w/h;设车位偏斜角度为α;设车位间的距离为δ。由于每个摄像头管理三到五个车位,所以还需设立一个参数β表示同一摄像头监测的车位的数目。所有的预置车位框看作一个整体,还需要一个参数设置这些预置车位框的一个初始位置,本发明只需要设定这个整体的一个角的位置即可,本发明实施例中设置的是车位框左上角的位置坐标,设该坐标点为p。Obtain the aspect-ratio information of the parking space: let the width of the space be w and its height be h, so that the aspect ratio is ε = w/h; let the skew angle of the space be α; and let the distance between spaces be δ. Since each camera manages three to five parking spaces, a further parameter β is introduced to denote the number of spaces monitored by the same camera. All the preset parking frames are treated as one whole, and one more parameter sets an initial position for them; the present invention only needs to set the position of one corner of this whole. In this embodiment it is the position coordinate of the upper-left corner of the parking frames, denoted by the point p.
首先初始化车位框的宽、高分别为w0和h0,由于不同的停车场可能车位大小不同,所以宽高比ε通常由停车场管理员给出,实际上,通过ε的设置,只需要知道w0和h0的任意一个即可。假设只设置h0,则w0=ε*h0,车位的偏斜角度设置为α0(一般的车位角度设置为直角,如果车位角度偏斜较大,则可以通过该参数适当进行调整),车位框的间隔δ的初始化值为δ0,车位的数目初始化为β0,车位框左上角的位置坐标为p0(x0,y0)。由此,初始化阶段在摄像头的画面中首先固定β0个宽高比为ε、长为h0(w0可以根据ε进行推断获得)、车位框间隔为δ0的初始化预置车位框,当摄像头安装好后,通过调整p0(x0,y0),将所有的车位框放到摄像头画面中车位所在的那一排,然后通过调整ε、α、δ、h、β调整车位框的位置,使摄像头画面中的预置车位框调整到与实际车位的停车线大概一致,调整幅度为预置车位框能够表示实际车位即可,从而使采集的视频帧检测起来更快速。当所有的车位框都调整到适当的位置之后,就可以获取不同车位的位置和相应的车位框信息。所述车位框信息包括车位框的左上角坐标和右下角坐标。First, the width and height of a parking frame are initialized as w0 and h0. Since parking-space sizes may differ between parking lots, the aspect ratio ε is usually supplied by the parking-lot administrator; in fact, once ε is set, only one of w0 and h0 needs to be known. Assuming that only h0 is set, then w0 = ε*h0. The skew angle of the space is set to α0 (parking spaces are generally laid out at right angles; if a space is noticeably skewed, this parameter can be adjusted accordingly), the frame spacing δ is initialized to δ0, the number of spaces is initialized to β0, and the position coordinate of the upper-left corner of the frames is p0(x0, y0). Thus, in the initialization stage, β0 preset parking frames with aspect ratio ε, height h0 (w0 can be inferred from ε) and spacing δ0 are first fixed in the camera view. After the camera is installed, p0(x0, y0) is adjusted to move all the frames onto the row of parking spaces in the camera view, and then ε, α, δ, h and β are adjusted so that the preset frames roughly coincide with the painted parking lines of the actual spaces; the adjustment only needs to be precise enough for a preset frame to represent its actual space, which also makes detection on the captured video frames faster.
After all the parking space frames are adjusted to appropriate positions, the positions of different parking spaces and the corresponding parking space frame information can be obtained. The parking space frame information includes the coordinates of the upper left corner and the lower right corner of the parking space frame.
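As a sketch of the parameterization above: given p0, h0, ε, δ and β, the β preset frames can be generated programmatically. The function name is invented here, and for simplicity the skew angle α is taken as a right angle (no rotation is applied):

```python
def make_slot_frames(p0, h0, epsilon, delta, beta):
    """Generate `beta` axis-aligned preset parking frames laid out in a row.

    p0      -- (x0, y0) upper-left corner of the whole row of frames
    h0      -- initial frame height; the width follows from the aspect ratio
    epsilon -- aspect ratio w/h, supplied by the parking-lot administrator
    delta   -- horizontal gap between adjacent frames
    beta    -- number of parking spaces monitored by this camera
    """
    w0 = epsilon * h0                      # w0 is inferred from epsilon, as in the text
    x0, y0 = p0
    frames = []
    for k in range(beta):
        x_topleft = x0 + k * (w0 + delta)  # slide right by one frame width plus the gap
        frames.append(((x_topleft, y0), (x_topleft + w0, y0 + h0)))
    return frames
```

Adjusting p0, ε, δ, h or β and regenerating the frames corresponds to the interactive calibration step described above; each returned pair is exactly the (upper-left, lower-right) corner representation used later.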
步骤105:根据所述车辆位置预测框信息和所述车位框位置信息确定当前车位状态。Step 105: Determine the current parking space state according to the vehicle position prediction frame information and the parking space frame position information.
本发明中,车辆的位置采用笛卡尔坐标表示,通过所述训练好的卷积神经网络模型预测得到的车辆位置预测框的左上角坐标点和右下角坐标点进行表示。车位的表示也是建立在笛卡尔坐标系中的,对其四个顶角的坐标进行提取并记录,系统启动之时便对每个摄像头中所对应的车位进行获取,并将对应的摄像头的相应的车位信息保存起来,同时记录在内存中,以便于对所传回的画面进行实时分析时,能够及时获取车位的信息。所述笛卡尔坐标系的x轴为视频帧的宽度(w)方向,y轴为视频帧的高度(即h)方向,需要注意的是,本发明中建立的笛卡尔坐标系中y轴正方向向下,而非常规的向上。In the present invention, the position of a vehicle is expressed in Cartesian coordinates, represented by the upper-left and lower-right corner points of the vehicle position prediction box predicted by the trained convolutional neural network model. Parking spaces are also represented in the Cartesian coordinate system: the coordinates of their four corners are extracted and recorded. When the system starts, the parking spaces corresponding to each camera are acquired, and each camera's parking-space information is saved and kept in memory, so that it can be retrieved promptly during real-time analysis of the returned frames. The x-axis of the Cartesian coordinate system runs along the width (w) of the video frame and the y-axis along its height (h); note that in the Cartesian coordinate system established in the present invention the positive y-axis points downward, not upward as is conventional.
本发明中,用(x,y,w,h)来表示所述训练好的卷积神经网络模型的预测结果predict_car,x、y表示车辆位置预测框的中心坐标,w、h分别表示车辆位置预测框的宽和高,(obj_x_topleft,obj_y_topleft)、(obj_x_bottomright,obj_y_bottomright)分别表示车辆位置预测框的左上角坐标和右下角坐标,(x_topleft,y_topleft)、(x_bottomright,y_bottomright)分别表示车位框的左上角坐标和右下角坐标,(x_toplo,y_toplo)、(x_bottomro,y_bottomro)分别表示所述车辆位置预测框与所述车位框的重叠区域的左上角坐标和右下角坐标,重叠区域占车位框面积的比值为η,判断车位是否为空的过程如下:In the present invention, (x, y, w, h) denotes the prediction result predict_car of the trained convolutional neural network model, where x and y are the center coordinates of the vehicle position prediction box (consistent with step 103) and w and h are its width and height. (obj_x_topleft, obj_y_topleft) and (obj_x_bottomright, obj_y_bottomright) denote the upper-left and lower-right corner coordinates of the vehicle position prediction box; (x_topleft, y_topleft) and (x_bottomright, y_bottomright) denote the upper-left and lower-right corner coordinates of the parking frame; (x_toplo, y_toplo) and (x_bottomro, y_bottomro) denote the upper-left and lower-right corner coordinates of the region where the vehicle position prediction box and the parking frame overlap, and η is the ratio of that overlap to the parking-frame area. The process for judging whether a parking space is empty is as follows:
第一步:根据所述车辆位置预测框信息计算车辆位置预测框的左上角坐标和右下角坐标。Step 1: Calculate the coordinates of the upper left corner and the lower right corner of the vehicle position prediction frame according to the vehicle position prediction frame information.
车辆位置预测框左上角和右下角坐标是需要根据模型预测结果中的x、y、w、h去计算的,实际计算方法为:obj_x_topleft=x–w/2;obj_y_topleft=y–h/2;obj_x_bottomright=x+w/2;obj_y_bottomright=y+h/2。The upper-left and lower-right corner coordinates of the vehicle position prediction box are calculated from x, y, w and h in the model's prediction result as follows: obj_x_topleft = x – w/2; obj_y_topleft = y – h/2; obj_x_bottomright = x + w/2; obj_y_bottomright = y + h/2 (equivalently, obj_x_topleft + w and obj_y_topleft + h).
第二步:根据所述车辆位置预测框的左上角坐标和右下角坐标以及所述车位框的左上角坐标和右下角坐标计算所述车辆位置预测框与所述车位框的重叠区域位置;所述重叠区域位置包括所述车辆位置预测框与所述车位框的重叠区域的左上角坐标和右下角坐标。Step 2: Calculate the overlapping area position of the vehicle position prediction frame and the parking space frame according to the upper left corner coordinates and the lower right corner coordinates of the vehicle position prediction frame and the upper left corner coordinates and the lower right corner coordinates of the parking space frame; The overlapping area position includes coordinates of the upper left corner and the lower right corner of the overlapping area of the vehicle position prediction frame and the parking space frame.
取预测框和车位框的左上角的x方向的坐标最大值为x_toplo,取预测框和车位框的左上角的y方向的最大值为y_toplo,从而获得重叠区域的左上角坐标(x_toplo,y_toplo);再取预测框和车位框的右下角的x方向的坐标最小值为x_bottomro,取预测框和车位框的右下角的y方向的最小值为y_bottomro,从而获得重叠区域的右下角坐标(x_bottomro,y_bottomro)。Take the maximum value of the coordinates in the x direction of the upper left corner of the prediction frame and the parking space frame as x_toplo, and take the maximum value of the y direction in the upper left corner of the prediction frame and the parking space frame as y_toplo, so as to obtain the upper left corner of the overlapping area coordinates (x_toplo, y_toplo) ; Then take the minimum value of the coordinates in the x direction of the lower right corner of the prediction frame and the parking space frame as x_bottomro, and take the minimum value of the y direction of the lower right corner of the prediction frame and the parking space frame as y_bottomro, so as to obtain the coordinates of the lower right corner of the overlapping area (x_bottomro, y_bottomro).
第三步:根据所述重叠区域位置计算重叠区域面积。Step 3: Calculate the area of the overlapping area according to the position of the overlapping area.
重叠区域面积overlap_area的计算公式为overlap_area=abs((x_toplo–x_bottomro)*(y_toplo–y_bottomro))。公式中的abs表示取绝对值。The calculation formula of the overlap area overlap_area is overlap_area=abs((x_toplo-x_bottomro)*(y_toplo-y_bottomro)). The abs in the formula means to take the absolute value.
第四步:根据所述车位框位置计算车位框面积。Step 4: Calculate the area of the parking space frame according to the position of the parking space frame.
所述车位框面积parking_area的计算公式为parking_area=abs((x_topleft–x_bottomright)*(y_topleft–y_bottomright))。The calculation formula of the parking space frame area parking_area is parking_area=abs((x_topleft−x_bottomright)*(y_topleft−y_bottomright)).
第五步:计算所述重叠区域面积与所述车位框面积的比值η作为车辆的置信度。Step 5: Calculate the ratio η of the area of the overlapping area to the area of the parking space frame as the confidence level of the vehicle.
置信度η的计算公式为η=overlap_area/parking_area。The calculation formula of confidence η is η=overlap_area/parking_area.
第六步:根据阈值和比值η判断车位是否为空。Step 6: Determine whether the parking space is empty according to the threshold and the ratio η.
判断所述车辆的置信度是否大于等于置信度阈值,若是,确定当前车位状态为被占状态;若否,确定当前车位状态为空状态。It is judged whether the confidence of the vehicle is greater than or equal to the confidence threshold, and if so, it is determined that the current parking space state is an occupied state; if not, it is determined that the current parking space state is an empty state.
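The six steps above can be combined into one short routine. This is an illustrative sketch rather than the patent's code: the function name and the 0.5 default threshold are assumptions, and a guard for completely disjoint boxes is added, since the abs() in the area formula would otherwise report a spurious positive overlap when the rectangles do not intersect.

```python
def slot_occupied(pred, slot, threshold=0.5):
    """Decide whether a parking slot is occupied.

    pred -- (x, y, w, h): center coordinates, width and height of the
            vehicle position prediction box output by the CNN
    slot -- ((x_topleft, y_topleft), (x_bottomright, y_bottomright))
    threshold -- confidence threshold (0.5 is an assumed value)
    """
    x, y, w, h = pred
    # Step 1: corners of the prediction box from its center, width and height
    obj_tl = (x - w / 2, y - h / 2)
    obj_br = (x + w / 2, y + h / 2)
    (sx1, sy1), (sx2, sy2) = slot
    # Step 2: corners of the overlap rectangle (max of top-lefts, min of bottom-rights)
    x_toplo = max(obj_tl[0], sx1)
    y_toplo = max(obj_tl[1], sy1)
    x_bottomro = min(obj_br[0], sx2)
    y_bottomro = min(obj_br[1], sy2)
    if x_bottomro <= x_toplo or y_bottomro <= y_toplo:
        return False                       # the boxes do not overlap at all
    # Steps 3-5: overlap area, parking-frame area, and their ratio eta
    overlap_area = abs((x_toplo - x_bottomro) * (y_toplo - y_bottomro))
    parking_area = abs((sx1 - sx2) * (sy1 - sy2))
    eta = overlap_area / parking_area
    # Step 6: the slot counts as occupied when eta reaches the threshold
    return eta >= threshold
```

A car box that fully covers the slot yields η = 1, a car parked half over the line yields η = 0.5, and a car elsewhere in the frame yields no overlap at all.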
初始时,对车位状态设立两个相同的二维数组进行表示,该二维数组实际上是用于表示车位状态的一个矩阵,该矩阵分别命名为old和new,矩阵中不同的行表示不同的摄像头,不同的列表示该摄像头所对应的不同车位,old数组用于记录前一状态的车位状态,new用于记录当前状态的车位状态。所述车位状态包括两个状态,即空和不空(或者说被占)状态,分别用0和1表示。数组new中数据的变化根据模型检测结果和车位信息来进行判断。Initially, two identical two-dimensional arrays are set up to represent the parking-space status; each array is in effect a matrix of parking-space states, and they are named old and new respectively. Different rows of the matrix correspond to different cameras, and different columns to the different parking spaces covered by that camera; the old array records the states of the previous moment, while new records the current states. A parking space has two states, empty and not empty (i.e., occupied), denoted by 0 and 1 respectively. Changes of the data in the array new are determined from the model's detection results and the parking-space information.
即,所述步骤105的程序流程如下表2所示:That is, the program flow of step 105 described above is shown in Table 2 below:
表2判断车位是否为空的流程表Table 2 The flow chart of judging whether the parking space is empty
步骤106:根据所述当前车位状态确定停车费用。Step 106: Determine the parking fee according to the current parking space state.
根据old和new矩阵上相同位置的数的状态来判断车位状态的变化,即通过相同位置上由old到new的不同数据变化来表征车位状态的变化。例如:由0变为1时,表示车位由空状态变为了被占状态,此时记录车辆的开始停车时间;由1变为0,表示车位由被占变为了空状态,进而记录结束停车时间;从而根据开始停车时间及所述结束停车时间计算收费金额(由1变为0时)。The change of a parking-space state is judged from the values at the same position in the old and new matrices, i.e., a state change is characterized by the value at a given position differing between old and new. For example, a change from 0 to 1 means the space has gone from empty to occupied, at which point the vehicle's parking start time is recorded; a change from 1 to 0 means the space has gone from occupied to empty, at which point the parking end time is recorded, and the charge is then calculated from the start and end times (at the moment of the 1-to-0 change).
即,所述步骤106根据所述当前车位状态确定停车费用,具体包括:That is, step 106 of determining the parking fee according to the current parking-space state specifically includes:
获取所述当前车位状态由空状态变为被占状态的开始停车时间;Obtain the starting parking time when the current parking space state changes from an empty state to an occupied state;
获取所述当前车位状态由被占状态变为空状态的结束停车时间;Obtain the ending parking time when the current parking space state changes from the occupied state to the empty state;
根据所述开始停车时间及所述结束停车时间计算车辆的停车时间;Calculate the parking time of the vehicle according to the start parking time and the end parking time;
根据所述停车时间确定停车费用。The parking fee is determined according to the parking time.
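The four steps above, driven by the old/new state transition described earlier, can be sketched as follows. The tariff constant, the helper names and the explicit time arguments are assumptions made for illustration; times are passed in so the sketch stays deterministic.

```python
RATE_PER_HOUR = 5.0   # assumed tariff; the actual rate is not specified in the text

start_times = {}      # (camera, slot) -> timestamp at which the slot became occupied

def update_slot(cam, slot, old_state, new_state, now):
    """React to a state transition between the old and new status arrays.

    old_state/new_state -- 0 (empty) or 1 (occupied), as in the old/new matrices
    now                 -- current time in seconds
    Returns the fee owed when a car leaves (1 -> 0), otherwise None.
    """
    key = (cam, slot)
    if old_state == 0 and new_state == 1:      # empty -> occupied: record the start time
        start_times[key] = now
        return None
    if old_state == 1 and new_state == 0:      # occupied -> empty: record the end,
        started = start_times.pop(key, now)    # compute the stay and bill it
        hours = (now - started) / 3600.0
        return round(hours * RATE_PER_HOUR, 2)
    return None                                # no transition: nothing to do
```

A scan over all (camera, slot) positions comparing old against new, calling this function for each, then copying new into old, would realize the bookkeeping described in the text.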
本发明方法针对智能停车场管理的需求,基于Darknet框架设计卷积神经网络模型对车辆进行定位,通过对预测的车辆位置和车位进行重叠程度计算,实现对车位上的停车状态的实时监测,并根据车位状态进行停车费用统计。本发明通过将深度学习方法应用于停车资源管理中,实现了停车状态及停车费用的智能、自动化监测,不仅缓解了城市交通拥挤,有效规范了停车资源的使用,同时也使得对停车资源的管理更加便捷和智能化,解放了人力。Aiming at the requirements of intelligent parking lot management, the method of the invention designs a convolutional neural network model to locate the vehicle based on the Darknet framework, and realizes the real-time monitoring of the parking status on the parking space by calculating the overlap degree of the predicted vehicle position and the parking space. Statistics of parking fees are carried out according to the status of the parking spaces. By applying the deep learning method to the management of parking resources, the present invention realizes intelligent and automatic monitoring of parking status and parking fees, which not only alleviates urban traffic congestion, effectively regulates the use of parking resources, but also enables the management of parking resources. More convenient and intelligent, liberating manpower.
基于本发明提供的智能停车管理方法,本发明还提供一种基于深度学习的智能停车管理系统。图5为本发明提供的基于深度学习的智能停车管理系统的结构图,参见图5,所述系统包括:Based on the intelligent parking management method provided by the present invention, the present invention also provides an intelligent parking management system based on deep learning. FIG. 5 is a structural diagram of a deep learning-based intelligent parking management system provided by the present invention. Referring to FIG. 5 , the system includes:
视频采集模块501,用于获取停车场摄像头采集的视频帧;A
模型加载模块502,用于加载训练好的卷积神经网络模型;A
车辆位置预测模块503,用于将所述视频帧输入所述训练好的卷积神经网络模型,输出车辆位置预测框信息;所述车辆位置预测框信息包括车辆位置预测框的中心坐标以及宽、高;The vehicle
车位框获取模块504,用于获取车位框信息;所述车位框信息包括车位框的左上角坐标和右下角坐标;a parking space
当前车位状态判断模块505,用于根据所述车辆位置预测框信息和所述车位框位置信息确定当前车位状态;The current parking space
停车费用确定模块506,用于根据所述当前车位状态确定停车费用。The parking
所述系统还包括模型建立模块,所述模型建立模块具体包括:The system also includes a model establishment module, and the model establishment module specifically includes:
模型建立单元,用于基于Darknet框架和YOLO算法建立卷积神经网络模型;所述卷积神经网络模型包括特征提取器和检测器;A model establishment unit for establishing a convolutional neural network model based on the Darknet framework and the YOLO algorithm; the convolutional neural network model includes a feature extractor and a detector;
模型训练单元,用于采用VOC数据集对所述卷积神经网络模型进行训练、调参,生成训练好的卷积神经网络模型。The model training unit is used to train and adjust parameters of the convolutional neural network model by using the VOC data set to generate a trained convolutional neural network model.
其中,所述当前车位状态判断模块505具体包括:Wherein, the current parking space
车辆位置计算单元,用于根据所述车辆位置预测框信息计算车辆位置预测框的左上角坐标和右下角坐标;a vehicle position calculation unit, configured to calculate the upper left corner coordinate and the lower right corner coordinate of the vehicle position prediction frame according to the vehicle position prediction frame information;
重叠区域位置计算单元,用于根据所述车辆位置预测框的左上角坐标和右下角坐标以及所述车位框的左上角坐标和右下角坐标计算所述车辆位置预测框与所述车位框的重叠区域位置;所述重叠区域位置包括所述车辆位置预测框与所述车位框的重叠区域的左上角坐标和右下角坐标;an overlapping area position calculation unit, configured to calculate the overlap between the vehicle position prediction frame and the parking space frame according to the coordinates of the upper left corner and the lower right corner of the vehicle position prediction frame and the coordinates of the upper left corner and the lower right corner of the parking space frame area position; the overlapping area position includes the coordinates of the upper left corner and the lower right corner of the overlapping area of the vehicle position prediction frame and the parking space frame;
重叠区域面积计算单元,用于根据所述重叠区域位置计算重叠区域面积;an overlapping area area calculation unit, configured to calculate the overlapping area area according to the overlapping area position;
车位框面积计算单元,用于根据所述车位框位置计算车位框面积;a parking space frame area calculation unit, configured to calculate the parking space frame area according to the position of the parking space frame;
置信度计算单元,用于计算所述重叠区域面积与所述车位框面积的比值作为车辆的置信度;a confidence degree calculation unit, configured to calculate the ratio of the area of the overlapping area to the area of the parking space frame as the confidence degree of the vehicle;
置信度判断单元,用于判断所述车辆的置信度是否大于等于置信度阈值,获得第一判断结果;a confidence level judgment unit, configured to judge whether the confidence level of the vehicle is greater than or equal to a confidence level threshold, and obtain a first judgment result;
被占状态判断单元,用于若所述第一判断结果为所述车辆的置信度大于等于置信度阈值,确定当前车位状态为被占状态;an occupied state judgment unit, configured to determine that the current parking space state is an occupied state if the first judgment result is that the confidence level of the vehicle is greater than or equal to a confidence level threshold;
空状态判断单元,用于若所述车辆的置信度小于置信度阈值,确定当前车位状态为空状态。An empty state judging unit, configured to determine that the current parking space state is an empty state if the confidence level of the vehicle is less than the confidence level threshold.
所述停车费用确定模块506具体包括:The parking
开始停车时间记录单元,用于获取所述当前车位状态由空状态变为被占状态的开始停车时间;a parking start time recording unit, used to obtain the start parking time when the current parking space state changes from an empty state to an occupied state;
结束停车时间记录单元,用于获取所述当前车位状态由被占状态变为空状态的结束停车时间;an end parking time recording unit, used to obtain the end parking time when the current parking space state changes from an occupied state to an empty state;
停车时间计算单元,用于根据所述开始停车时间及所述结束停车时间计算车辆的停车时间;a parking time calculation unit, configured to calculate the parking time of the vehicle according to the start parking time and the end parking time;
停车费用计算单元,用于根据所述停车时间确定停车费用。A parking fee calculation unit, configured to determine the parking fee according to the parking time.
本发明系统使用计算机视觉前沿技术研究成果,设计卷积神经网络对车辆进行动态识别、实时定位并检测,对车辆出入停车场的情况进行实时跟踪和计时收费;同时,通过多线程方式对每个摄像头进行监控,并将每个摄像头实时传回的视频帧进行检测,计算车辆和车位的重叠部分和车位的比值,来对车位状态进行实时统计,最终再将车位状态信息和计时收费信息分别显示在对应摄像头所在的画面中。通过统一集成的软件系统,对停车场的停车状态进行实时显示,从而实现对停车场的智能化管理。The system of the present invention draws on state-of-the-art computer vision research to design a convolutional neural network that dynamically recognizes, locates and detects vehicles in real time, tracking vehicles entering and leaving the parking lot and charging by time. At the same time, each camera is monitored in a multi-threaded manner: the video frames returned by each camera are detected in real time, and the ratio of the vehicle-to-space overlap to the parking-space area is computed to keep real-time statistics of the parking-space states; finally, the parking-space status information and the time-based charging information are each displayed in the view of the corresponding camera. Through a unified, integrated software system, the parking status of the lot is displayed in real time, thereby realizing intelligent management of the parking lot.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and for the parts that the embodiments share, reference may be made between them. Since the system disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief; for relevant details, refer to the description of the method.
Specific examples are used herein to explain the principles and implementations of the present invention; the descriptions of the above embodiments are only intended to help understand the method of the present invention and its core ideas. Meanwhile, those of ordinary skill in the art may, in accordance with the ideas of the present invention, make changes to the specific implementations and the scope of application. In summary, the contents of this specification should not be construed as limiting the present invention.
Claims (4)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910089082.4A CN109784306B (en) | 2019-01-30 | 2019-01-30 | Intelligent parking management method and system based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109784306A CN109784306A (en) | 2019-05-21 |
CN109784306B true CN109784306B (en) | 2020-03-10 |
Family
ID=66503758
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910089082.4A Active CN109784306B (en) | 2019-01-30 | 2019-01-30 | Intelligent parking management method and system based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109784306B (en) |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110264765A (en) * | 2019-06-26 | 2019-09-20 | 广州小鹏汽车科技有限公司 | Detection method, device, computer equipment and the storage medium of vehicle parking state |
CN110852151B (en) * | 2019-09-26 | 2024-02-20 | 深圳市金溢科技股份有限公司 | Method and device for detecting shielding of berths in roads |
CN110910655A (en) * | 2019-12-11 | 2020-03-24 | 深圳市捷顺科技实业股份有限公司 | Parking management method, device and equipment |
CN111292353B (en) * | 2020-01-21 | 2023-12-19 | 成都恒创新星科技有限公司 | Parking state change identification method |
CN111768648B (en) * | 2020-06-10 | 2022-02-15 | 浙江大华技术股份有限公司 | Vehicle access determining method and system |
CN111784857A (en) * | 2020-06-22 | 2020-10-16 | 浙江大华技术股份有限公司 | Parking space management method and device and computer storage medium |
CN111951601B (en) * | 2020-08-05 | 2021-10-26 | 智慧互通科技股份有限公司 | Method and device for identifying parking positions of distribution vehicles |
CN111932933B (en) * | 2020-08-05 | 2022-07-26 | 杭州像素元科技有限公司 | Urban intelligent parking space detection method and equipment and readable storage medium |
CN112037504B (en) * | 2020-09-09 | 2021-06-25 | 深圳市润腾智慧科技有限公司 | Vehicle parking scheduling management method and related components thereof |
CN113065427A (en) * | 2021-03-19 | 2021-07-02 | 上海眼控科技股份有限公司 | Vehicle parking state determination method, device, equipment and storage medium |
CN113205691B (en) * | 2021-04-26 | 2023-05-02 | 超级视线科技有限公司 | Method and device for identifying vehicle position |
CN113421382B (en) * | 2021-06-01 | 2022-08-30 | 杭州鸿泉物联网技术股份有限公司 | Detection method, system, equipment and storage medium for shared electric bill standard parking |
CN113706920B (en) * | 2021-08-20 | 2023-08-11 | 云往(上海)智能科技有限公司 | Parking behavior judging method and intelligent parking system |
CN114067602B (en) * | 2021-11-16 | 2024-03-26 | 深圳市捷顺科技实业股份有限公司 | Parking space state judging method, system and parking space management device |
CN114267180B (en) * | 2022-03-03 | 2022-05-31 | 科大天工智能装备技术(天津)有限公司 | A computer vision-based parking management method and system |
CN114724107B (en) * | 2022-03-21 | 2023-09-01 | 北京卓视智通科技有限责任公司 | Image detection method, device, equipment and medium |
CN115035741B (en) * | 2022-04-29 | 2024-03-22 | 阿里云计算有限公司 | Method, device, storage medium and system for discriminating parking position and parking |
CN114694124B (en) * | 2022-05-31 | 2022-08-26 | 成都国星宇航科技股份有限公司 | Parking space state detection method and device and storage medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100449579C (en) * | 2006-04-21 | 2009-01-07 | 浙江工业大学 | Electronic parking guidance system based on omnidirectional computer vision |
CN100559420C (en) * | 2007-03-29 | 2009-11-11 | 汤一平 | Parking Guidance System Based on Computer Vision |
CN105760849B (en) * | 2016-03-09 | 2019-01-29 | 北京工业大学 | Target object behavioral data acquisition methods and device based on video |
CN106935035B (en) * | 2017-04-07 | 2019-07-23 | 西安电子科技大学 | Parking offense vehicle real-time detection method based on SSD neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||