CN109784306B - Intelligent parking management method and system based on deep learning - Google Patents
- Publication number
- CN109784306B CN109784306B CN201910089082.4A CN201910089082A CN109784306B CN 109784306 B CN109784306 B CN 109784306B CN 201910089082 A CN201910089082 A CN 201910089082A CN 109784306 B CN109784306 B CN 109784306B
- Authority
- CN
- China
- Prior art keywords
- parking
- parking space
- frame
- vehicle
- vehicle position
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Image Analysis (AREA)
- Traffic Control Systems (AREA)
Abstract
The invention discloses an intelligent parking management method and system based on deep learning. To meet the needs of intelligent parking lot management, a convolutional neural network model is designed on the Darknet framework to locate vehicles; the degree of overlap between the predicted vehicle position and each parking space is calculated, the occupancy state of each space is monitored in real time, and parking fee statistics are produced from the space state. By applying deep learning to parking resource management, the invention monitors parking states and parking fees intelligently and automatically, relieves urban traffic congestion, standardizes the use of parking resources effectively, makes the management of those resources more convenient and intelligent, and frees up manpower.
Description
Technical Field
The invention relates to the technical field of intelligent parking management, in particular to an intelligent parking management method and system based on deep learning.
Background
In recent years, with the increase in private cars, cities large and small face a dilemma of "many cars, few spaces", made worse by the lagging state of parking-resource management. Traditional parking resources are managed manually, which is not only inefficient but also extremely labor-intensive.
Disclosure of Invention
The invention aims to provide an intelligent parking management method and system based on deep learning that apply deep learning to parking resource management, so as to solve the low efficiency and heavy labor consumption of the traditional manual management of parking resources.
In order to achieve the purpose, the invention provides the following scheme:
a smart parking management method based on deep learning, the method comprising:
acquiring a video frame acquired by a parking lot camera;
loading a trained convolutional neural network model;
inputting the video frame into the trained convolutional neural network model, and outputting vehicle position prediction frame information; the vehicle position prediction frame information comprises the center coordinates, the width and the height of the vehicle position prediction frame;
acquiring parking space frame information; the parking space frame information comprises the upper-left and lower-right corner coordinates of the parking space frame;
determining the current parking space state according to the vehicle position prediction frame information and the parking space frame position information;
and determining parking cost according to the current parking space state.
Optionally, before the loading the trained convolutional neural network model, the method further includes:
establishing a convolutional neural network model based on the Darknet framework and the YOLO algorithm; the convolutional neural network model comprises a feature extractor and a detector;
and training and parameter adjusting are carried out on the convolutional neural network model by adopting a VOC data set, so that a trained convolutional neural network model is generated.
Optionally, the determining the current parking space state according to the vehicle position prediction frame information and the parking space frame position information specifically includes:
calculating the coordinates of the upper left corner and the lower right corner of the vehicle position prediction frame according to the vehicle position prediction frame information;
calculating the position of an overlapping area of the vehicle position prediction frame and the parking space frame according to the coordinates of the upper left corner and the lower right corner of the vehicle position prediction frame and the coordinates of the upper left corner and the lower right corner of the parking space frame; the overlapping region position comprises an upper left corner coordinate and a lower right corner coordinate of an overlapping region of the vehicle position prediction frame and the parking space frame;
calculating the area of the overlapping area according to the position of the overlapping area;
calculating the area of the parking stall frame according to the position of the parking stall frame;
calculating the ratio of the area of the overlapping area to the area of the parking space frame as the confidence coefficient of the vehicle;
judging whether the confidence of the vehicle is greater than or equal to a confidence threshold value or not, and obtaining a first judgment result;
if the first judgment result is that the confidence coefficient of the vehicle is greater than or equal to the confidence coefficient threshold value, determining that the current parking space state is an occupied state;
and if the confidence coefficient of the vehicle is smaller than the confidence coefficient threshold value, determining that the current parking space state is an empty state.
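The parking space state decision above can be sketched as follows. This is a minimal illustration with hypothetical helper names; the box formats follow the text (the predicted vehicle box is center/width/height, the space frame is upper-left and lower-right corners), and the 0.5 threshold is an assumed value, since the patent only states that a threshold is set:

```python
def vehicle_box_corners(cx, cy, w, h):
    """Convert a center/width/height prediction box to corner form."""
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

def stall_occupied(pred_box, stall_box, conf_threshold=0.5):
    """Return True if the parking space is judged occupied.

    pred_box  -- (cx, cy, w, h) of the vehicle position prediction frame
    stall_box -- (x1, y1, x2, y2) corners of the parking space frame
    """
    vx1, vy1, vx2, vy2 = vehicle_box_corners(*pred_box)
    sx1, sy1, sx2, sy2 = stall_box
    # Overlap region: upper-left is the max of the two upper-left
    # corners, lower-right is the min of the two lower-right corners.
    ox1, oy1 = max(vx1, sx1), max(vy1, sy1)
    ox2, oy2 = min(vx2, sx2), min(vy2, sy2)
    overlap_area = max(0.0, ox2 - ox1) * max(0.0, oy2 - oy1)
    stall_area = (sx2 - sx1) * (sy2 - sy1)
    # Per the text, the confidence is the ratio of the overlap area
    # to the parking space frame area (not an IoU).
    confidence = overlap_area / stall_area
    return confidence >= conf_threshold
```

Note that the ratio is taken against the space frame's own area, so a large vehicle fully covering a space gives confidence 1 regardless of how far the vehicle box extends beyond it.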
Optionally, determining parking fee according to the current parking space state specifically includes:
obtaining the starting parking time when the current parking space state is changed from an empty state to an occupied state;
acquiring the parking ending time when the current parking space state is changed from an occupied state to an empty state;
calculating the parking time of the vehicle according to the starting parking time and the ending parking time;
and determining parking cost according to the parking time.
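The fee step above reduces to a duration times a tariff. A minimal sketch, where the hourly rate is an assumed figure (the patent does not specify a tariff, and a real system might round up to started hours):

```python
from datetime import datetime

def parking_fee(start_time, end_time, rate_per_hour=5.0):
    """Parking fee from the recorded start/end parking times.

    rate_per_hour is an assumed tariff for illustration only.
    """
    hours = (end_time - start_time).total_seconds() / 3600.0
    return hours * rate_per_hour

# e.g. a stay from 08:00 to 10:30 is 2.5 hours
fee = parking_fee(datetime(2019, 1, 1, 8, 0), datetime(2019, 1, 1, 10, 30))
```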
A smart parking management system based on deep learning, the system comprising:
the video acquisition module is used for acquiring video frames acquired by the parking lot camera;
the model loading module is used for loading the trained convolutional neural network model;
the vehicle position prediction module is used for inputting the video frame into the trained convolutional neural network model and outputting vehicle position prediction frame information; the vehicle position prediction frame information comprises the center coordinates, the width and the height of the vehicle position prediction frame;
the parking stall frame acquisition module is used for acquiring parking stall frame information; the parking space frame information comprises a left upper corner coordinate and a right lower corner coordinate of the parking space frame;
the current parking space state judging module is used for determining the current parking space state according to the vehicle position prediction frame information and the parking space frame position information;
and the parking fee determining module is used for determining the parking fee according to the current parking space state.
Optionally, the system further includes a model building module, where the model building module specifically includes:
the model establishing unit is used for establishing a convolutional neural network model based on the Darknet framework and the YOLO algorithm; the convolutional neural network model comprises a feature extractor and a detector;
and the model training unit is used for training and parameter adjustment of the convolutional neural network model by adopting the VOC data set to generate a trained convolutional neural network model.
Optionally, the current parking space state determination module specifically includes:
the vehicle position calculation unit is used for calculating the coordinates of the upper left corner and the lower right corner of the vehicle position prediction frame according to the vehicle position prediction frame information;
the overlapping area position calculating unit is used for calculating the overlapping area position of the vehicle position prediction frame and the parking space frame according to the upper left corner coordinate and the lower right corner coordinate of the vehicle position prediction frame and the upper left corner coordinate and the lower right corner coordinate of the parking space frame; the overlapping region position comprises an upper left corner coordinate and a lower right corner coordinate of an overlapping region of the vehicle position prediction frame and the parking space frame;
an overlap region area calculation unit for calculating an overlap region area from the overlap region position;
the parking space frame area calculation unit is used for calculating the area of the parking space frame according to the position of the parking space frame;
the confidence coefficient calculation unit is used for calculating the ratio of the area of the overlapping area to the area of the parking space frame as the confidence coefficient of the vehicle;
the confidence coefficient judging unit is used for judging whether the confidence coefficient of the vehicle is greater than or equal to a confidence coefficient threshold value or not and obtaining a first judgment result;
the occupied state judging unit is used for determining that the current parking space state is the occupied state if the first judgment result indicates that the confidence coefficient of the vehicle is greater than or equal to the confidence coefficient threshold;
and the empty state judging unit is used for determining that the current parking space state is an empty state if the confidence coefficient of the vehicle is smaller than the confidence coefficient threshold value.
Optionally, the parking fee determining module specifically includes:
the parking starting time recording unit is used for acquiring the parking starting time when the current parking space state is changed from an empty state to an occupied state;
the parking ending time recording unit is used for acquiring the parking ending time when the current parking space state is changed from the occupied state to the empty state;
the parking time calculation unit is used for calculating the parking time of the vehicle according to the starting parking time and the ending parking time;
and the parking fee calculation unit is used for determining the parking fee according to the parking time.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention provides an intelligent parking management method and system based on deep learning, aiming at the requirements of intelligent parking lot management, a convolutional neural network model is designed based on a Darknet frame to position a vehicle, the predicted vehicle position and the calculated parking space overlap degree are calculated, the real-time monitoring of the parking state of the parking space is realized, and the parking cost statistics is carried out according to the parking space state. The invention realizes intelligent and automatic monitoring of parking state and parking cost by applying the deep learning method to the parking resource management, relieves the urban traffic congestion, effectively standardizes the use of parking resources, and simultaneously leads the management of the parking resources to be more convenient and intelligent, and liberates manpower.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without inventive exercise.
FIG. 1 is a flow chart of an intelligent parking management method based on deep learning according to the present invention;
fig. 2 is a data flow chart of the ARM terminal according to the present invention;
FIG. 3 is a flow chart of data processing at the server end according to the present invention;
FIG. 4 is a network architecture diagram of the feature extractor and detector in the convolutional neural network model provided by the present invention;
fig. 5 is a structural diagram of an intelligent parking management system based on deep learning according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention relates to the technical fields of deep learning, computer vision, the Darknet framework, real-time communication, image and video transmission, multithreaded parallelism, ARM development, and C++ graphical user interface application development, and in particular to the design and training of a deep neural network for classifying, locating and detecting targets, and to the fusion of parking-space information with the detected targets. The invention aims to provide an intelligent parking management method and system based on deep learning, applying deep learning to parking resource management. The application of deep learning in computer vision has brought machine vision close to human vision, and in some respects beyond it; the invention uses this technology to track and locate vehicles in real time and to keep real-time statistics on vehicles entering and leaving, thereby solving the low efficiency and heavy labor consumption of the traditional manual management of parking resources.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The key problems processed by the method and the system of the invention comprise:
1) analyzing the location of the system and the composition of the system;
2) how the video image is transmitted;
3) how to judge the parking space state;
4) how to perform target positioning;
5) how to extract parking space information;
6) how to monitor the parking space state in real time using the calculation of 3);
7) how the system is integrated.
Of the above 7 problems, 1) and 7) concern the system as a whole, while 2), 4) and 5) are central. The invention mainly solves these 7 problems as follows:
1. The method and system aim at unified management of parking resources, which are difficult to manage today; since a whole parking lot is hard to monitor with a single camera, one camera is used for every 3 to 5 parking spaces. The system consists of a front-end ARM (Advanced RISC Machines) board that controls video capture and forwarding for the cameras, and a background server that processes the data received from the different cameras. Because several cameras each monitor several parking spaces, the processing results of each camera must be reflected on its own real-time picture, so multithreading is used to route each camera's data to its own display.
2. Given the reliability of the TCP protocol, the invention uses TCP for video transmission in the communication part of the system. A 4G communication module connects the front end to the background, so the user can link the two over the Internet. Several cameras are all connected to one ARM embedded development board to manage an area, and each frame captured by a camera is tagged with that camera's number. The ARM end runs multiple threads, adds the area and camera information, assembles the result into a data packet, and sends it to the server.
3. The parking space state is judged from the detected position of the vehicle in the video together with the position of the parking space: the overlap between the target vehicle's prediction frame and the space is computed, along with the ratio of that overlap to the space's area, and the state of the space is decided from this ratio.
4. Vehicle position detection uses current deep learning technology: a detector is designed within the Darknet framework, following the ideas of the YOLO algorithm, on top of a 23-layer pre-trained model; the detector design is given in FIG. 4. The network is then trained and tuned on the VOC data set until the optimization target is met, finally yielding the trained convolutional neural network model. After a video frame passes through the model, information about the target vehicle's prediction frame is obtained, represented by four key parameters.
5. Using the parking-space acquisition scheme provided by the invention, code implements a UI for parameter adjustment, so that an administrator can install and debug the system in a standardized way.
6. And detecting the parking space state in real time, and displaying the pictures of the plurality of cameras in real time in a multithreading manner.
7. System integration: designs 1 to 6 above are integrated into one system platform. After every program module unit passes its tests, the whole system is tested and debugged, repeating the process until it runs stably.
According to the inventive concept above, the invention provides an intelligent parking management method and system based on deep learning.
Fig. 1 is a flowchart of an intelligent parking management method based on deep learning according to the present invention. Referring to fig. 1, the intelligent parking management method based on deep learning provided by the invention specifically includes:
step 101: and acquiring a video frame acquired by the parking lot camera.
Because the parking lot must track vehicles entering and leaving in real time and keep real-time statistics on space states, the video collected by the parking lot cameras must be forwarded promptly to the background server.
Because the transmission of the video captured by the cameras must be stable and reliable, the invention uses the Transmission Control Protocol (TCP); to meet the timing requirement, the video frames collected by the cameras are transmitted over 4G, which integrates 3G and WLAN and can rapidly transmit high-quality video, audio and image data.
A parking lot generally uses several cameras to cover its spaces; here each camera is arranged to monitor 4 parking spaces, and each camera needs its own display picture, updated in real time. Each camera must therefore be monitored to determine the space states and timing, and each thread displays its camera's video data, state and timing information on the corresponding real-time picture.
The video collected by the cameras is captured by an ARM (Advanced RISC Machines) board, numbered, and forwarded to the background server over TCP. Because memory on the ARM end is limited, the program must be memory-optimized during development to avoid the trouble of overflow and crashes while processing the captured images. The cameras are numbered and monitored, and each frame a camera captures is tagged and forwarded to the background server.
Fig. 2 is a data flow chart of the ARM terminal provided by the invention. Because one camera monitors only a limited area, several cameras are needed to cover the different areas of a parking lot. The cameras send their captured data frames to the ARM end, which processes the data. Before receiving data, the ARM end checks whether the space allotted to the task is sufficient; if not, it discards the current frame. The encoded frames are forwarded to the background server.
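The ARM end's tag-and-forward step can be sketched as below. The header layout (one byte each for area and camera number, then a 4-byte big-endian frame length) is an assumption for illustration; the patent only states that area and camera information are added to each frame before it is packed and sent over TCP:

```python
import socket
import struct

def forward_frame(sock, area_id, camera_id, frame_bytes):
    """Prefix a captured frame with its area/camera numbers and its
    byte length, then forward the packet over the already-connected
    TCP socket. Header format ">BBI" is a hypothetical choice."""
    header = struct.pack(">BBI", area_id, camera_id, len(frame_bytes))
    sock.sendall(header + frame_bytes)
```

On the receiving side the server would read 6 header bytes first, then exactly the stated number of frame bytes, which is how TCP's byte stream is usually re-framed.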
The final monitoring display of the parking spaces is handled by a management interface developed with Qt. Qt is a cross-platform graphical user interface application development framework; it extends C++ with generated code and macros in an object-oriented way, supports component programming, and greatly improves development efficiency.
The server end receives the video data forwarded from the ARM end and feeds each frame to the network for prediction. The background server puts the returned video frames into the trained convolutional neural network model; once a frame is processed, vehicles are detected in real time, the ratio of the overlap between parking space and vehicle position to the space's area is computed, and whether the space is empty is judged by whether this ratio meets the set threshold. When a space changes from empty to occupied, timing starts; when it changes back from occupied to empty, timing stops and the parking fee due is calculated.
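The empty/occupied timing described above is a small state machine per space. A minimal sketch under the assumption that the server feeds it one detection result per processed frame (the clock source is hypothetical; timestamps are passed in here for clarity):

```python
import time

class StallTimer:
    """Tracks one parking space's transitions: start timing on
    empty -> occupied, stop on occupied -> empty, and report the
    elapsed parking time."""

    def __init__(self):
        self.occupied = False
        self.start = None

    def update(self, occupied_now, now=None):
        """Feed the latest detection result. Returns the parking
        duration in seconds when a vehicle has just left, else None."""
        now = time.time() if now is None else now
        duration = None
        if occupied_now and not self.occupied:
            self.start = now                 # empty -> occupied: start timing
        elif not occupied_now and self.occupied:
            duration = now - self.start      # occupied -> empty: stop timing
            self.start = None
        self.occupied = occupied_now
        return duration
```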
The empty and occupied parking spaces must be displayed in real time, as must the parking times. Therefore, besides detecting each parking space, the detection state must be shown in real time: the spaces detected by all cameras are analyzed and results produced; a thread is created for each camera, the video information of all cameras is collected uniformly, each camera and its collected video are numbered, and the output is displayed in order on the corresponding camera's management interface in real time.
However, although each data frame is numbered at the ARM end, so that the background server knows which camera a frame came from, the network model takes only the video frame, not the camera tag; while it processes a frame it does not know which camera produced it. The video frames collected by the cameras are therefore placed in a buffer queue, and a flag[n] array records which camera the frame currently in the network came from. All elements of flag are initialized to 0; when data from the j-th camera enters the network, flag[j] is set to 1, and once a prediction is obtained, the flag identifies which camera the video frame belongs to. Because an element of flag can represent only one camera at a time, flag must be locked; and because the model's prediction pass must not be interfered with, it is locked as well. After the network finishes the current camera's frame, it unlocks itself and then unlocks flag, so that frames in the buffer queue can continue to be detected by the network model and each result is reflected on the corresponding camera's display.
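The buffer queue, flag array and locking discipline just described can be sketched as follows. This is an illustrative simplification with assumed names (model_predict stands in for the network's prediction pass; a single-shot process_one is used instead of the worker loop so the routing logic is visible):

```python
import threading
from queue import Queue

NUM_CAMERAS = 4
frame_queue = Queue(maxsize=32)      # buffer queue of (camera_id, frame)
flag = [0] * NUM_CAMERAS             # flag[j] = 1 while camera j's frame is in the network
flag_lock = threading.Lock()         # the flag array must be locked
model_lock = threading.Lock()        # the prediction pass must not be interfered with

def process_one(model_predict, result_queues):
    """Take one frame from the buffer, mark its camera in flag,
    run the prediction under both locks, then route the result to
    that camera's own display queue."""
    cam_id, frame = frame_queue.get()
    with flag_lock:
        flag[cam_id] = 1
        with model_lock:
            boxes = model_predict(frame)
        flag[cam_id] = 0
    result_queues[cam_id].put(boxes)
```

One worker thread would call process_one in a loop, while per-camera display threads consume their own result queues; the nesting mirrors the text's order of unlocking the network before the flag.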
Fig. 3 is a data processing flow chart of the server side provided by the invention. Before the server receives data from the ARM end, it checks the space available for receiving video frames; if the current frame can be stored it is stored, otherwise it is discarded. To reduce the dependence on machine performance, a single frame enters a single network for calculation; the result is a vehicle position prediction frame, and the background server processes the vehicle's position frame against the parking space frames and updates the space states.
Because several cameras monitor the parking spaces, the space states seen by each camera can be output independently to its corresponding real-time picture, and the spaces covered by each camera are initialized. After the predicted vehicle position is obtained, the initialization information corresponding to the j-th camera is looked up for subsequent processing. To let each camera's picture be displayed independently in real time, the invention uses multiple threads to process the different cameras' information, achieving stable, reliable, real-time monitoring of the pictures collected by several cameras.
Step 102: and loading the trained convolutional neural network model.
The design of the convolutional neural network model used for vehicle recognition and detection is based on the Darknet framework. Darknet is a lightweight deep learning framework implemented in C that supports both CPU (Central Processing Unit) and GPU (Graphics Processing Unit); it lacks a rich API (Application Programming Interface), but its lightweight implementation makes it easy to use. Apart from its environment being somewhat hard to set up, Darknet makes designing and training networks extremely convenient. Since OpenCV (the Open Source Computer Vision Library) has officially supported the Darknet framework from version 3.3.1, a model trained on the GPU can be used on the CPU, lowering the hardware requirements.
The convolutional neural network model established by the invention comprises a feature extractor and a detector; the detector is designed on top of the Darknet19 feature extractor, following the target-detection ideas of the Darknet framework and the YOLO (You Only Look Once) algorithm. The detector mainly detects the position of vehicles in the video; it identifies, locates and detects target objects on features of different scales in a top-down fashion, making recognition of the targets more accurate.
The detailed network structure design of the feature extractor and the detector in the convolutional neural network model of the present invention is shown in fig. 4 and the following table 1:
TABLE 1 convolutional neural network model network architecture
In Table 1, Type indicates the layer type, Filters the number of convolution kernels (i.e. the number of output channels), Size/Stride the filter's size/stride, and Output the output shape. In Table 1 and FIG. 4, Convolutional or conv denotes convolution, Maxpool or maxpool max pooling; YOLO or yolo comes from the method proposed in the article "You Only Look Once: Unified, Real-Time Object Detection" and has no established Chinese rendering; route denotes routing, upsample upsampling, and concatenate concatenation.
Referring to table 1 and fig. 4, the convolutional neural network model network design established by the present invention is mainly divided into two parts, a feature extraction part and a detection part. The characteristic extraction part adopts a Darknet19 characteristic extraction model which is responsible for detecting the characteristics in the video frame and providing a detection basis for the subsequent detection part; the detection part is mainly responsible for detecting the feature map provided by the feature extraction part, and finally obtains a prediction frame of the detected object, wherein the prediction frame is used for representing the position of the object.
Specifically, the feature extractor in the convolutional neural network model is Darknet19, which has 23 layers in total: 18 convolutional layers and 5 max-pooling layers. The detector is designed to predict on two scales and has 12 layers in total, with two yolo layers as output layers, detecting on 13x13 and 26x26 grids respectively. On the 13x13 scale there are three convolutional layers and one yolo layer; the prediction on that scale is output by its yolo layer. Layers 23 to 28 perform convolutions; the convolved feature map is upsampled and then concatenated with the layer-16 feature map — in this process one route layer, one convolutional layer, one upsampling layer and finally another route layer are cascaded. After the concatenation, three more convolutions are applied, and the prediction on the 26x26 scale is output by its yolo layer. The model takes a video frame as input and outputs predictions on the two scales 13x13 and 26x26. Each prediction is a three-dimensional tensor representing the bounding prediction boxes of objects in the video (i.e. vehicle position prediction frames), their confidences, and their classes; each bounding prediction box is a vector giving the center coordinates, width and height of the detected vehicle position prediction frame.
Except at route layers, each layer takes the output of the preceding layer as its input and processes it; the output of every layer is a feature map.
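The shapes of the two yolo output tensors can be checked with a short calculation. This is a sketch under stated assumptions: 3 anchor boxes per scale and the 20 VOC classes (consistent with training on the VOC data set, but not fixed by the text), giving 3 x (4 box parameters + 1 confidence + 20 class scores) = 75 channels per grid cell:

```python
def yolo_output_shape(grid, num_anchors=3, num_classes=20):
    """Shape of one yolo output tensor: for every grid cell and every
    anchor there are 4 box parameters (cx, cy, w, h), 1 objectness
    confidence, and one score per class. Anchor count and class count
    are assumptions for illustration."""
    return (grid, grid, num_anchors * (4 + 1 + num_classes))

# the two detection scales described in the text
shapes = [yolo_output_shape(g) for g in (13, 26)]
```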
After the network design of the convolutional neural network model is completed, the network is written layer by layer and then trained and tuned on the VOC (Visual Object Classes) data set until the convergence target is met, yielding the trained convolutional neural network model. The loss function used during training is as follows:
(The equation did not survive extraction; it is reconstructed here as the standard YOLO loss, term by term consistent with the parameter description that follows.)

$$
\begin{aligned}
\text{Loss} ={} & \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left[(x_i-x_i')^2+(y_i-y_i')^2\right] \\
&+ \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left[\left(\sqrt{w_i}-\sqrt{w_i'}\right)^2+\left(\sqrt{h_i}-\sqrt{h_i'}\right)^2\right] \\
&+ \sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left(c_i-c_i'\right)^2 + \lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{noobj}\left(c_i-c_i'\right)^2 \\
&+ \sum_{i=0}^{S^2}\mathbb{1}_{i}^{obj}\sum_{c\in classes}\left(p_i(c)-p_i'(c)\right)^2
\end{aligned}
$$

where x and y are the center coordinates of the predicted vehicle position frame (i.e. the vehicle position prediction frame), w and h its width and height, and λ a penalty coefficient — the loss penalizes separately the cases where an object (obj) is present or absent. S is the partition parameter of the image fed to the network, c the confidence of a detected object, and p(c) the confidence that the detected object belongs to class c. In the formula, λ_coord is the penalty coefficient when an object is present in the vehicle position prediction frame and λ_noobj the penalty coefficient when none is. B is the number of boundary prediction boxes, and 𝟙_ij^obj indicates that the j-th boundary prediction box lies in grid cell i. Among the parameters x, y, w, h, c, p, an unprimed symbol denotes the predicted value (e.g. x_i, y_i are the predicted center coordinates of the vehicle position prediction frame), while a primed symbol denotes the corresponding term in the ground truth, i.e. the correctly labeled data (e.g. x_i', y_i').
The trained convolutional neural network model detects the center coordinates, width, and height of the vehicle position frame (i.e., the vehicle boundary prediction box, also referred to in the present invention as the vehicle position prediction frame), the confidence of the vehicle (i.e., the vehicle probability that is compared against the confidence threshold in the algorithm), and the type of the vehicle (classes are represented in one-hot form, so if the detected object is a vehicle, the term representing the vehicle is 1).
After the trained convolutional neural network model is generated, it can be loaded and called directly when performing vehicle detection in the parking lot.
Step 103: and inputting the video frame into the trained convolutional neural network model, and outputting vehicle position prediction frame information. The vehicle position prediction frame information includes center coordinates, and width and height of the vehicle position prediction frame.
The convolutional neural network model takes video frames as input and outputs predictions on two scales, 13×13 and 26×26. Each prediction is a three-dimensional tensor representing the boundary prediction boxes (i.e., vehicle position prediction frames) of the objects in the video, their confidences, and their types; each boundary prediction box is a vector giving the center coordinates, width, and height of the detected vehicle position prediction frame. The current video frame captured by the camera is fed into the trained convolutional neural network model, which directly outputs the center coordinates, width, and height of the vehicle position prediction frame.
Step 104: and acquiring parking space frame information. The parking space frame information comprises the upper left corner coordinate and the lower right corner coordinate of the parking space frame.
When the method and system begin operation, the parking spaces need to be extracted or marked to obtain the parking space information. Because the parking spaces lie in the background of the video frame, their features are not salient and they are difficult to extract with a neural network; the invention therefore acquires the parking space information through parameterized adjustment, detailed as follows:
Obtain the width-to-height ratio information of the parking space: with the width of the parking space denoted w and the height denoted h, the width-to-height ratio of the parking space is ε = w/h. The parking space deflection angle is α and the spacing between parking spaces is δ. Since each camera manages three to five parking spaces, a parameter β needs to be set to represent the number of parking spaces monitored by the same camera. All the preset parking space frames are regarded as a whole, and one further parameter is needed to set the initial position of the preset parking space frames.
First, the width and height of the parking space frame are initialized to w0 and h0. Since parking spaces differ in size between parking lots, the aspect ratio ε is usually supplied by the parking lot manager; in fact, once ε is set, only one of w0 and h0 needs to be known. Suppose only h0 is set; then w0 = ε * h0. The parking space deflection angle is initialized to α0 (the parking space angle is generally set to a right angle; if the parking space is strongly skewed, it can be adjusted appropriately through this parameter), the parking space frame spacing δ is initialized to δ0, the number of parking spaces is initialized to β0, and the position coordinate of the upper left corner of the parking space frame is p0(x0, y0). Thus, the initialization phase first fixes, in the camera picture, β0 frames with aspect ratio ε and height h0 (w0 can be inferred from ε), spaced δ0 apart. After the camera is installed, the parking space frames are preset by adjusting p0(x0, y0) so that all the frames lie in the row occupied by the parking spaces in the camera picture; the frame positions are then refined by adjusting ε, α, δ, h, and β until the preset parking space frames in the camera picture approximately coincide with the stop lines of the actual parking spaces. The adjustment criterion is that the preset parking space frames can represent the actual parking spaces, so that the captured video frames can be detected more quickly.
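As an illustration of the parameterized initialization above, the following Python sketch generates β0 preset parking space frames from h0, ε, δ0, and p0(x0, y0). The function name and the axis-aligned simplification (deflection angle α fixed at a right angle) are assumptions for brevity, not part of the patent.

```python
def init_parking_frames(p0, h0, epsilon, delta0, beta0):
    """Return beta0 preset frames as (x_top, y_top, x_bottom, y_bottom) tuples,
    laid out left to right along the row starting at the upper-left point p0."""
    w0 = epsilon * h0              # width inferred from the aspect ratio epsilon
    x0, y0 = p0
    frames = []
    for i in range(beta0):
        x_left = x0 + i * (w0 + delta0)   # shift by one frame width plus spacing
        frames.append((x_left, y0, x_left + w0, y0 + h0))
    return frames
```

For example, `init_parking_frames((100, 200), 80, 0.5, 10, 3)` lays out three 40×80 frames starting at (100, 200), spaced 10 pixels apart; a manager would then nudge p0, ε, δ0, and h0 until the frames cover the painted stalls.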
Step 105: and determining the current parking space state according to the vehicle position prediction frame information and the parking space frame position information.
In the invention, the position of the vehicle is expressed in Cartesian coordinates, by the upper-left and lower-right corner points of the vehicle position prediction frame obtained from the trained convolutional neural network model. The parking spaces are represented in the same Cartesian coordinate system: the coordinates of their four corner points are extracted and recorded. When the system starts, the parking spaces corresponding to each camera are acquired, and the parking space information for each camera is stored in memory so that it can be retrieved at any time when the returned pictures are analyzed in real time. The x axis of the Cartesian coordinate system runs along the width (w) direction of the video frame and the y axis along the height (h) direction; note that in the coordinate system established in the present invention, the positive y direction points downward rather than, as is conventional, upward.
In the present invention, (x, y, w, h) denotes the prediction result prediction_car of the trained convolutional neural network model, where x and y are the center coordinates of the vehicle position prediction frame and w and h are its width and height. (obj_x_topleft, obj_y_topleft) and (obj_x_bottomright, obj_y_bottomright) denote the upper-left and lower-right corner coordinates of the vehicle position prediction frame; (x_topleft, y_topleft) and (x_bottomright, y_bottomright) denote the upper-left and lower-right corner coordinates of the parking space frame; (x_toplo, y_toplo) and (x_bottomro, y_bottomro) denote the upper-left and lower-right corner coordinates of the overlapping area of the vehicle position prediction frame and the parking space frame; and the overlap ratio is η. The parking space state is determined as follows:
the first step is as follows: and calculating the coordinates of the upper left corner and the lower right corner of the vehicle position prediction frame according to the information of the vehicle position prediction frame.
The upper-left and lower-right corner coordinates of the vehicle position prediction frame are computed from x, y, w, and h in the model prediction result as follows: obj_x_topleft = x − w/2; obj_y_topleft = y − h/2; obj_x_bottomright = x + w/2; obj_y_bottomright = y + h/2.
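The first step can be sketched as a small Python helper (the function name is illustrative; the arithmetic follows the formulas above):

```python
def center_to_corners(x, y, w, h):
    """Convert a (center x, center y, width, height) vehicle prediction
    to upper-left and lower-right corner coordinates."""
    obj_x_topleft = x - w / 2
    obj_y_topleft = y - h / 2
    obj_x_bottomright = x + w / 2
    obj_y_bottomright = y + h / 2
    return obj_x_topleft, obj_y_topleft, obj_x_bottomright, obj_y_bottomright
```

For a prediction centered at (50, 40) with width 20 and height 10, this yields the corners (40, 35) and (60, 45).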
The second step is that: calculating the position of an overlapping area of the vehicle position prediction frame and the parking space frame according to the coordinates of the upper left corner and the lower right corner of the vehicle position prediction frame and the coordinates of the upper left corner and the lower right corner of the parking space frame; the overlapping region position comprises the upper left corner coordinate and the lower right corner coordinate of the overlapping region of the vehicle position prediction frame and the parking space frame.
Take the maximum of the x coordinates of the upper-left corners of the prediction frame and the parking space frame as x_toplo, and the maximum of the y coordinates of their upper-left corners as y_toplo, giving the upper-left corner (x_toplo, y_toplo) of the overlapping area; then take the minimum of the x coordinates of their lower-right corners as x_bottomro, and the minimum of the y coordinates of their lower-right corners as y_bottomro, giving the lower-right corner (x_bottomro, y_bottomro) of the overlapping area.
The third step: and calculating the area of the overlapping area according to the position of the overlapping area.
The overlapping area overlap_area is computed as overlap_area = abs((x_toplo − x_bottomro) * (y_toplo − y_bottomro)), where abs denotes the absolute value.
The fourth step: and calculating the area of the parking space frame according to the position of the parking space frame.
The parking space frame area parking_area is computed as parking_area = abs((x_topleft − x_bottomright) * (y_topleft − y_bottomright)).
The fifth step: calculate the ratio η of the area of the overlapping area to the area of the parking space frame as the confidence of the vehicle.
The confidence η is calculated as η = overlap_area / parking_area.
The sixth step: judge whether the parking space is empty according to the threshold and the ratio η.
Judging whether the confidence of the vehicle is greater than or equal to a confidence threshold, if so, determining that the current parking space state is an occupied state; if not, determining that the current parking space state is an empty state.
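Steps two through six above can be sketched as one Python function. Boxes are (x_top, y_top, x_bottom, y_bottom) tuples with y increasing downward, as in the patent's coordinate system; the threshold value 0.5 and the function name are assumed examples, since the patent does not fix a numeric threshold.

```python
def parking_state(pred_box, stall_box, threshold=0.5):
    """Intersect the vehicle prediction box with the parking space frame,
    compute eta = overlap_area / parking_area, and compare it with the
    confidence threshold to decide the stall state."""
    # Upper-left corner of the overlap: per-axis maximum of the upper-left corners.
    x_toplo = max(pred_box[0], stall_box[0])
    y_toplo = max(pred_box[1], stall_box[1])
    # Lower-right corner of the overlap: per-axis minimum of the lower-right corners.
    x_bottomro = min(pred_box[2], stall_box[2])
    y_bottomro = min(pred_box[3], stall_box[3])
    # If the corners cross over, the boxes do not overlap at all.
    if x_bottomro <= x_toplo or y_bottomro <= y_toplo:
        return 0.0, "empty"
    overlap_area = abs((x_toplo - x_bottomro) * (y_toplo - y_bottomro))
    parking_area = abs((stall_box[0] - stall_box[2]) * (stall_box[1] - stall_box[3]))
    eta = overlap_area / parking_area
    return eta, "occupied" if eta >= threshold else "empty"
```

A prediction box covering half of a stall gives η = 0.5 and, with the example threshold, an occupied state; a box with no overlap gives η = 0 and an empty state.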
Initially, two identical two-dimensional arrays are set up to represent the parking space states. Each array is in effect a matrix of parking space states; the two are named old and new. Different rows of a matrix correspond to different cameras, and different columns to the parking spaces covered by each camera. The old array records the parking space states of the previous state, and the new array records those of the current state. A parking space has two states, empty and non-empty (occupied), represented by 0 and 1 respectively. Changes to the data in the array new are determined from the model detection results and the parking space information.
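A minimal runnable sketch of the old/new matrix bookkeeping described above; the 2-camera × 3-stall shape and variable names are illustrative assumptions, not from the patent.

```python
# Rows = cameras, columns = stalls; 0 = empty, 1 = occupied.
old = [[0, 0, 0], [0, 0, 0]]   # states from the previous detection pass
new = [[0, 0, 0], [0, 0, 0]]   # states refreshed on the current detection pass

new[0][1] = 1                  # e.g. the detector marks camera 0, stall 1 occupied

# A position where old and new differ marks a stall-state transition.
transitions = []
for cam in range(len(old)):
    for stall in range(len(old[cam])):
        if old[cam][stall] != new[cam][stall]:
            event = "start" if new[cam][stall] == 1 else "end"
            transitions.append((cam, stall, event))

old = [row[:] for row in new]  # roll the current state into the previous state
```

Each "start" transition would trigger recording of the starting parking time and each "end" transition the ending parking time, as described in step 106 below.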
Namely, the program flow of step 105 is shown in table 2 below:
TABLE 2 flow chart for judging whether parking space is empty or not
Step 106: and determining parking cost according to the current parking space state.
Changes in the parking space state are determined by comparing the digits at the same positions in the old and new matrices: a difference between old and new at a given position represents a state change. For example, a change from 0 to 1 means the parking space went from empty to occupied, and the vehicle's starting parking time is recorded; a change from 1 to 0 means the space went from occupied to empty, so the ending parking time is recorded, and the charged amount is then calculated (on the 1-to-0 transition) from the starting and ending parking times.
That is, the step 106 determines the parking fee according to the current parking space state, and specifically includes:
obtaining the starting parking time when the current parking space state is changed from an empty state to an occupied state;
acquiring the parking ending time when the current parking space state is changed from an occupied state to an empty state;
calculating the parking time of the vehicle according to the starting parking time and the ending parking time;
and determining parking cost according to the parking time.
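The fee computation of step 106 can be sketched as follows. The hourly rate and the bill-per-started-hour rule are assumptions for illustration; the patent does not specify a tariff.

```python
import math
from datetime import datetime

def parking_fee(start, end, rate_per_hour=5.0):
    """Charge for the interval between the recorded starting and ending
    parking times, billing each started hour in full (assumed rule)."""
    seconds = (end - start).total_seconds()
    hours = math.ceil(seconds / 3600)   # round a partial hour up to a whole hour
    return hours * rate_per_hour

# A stall occupied from 09:00 to 11:20 spans three started hours.
fee = parking_fee(datetime(2019, 1, 30, 9, 0), datetime(2019, 1, 30, 11, 20))
```

Here the start time comes from the 0→1 transition and the end time from the 1→0 transition of the stall's state.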
The method disclosed by the invention addresses the requirements of intelligent parking lot management: a convolutional neural network model is designed based on the Darknet framework to locate vehicles, the overlap between the predicted vehicle position and the parking space is calculated, the parking state of each space is monitored in real time, and parking fees are computed from the parking space states. By applying deep learning to parking resource management, the invention realizes intelligent, automatic monitoring of parking states and parking fees, relieves urban traffic congestion, effectively standardizes the use of parking resources, makes the management of parking resources more convenient and intelligent, and frees up manpower.
Based on the intelligent parking management method provided by the invention, the invention also provides an intelligent parking management system based on deep learning. Fig. 5 is a structural diagram of an intelligent parking management system based on deep learning according to the present invention, referring to fig. 5, the system includes:
the video acquisition module 501 is used for acquiring video frames acquired by a parking lot camera;
a model loading module 502, configured to load a trained convolutional neural network model;
a vehicle position prediction module 503, configured to input the video frame into the trained convolutional neural network model, and output vehicle position prediction frame information; the vehicle position prediction frame information comprises the center coordinates, the width and the height of the vehicle position prediction frame;
a parking stall frame obtaining module 504, configured to obtain parking stall frame information; the parking space frame information comprises a left upper corner coordinate and a right lower corner coordinate of the parking space frame;
a current parking space state judgment module 505, configured to determine a current parking space state according to the vehicle position prediction frame information and the parking space frame position information;
and the parking fee determining module 506 is configured to determine a parking fee according to the current parking space state.
The system further comprises a model building module, wherein the model building module specifically comprises:
the model establishing unit is used for establishing a convolutional neural network model based on a Darknet framework and a YOLO algorithm; the convolutional neural network model comprises a feature extractor and a detector;
and the model training unit is used for training and parameter adjustment of the convolutional neural network model by adopting the VOC data set to generate a trained convolutional neural network model.
The current parking space state determination module 505 specifically includes:
the vehicle position calculation unit is used for calculating the coordinates of the upper left corner and the lower right corner of the vehicle position prediction frame according to the vehicle position prediction frame information;
the overlapping area position calculating unit is used for calculating the overlapping area position of the vehicle position prediction frame and the parking space frame according to the upper left corner coordinate and the lower right corner coordinate of the vehicle position prediction frame and the upper left corner coordinate and the lower right corner coordinate of the parking space frame; the overlapping region position comprises an upper left corner coordinate and a lower right corner coordinate of an overlapping region of the vehicle position prediction frame and the parking space frame;
an overlap region area calculation unit for calculating an overlap region area from the overlap region position;
the parking space frame area calculation unit is used for calculating the area of the parking space frame according to the position of the parking space frame;
the confidence coefficient calculation unit is used for calculating the ratio of the area of the overlapping area to the area of the parking space frame as the confidence coefficient of the vehicle;
the confidence coefficient judging unit is used for judging whether the confidence coefficient of the vehicle is greater than or equal to a confidence coefficient threshold value or not and obtaining a first judgment result;
the occupied state judging unit is used for determining that the current parking space state is the occupied state if the first judgment result indicates that the confidence coefficient of the vehicle is greater than or equal to the confidence coefficient threshold;
and the empty state judging unit is used for determining that the current parking space state is an empty state if the confidence coefficient of the vehicle is smaller than the confidence coefficient threshold value.
The parking fee determination module 506 specifically includes:
the parking starting time recording unit is used for acquiring the parking starting time when the current parking space state is changed from an empty state to an occupied state;
the parking ending time recording unit is used for acquiring the parking ending time when the current parking space state is changed from the occupied state to the empty state;
the parking time calculation unit is used for calculating the parking time of the vehicle according to the starting parking time and the ending parking time;
and the parking fee calculation unit is used for determining the parking fee according to the parking time.
The system of the invention applies leading-edge computer vision research to design a convolutional neural network that dynamically identifies, locates, and detects vehicles in real time, tracking and time-charging vehicles as they enter and leave the parking lot. Each camera is monitored in a separate thread: the video frames returned by each camera in real time are detected, the ratio of the vehicle-to-parking-space overlap to the parking space area is computed, the parking space states are tallied in real time, and the parking space state information and timed charging information are displayed in the picture of the corresponding camera. The parking state of the lot is presented in real time through a unified, integrated software system, realizing intelligent management of the parking lot.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.
Claims (4)
1. An intelligent parking management method based on deep learning is characterized by comprising the following steps:
acquiring a video frame acquired by a parking lot camera;
loading a trained convolutional neural network model;
inputting the video frame into the trained convolutional neural network model, and outputting vehicle position prediction frame information; the vehicle position prediction frame information comprises the center coordinates, the width and the height of the vehicle position prediction frame;
acquiring parking stall frame information; the parking space frame information comprises a left upper corner coordinate and a right lower corner coordinate of the parking space frame; the parking stall frame information is acquired by adopting a parameterization adjustment mode, and the detailed acquisition mode is as follows:
obtaining the width-to-height ratio information of the parking space: with the width of the parking space denoted w and the height denoted h, the width-to-height ratio of the parking space is ε = w/h; the parking space deflection angle is set to α and the spacing between parking spaces to δ; since each camera manages three to five parking spaces, a parameter β is set to represent the number of parking spaces monitored by the same camera; all the preset parking space frames are regarded as a whole, and one parameter sets the initial position of the preset parking space frames by setting the position of only one corner of the whole, the position coordinate of the upper left corner of the parking space frame being set to p(x, y);
firstly, initializing the width and the height of the parking space frame to w0 and h0; since the parking spaces of different parking lots differ in size, the aspect ratio ε is usually supplied by the parking lot manager; in fact, once ε is set, only one of w0 and h0 needs to be known; suppose only h0 is set, then w0 = ε * h0; the parking space deflection angle is initialized to α0, the parking space frame spacing δ is initialized to δ0, the number of parking spaces is initialized to β0, and the position coordinate of the upper left corner of the parking space frame is p0(x0, y0); the initialization phase thus first fixes, in the camera picture, β0 frames with aspect ratio ε and height h0, spaced δ0 apart; after the camera is installed, the parking space frames are preset by adjusting p0(x0, y0) so that all the parking space frames lie in the row occupied by the parking spaces in the camera picture, and the positions of the parking space frames are then adjusted through ε, α, δ, h, and β so that the preset parking space frames in the camera picture approximately coincide with the stop lines of the actual parking spaces, the adjustment criterion being that the preset parking space frames can represent the actual parking spaces, so that the captured video frames can be detected more quickly;
determining the current parking space state according to the vehicle position prediction frame information and the parking space frame position information; the determining the current parking space state according to the vehicle position prediction frame information and the parking space frame position information specifically includes:
calculating the coordinates of the upper left corner and the lower right corner of the vehicle position prediction frame according to the vehicle position prediction frame information;
calculating the position of an overlapping area of the vehicle position prediction frame and the parking space frame according to the coordinates of the upper left corner and the lower right corner of the vehicle position prediction frame and the coordinates of the upper left corner and the lower right corner of the parking space frame; the overlapping region position comprises an upper left corner coordinate and a lower right corner coordinate of an overlapping region of the vehicle position prediction frame and the parking space frame;
calculating the area of the overlapping area according to the position of the overlapping area;
calculating the area of the parking stall frame according to the position of the parking stall frame;
calculating the ratio of the area of the overlapping area to the area of the parking space frame as the confidence coefficient of the vehicle;
judging whether the confidence of the vehicle is greater than or equal to a confidence threshold value or not, and obtaining a first judgment result;
if the first judgment result is that the confidence coefficient of the vehicle is greater than or equal to the confidence coefficient threshold value, determining that the current parking space state is an occupied state;
and if the confidence coefficient of the vehicle is smaller than the confidence coefficient threshold value, determining that the current parking space state is an empty state.
2. The intelligent parking management method based on deep learning of claim 1, further comprising, before the loading the trained convolutional neural network model:
establishing a convolutional neural network model based on a Darknet frame and a YOLO algorithm; the convolutional neural network model comprises a feature extractor and a detector;
and training and parameter adjusting are carried out on the convolutional neural network model by adopting a visual object type VOC data set, so that a trained convolutional neural network model is generated.
3. An intelligent parking management system based on deep learning, the system comprising:
the video acquisition module is used for acquiring video frames acquired by the parking lot camera;
the model loading module is used for loading the trained convolutional neural network model;
the vehicle position prediction module is used for inputting the video frame into the trained convolutional neural network model and outputting vehicle position prediction frame information; the vehicle position prediction frame information comprises the center coordinates, the width and the height of the vehicle position prediction frame;
the parking stall frame acquisition module is used for acquiring parking stall frame information; the parking space frame information comprises a left upper corner coordinate and a right lower corner coordinate of the parking space frame; the parking stall frame information is acquired by adopting a parameterization adjustment mode, and the detailed acquisition mode is as follows:
obtaining the width-to-height ratio information of the parking space: with the width of the parking space denoted w and the height denoted h, the width-to-height ratio of the parking space is ε = w/h; the parking space deflection angle is set to α and the spacing between parking spaces to δ; since each camera manages three to five parking spaces, a parameter β is set to represent the number of parking spaces monitored by the same camera; all the preset parking space frames are regarded as a whole, and one parameter sets the initial position of the preset parking space frames by setting the position of only one corner of the whole, the position coordinate of the upper left corner of the parking space frame being set to p(x, y);
firstly, initializing the width and the height of the parking space frame to w0 and h0; since the parking spaces of different parking lots differ in size, the aspect ratio ε is usually supplied by the parking lot manager; in fact, once ε is set, only one of w0 and h0 needs to be known; suppose only h0 is set, then w0 = ε * h0; the parking space deflection angle is initialized to α0, the parking space frame spacing δ is initialized to δ0, the number of parking spaces is initialized to β0, and the position coordinate of the upper left corner of the parking space frame is p0(x0, y0); the initialization phase thus first fixes, in the camera picture, β0 frames with aspect ratio ε and height h0, spaced δ0 apart; after the camera is installed, the parking space frames are preset by adjusting p0(x0, y0) so that all the parking space frames lie in the row occupied by the parking spaces in the camera picture, and the positions of the parking space frames are then adjusted through ε, α, δ, h, and β so that the preset parking space frames in the camera picture approximately coincide with the stop lines of the actual parking spaces, the adjustment criterion being that the preset parking space frames can represent the actual parking spaces, so that the captured video frames can be detected more quickly;
the current parking space state judging module is used for determining the current parking space state according to the vehicle position prediction frame information and the parking space frame position information;
the current parking space state judgment module specifically comprises:
the vehicle position calculation unit is used for calculating the coordinates of the upper left corner and the lower right corner of the vehicle position prediction frame according to the vehicle position prediction frame information;
the overlapping area position calculating unit is used for calculating the overlapping area position of the vehicle position prediction frame and the parking space frame according to the upper left corner coordinate and the lower right corner coordinate of the vehicle position prediction frame and the upper left corner coordinate and the lower right corner coordinate of the parking space frame; the overlapping region position comprises an upper left corner coordinate and a lower right corner coordinate of an overlapping region of the vehicle position prediction frame and the parking space frame;
an overlap region area calculation unit for calculating an overlap region area from the overlap region position;
the parking space frame area calculation unit is used for calculating the area of the parking space frame according to the position of the parking space frame;
the confidence coefficient calculation unit is used for calculating the ratio of the area of the overlapping area to the area of the parking space frame as the confidence coefficient of the vehicle;
the confidence coefficient judging unit is used for judging whether the confidence coefficient of the vehicle is greater than or equal to a confidence coefficient threshold value or not and obtaining a first judgment result;
the occupied state judging unit is used for determining that the current parking space state is the occupied state if the first judgment result indicates that the confidence coefficient of the vehicle is greater than or equal to the confidence coefficient threshold;
and the empty state judging unit is used for determining that the current parking space state is an empty state if the confidence coefficient of the vehicle is smaller than the confidence coefficient threshold value.
4. The intelligent parking management system based on deep learning of claim 3, further comprising a model building module, wherein the model building module specifically comprises:
the model establishing unit is used for establishing a convolutional neural network model based on a Darknet framework and a YOLO algorithm; the convolutional neural network model comprises a feature extractor and a detector;
and the model training unit is used for training and parameter adjustment of the convolutional neural network model by adopting the VOC data set to generate a trained convolutional neural network model.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN201910089082.4A CN109784306B (en) | 2019-01-30 | 2019-01-30 | Intelligent parking management method and system based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109784306A CN109784306A (en) | 2019-05-21 |
CN109784306B true CN109784306B (en) | 2020-03-10 |
Family
ID=66503758
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910089082.4A Active CN109784306B (en) | 2019-01-30 | 2019-01-30 | Intelligent parking management method and system based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109784306B (en) |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110264765A (en) * | 2019-06-26 | 2019-09-20 | 广州小鹏汽车科技有限公司 | Detection method, device, computer equipment and the storage medium of vehicle parking state |
CN110852151B (en) * | 2019-09-26 | 2024-02-20 | 深圳市金溢科技股份有限公司 | Method and device for detecting shielding of berths in roads |
CN110910655A (en) * | 2019-12-11 | 2020-03-24 | 深圳市捷顺科技实业股份有限公司 | Parking management method, device and equipment |
CN111292353B (en) * | 2020-01-21 | 2023-12-19 | 成都恒创新星科技有限公司 | Parking state change identification method |
CN111768648B (en) * | 2020-06-10 | 2022-02-15 | 浙江大华技术股份有限公司 | Vehicle access determining method and system |
CN111784857A (en) * | 2020-06-22 | 2020-10-16 | 浙江大华技术股份有限公司 | Parking space management method and device and computer storage medium |
CN111951601B (en) * | 2020-08-05 | 2021-10-26 | 智慧互通科技股份有限公司 | Method and device for identifying parking positions of distribution vehicles |
CN111932933B (en) * | 2020-08-05 | 2022-07-26 | 杭州像素元科技有限公司 | Urban intelligent parking space detection method and equipment and readable storage medium |
CN112037504B (en) * | 2020-09-09 | 2021-06-25 | 深圳市润腾智慧科技有限公司 | Vehicle parking scheduling management method and related components thereof |
CN113205691B (en) * | 2021-04-26 | 2023-05-02 | 超级视线科技有限公司 | Method and device for identifying vehicle position |
CN113421382B (en) * | 2021-06-01 | 2022-08-30 | 杭州鸿泉物联网技术股份有限公司 | Detection method, system, equipment and storage medium for shared electric bill standard parking |
CN113706920B (en) * | 2021-08-20 | 2023-08-11 | 云往(上海)智能科技有限公司 | Parking behavior judging method and intelligent parking system |
CN114067602B (en) * | 2021-11-16 | 2024-03-26 | 深圳市捷顺科技实业股份有限公司 | Parking space state judging method, system and parking space management device |
CN114267180B (en) * | 2022-03-03 | 2022-05-31 | 科大天工智能装备技术(天津)有限公司 | Parking management method and system based on computer vision |
CN114724107B (en) * | 2022-03-21 | 2023-09-01 | 北京卓视智通科技有限责任公司 | Image detection method, device, equipment and medium |
CN115035741B (en) * | 2022-04-29 | 2024-03-22 | 阿里云计算有限公司 | Method, device, storage medium and system for discriminating parking position and parking |
CN114694124B (en) * | 2022-05-31 | 2022-08-26 | 成都国星宇航科技股份有限公司 | Parking space state detection method and device and storage medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100449579C (en) * | 2006-04-21 | 2009-01-07 | 浙江工业大学 | All-round computer vision-based electronic parking guidance system |
CN100559420C (en) * | 2007-03-29 | 2009-11-11 | 汤一平 | Parking guidance system based on computer vision |
CN105760849B (en) * | 2016-03-09 | 2019-01-29 | 北京工业大学 | Target object behavioral data acquisition methods and device based on video |
CN106935035B (en) * | 2017-04-07 | 2019-07-23 | 西安电子科技大学 | Parking offense vehicle real-time detection method based on SSD neural network |
- 2019-01-30 CN CN201910089082.4A patent/CN109784306B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN109784306A (en) | 2019-05-21 |
Similar Documents
Publication | Title
---|---
CN109784306B (en) | Intelligent parking management method and system based on deep learning
CN110472496B (en) | Traffic video intelligent analysis method based on target detection and tracking
CN104303193B (en) | Target classification based on cluster
WO2021139049A1 (en) | Detection method, detection apparatus, monitoring device, and computer readable storage medium
CN113420607A (en) | Multi-scale target detection and identification method for unmanned aerial vehicle
CN101447075B (en) | Wide-angle lens-based FPGA & DSP embedded multi-valued targets threshold categorization tracking device
Peng et al. | Drone-based vacant parking space detection
CN109670405B (en) | Complex background pedestrian detection method based on deep learning
CN111191576A (en) | Personnel behavior target detection model construction method, intelligent analysis method and system
CN111626128A (en) | Improved YOLOv3-based pedestrian detection method in orchard environment
CN109558815A (en) | Real-time multi-face detection and tracking method
CN110781964A (en) | Human body target detection method and system based on video image
CN102982341A (en) | Self-intended crowd density estimation method for camera capable of straddling
CN112270381B (en) | People flow detection method based on deep learning
CN105844659A (en) | Moving part tracking method and device
CN111476089A (en) | Pedestrian detection method, system and terminal based on multi-mode information fusion in image
CN110008834B (en) | Steering wheel intervention detection and statistics method based on vision
CN104166836B (en) | Multi-scale engineering vehicle recognition method and system based on block modeling with multiple features
Ouyang et al. | Aerial target detection based on the improved YOLOv3 algorithm
CN111461222A (en) | Method and device for acquiring target object track similarity and electronic equipment
CN110880205A (en) | Parking charging method and device
KR102240638B1 (en) | Parking guidance method and system using boundary pixel data estimated in vehicle image and analysis of vehicle model viewpoint
CN106096554A (en) | Parking space occlusion judgment method and system
CN108537828A (en) | Store data analysis method and system
CN114155571A (en) | Method for mixed extraction of pedestrians and human faces in video
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||