CN111597902A - Motor vehicle illegal parking monitoring method

Info

Publication number: CN111597902A
Application number: CN202010299684.5A
Authority: CN (China)
Prior art keywords: motor vehicle, network, representing, sample, ith
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN111597902B
Inventors: 邵奇可, 卢熠, 颜世航, 陈一苇
Current Assignee: Zhejiang University of Technology (ZJUT)
Original Assignee: Zhejiang University of Technology (ZJUT)
Application filed by Zhejiang University of Technology (ZJUT); priority to CN202010299684.5A; published as CN111597902A and, after grant, as CN111597902B


Classifications

    • G06V20/00 Scenes; scene-specific elements (Physics; Computing; Image or video recognition or understanding)
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches (Pattern recognition)
    • G06N3/045 Combinations of networks (Computing arrangements based on biological models; Neural networks; Architecture)
    • G06N3/08 Learning methods (Neural networks)
    • G06V2201/08 Detecting or categorising vehicles (Indexing scheme relating to image or video recognition or understanding)
    • Y02T10/40 Engine management systems (Climate change mitigation technologies related to transportation)

Abstract

The method for monitoring illegal parking of motor vehicles comprises the following steps: 1) collect a large number of images from elevated street cameras and other motor vehicle data sets, annotate the data sets according to field-management requirements, and determine the one-stage target detection algorithm model to be used; 2) construct a parameter-adaptive loss function LOSS and ...

Description

Motor vehicle illegal parking monitoring method
Technical Field
The invention belongs to the technical field of image recognition and computer vision, and relates to a motor vehicle illegal parking monitoring method.
Background
At present, for the problem of detecting illegally parked motor vehicles on the street, the traditional detection methods are mainly microwave radar detection, infrared detection, geomagnetic induction-coil detection and radio-frequency identification. These methods require special sensing equipment to be installed at many positions along the street, which brings high engineering cost, difficult later maintenance, and high cost in manpower and material resources. By contrast, using the existing street security cameras to identify illegally parked motor vehicles within a street area requires no change to the street surface, and the equipment is easy to maintain and repair, so a video-based illegal-parking detection system has good popularization value.
Judging from the video stream of a security camera whether a motor vehicle is parked in a street area places high demands on both the accuracy of the recognition algorithm and the real-time delivery of illegal-parking information in this application scenario. A deep-learning-based target detection algorithm is therefore a reasonable choice. Deep-learning target detection algorithms are divided into two-stage models and one-stage models. Although two-stage detection models achieve better detection accuracy, their forward inference is slow and cannot meet the real-time requirement of the business scenario. Traditional one-stage detection models have good real-time performance but cannot reach the detection accuracy of two-stage models. Moreover, when detecting targets in an image, a large number of street background objects are present; although the loss value of each background object is small, their number far exceeds that of the motor vehicle targets, so current traditional detection methods struggle to achieve high recognition accuracy in such complex scenes, and a highly adaptive target detection method is urgently needed.
Disclosure of Invention
The present invention aims to overcome the above drawbacks of the prior art and to provide a motor vehicle illegal-parking monitoring method with high adaptability and high recognition accuracy.
The invention improves the loss function in a one-stage target detection algorithm model. The loss function serves as the objective function of the gradient-descent process in the convolutional neural network and directly influences the training result of the network. Since the quality of the training result directly determines the recognition accuracy of target detection, the design of the loss function is particularly important. During training of a one-stage detection model, the images contain a large number of street background objects; although the loss value of each background object is small, their number far exceeds that of the motor vehicle targets, so when the loss is computed the low-probability street-background loss overwhelms the motor vehicle target loss and the model accuracy drops sharply. Embedding a focal loss function into the detection model improves the training accuracy. However, the focal loss contains hyper-parameters that must be set from empirical values and whose magnitudes cannot be adjusted automatically according to the predicted class probability.
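For reference, the standard focal loss referred to above can be written in a few lines; the fixed weighting factor alpha and focusing parameter gamma below are exactly the empirically chosen hyper-parameters that the invention aims to make adaptive. This is an illustrative NumPy sketch, not the patent's own formulation:

    import numpy as np

    def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
        """Standard focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

        p     : predicted foreground (motor vehicle) probabilities, shape (N,)
        y     : ground-truth labels, 1 = motor vehicle, 0 = street background
        alpha : fixed class-balancing weight, chosen empirically
        gamma : fixed focusing parameter, chosen empirically
        """
        p = np.clip(p, eps, 1.0 - eps)
        p_t = np.where(y == 1, p, 1.0 - p)              # probability of the true class
        alpha_t = np.where(y == 1, alpha, 1.0 - alpha)  # fixed, not adaptive
        return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)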
Aiming at the problems that the focal loss hyper-parameters must be adjusted manually during training and that the parameters are not adaptive, the invention provides a deep-learning loss function based on semi-supervised learning.
The method for monitoring illegal parking of a motor vehicle comprises the following steps:

Step 1: construct a motor vehicle sample data set M, a training data set T and a verification data set V; label the number of motor vehicle sample categories C, the training batch size batch, the number of training batches batches, the learning rate l_rate, and the proportionality coefficient ζ between the training data set T and the verification data set V.

ζ = Card(V)/Card(T)

wherein V ∪ T = M, C ∈ N+, ζ ∈ (0,1), batches ∈ N+, l_rate ∈ N+, batch ∈ N+; h_k and w_k represent the height and width of image t_k ∈ M, and r represents the number of channels of the image.

Step 2: determine the one-stage target detection model to be trained. Set the depth of the convolutional neural network to L, the set of network convolution-layer kernels to G, the network output layer to a fully connected form with kernel set A, and the set of network feature maps to U; the number of grids corresponding to the kth feature map of the l-th layer network and the anchor point set M are defined by formulas (omitted), wherein the symbols of those formulas respectively represent the height, width and dimension of the convolution kernels, feature maps and anchor points of the l-th layer network, the padding (fill) size of the l-th layer convolution kernels, and the convolution stride of the l-th layer network; f represents the excitation function of the convolution neurons, θ represents the selected input features, Λ ∈ N+ denotes the total number of anchor points in the l-th layer network, Ξ ∈ N+ denotes the total number of output-layer nodes, Φ ∈ N+ denotes the total number of feature maps of the l-th layer, and Δ ∈ N+ denotes the total number of convolution kernels of the l-th layer.
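Although the formal definitions of G, A and U are not reproduced above, the relation between kernel size, padding, stride and output feature-map size in such a convolution layer follows the usual convolution arithmetic; the helper below is a generic sketch, and the function and variable names are not taken from the patent:

    def conv_output_size(in_size, kernel, stride, padding):
        """Spatial size of a convolution output: floor((in + 2p - k) / s) + 1.
        Applies independently to the height and width of a feature map."""
        return (in_size + 2 * padding - kernel) // stride + 1

    # e.g. a 416x416 input, 3x3 kernel, stride 2, padding 1 -> 208x208 feature map
    assert conv_output_size(416, kernel=3, stride=2, padding=1) == 208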
Step 3: design a parameter-adaptive focal loss function LOSS (formula omitted), in which:

the confidence loss term denotes, for the jth anchor point in the ith grid of the l-th layer network, the confidence loss of the motor vehicle sample and the street background sample in image t_k; similarly, a second term denotes the loss function of the motor vehicle sample prediction box, and a third term denotes the loss function of the motor vehicle class, with λ ∈ Q a parameter of the class loss.

The loss functions of the motor vehicle sample object and the street background object are expressed by formulas (omitted) built from the following quantities: the probability value of the foreground motor vehicle sample predicted by the jth anchor point in the ith grid of the l-th layer network and, correspondingly, the street background probability value; the abscissa and ordinate of the central point of the prediction box of that anchor point and, likewise, the abscissa and ordinate of the central point of the motor vehicle sample calibration box; the shortest Euclidean distances from the central point of the prediction box to the box boundaries and, likewise, from the central point of the motor vehicle sample calibration box to its boundaries; the predicted motor vehicle sample class of that anchor point and, similarly, the calibrated motor vehicle sample class; and indicators of whether a motor vehicle sample or a street background sample is predicted.

These indicators are calculated by formulas (omitted) in which the parameter α ∈ (0, 1), iou_j represents the overlap ratio between the box of anchor point m_j in the ith grid and the motor vehicle sample calibration box, and miou represents the maximum overlap ratio.
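The assignment just described relies on the overlap ratio iou_j of each anchor box with the motor vehicle calibration box and on the maximum overlap miou. A hedged sketch of that computation is given below; the (x1, y1, x2, y2) box representation is an assumption, since the patent describes boxes by their central points and boundary distances:

    def iou(box_a, box_b):
        """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
        ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    def assign_responsible_anchor(anchor_boxes, calibration_box):
        """Return each anchor's overlap iou_j with the calibration box and the
        index of the anchor whose overlap equals miou, the maximum overlap."""
        ious = [iou(a, calibration_box) for a in anchor_boxes]
        miou = max(ious)
        return ious, ious.index(miou)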
Step 4: based on the loss function of the one-stage target detection algorithm model in Step 3, train the model with the training set by gradient descent until the model converges. In the model testing stage, an alarm time timer is set; when the system model detects a motor vehicle, its detailed class and position information are automatically recorded and timing starts; after the given time timer has elapsed, if the detailed class and position information of the motor vehicle detected again are consistent with the previously recorded information, an alarm is issued.
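Step 4's "gradient descent until convergence" can be pictured with a generic loop; everything here, including the stopping rule and the toy objective in the usage line, is illustrative and merely stands in for the detection network and the Step 3 loss:

    import numpy as np

    def train_until_converged(grad_fn, w0, l_rate=0.001, batches=1000, tol=1e-6):
        """Generic gradient-descent loop: update until the step becomes smaller
        than tol or the batch budget is exhausted (the stopping rule is assumed)."""
        w = np.asarray(w0, dtype=float)
        for _ in range(batches):
            step = l_rate * grad_fn(w)
            w = w - step
            if np.linalg.norm(step) < tol:
                break
        return w

    # toy usage: minimise f(w) = (w - 3)^2, whose gradient is 2 * (w - 3)
    w_star = train_until_converged(lambda w: 2 * (w - 3.0), w0=[0.0], l_rate=0.1)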
The invention has the advantages that the parameter adaptability of the illegal-parking monitoring model is improved and the accuracy of illegal-parking monitoring is greatly increased.
Drawings
Fig. 1 is a network configuration diagram of the convolutional neural network of the present invention.
Fig. 2 is a diagram of a loss function structure in the convolutional neural network of the present invention.
FIG. 3 is a flowchart of the motor vehicle violation detection algorithm deployment based on the convolutional neural network of the present invention.
Detailed Description
To better explain the technical solution of the invention, it is further described below with reference to the accompanying drawings.
The method for monitoring illegal parking of a motor vehicle comprises the following steps:
Step 1: collect a large amount of motor vehicle image data shot from high-mounted cameras; construct a motor vehicle sample data set M of 10000 images, a training data set T of 8000 images and a verification data set V of 2000 images. The number of labeled motor vehicle categories C is 5, namely sports car, off-road vehicle, van, minibus and ordinary car; the training batch size batch is 4, the number of training batches batches is 1000, the learning rate l_rate is 0.001, and the proportionality coefficient ζ between the training data set T and the verification data set V is 0.25. The height, width and number of channels of all images are set to be consistent, with the image height h_k and width w_k set to 416 and 416 respectively and the number of channels r set to 3.
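The settings of this embodiment can be gathered in one place; the dictionary below merely restates the values listed above, and the key names are arbitrary rather than taken from the patent:

    embodiment_config = {
        "dataset_size": 10000,        # Card(M)
        "train_size": 8000,           # Card(T)
        "val_size": 2000,             # Card(V)
        "zeta": 2000 / 8000,          # proportionality coefficient = 0.25
        "num_classes": 5,             # sports car, off-road vehicle, van, minibus, ordinary car
        "batch": 4,                   # training batch size
        "batches": 1000,              # number of training batches
        "l_rate": 0.001,              # learning rate
        "image_size": (416, 416, 3),  # h_k, w_k, r
    }

With these values, ζ = 2000/8000 = 0.25, matching the stated proportionality coefficient.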
Step 2: the one-stage target detection model is determined to be YOLOv3, and the depth L of the convolutional neural network is set to 139. The height, width and dimension of the convolution kernels are set as shown in FIG. 1; the padding size of the convolution kernels defaults to 1 and the convolution stride takes its default setting. The excitation function f of the convolution neurons defaults to the leaky_relu excitation function. Anchor points are shared across the network layers, and the anchor point set M is set to {(10,13), (30,61), (156,198)}, i.e. the total number of anchor points Λ in each network layer is 3. The network output layer adopts a fully connected form, and the convolution kernel set A is set to {(1,1,30), (1,1,30)}, i.e. the total number of output-layer nodes is set to 3.
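The claims state that the anchor points are determined by a K-means clustering method. The sketch below shows one common way of doing this for detector anchors, clustering labelled-box width-height pairs under a 1 - IoU distance; the details are an assumption, since the patent does not spell out the clustering procedure:

    import random

    def wh_iou(a, b):
        """IoU of two boxes given as (w, h) pairs, assuming a shared corner."""
        inter = min(a[0], b[0]) * min(a[1], b[1])
        return inter / (a[0] * a[1] + b[0] * b[1] - inter)

    def kmeans_anchors(box_whs, k=3, iters=50, seed=0):
        """Cluster labelled-box (w, h) pairs into k anchors with a 1 - IoU distance."""
        centers = random.Random(seed).sample(box_whs, k)
        for _ in range(iters):
            groups = [[] for _ in range(k)]
            for wh in box_whs:
                best = max(range(k), key=lambda j: wh_iou(wh, centers[j]))  # nearest = highest IoU
                groups[best].append(wh)
            centers = [
                (sum(w for w, _ in g) / len(g), sum(h for _, h in g) / len(g)) if g else centers[j]
                for j, g in enumerate(groups)
            ]
        return [(round(w), round(h)) for w, h in centers]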
Step 3: as shown in FIG. 2, the parameter-adaptive focal loss function LOSS is constructed, where the parameter α is set to 0.25 and the parameter λ to 0.5.
Step 4: based on the loss function of the one-stage target detection algorithm model in Step 3, the model is trained with the training set by gradient descent until it converges. Referring to FIG. 3, real-time detection is then performed on the video stream of the camera installed in the street, with the alarm time timer set to 3 minutes. When the system model detects a motor vehicle, its detailed class and position information are automatically recorded and timing starts; after 3 minutes, if the detailed class and position information of the motor vehicle detected again are consistent with the previously detected information, an alarm is issued, thereby realizing management of illegally parked motor vehicles in the street.
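The 3-minute re-detection rule of this embodiment can be expressed as a small piece of control logic. The matching criterion below (same class, box centres within a pixel tolerance) is an assumption; the patent only requires that the re-detected class and position information be consistent with the recorded information:

    import time

    ALARM_SECONDS = 3 * 60   # the embodiment's alarm timer: 3 minutes
    records = []             # each record: {"cls", "box", "first_seen"}

    def boxes_consistent(a, b, tol=20):
        """Assumed consistency test: box centre points within tol pixels."""
        ax, ay = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
        bx, by = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
        return abs(ax - bx) <= tol and abs(ay - by) <= tol

    def on_detection(cls, box, now=None):
        """Record each detected vehicle; raise an alarm when the same class is
        re-detected at a consistent position after the timer has elapsed."""
        now = time.time() if now is None else now
        for rec in records:
            if rec["cls"] == cls and boxes_consistent(rec["box"], box):
                if now - rec["first_seen"] >= ALARM_SECONDS:
                    print(f"ALARM: illegally parked {cls} at {box}")
                    records.remove(rec)
                return
        records.append({"cls": cls, "box": box, "first_seen": now})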
The embodiments described in this specification are merely illustrative of implementations of the inventive concept, and the scope of the present invention should not be considered limited to the specific forms set forth in the embodiments; it also covers equivalents thereof as may occur to those skilled in the art upon consideration of the inventive concept.

Claims (1)

1. A method for monitoring illegal parking of a motor vehicle, comprising the following steps:

step 1: constructing a motor vehicle sample data set M, a training data set T and a verification data set V; labeling the number of motor vehicle sample categories C, the training batch size batch, the number of training batches batches, the learning rate l_rate, and the proportionality coefficient ζ between the training data set T and the verification data set V;

ζ = Card(V)/Card(T)

wherein V ∪ T = M, C ∈ N+, ζ ∈ (0,1), batches ∈ N+, l_rate ∈ N+, batch ∈ N+; h_k and w_k represent the height and width of image t_k ∈ M, and r represents the number of channels of the image;

step 2: determining the one-stage target detection model to be trained; setting the depth of the convolutional neural network to L, the set of network convolution-layer kernels to G, the network output layer to a fully connected form with kernel set A, and the set of network feature maps to U; the number of grids corresponding to the kth feature map of the l-th layer network and the anchor point set M are defined by formulas (omitted), wherein the symbols of those formulas respectively represent the height, width and dimension of the convolution kernels, feature maps and anchor points of the l-th layer network, the padding size of the l-th layer convolution kernels, and the convolution stride of the l-th layer network; f represents the excitation function of the convolution neurons, θ represents the selected input features, Λ ∈ N+ represents the total number of anchor points in the l-th layer network, Ξ ∈ N+ represents the total number of output-layer nodes, Φ ∈ N+ represents the total number of feature maps of the l-th layer, and Δ ∈ N+ represents the total number of convolution kernels of the l-th layer;

step 3: designing a parameter-adaptive focal loss function (formula omitted), wherein the confidence loss term represents, for the jth anchor point in the ith grid of the l-th layer network, the confidence loss of the motor vehicle sample and the street background sample in image t_k; similarly, a second term represents the loss function of the motor vehicle sample prediction box and a third term represents the loss function of the motor vehicle class, λ ∈ Q being a parameter of the class loss; the loss functions of the motor vehicle sample object and the street background object are expressed by formulas (omitted) built from the following quantities: the probability value of the foreground motor vehicle sample predicted by the jth anchor point in the ith grid of the l-th layer network and, correspondingly, the street background probability value; the abscissa and ordinate of the central point of the prediction box of that anchor point and, likewise, the abscissa and ordinate of the central point of the motor vehicle sample calibration box; the shortest Euclidean distances from the central point of the prediction box to the box boundaries and, likewise, from the central point of the motor vehicle sample calibration box to its boundaries; the predicted motor vehicle sample class of that anchor point and, similarly, the calibrated motor vehicle sample class; and indicators of whether a motor vehicle sample or a street background sample is predicted, which are calculated by formulas (omitted) in which the parameter α ∈ (0, 1), iou_j represents the overlap ratio between the box of anchor point m_j in the ith grid and the motor vehicle sample calibration box, and miou represents the maximum overlap ratio;

step 4: based on the loss function of the one-stage target detection algorithm model in step 3, performing gradient-descent training on the model with the training set until the model converges; in the system operation stage, using the one-stage target detection model to extract network feature values, determining the anchor points by a K-means clustering method, and setting an alarm time timer; when the system model detects a motor vehicle, automatically recording its detailed class and position information and starting timing; and after the given time has elapsed, if the detailed class and position information of the motor vehicle detected again are consistent with the previously detected information, issuing an alarm.
CN202010299684.5A 2020-04-16 2020-04-16 Method for monitoring motor vehicle illegal parking Active CN111597902B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010299684.5A CN111597902B (en) 2020-04-16 2020-04-16 Method for monitoring motor vehicle illegal parking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010299684.5A CN111597902B (en) 2020-04-16 2020-04-16 Method for monitoring motor vehicle illegal parking

Publications (2)

Publication Number Publication Date
CN111597902A 2020-08-28
CN111597902B 2023-08-11

Family

ID=72189003

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010299684.5A Active CN111597902B (en) 2020-04-16 2020-04-16 Method for monitoring motor vehicle illegal parking

Country Status (1)

Country Link
CN (1) CN111597902B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112289037A (en) * 2020-10-29 2021-01-29 南通中铁华宇电气有限公司 Motor vehicle illegal parking detection method and system based on high visual angle under complex environment
CN112711996A (en) * 2020-12-22 2021-04-27 中通服咨询设计研究院有限公司 System for detecting occupancy of fire fighting access
CN115082903A (en) * 2022-08-24 2022-09-20 深圳市万物云科技有限公司 Non-motor vehicle illegal parking identification method and device, computer equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109902677A (en) * 2019-01-30 2019-06-18 深圳北斗通信科技有限公司 A kind of vehicle checking method based on deep learning
CN110443208A (en) * 2019-08-08 2019-11-12 南京工业大学 A kind of vehicle target detection method, system and equipment based on YOLOv2
CN110490156A (en) * 2019-08-23 2019-11-22 哈尔滨理工大学 A kind of fast vehicle detection method based on convolutional neural networks
WO2020048242A1 (en) * 2018-09-04 2020-03-12 阿里巴巴集团控股有限公司 Method and apparatus for generating vehicle damage image based on gan network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020048242A1 (en) * 2018-09-04 2020-03-12 阿里巴巴集团控股有限公司 Method and apparatus for generating vehicle damage image based on gan network
CN109902677A (en) * 2019-01-30 2019-06-18 深圳北斗通信科技有限公司 A kind of vehicle checking method based on deep learning
CN110443208A (en) * 2019-08-08 2019-11-12 南京工业大学 A kind of vehicle target detection method, system and equipment based on YOLOv2
CN110490156A (en) * 2019-08-23 2019-11-22 哈尔滨理工大学 A kind of fast vehicle detection method based on convolutional neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
邵奇可 et al.: "Parking space detection algorithm for expressway service areas based on deep learning", Computer Systems & Applications *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112289037A (en) * 2020-10-29 2021-01-29 南通中铁华宇电气有限公司 Motor vehicle illegal parking detection method and system based on high visual angle under complex environment
CN112289037B (en) * 2020-10-29 2022-06-07 南通中铁华宇电气有限公司 Motor vehicle illegal parking detection method and system based on high visual angle under complex environment
CN112711996A (en) * 2020-12-22 2021-04-27 中通服咨询设计研究院有限公司 System for detecting occupancy of fire fighting access
CN115082903A (en) * 2022-08-24 2022-09-20 深圳市万物云科技有限公司 Non-motor vehicle illegal parking identification method and device, computer equipment and storage medium
CN115082903B (en) * 2022-08-24 2022-11-11 深圳市万物云科技有限公司 Non-motor vehicle illegal parking identification method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN111597902B (en) 2023-08-11

Similar Documents

Publication Publication Date Title
CN108983219B (en) Fusion method and system for image information and radar information of traffic scene
CN111597902A (en) Motor vehicle illegal parking monitoring method
CN103258432B (en) Traffic accident automatic identification processing method and system based on videos
CN111597901A (en) Illegal billboard monitoring method
CN110717387B (en) Real-time vehicle detection method based on unmanned aerial vehicle platform
JP4723582B2 (en) Traffic sign detection method
EP3807837A1 (en) Vehicle re-identification techniques using neural networks for image analysis, viewpoint-aware pattern recognition, and generation of multi-view vehicle representations
CN105184271A (en) Automatic vehicle detection method based on deep learning
CN114023062B (en) Traffic flow information monitoring method based on deep learning and edge calculation
CN106462737A (en) Systems and methods for haziness detection
CN111709336B (en) Expressway pedestrian detection method, equipment and readable storage medium
CN111340151B (en) Weather phenomenon recognition system and method for assisting automatic driving of vehicle
CN111508269B (en) Open type parking space vehicle distinguishing method and device based on image recognition
US9299008B2 (en) Unsupervised adaptation method and automatic image classification method applying the same
Sayeed et al. Bangladeshi Traffic Sign Recognition and Classification using CNN with Different Kinds of Transfer Learning through a new (BTSRB) Dataset
CN111597900A (en) Illegal dog walking identification method
CN108932839B (en) Method and device for judging vehicles in same-driving mode
CN113537170A (en) Intelligent traffic road condition monitoring method and computer readable storage medium
CN112509321A (en) Unmanned aerial vehicle-based driving control method and system for urban complex traffic situation and readable storage medium
CN111597897A (en) Parking space identification method for high-speed service area
CN111339823A (en) Threshing and sunning ground detection method based on machine vision and back projection algorithm
CN109583282B (en) Vector road determining method and device
CN116434056A (en) Target identification method and system based on radar fusion and electronic equipment
CN113593256A (en) Unmanned aerial vehicle intelligent driving-away control method and system based on city management and cloud platform
KR20220071822A (en) Identification system and method of illegal parking and stopping vehicle numbers using drone images and artificial intelligence technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant