CN111597897A - Parking space identification method for high-speed service area - Google Patents

Parking space identification method for high-speed service area

Info

Publication number
CN111597897A
Authority
CN
China
Prior art keywords: network, representing, vehicle, loss function, target detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010297837.2A
Other languages
Chinese (zh)
Other versions
CN111597897B (en)
Inventor
邵奇可
卢熠
颜世航
陈一苇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN202010297837.2A
Publication of CN111597897A
Application granted
Publication of CN111597897B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model based on distances to training or reference patterns
    • G06F 18/24133 Distances to prototypes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The method for identifying the parking spaces in the high-speed service area comprises the following steps: 1) collecting a large number of images from high-mounted (high-altitude) cameras in the parking lot together with other vehicle data sets, calibrating (labeling) the data sets according to the site-management requirements, and determining the one-stage target detection algorithm model to be used; 2) constructing a parameter-adaptive loss function [formula image] and …

Description

Parking space identification method for high-speed service area
Technical Field
The invention belongs to the technical field of image recognition and computer vision, and relates to a parking space recognition method in a high-speed service area.
Background
At present, the traditional methods for detecting parking-space occupancy in a high-speed service area mainly include microwave radar detection, infrared detection, geomagnetic induction coil detection and radio frequency identification. These methods require dedicated sensing equipment to be installed for every parking space of the service-area parking lot, which entails high engineering cost, difficult later maintenance, and a large expenditure of manpower and material resources. Alternatively, the security cameras already present in the service-area parking lot can be used to identify the parking-space states in real time and then aggregate the parking-space information for the area. Because this approach reuses the existing parking-lot monitoring equipment, requires no modification of the parking-space ground, and keeps the equipment easy to maintain, such a video-based parking-space detection system has great promotion value.
Identifying parking-space states from the security-camera video stream places high demands on the accuracy of the recognition algorithm and on the real-time availability of vacant-space information in the application scenario, so a target detection algorithm based on deep learning is a reasonable choice. Deep-learning target detection algorithms are divided into two-stage and one-stage models. Although two-stage convolutional neural network models achieve better detection accuracy, their forward inference is slow and cannot meet the real-time requirement of the business scenario. Traditional one-stage target detection models run in real time but do not reach the detection accuracy of two-stage models. A high-speed service-area parking-space identification method based on a parameter-adaptive focal loss function helps improve the detection accuracy of the system while keeping its real-time performance within the requirements of the application scenario.
Disclosure of Invention
To overcome the defects of the prior art, the invention provides a method for identifying parking spaces in a high-speed service area that improves both detection accuracy and real-time performance.
The invention improves the loss function of a one-stage target detection algorithm model. The loss function serves as the objective function of the gradient-descent process in the convolutional neural network and directly influences the training result of the network. The quality of the training result in turn determines the recognition accuracy of target detection, so the design of the loss function is particularly important.
During training of a one-stage target detection model, the network encounters a large number of service-area background objects in each image when detecting targets. Although the loss value of each background object is small, background objects far outnumber the vehicle targets, so when the loss is accumulated the many low-probability background losses overwhelm the vehicle target losses and the model accuracy drops sharply; a focal loss function is therefore embedded into the one-stage target detection model to improve training accuracy. However, the focal loss function contains hyper-parameters that must be set from empirical values and whose magnitudes cannot be adjusted automatically according to the predicted class probability.
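For reference, the following is a minimal sketch of the standard focal loss used in one-stage detectors, which illustrates how the fixed hyper-parameters alpha and gamma down-weight the abundant, easy background samples; the NumPy formulation and function name are illustrative and are not taken from the patent:

    import numpy as np

    def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
        """Standard focal loss for the vehicle/background confidence prediction.

        p: predicted foreground (vehicle) probability, values in (0, 1)
        y: ground-truth label, 1 for a vehicle sample, 0 for parking-lot background
        alpha, gamma: hand-set hyper-parameters, the ones the patent seeks to
        replace with an adaptive weighting
        """
        p = np.clip(np.asarray(p, dtype=float), eps, 1.0 - eps)
        y = np.asarray(y)
        p_t = np.where(y == 1, p, 1.0 - p)            # probability of the true class
        alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
        # (1 - p_t)^gamma shrinks the loss of easy, well-classified samples,
        # so the many background boxes no longer drown out the vehicle boxes
        return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)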
Therefore, aiming at the problems that the focal loss function requires manual tuning of its hyper-parameters during training and that these parameters are not adaptive, the invention provides a deep-learning loss function based on semi-supervised learning. This loss function replaces the fixed hyper-parameters with a weighting scheme, so that the network hyper-parameters can be adjusted adaptively during gradient descent, which further improves the learning efficiency of the network.
To solve this technical problem, a parameter-adaptive focal loss function is adopted to strengthen network training and improve the recognition accuracy of the system.
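The exact adaptive weighting used by the invention is given only as formula images further below, so the following sketch is an assumption rather than the patent's formula; it merely illustrates the idea of deriving the modulating weight from the predicted probability itself instead of from hand-set hyper-parameters:

    import numpy as np

    def adaptive_focal_loss(p, y, eps=1e-7):
        """Illustrative parameter-adaptive focal loss (assumed form only).

        The modulating weight follows the predicted probability of the true
        class, so no empirical alpha/gamma values have to be tuned by hand.
        """
        p = np.clip(np.asarray(p, dtype=float), eps, 1.0 - eps)
        p_t = np.where(np.asarray(y) == 1, p, 1.0 - p)
        weight = 1.0 - p_t          # larger weight for poorly classified samples
        return -weight * np.log(p_t)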
The method for identifying the parking spaces in the high-speed service area comprises the following steps:
Step 1: Construct the high-speed service-area parking-lot data set M, the training data set T, the verification data set V, the number of labeled vehicle categories C, the training batch size batch, the number of training batches batches, the learning rate l_rate, and the proportionality coefficient ζ between the training data set T and the verification data set V.
[Formula images: definitions of the data sets M, T and V]
where V ∪ T = M, C ∈ N⁺, ζ ∈ (0, 1), batches ∈ N⁺, l_rate ∈ R⁺, batch ∈ N⁺; h_k and w_k denote the height and width of image t_k, and r denotes the number of channels of the image.
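A minimal sketch of the Step 1 split, assuming (as in the embodiment: 10000 images, 8000 training, 2000 verification, ζ = 0.25) that ζ is the ratio |V| / |T|; the function name and data handling are illustrative:

    import random

    def split_dataset(samples, zeta=0.25, seed=0):
        """Split the parking-lot image set M into a training set T and a
        verification set V such that V ∪ T = M and |V| / |T| = zeta."""
        samples = list(samples)
        random.Random(seed).shuffle(samples)           # shuffle a copy of M
        n_val = round(len(samples) * zeta / (1.0 + zeta))
        return samples[n_val:], samples[:n_val]        # T, V

    # Embodiment values: 10000 images, zeta = 0.25 -> |T| = 8000, |V| = 2000
    T, V = split_dataset(range(10000), zeta=0.25)
    assert len(T) == 8000 and len(V) == 2000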
Step 2: Determine the one-stage target detection model to be trained; set the depth of the convolutional neural network to L, the set of network convolution-layer kernels to G, the network output layer in fully connected form with convolution kernel set A, and the set of network feature maps to U,
together with the number of grids corresponding to the k-th feature map of the l-th network layer (shown as formula-image symbols) and the anchor point set M; these are specifically defined as follows:
[Formula images: definitions of G, A, U, the grid partition of each feature map, and the anchor set M]
where the symbols shown as formula images denote, respectively, the height, width and dimension of the convolution kernels, feature maps and anchor points of the l-th network layer, the padding (fill) size of the l-th layer convolution kernels, and the convolution stride of the l-th layer; f denotes the excitation function of the convolution neurons; θ denotes the selected input features; Λ ∈ N⁺ denotes the total number of anchor points in the l-th layer; ξ ∈ N⁺ denotes the total number of output-layer nodes; Φ ∈ N⁺ denotes the total number of feature maps in the l-th layer; and Δ ∈ N⁺ denotes the total number of convolution kernels in the l-th layer.
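The Step 2 quantities can be held in a single configuration object; the sketch below uses assumed field names (the patent only names the symbols L, G, A, U, M, Λ, ξ) and fills in the values given later in the embodiment:

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class DetectorConfig:
        depth: int                                  # L: number of network layers
        conv_kernels: List[Tuple[int, int, int]]    # G: (height, width, dim) per convolution layer
        output_kernels: List[Tuple[int, int, int]]  # A: kernels of the fully connected output layer
        anchors: List[Tuple[int, int]]              # M: anchor (width, height) pairs, |M| = Λ
        padding: int = 1                            # fill size of the convolution kernels
        stride: int = 1                             # convolution step size
        activation: str = "leaky_relu"              # excitation function f

    # Values from the embodiment (YOLOv3-style); per-layer kernels omitted here
    cfg = DetectorConfig(
        depth=139,
        conv_kernels=[],
        output_kernels=[(1, 1, 30)] * 3,            # ξ = 3 output nodes
        anchors=[(10, 13), (30, 61), (156, 198)],   # Λ = 3 anchors
    )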
Step 3: Design the parameter-adaptive focal loss function as follows:
[Formula image: the overall loss function LOSS]
where:
[Formula images: the confidence loss, the prediction-box loss and the category loss terms]
The confidence term denotes, on image t_k, the loss of the confidence with which the j-th anchor point in the i-th grid of the l-th network layer is classified as a vehicle sample or a parking-lot background sample; similarly, the box term denotes the loss of the vehicle prediction box, the category term denotes the loss of the vehicle category, and λ is a parameter of the category term. The loss functions of the vehicle object and of the parking-lot background object are respectively expressed as follows:
[Formula images: the vehicle-object and parking-lot-background focal loss terms]
In these terms, one quantity denotes the probability of a foreground vehicle predicted by the j-th anchor point in the i-th grid of the l-th network layer, and another denotes the corresponding parking-lot background probability. Two quantities denote the abscissa and ordinate of the center point of the prediction box of the j-th anchor point in the i-th grid of the l-th network layer, and likewise two denote the abscissa and ordinate of the center point of the vehicle sample calibration box. Further quantities denote the shortest Euclidean distances from the center point of that prediction box to the box boundaries, and likewise the shortest Euclidean distances from the center point of the vehicle sample calibration box to its boundaries. One quantity denotes the vehicle category predicted by the j-th anchor point in the i-th grid of the l-th network layer, and another denotes the calibrated (ground-truth) vehicle category. Indicator quantities denote whether a vehicle sample or a parking-lot background sample is predicted, calculated as follows:
[Formula images: the object / no-object indicator definitions]
where the parameter α ∈ (0, 1); iou_j denotes the overlap rate (IoU) between the anchor box of anchor point m_j in the i-th grid and the vehicle calibration box, and miou denotes the maximum such overlap rate.
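A sketch of the anchor assignment implied by these definitions: the IoU of each anchor box with the vehicle calibration (ground-truth) box is computed, and the anchor whose IoU equals the maximum miou is treated as the vehicle (object) sample while the others are background; the function names are illustrative and the exact role of α in the indicator is not reproduced here:

    def iou(box_a, box_b):
        """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
        ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    def assign_anchors(anchor_boxes, calibration_box):
        """Return a 0/1 mask: 1 for the anchor whose IoU with the vehicle
        calibration box is maximal (iou_j == miou), 0 for background anchors."""
        ious = [iou(a, calibration_box) for a in anchor_boxes]
        miou = max(ious)
        return [1 if v == miou else 0 for v in ious]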
Step 4: Using the loss function of the one-stage target detection algorithm model from Step 3, train the model by gradient descent on the training set until it converges. In the model testing stage, set the total number of parking spaces sum ∈ N⁺, run target detection on a test sample of the current video-monitoring area, and record the detected number of vehicles num ∈ N⁺; the number of vacant parking spaces is then s_num = sum - num.
The invention has the advantage that the proposed parameter-adaptive focal loss function improves the parameter adaptability of the target detection model, raises the detection accuracy of the system, and keeps the system's real-time performance within the requirements of the application scenario.
Drawings
Fig. 1 is a network configuration diagram of the convolutional neural network of the present invention.
Fig. 2 is a diagram of a loss function structure in the convolutional neural network of the present invention.
Fig. 3 is a flowchart of the deployment of the parking space detection algorithm based on the convolutional neural network provided by the present invention.
Detailed Description
To better explain the technical scheme of the invention, the invention is further described below through an embodiment with reference to the accompanying drawings.
The method for identifying the parking spaces in the high-speed service area comprises the following steps:
Step 1: Acquire a large amount of image data captured by high-mounted (high-altitude) cameras and construct a high-speed service-area parking-lot data set M of 10000 images, a training data set T of 8000 images and a verification data set V of 2000 images. The number of labeled vehicle categories C is 5: car, off-road vehicle, large truck, police car and engineering maintenance vehicle. The training batch size batch is 4, the number of training batches batches is 1000, the learning rate l_rate is 0.001, and the proportionality coefficient ζ between the training data set T and the verification data set V is 0.25. Each image has height h_k = 416, width w_k = 416 and channel number r = 3, so the height, width and number of channels of all images are kept consistent.
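Since every image is fixed to 416 × 416 × 3, a small preprocessing helper keeps the data consistent before training; the use of OpenCV and the class-name list below are assumptions based on the values given above:

    import cv2  # OpenCV, assumed to be available for image I/O

    CLASSES = ["car", "off-road vehicle", "large truck",
               "police car", "engineering maintenance vehicle"]   # C = 5 labeled categories
    IMG_H, IMG_W, IMG_CH = 416, 416, 3                             # h_k, w_k, r

    def preprocess(path):
        """Load one camera image and bring it to the fixed network input size."""
        img = cv2.imread(path)                   # BGR image, shape (H, W, 3)
        img = cv2.resize(img, (IMG_W, IMG_H))    # -> (416, 416, 3)
        return img.astype("float32") / 255.0     # normalise pixel values to [0, 1]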
Step 2: The one-stage target detection model is chosen as YOLOv3 and the depth L of the convolutional neural network is set to 139; the height, width and dimension of the convolution kernels are set as shown in Fig. 1, the padding (fill) size of the convolution kernels
defaults to 1, the convolution stride defaults to 1, and the excitation function f of the convolution neurons is the leaky_relu excitation function. Anchor points are shared by every network layer, with the anchor point set M = {(10,13), (30,61), (156,198)} and Λ = 3. The network output layer adopts a fully connected form, with the convolution kernel set A = {(1,1,30), (1,1,30), (1,1,30)} and ξ = 3.
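Claim 1 states that the anchor points are determined by a K-means clustering method; a minimal sketch of the usual YOLO-style procedure (clustering labeled box sizes with 1 - IoU as the distance) is given below under that assumption, with illustrative function names:

    import numpy as np

    def kmeans_anchors(wh, k=3, iters=100, seed=0):
        """Cluster labeled box sizes wh (array of shape (N, 2) with rows (w, h))
        into k anchor shapes, using 1 - IoU of co-centred boxes as the distance."""
        wh = np.asarray(wh, dtype=float)
        rng = np.random.default_rng(seed)
        centers = wh[rng.choice(len(wh), size=k, replace=False)]
        for _ in range(iters):
            inter = (np.minimum(wh[:, None, 0], centers[None, :, 0]) *
                     np.minimum(wh[:, None, 1], centers[None, :, 1]))
            union = (wh[:, None, 0] * wh[:, None, 1] +
                     centers[None, :, 0] * centers[None, :, 1] - inter)
            assign = np.argmax(inter / union, axis=1)      # max IoU = min distance
            centers = np.array([wh[assign == j].mean(axis=0) if np.any(assign == j)
                                else centers[j] for j in range(k)])
        return centers   # k anchor (width, height) pairs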
Step 3: As shown in Fig. 2, the parameter-adaptive focal loss function LOSS is constructed, with the parameter α set to 0.25 and the parameter λ set to 0.5.
Step 4: Using the loss function of the one-stage target detection algorithm model from Step 3, train the model by gradient descent on the training set until it converges. As shown in Fig. 3, detection is then performed in real time on the video stream of a camera installed in the parking lot: the total number of parking spaces sum is set to 10, a test sample of the current video-monitoring area is passed to the target detector, and the remaining parking spaces are calculated from the number of detected vehicles and the total number of spaces, thereby realizing parking-space management.
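A sketch of the deployment loop of Fig. 3: frames are read from the parking-lot camera stream, the trained detector counts vehicles, and the free spaces are reported as s_num = sum - num; detect_vehicles stands in for the trained one-stage model and, like the stream handling, is an assumption:

    import cv2  # assumed video I/O

    TOTAL_SPACES = 10   # sum: total number of parking spaces in the embodiment

    def monitor(stream_url, detect_vehicles):
        """detect_vehicles(frame) -> list of detected vehicle boxes (trained model, not shown)."""
        cap = cv2.VideoCapture(stream_url)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            num = len(detect_vehicles(frame))        # vehicles currently detected
            s_num = max(TOTAL_SPACES - num, 0)       # vacant spaces: s_num = sum - num
            print(f"occupied: {num}, vacant: {s_num}")
        cap.release()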
The embodiments described in this specification are merely illustrative of implementations of the inventive concept and the scope of the present invention should not be considered limited to the specific forms set forth in the embodiments but rather by the equivalents thereof as may occur to those skilled in the art upon consideration of the present inventive concept.

Claims (1)

1. A method for identifying parking spaces in a high-speed service area, comprising the following steps:

Step 1: constructing a high-speed service-area parking-lot data set M, a training data set T, a verification data set V, a number of labeled vehicle categories C, a training batch size batch, a number of training batches batches, a learning rate l_rate, and a proportionality coefficient ζ between the training data set T and the verification data set V;

[Formula images: definitions of the data sets M, T and V]

wherein V ∪ T = M, C ∈ N⁺, ζ ∈ (0, 1), batches ∈ N⁺, l_rate ∈ R⁺, batch ∈ N⁺; h_k and w_k represent the height and width of image t_k, and r represents the number of channels of the image;

Step 2: determining the one-stage target detection model to be trained; setting the depth of the convolutional neural network to L, the set of network convolution-layer kernels to G, the network output layer in fully connected form with convolution kernel set A, the set of network feature maps to U, the number of grids corresponding to the k-th feature map of the l-th network layer, and the anchor point set M, specifically:

[Formula images: definitions of G, A, U, the grid partition of each feature map, and the anchor set M]

wherein the symbols shown as formula images represent, respectively, the height, width and dimension of the convolution kernels, feature maps and anchor points of the l-th network layer, the padding (fill) size of the l-th layer convolution kernels, and the convolution stride of the l-th layer; f represents the excitation function of the convolution neurons; θ represents the selected input features; Λ ∈ N⁺ represents the total number of anchor points in the l-th layer; ξ ∈ N⁺ represents the total number of output-layer nodes; Φ ∈ N⁺ represents the total number of feature maps in the l-th layer; Δ ∈ N⁺ represents the total number of convolution kernels in the l-th layer;

Step 3: designing the parameter-adaptive focal loss function, specifically:

[Formula image: the overall loss function LOSS]

wherein:

[Formula images: the confidence loss, the prediction-box loss and the category loss terms]

the confidence term represents, on image t_k, the loss of the confidence with which the j-th anchor point in the i-th grid of the l-th network layer is classified as a vehicle sample or a parking-lot background sample; similarly, the box term represents the loss of the vehicle prediction box, the category term represents the loss of the vehicle category, and λ ∈ Q is a parameter of the category term; the loss functions of the vehicle object and of the parking-lot background object are respectively expressed as follows:

[Formula images: the vehicle-object and parking-lot-background focal loss terms]

in these terms, one quantity represents the probability of a foreground vehicle predicted by the j-th anchor point in the i-th grid of the l-th network layer, and another represents the corresponding parking-lot background probability; two quantities represent the abscissa and ordinate of the center point of the prediction box of the j-th anchor point in the i-th grid of the l-th network layer, and likewise two represent the abscissa and ordinate of the center point of the vehicle sample calibration box; further quantities represent the shortest Euclidean distances from the center point of that prediction box to the box boundaries, and likewise from the center point of the vehicle sample calibration box to its boundaries; one quantity represents the vehicle category predicted by the j-th anchor point in the i-th grid of the l-th network layer, another represents the calibrated vehicle category, and indicator quantities represent whether a vehicle sample or a parking-lot background sample is predicted, calculated as follows:

[Formula images: the object / no-object indicator definitions]

wherein the parameter α ∈ (0, 1); iou_j represents the overlap rate (IoU) between the anchor box of anchor point m_j in the i-th grid and the vehicle calibration box, and miou represents the maximum such overlap rate;

Step 4: performing gradient descent training on the model with the loss function of the one-stage target detection algorithm model of Step 3 until the model converges; in the system operation stage, extracting network feature values with the one-stage target detection model, determining the anchor points by a K-means clustering method, setting the total number of parking spaces sum ∈ N⁺, outputting the number of detected vehicles num ∈ N⁺ in the current video-monitoring area, and obtaining the number of empty parking spaces s_num = sum - num.
CN202010297837.2A 2020-04-16 2020-04-16 High-speed service area parking space recognition method Active CN111597897B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010297837.2A CN111597897B (en) 2020-04-16 2020-04-16 High-speed service area parking space recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010297837.2A CN111597897B (en) 2020-04-16 2020-04-16 High-speed service area parking space recognition method

Publications (2)

Publication Number Publication Date
CN111597897A true CN111597897A (en) 2020-08-28
CN111597897B CN111597897B (en) 2023-10-24

Family

ID=72187569

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010297837.2A Active CN111597897B (en) 2020-04-16 2020-04-16 High-speed service area parking space recognition method

Country Status (1)

Country Link
CN (1) CN111597897B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9916522B2 (en) * 2016-03-11 2018-03-13 Kabushiki Kaisha Toshiba Training constrained deconvolutional networks for road scene semantic segmentation
WO2019144575A1 (en) * 2018-01-24 2019-08-01 中山大学 Fast pedestrian detection method and device
CN109902677A (en) * 2019-01-30 2019-06-18 深圳北斗通信科技有限公司 A kind of vehicle checking method based on deep learning
CN110443208A (en) * 2019-08-08 2019-11-12 南京工业大学 A kind of vehicle target detection method, system and equipment based on YOLOv2

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Min Zhou et al.: "Multi-resolution Networks for Ship Detection in Infrared Remote Sensing Images", Infrared Physics & Technology *
邵奇可 (Shao Qike) et al.: "基于深度学习的高速服务区车位检测算法" [Parking space detection algorithm for high-speed service areas based on deep learning], 《计算机系统应用》 [Computer Systems & Applications] *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112686340A (en) * 2021-03-12 2021-04-20 成都点泽智能科技有限公司 Dense small target detection method based on deep neural network
CN112686340B (en) * 2021-03-12 2021-07-13 成都点泽智能科技有限公司 Dense small target detection method based on deep neural network

Also Published As

Publication number Publication date
CN111597897B (en) 2023-10-24

Similar Documents

Publication Publication Date Title
CN112380952B (en) Power equipment infrared image real-time detection and identification method based on artificial intelligence
KR102263397B1 (en) Method for acquiring sample images for inspecting label among auto-labeled images to be used for learning of neural network and sample image acquiring device using the same
CN111597901A (en) Illegal billboard monitoring method
CN114023062B (en) Traffic flow information monitoring method based on deep learning and edge calculation
CN111079640B (en) Vehicle type identification method and system based on automatic amplification sample
CN111597902A (en) Motor vehicle illegal parking monitoring method
CN111709336B (en) Expressway pedestrian detection method, equipment and readable storage medium
CN110717387A (en) Real-time vehicle detection method based on unmanned aerial vehicle platform
CN112435356B (en) ETC interference signal identification method and detection system
CN112365482A (en) Crossed chromosome image example segmentation method based on chromosome trisection feature point positioning
CN111723854A (en) Method and device for detecting traffic jam of highway and readable storage medium
CN111540203B (en) Method for adjusting green light passing time based on fast-RCNN
CN111597900B (en) Illegal dog walking identification method
CN111597897A (en) Parking space identification method for high-speed service area
CN114612847A (en) Method and system for detecting distortion of Deepfake video
CN116630828B (en) Unmanned aerial vehicle remote sensing information acquisition system and method based on terrain environment adaptation
CN112597995A (en) License plate detection model training method, device, equipment and medium
CN113780462B (en) Vehicle detection network establishment method based on unmanned aerial vehicle aerial image and application thereof
CN115984723A (en) Road damage detection method, system, device, storage medium and computer equipment
CN116309270A (en) Binocular image-based transmission line typical defect identification method
CN116152750A (en) Vehicle feature recognition method based on monitoring image
CN115861595A (en) Multi-scale domain self-adaptive heterogeneous image matching method based on deep learning
CN115223087A (en) Group control elevator traffic mode identification method
CN114548376A (en) Intelligent transportation system-oriented vehicle rapid detection network and method
CN114495160A (en) Pedestrian detection method and system based on improved RFBNet algorithm

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant