CN111597899A - Scenic spot ground plastic bottle detection method

Scenic spot ground plastic bottle detection method

Info

Publication number: CN111597899A (application CN202010298079.6A)
Authority: CN (China)
Other languages: Chinese (zh)
Other versions: CN111597899B
Inventors: 邵奇可, 陈一苇, 卢熠
Assignee (original and current): Zhejiang University of Technology ZJUT
Priority: CN202010298079.6A
Legal status: Granted; Active

Classifications

    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods


Abstract

The method for detecting plastic bottles on the ground in scenic spots comprises the following steps: 1) collecting a large number of images from high-altitude scenic spot cameras together with other plastic bottle data sets, labeling the data sets according to on-site management requirements, and determining the one-stage object detection algorithm model to be used; 2) constructing a parameter-adaptive loss function …

Description

Scenic spot ground plastic bottle detection method
Technical Field
The invention belongs to the technical field of image recognition and computer vision, and relates to a scenic spot ground plastic bottle detection method.
Background
At present, tourists discard plastic bottles at will in scenic spots, and staff cannot deal with the bottles abandoned on the ground in time. Traditional handling methods mainly include: first, continuous patrolling of the scenic spot by staff; second, recognition of plastic bottles on the scenic spot ground by traditional image algorithms. Handling ground plastic bottles through staff patrols consumes a great deal of manpower, material resources and financial resources, and because of factors such as missed inspections during manual patrols, the effect is not ideal. Traditional image algorithms generalize poorly and can detect plastic bottles in an image only when the camera operates at a fixed angle under fixed illumination.
Therefore, using the existing security cameras in the scenic spot to recognize plastic bottles on the ground in real time, sending the position information of the ground plastic bottles to the control center, and notifying on-site staff to handle them promptly can greatly reduce labor costs and improve the efficiency with which the scenic spot deals with ground plastic bottles. A video-based scenic spot ground plastic bottle detection system therefore has good popularization value.
Recognizing ground plastic bottles in the scenic area from the video streams of security cameras places high demands on the accuracy of the recognition algorithm and the timeliness of the information, so an object detection algorithm based on deep learning is a reasonable choice. Deep-learning object detection algorithms divide into two-stage and one-stage models. Two-stage detection models achieve better detection accuracy, but their forward inference is slow and cannot meet the real-time requirement of this business scenario. Traditional one-stage detection models run in real time, but cannot reach the detection accuracy of two-stage models. Moreover, when targets are detected in an image, the image contains a large number of scenic background objects; although the loss value of each background object is small, background objects far outnumber plastic bottle targets, and conventional detection methods struggle to achieve high recognition accuracy in such complex scenes. A highly adaptive target detection method is therefore urgently needed.
Disclosure of Invention
To overcome the defects of the prior art, the invention provides a scenic spot ground plastic bottle detection method with high recognition accuracy and good adaptivity.
The invention improves the loss function of a one-stage object detection algorithm model. The loss function serves as the objective function of the gradient descent process in the convolutional neural network and directly influences the training result; the quality of the training result in turn determines the recognition accuracy of target detection, so the design of the loss function is particularly important. During training of a one-stage detection model, the images contain a large number of scenic spot background objects. Although the loss value of each background object is small, background objects far outnumber plastic bottles, so when the loss is computed the many small background loss values overwhelm the plastic bottle target loss values and model accuracy drops sharply; embedding a focal loss function in the detection model improves training accuracy. However, the focal loss contains hyper-parameters that must be set from empirical values, and their magnitude cannot be adjusted automatically according to the predicted class probability.
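For reference, the focal loss in question is the one introduced by Lin et al. (2017), which down-weights well-classified examples through a fixed focusing hyper-parameter γ and a fixed class-balancing weight α_t:

FL(p_t) = −α_t·(1 − p_t)^γ·log(p_t)

where p_t is the predicted probability of the true class. Because γ and α_t are set by hand from empirical values, the weighting cannot follow the predicted class probability; this is the rigidity the invention targets.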
Aiming at the problems that the focal loss requires manual hyper-parameter tuning during training and that its parameters are not adaptive, the invention provides a deep-learning loss function based on semi-supervised learning.
The method for detecting plastic bottles on the ground in scenic spots comprises the following steps:
Step 1: construct a plastic bottle data set M, a training data set T and a validation data set V; label the number of plastic bottle categories C, the training data batch size batch, the number of training batches batches, the learning rate l_rate, and the proportionality coefficient ζ between the training data set T and the validation data set V.
T = {t_k | t_k ∈ R^(h_k×w_k×r), k = 1, …, Card(T)}
V = {v_k | v_k ∈ R^(h_k×w_k×r), k = 1, …, Card(V)}
ζ = Card(V)/Card(T)
wherein V ∪ T = M, C ∈ N+, ζ ∈ (0,1), batches ∈ N+, l_rate ∈ R+, batch ∈ N+; h_k and w_k represent the height and width of the image, and r represents the number of channels of the image.
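As an illustration of this step, a minimal sketch (not code from the patent; the function name and file handling are assumptions) of a split that satisfies ζ = Card(V)/Card(T):

import random

def split_dataset(image_paths, zeta=0.25, seed=0):
    # split data set M into training set T and validation set V with Card(V)/Card(T) = zeta
    m = list(image_paths)
    random.Random(seed).shuffle(m)
    n_train = round(len(m) / (1 + zeta))   # Card(T) + zeta * Card(T) = Card(M)
    return m[:n_train], m[n_train:]        # T, V

# e.g. with 10000 images and zeta = 0.25 this yields Card(T) = 8000 and Card(V) = 2000,
# matching the embodiment below.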
Step 2: determine the one-stage object detection model to be trained. Set the depth of the convolutional neural network to L, the set of network convolution-layer kernels to G, the network output layer to fully connected form with convolution kernel set A, and the set of network feature maps to U; a grid number is associated with the kth feature map of the lth layer network, and an anchor point set M is defined. [The formal set definitions of G, A, U, the per-feature-map grid numbers and M appear as equation images in the original.]
In these definitions, dedicated symbols (rendered as images in the original) denote the height, width and dimension of the convolution kernels, feature maps and anchor points of the lth layer network, the padding size of the lth layer's convolution kernels, and the convolution stride of the lth layer; f denotes the excitation function of the convolutional neurons, θ denotes the selected input features, Λ ∈ N+ denotes the total number of anchor points in each layer of the network, ξ ∈ N+ denotes the total number of output-layer nodes, Φ ∈ N+ denotes the total number of feature maps of the lth layer, and Δ ∈ N+ denotes the total number of convolution kernels of the lth layer.
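To make the bookkeeping of this step concrete, a hedged sketch (not the patent's code; the three strides are typical of the YOLOv3 model used in the embodiment below, and the function name is ours) of how grid counts and output channels relate to Λ and C:

def head_shapes(img_size=416, num_classes=5, anchors_per_layer=3, strides=(32, 16, 8)):
    # each anchor predicts 4 box terms + 1 confidence + num_classes class scores
    per_anchor = 5 + num_classes
    shapes = []
    for s in strides:
        grid = img_size // s                            # grid cells per side of this feature map
        shapes.append((grid, grid, anchors_per_layer * per_anchor))
    return shapes

print(head_shapes())   # [(13, 13, 30), (26, 26, 30), (52, 52, 30)]
# the 30-channel 1x1 outputs match the embodiment's convolution kernel set A = {(1,1,30), ...}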
Step 3: design the parameter-adaptive focal loss function LOSS. [The full loss expression and the equations defining its terms appear as images in the original; the recoverable structure is as follows.]
The first term is the confidence loss, on image t_k, of the jth anchor point in the ith grid of the lth layer network, computed over plastic bottle samples and scenic spot background samples; similarly, the second term is the loss of the plastic bottle prediction box, and the third term is the plastic bottle class loss, which carries the parameter λ ∈ Q.
The confidence losses of the plastic bottle target and of the scenic spot background target are expressed by two further equations. In them, one quantity is the probability that the jth anchor point in the ith grid of the lth layer network predicts a foreground plastic bottle; similarly, another is the corresponding scenic spot background probability value.
In the box loss, one pair of quantities is the abscissa and ordinate of the center point of the prediction box of the jth anchor point in the ith grid of the lth layer network, and likewise another pair is the abscissa and ordinate of the center point of the plastic bottle sample calibration box; one set of quantities is the shortest Euclidean distances from the center point of that prediction box to the box boundary, and likewise another is the shortest Euclidean distances from the center point of the plastic bottle sample calibration box to its boundary.
In the class loss, one quantity is the plastic bottle class value predicted by the jth anchor point in the ith grid of the lth layer network; similarly, another is the labeled plastic bottle class value. Two indicators denote whether a plastic bottle sample is predicted and whether a scenic spot background sample is predicted; they are computed by equations in which the parameter α ∈ (0,1), iou_j is the overlap rate between the anchor box of anchor point m_j and the plastic bottle calibration box in the ith grid, and miou is the maximum overlap rate.
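The patent's exact adaptive formulation is contained in the equation images above and is not reproduced here; purely as a hedged sketch of the focal-loss family this step builds on (standard fixed-parameter form only, assuming PyTorch):

import torch

def focal_confidence_loss(p, is_object, alpha=0.25, gamma=2.0):
    # standard focal loss (Lin et al., 2017): easy examples are down-weighted by (1 - p_t)^gamma;
    # the patent replaces the hand-set hyper-parameters with values adapted from the predicted
    # class probability, a form not reproduced here
    p_t = torch.where(is_object, p, 1 - p)          # probability of the true class per anchor
    alpha_t = torch.where(is_object, torch.full_like(p, alpha), torch.full_like(p, 1 - alpha))
    return -(alpha_t * (1 - p_t) ** gamma * torch.log(p_t.clamp(min=1e-7))).mean()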
Step 4: using the loss function of the one-stage object detection algorithm model from Step 3, train the model by gradient descent on the training set until it converges. In the model testing stage, set the alarm time to timer; when the system model detects a plastic bottle, it automatically records the detailed category and position information of the bottle and starts timing; if, after the given time has elapsed, the detailed category and position information of the bottle detected again are consistent with those recorded before, an alarm is sent.
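A minimal sketch of this timer-and-recheck alarm logic (the names, the position-matching rule and the callback shape are assumptions, not the patent's code):

import time

ALARM_SECONDS = 180            # the embodiment below uses a 3-minute timer

_pending = {}                  # (category, coarse position) -> time first recorded

def on_detection(category, box, now=None):
    # record category and position on first sight, start timing, and alarm if the same
    # category/position is detected again after the timer has elapsed; a real system
    # would match positions by IoU rather than by coordinate rounding
    now = time.time() if now is None else now
    key = (category, tuple(round(v, -1) for v in box))
    first_seen = _pending.setdefault(key, now)
    if now - first_seen >= ALARM_SECONDS:
        print(f"ALARM: plastic bottle '{category}' at {box} still on the ground")
        del _pending[key]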
The invention has the advantages that: the parameter adaptability of the plastic bottle detection model can be improved, and the accuracy of plastic bottle detection is greatly improved.
Drawings
Fig. 1 is a network configuration diagram of the convolutional neural network of the present invention.
Fig. 2 is a diagram of a loss function structure in the convolutional neural network of the present invention.
FIG. 3 is a deployment flowchart of the convolutional-neural-network-based plastic bottle detection algorithm of the present invention.
Detailed Description
To better explain the technical scheme of the invention, the invention is further described below through an embodiment with reference to the accompanying drawings.
The method for detecting plastic bottles on the ground in scenic spots comprises the following steps:
Step 1: collect a large amount of plastic bottle image data shot from high altitude, and construct a plastic bottle data set M of 10000 images, a training data set T of 8000 images and a validation data set V of 2000 images. The number of labeled plastic bottle categories C is 5, namely Fanta, Coca-Cola, Mizone, Scream and Nongfu Spring plastic bottles. The training data batch size batch is 4, the number of training batches batches is 1000, the learning rate l_rate is 0.001, and the proportionality coefficient ζ between the training data set T and the validation data set V is 0.25. The height, width and channel number of all images are set consistently: the image height h_k and width w_k are both 416, and the number of image channels r is 3.
Step 2: determine the one-stage object detection model to be YOLOv3, and set the depth L of the convolutional neural network to 139. The height, width and dimension settings of the convolution kernels are shown in FIG. 1; the padding size of the convolution kernels defaults to 1, and the convolution stride follows the per-layer settings of FIG. 1. The excitation function f of the convolutional neurons defaults to the leaky_relu function. Anchor points are shared within each layer of the network, with the anchor point set M = {(10,13), (30,61), (156,198)}, i.e. the total number of anchor points Λ in each network layer is 3. The network output layer adopts the fully connected form, with the convolution kernel set A = {(1,1,30), (1,1,30), (1,1,30)}, i.e. the total number of output-layer nodes is 3.
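Anchor sets such as M above are conventionally obtained by k-means clustering over the labeled box sizes (the claims mention a K-means based anchor determination); a minimal sketch under that assumption, using IoU-based assignment:

import numpy as np

def kmeans_anchors(boxes_wh, k=3, iters=100, seed=0):
    # cluster labeled (width, height) pairs into k anchors; assignment by highest IoU
    wh = np.asarray(boxes_wh, dtype=float)
    rng = np.random.default_rng(seed)
    anchors = wh[rng.choice(len(wh), size=k, replace=False)].copy()
    for _ in range(iters):
        inter = np.minimum(wh[:, None, 0], anchors[None, :, 0]) * np.minimum(wh[:, None, 1], anchors[None, :, 1])
        union = wh[:, None].prod(axis=2) + anchors[None, :].prod(axis=2) - inter
        assign = (inter / union).argmax(axis=1)
        for j in range(k):
            if (assign == j).any():
                anchors[j] = wh[assign == j].mean(axis=0)
    return anchors[np.argsort(anchors.prod(axis=1))]   # sorted by area: small, medium, large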
Step 3: as shown in FIG. 2, construct the parameter-adaptive focal loss function LOSS, where the parameter α takes the value 0.25 and the parameter λ takes the value 0.5.
Step 4: using the loss function of the one-stage object detection algorithm model from Step 3, train the model by gradient descent on the training set until it converges. As shown in FIG. 3, detection runs in real time on the video streams of the cameras installed in the scenic spot. In the model testing stage, the alarm time timer is set to 3 minutes: when the system model detects a plastic bottle, the detailed category and position information of the bottle are automatically recorded and timing starts; if after 3 minutes the detailed category and position information of the bottle detected again are consistent with the previous ones, an alarm is given.
While the foregoing has described a preferred embodiment of the invention, it will be appreciated that the invention is not limited to the embodiment described, but is capable of numerous modifications without departing from the basic spirit and scope of the invention as set out in the appended claims.

Claims (1)

1. The scenic spot ground plastic bottle detection method comprises the following steps:
step 1: construct a plastic bottle data set M, a training data set T and a validation data set V; label the number of plastic bottle categories C, the training data batch size batch, the number of training batches batches, the learning rate l_rate, and the proportionality coefficient ζ between the training data set T and the validation data set V;
T = {t_k | t_k ∈ R^(h_k×w_k×r), k = 1, …, Card(T)}
V = {v_k | v_k ∈ R^(h_k×w_k×r), k = 1, …, Card(V)}
ζ = Card(V)/Card(T)
wherein V ∪ T = M, C ∈ N+, ζ ∈ (0,1), batches ∈ N+, l_rate ∈ R+, batch ∈ N+; h_k and w_k represent the height and width of the image, and r represents the number of channels of the image;
step 2: determine the one-stage object detection model to be trained; set the depth of the convolutional neural network to L, the set of network convolution-layer kernels to G, the network output layer to fully connected form with convolution kernel set A, and the set of network feature maps to U; a grid number is associated with the kth feature map of the lth layer network, and an anchor point set M is defined; [the formal set definitions of G, A, U, the per-feature-map grid numbers and M appear as equation images in the original;] in these definitions, dedicated symbols denote the height, width and dimension of the convolution kernels, feature maps and anchor points of the lth layer network, the padding size of the lth layer's convolution kernels, and the convolution stride of the lth layer; f denotes the excitation function of the convolutional neurons, θ denotes the selected input features, Λ ∈ N+ denotes the total number of anchor points in each layer of the network, ξ ∈ N+ denotes the total number of output-layer nodes, Φ ∈ N+ denotes the total number of feature maps of the lth layer, and Δ ∈ N+ denotes the total number of convolution kernels of the lth layer;
step 3: design the parameter-adaptive focal loss function; [the full loss expression and the equations defining its terms appear as images in the original; the recoverable structure is as follows;] the first term is the confidence loss, on image t_k, of the jth anchor point in the ith grid of the lth layer network, computed over plastic bottle samples and scenic spot background samples; similarly, the second term is the loss of the plastic bottle prediction box, and the third term is the plastic bottle class loss, which carries the parameter λ ∈ Q; the confidence losses of the plastic bottle target and of the scenic spot background target are expressed by two further equations, in which one quantity is the probability that the jth anchor point in the ith grid of the lth layer network predicts a foreground plastic bottle and, similarly, another is the corresponding scenic spot background probability value; in the box loss, one pair of quantities is the abscissa and ordinate of the center point of the prediction box of the jth anchor point in the ith grid of the lth layer network, and likewise another pair is the abscissa and ordinate of the center point of the plastic bottle sample calibration box; one set of quantities is the shortest Euclidean distances from the center point of that prediction box to the box boundary, and likewise another is the shortest Euclidean distances from the center point of the plastic bottle sample calibration box to its boundary; in the class loss, one quantity is the plastic bottle class value predicted by the jth anchor point in the ith grid of the lth layer network and, similarly, another is the labeled plastic bottle class value; two indicators denote whether a plastic bottle sample is predicted and whether a scenic spot background sample is predicted, computed by equations in which the parameter α ∈ (0,1), iou_j is the overlap rate between the anchor box of anchor point m_j and the plastic bottle calibration box in the ith grid, and miou is the maximum overlap rate;
step 4: using the loss function of the one-stage object detection algorithm model from step 3, train the model by gradient descent until it converges; in the system operation stage, use the one-stage object detection model to extract network feature values and determine the anchor points by the K-means clustering method; set the alarm time to timer; when the system model detects a plastic bottle, automatically record the detailed category and position information of the bottle and start timing; after the given time has elapsed, if the detailed category and position information of the bottle detected again are consistent with those recorded before, give an alarm.
CN202010298079.6A 2020-04-16 2020-04-16 Scenic spot ground plastic bottle detection method Active CN111597899B (en)

Priority Applications (1)

CN202010298079.6A — priority and filing date 2020-04-16 — Scenic spot ground plastic bottle detection method

Publications (2)

Publication Number Publication Date
CN111597899A true CN111597899A (en) 2020-08-28
CN111597899B CN111597899B (en) 2023-08-11

Family

ID=72190405

Family Applications (1)

CN202010298079.6A (Active, granted as CN111597899B) — priority date 2020-04-16, filing date 2020-04-16 — Scenic spot ground plastic bottle detection method

Country Status (1)

CN: CN111597899B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109697459A (en) * 2018-12-04 2019-04-30 云南大学 One kind is towards optical coherence tomography image patch Morphology observation method
CN109902677A (en) * 2019-01-30 2019-06-18 深圳北斗通信科技有限公司 A kind of vehicle checking method based on deep learning
CN110135267A (en) * 2019-04-17 2019-08-16 电子科技大学 A kind of subtle object detection method of large scene SAR image
CN110163187A (en) * 2019-06-02 2019-08-23 东北石油大学 Remote road traffic sign detection recognition methods based on F-RCNN
CN110287905A (en) * 2019-06-27 2019-09-27 浙江工业大学 A kind of traffic congestion region real-time detection method based on deep learning
CN110298307A (en) * 2019-06-27 2019-10-01 浙江工业大学 A kind of exception parking real-time detection method based on deep learning
CN110309765A (en) * 2019-06-27 2019-10-08 浙江工业大学 A kind of video frequency motion target efficient detection method


Also Published As

Publication number Publication date
CN111597899B (en) 2023-08-11

Similar Documents

Publication Publication Date Title
CN112380952B (en) Power equipment infrared image real-time detection and identification method based on artificial intelligence
CN106960195B (en) Crowd counting method and device based on deep learning
CN107609525B (en) Remote sensing image target detection method for constructing convolutional neural network based on pruning strategy
CN105426870B (en) A kind of face key independent positioning method and device
CN111832605B (en) Training method and device for unsupervised image classification model and electronic equipment
CN111079640B (en) Vehicle type identification method and system based on automatic amplification sample
CN111597901A (en) Illegal billboard monitoring method
CN114663346A (en) Strip steel surface defect detection method based on improved YOLOv5 network
CN114023062B (en) Traffic flow information monitoring method based on deep learning and edge calculation
CN111709336B (en) Expressway pedestrian detection method, equipment and readable storage medium
CN112287827A (en) Complex environment pedestrian mask wearing detection method and system based on intelligent lamp pole
CN110276247A (en) A kind of driving detection method based on YOLOv3-Tiny
CN111985325A (en) Aerial small target rapid identification method in extra-high voltage environment evaluation
CN110751209A (en) Intelligent typhoon intensity determination method integrating depth image classification and retrieval
CN111540203B (en) Method for adjusting green light passing time based on fast-RCNN
CN111597902B (en) Method for monitoring motor vehicle illegal parking
CN111126155B (en) Pedestrian re-identification method for generating countermeasure network based on semantic constraint
CN111597900B (en) Illegal dog walking identification method
CN111339950B (en) Remote sensing image target detection method
CN111597899A (en) Scenic spot ground plastic bottle detection method
CN116664545A (en) Offshore benthos quantitative detection method and system based on deep learning
CN116612382A (en) Urban remote sensing image target detection method and device
US20230326183A1 (en) Data Collection and Classifier Training in Edge Video Devices
CN111597897B (en) High-speed service area parking space recognition method
CN114581769A (en) Method for identifying houses under construction based on unsupervised clustering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant