CN112329768A - Improved YOLO-based method for identifying fuel-discharging stop sign of gas station - Google Patents

Improved YOLO-based method for identifying fuel-discharging stop sign of gas station

Info

Publication number
CN112329768A
CN112329768A (application number CN202011148161.7A)
Authority
CN
China
Prior art keywords
yolo
neural network
gas station
iou
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011148161.7A
Other languages
Chinese (zh)
Inventor
周斯加
关超华
陈志军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shangshan Zhicheng Suzhou Information Technology Co ltd
Original Assignee
Shangshan Zhicheng Suzhou Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shangshan Zhicheng Suzhou Information Technology Co ltd filed Critical Shangshan Zhicheng Suzhou Information Technology Co ltd
Priority to CN202011148161.7A priority Critical patent/CN112329768A/en
Publication of CN112329768A publication Critical patent/CN112329768A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/60 - Type of objects
    • G06V20/62 - Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/63 - Scene text, e.g. street names
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 - Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/60 - Type of objects
    • G06V20/62 - Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625 - License plates

Abstract

The invention discloses an improved YOLO-based method for identifying the fuel-discharging stop sign of a gas station, comprising the following steps: S1, collecting videos and photos of the fuel-discharging stop sign at a gas station fuel-discharge operation site and making them into a sample data set; S2, dividing the picture into regions and detecting, for each region, whether a target's center point falls within it; S3, constructing a YOLO convolutional neural network and extracting features with it; S4, training the YOLO neural network on the data set prepared in step S1; S5, running the trained YOLO neural network on the collected photos to obtain the recognition result for the fuel-discharging stop sign. Built on a deep learning model, the improved YOLO-based recognition retains the high detection speed of the YOLO algorithm while greatly improving detection accuracy and detection quality.

Description

Improved YOLO-based method for identifying fuel-discharging stop sign of gas station
Technical Field
The invention relates to the technical field of computer-vision target recognition, and in particular to an improved YOLO-based method for identifying the fuel-discharging stop sign of a gas station.
Background
Early object-detection methods typically made predictions with a sliding window, which is very time-consuming and not particularly accurate. The object-proposal approach appeared later; compared with sliding windows it greatly reduces the amount of computation and markedly improves performance. Subsequently, R-CNN combined selective search with a convolutional neural network, and faster variants such as Fast R-CNN followed, but their speed still left room for improvement.
YOLO is a convolutional neural network that predicts the positions and classes of multiple bounding boxes in a single pass; it achieves end-to-end target detection and recognition, and its greatest advantage is speed. In essence, target detection is a regression problem, so a CNN that implements the regression does not require an overly complex pipeline. Rather than training on sliding windows or extracted proposals, YOLO trains directly on whole images, which helps it distinguish target regions from the background. The YOLO algorithm nevertheless has shortcomings: its accuracy is lower than that of the R-CNN family, and its detection quality leaves room for improvement. The present invention improves the YOLO algorithm with respect to these shortcomings so that it performs better target recognition.
Disclosure of Invention
The invention aims to overcome the defects and shortcomings of the prior art by providing a method and system for identifying the fuel-discharging stop sign of a gas station based on improved YOLO, which improves the accuracy of conventional YOLO and improves the target detection effect.
In order to achieve this aim, the technical scheme of the invention comprises the following steps:
S1, collecting videos and photos of the fuel-discharging stop sign at a gas station fuel-discharge operation site and making them into a sample data set;
S2, dividing the picture into regions and detecting, for each region, whether a target's center point falls within it;
S3, constructing a YOLO convolutional neural network and extracting features with it; in step S3 feature maps are extracted at different scales, the feature maps of different scales are used to generate predictions of different proportions, and the predictions are then explicitly separated according to aspect ratio;
S4, training the YOLO neural network on the data set prepared in step S1;
S5, running the trained YOLO neural network on the collected photos to obtain the recognition result for the gas station fuel-discharging stop sign.
In a further arrangement, the photos and videos collected in step S1 are captured from different angles and different heights, and all material is shot in batches covering the different states of whether or not a vehicle is parked in the vehicle parking area.
In a further arrangement, the segmentation and division in S2 means that the CNN in YOLO splits the input picture into a 4 × 4 grid and, based on these grid cells, further generates 9 candidate regions of different sizes.
In a further arrangement, detecting in S2 whether a target's center point falls within each region comprises the following steps:
a. detecting the probability Pr(object) that the region contains a target;
b. evaluating the accuracy of the region;
when the region contains no target, Pr(object) = 0; when the region contains a target, Pr(object) = 1; the accuracy of the region is measured by the intersection-over-union (IOU) of the predicted box and the ground-truth box, so the confidence is expressed as Pr(object) × IOU.
In a further arrangement, the network training in S4 trains the first 20 convolutional layers of the classification model, an average-pooling layer, and a fully connected layer on ImageNet; the loss function for training is:
$$
\begin{aligned}
\text{Loss} ={}& \lambda_{\mathrm{coord}} \sum_{i=0}^{S^2}\sum_{j=0}^{B} \mathbb{1}_{ij}^{\mathrm{obj}}\left[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\right] \\
&+ \lambda_{\mathrm{coord}} \sum_{i=0}^{S^2}\sum_{j=0}^{B} \mathbb{1}_{ij}^{\mathrm{obj}}\left[\left(\sqrt{w_i}-\sqrt{\hat{w}_i}\right)^2+\left(\sqrt{h_i}-\sqrt{\hat{h}_i}\right)^2\right] \\
&+ \sum_{i=0}^{S^2}\sum_{j=0}^{B} \mathbb{1}_{ij}^{\mathrm{obj}}\left(C_i-\hat{C}_i\right)^2
+ \lambda_{\mathrm{noobj}} \sum_{i=0}^{S^2}\sum_{j=0}^{B} \mathbb{1}_{ij}^{\mathrm{noobj}}\left(C_i-\hat{C}_i\right)^2 \\
&+ \sum_{i=0}^{S^2} \mathbb{1}_{i}^{\mathrm{obj}} \sum_{c\,\in\,\mathrm{classes}}\left(p_i(c)-\hat{p}_i(c)\right)^2
\end{aligned}
$$
Each bounding box comprises 5 values: x, y, w, h, and confidence. The (x, y) coordinates represent the center of the bounding box relative to its grid cell; the width and height are predicted relative to the whole image; confidence represents the IOU between the predicted box and the ground-truth box. 1_i^obj indicates whether a target appears in grid cell i (1 if the cell actually contains a target, otherwise 0), and 1_ij^obj indicates that the j-th bounding-box predictor of grid cell i is responsible for the prediction (1 if the cell contains a target and that bounding box has the largest IOU, otherwise 0).
In a further arrangement, the step S5 prediction of the gas station fuel-discharging stop sign selects the region with the highest confidence from among the regions of step S2, calculates the IOU of each remaining region with it, compares each value with a set threshold, and deletes the regions whose IOU exceeds the threshold; these steps are repeated until all regions have been processed, and the final detection result is obtained with this non-maximum suppression (NMS) algorithm.
Technical effects and advantages of the invention: compared with the existing YOLO algorithm, the improved YOLO algorithm retains the high detection speed of YOLO while greatly improving detection accuracy and detection quality.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is within the scope of the present invention for those skilled in the art to obtain other drawings based on the drawings without inventive exercise.
FIG. 1 is a flow chart of an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings.
As shown in fig. 1, in an embodiment of the present invention, an improved YOLO-based method for identifying a gas station fuel-discharging stop sign comprises the following steps:
s1, collecting the stop board mark, collecting the video material and the photo of the oil-discharging stop board on the oil-discharging operation site of the gas station and making the photo into a data set;
the specific process is that the photos and videos collected in step S1 are obtained from different angles and different heights as much as possible, and all materials are shot in batches according to different states of whether vehicles are parked in the vehicle parking area.
S2, dividing the picture into regions and detecting, for each region, whether a target's center point falls within it;
The specific process is as follows: the segmentation and region division of step S2 means that the CNN in YOLO splits the input picture into a 4 × 4 grid and, based on these grid cells, further generates 9 candidate regions of different sizes. Detecting whether a target's center point falls within a region comprises two steps: first, detecting the probability Pr(object) that the region contains a target; second, evaluating the accuracy of the region (its IOU).
When the region contains no target, Pr(object) = 0; when the region contains a target, Pr(object) = 1. The accuracy of the region can be determined by the intersection-over-union (IOU) of the predicted box and the ground-truth box, so the confidence can be expressed as Pr(object) × IOU.
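The confidence defined here, Pr(object) × IOU, can be computed as in this minimal sketch (box coordinates assumed to be (x1, y1, x2, y2) corners):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def confidence(pr_object, pred_box, true_box):
    """Confidence = Pr(object) * IOU(predicted box, ground-truth box)."""
    return pr_object * iou(pred_box, true_box)
```

A region with no target (Pr(object) = 0) thus gets confidence 0 regardless of its IOU, while a perfectly localised target gets confidence 1.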
S3, constructing a YOLO convolutional neural network and extracting features with it;
The specific process is as follows: in step S3, feature maps are extracted at different scales; the feature maps of different scales are used to generate predictions of different proportions, which are then explicitly separated and predicted according to aspect ratio. Extracting feature maps in this way improves the detection of objects.
S4, training the YOLO neural network on the data set prepared in step S1;
The specific process is as follows: the network training of step S4 trains the first 20 convolutional layers of the classification model, an average-pooling layer, and a fully connected layer on ImageNet. The loss function for training is:
$$
\begin{aligned}
\text{Loss} ={}& \lambda_{\mathrm{coord}} \sum_{i=0}^{S^2}\sum_{j=0}^{B} \mathbb{1}_{ij}^{\mathrm{obj}}\left[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\right] \\
&+ \lambda_{\mathrm{coord}} \sum_{i=0}^{S^2}\sum_{j=0}^{B} \mathbb{1}_{ij}^{\mathrm{obj}}\left[\left(\sqrt{w_i}-\sqrt{\hat{w}_i}\right)^2+\left(\sqrt{h_i}-\sqrt{\hat{h}_i}\right)^2\right] \\
&+ \sum_{i=0}^{S^2}\sum_{j=0}^{B} \mathbb{1}_{ij}^{\mathrm{obj}}\left(C_i-\hat{C}_i\right)^2
+ \lambda_{\mathrm{noobj}} \sum_{i=0}^{S^2}\sum_{j=0}^{B} \mathbb{1}_{ij}^{\mathrm{noobj}}\left(C_i-\hat{C}_i\right)^2 \\
&+ \sum_{i=0}^{S^2} \mathbb{1}_{i}^{\mathrm{obj}} \sum_{c\,\in\,\mathrm{classes}}\left(p_i(c)-\hat{p}_i(c)\right)^2
\end{aligned}
$$
Each bounding box contains 5 values: x, y, w, h, and confidence. The (x, y) coordinates represent the center of the bounding box relative to its grid cell; the width and height are predicted relative to the whole image; confidence represents the IOU between the predicted box and the ground-truth box. 1_i^obj indicates whether a target appears in grid cell i (1 if the cell actually contains a target, otherwise 0), and 1_ij^obj indicates that the j-th bounding-box predictor of grid cell i is "responsible" for the prediction (1 if the cell contains a target and that bounding box has the largest IOU, otherwise 0).
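A minimal NumPy sketch of the coordinate and confidence terms of this loss, assuming the standard YOLO values λ_coord = 5 and λ_noobj = 0.5 (the text does not state them) and omitting the class-probability term for brevity:

```python
import numpy as np

def yolo_loss(pred, truth, resp_mask, lam_coord=5.0, lam_noobj=0.5):
    """Sketch of the YOLO v1 loss described above (class term omitted).

    pred, truth : arrays of shape [S*S, B, 5] holding (x, y, w, h, conf);
                  widths and heights are assumed non-negative.
    resp_mask   : array [S*S, B], 1 where the j-th predictor of cell i is
                  responsible for a target (the 1_ij^obj indicator).
    """
    # Coordinate error, only for responsible predictors; w and h enter
    # through their square roots so large boxes are penalised less.
    coord = lam_coord * np.sum(resp_mask * (
        (pred[..., 0] - truth[..., 0]) ** 2 +
        (pred[..., 1] - truth[..., 1]) ** 2 +
        (np.sqrt(pred[..., 2]) - np.sqrt(truth[..., 2])) ** 2 +
        (np.sqrt(pred[..., 3]) - np.sqrt(truth[..., 3])) ** 2))
    # Confidence error: full weight where a predictor is responsible,
    # down-weighted by lam_noobj elsewhere.
    conf_err = (pred[..., 4] - truth[..., 4]) ** 2
    conf = (np.sum(resp_mask * conf_err)
            + lam_noobj * np.sum((1.0 - resp_mask) * conf_err))
    return coord + conf
```

The down-weighting of non-responsible confidence errors keeps the many empty grid cells from swamping the gradient of the few cells that contain a target.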
S5, running the trained YOLO neural network on the collected photos to obtain the recognition result for the gas station fuel-discharging stop sign.
The prediction of step S5 selects the region with the highest confidence from among the regions of step S2, calculates the IOU of each remaining region with it, compares each value with a set threshold, and deletes the regions whose IOU exceeds the threshold; these steps are repeated until all regions have been processed, and the final detection result is obtained with this non-maximum suppression (NMS) algorithm.
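The region-suppression procedure described here is ordinary greedy non-maximum suppression; a minimal self-contained sketch:

```python
def box_iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: repeatedly keep the highest-confidence box and delete
    every remaining box whose IOU with it exceeds the threshold.
    Returns the indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order
                 if box_iou(boxes[best], boxes[i]) <= iou_thresh]
    return keep
```

With two heavily overlapping detections of the same stop sign, only the higher-confidence one survives, while detections elsewhere in the image are unaffected.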
It will be understood by those skilled in the art that all or part of the steps in the method for implementing the above embodiments may be implemented by relevant hardware instructed by a program, and the program may be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc.
The above disclosure describes only preferred embodiments of the present invention; it is to be understood that the scope of the invention is defined by the appended claims and is not limited to these embodiments.
It should be noted that the embodiments of the present invention can be realized in hardware, software, or a combination of software and hardware. The hardware portion may be implemented using dedicated logic; the software portions may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the apparatus and methods described above may be implemented using computer executable instructions and/or embodied in processor control code, such code being provided, for example, in programmable memory or on a data carrier such as an optical or electronic signal carrier.

Claims (6)

1. A method for identifying a gas station fuel-discharging stop sign based on improved YOLO, characterized by comprising the following steps:
S1, collecting videos and photos of the fuel-discharging stop sign at a gas station fuel-discharge operation site and making them into a sample data set;
S2, dividing the picture into regions and detecting, for each region, whether a target's center point falls within it;
S3, constructing a YOLO convolutional neural network and extracting features with it; in step S3 feature maps are extracted at different scales, the feature maps of different scales are used to generate predictions of different proportions, and the predictions are then explicitly separated according to aspect ratio;
S4, training the YOLO neural network on the data set prepared in step S1;
S5, running the trained YOLO neural network on the collected photos to obtain the recognition result for the gas station fuel-discharging stop sign.
2. The improved YOLO-based method for identifying a gas station fuel-discharging stop sign according to claim 1, characterized in that: the photos and videos collected in step S1 are captured from different angles and different heights, and all material is shot in batches covering the different states of whether or not a vehicle is parked in the vehicle parking area.
3. The improved YOLO-based method for identifying a gas station fuel-discharging stop sign according to claim 1, characterized in that: the segmentation and division in S2 means that the convolutional neural network in YOLO splits the input picture into a 4 × 4 grid, from which 9 candidate regions of different sizes are further generated.
4. The improved YOLO-based method for identifying a gas station fuel-discharging stop sign according to claim 3, characterized in that detecting in S2 whether a target's center point falls within each region comprises the following steps:
a. detecting the probability Pr(object) that the region contains a target;
b. evaluating the accuracy of the region;
when the region contains no target, Pr(object) = 0; when the region contains a target, Pr(object) = 1; the accuracy of the region is measured by the intersection-over-union (IOU) of the predicted box and the ground-truth box, so the confidence is expressed as Pr(object) × IOU.
5. The improved YOLO-based method for identifying a gas station fuel-discharging stop sign according to claim 1, characterized in that:
the network training in S4 trains the first 20 convolutional layers of the classification model, an average-pooling layer, and a fully connected layer on ImageNet; the loss function for training is:
$$
\begin{aligned}
\text{Loss} ={}& \lambda_{\mathrm{coord}} \sum_{i=0}^{S^2}\sum_{j=0}^{B} \mathbb{1}_{ij}^{\mathrm{obj}}\left[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\right] \\
&+ \lambda_{\mathrm{coord}} \sum_{i=0}^{S^2}\sum_{j=0}^{B} \mathbb{1}_{ij}^{\mathrm{obj}}\left[\left(\sqrt{w_i}-\sqrt{\hat{w}_i}\right)^2+\left(\sqrt{h_i}-\sqrt{\hat{h}_i}\right)^2\right] \\
&+ \sum_{i=0}^{S^2}\sum_{j=0}^{B} \mathbb{1}_{ij}^{\mathrm{obj}}\left(C_i-\hat{C}_i\right)^2
+ \lambda_{\mathrm{noobj}} \sum_{i=0}^{S^2}\sum_{j=0}^{B} \mathbb{1}_{ij}^{\mathrm{noobj}}\left(C_i-\hat{C}_i\right)^2 \\
&+ \sum_{i=0}^{S^2} \mathbb{1}_{i}^{\mathrm{obj}} \sum_{c\,\in\,\mathrm{classes}}\left(p_i(c)-\hat{p}_i(c)\right)^2
\end{aligned}
$$
Each bounding box comprises 5 values: x, y, w, h, and confidence. The (x, y) coordinates represent the center of the bounding box relative to its grid cell; the width and height are predicted relative to the whole image; confidence represents the IOU between the predicted box and the ground-truth box. 1_i^obj indicates whether a target appears in grid cell i (1 if the cell actually contains a target, otherwise 0), and 1_ij^obj indicates that the j-th bounding-box predictor of grid cell i is responsible for the prediction (1 if the cell contains a target and that bounding box has the largest IOU, otherwise 0).
6. The method as claimed in claim 1, characterized in that the step S5 prediction of the gas station fuel-discharging stop sign comprises: selecting the region with the highest confidence from among the regions of step S2, calculating the IOU of each remaining region with it, comparing each value with a set threshold, deleting the regions whose IOU exceeds the threshold, repeating the above steps until all regions have been processed, and finally obtaining the final detection result with a non-maximum suppression algorithm.
CN202011148161.7A 2020-10-23 2020-10-23 Improved YOLO-based method for identifying fuel-discharging stop sign of gas station Pending CN112329768A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011148161.7A CN112329768A (en) 2020-10-23 2020-10-23 Improved YOLO-based method for identifying fuel-discharging stop sign of gas station

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011148161.7A CN112329768A (en) 2020-10-23 2020-10-23 Improved YOLO-based method for identifying fuel-discharging stop sign of gas station

Publications (1)

Publication Number Publication Date
CN112329768A true CN112329768A (en) 2021-02-05

Family

ID=74310948

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011148161.7A Pending CN112329768A (en) 2020-10-23 2020-10-23 Improved YOLO-based method for identifying fuel-discharging stop sign of gas station

Country Status (1)

Country Link
CN (1) CN112329768A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113469178A (en) * 2021-07-05 2021-10-01 安徽南瑞继远电网技术有限公司 Electric power meter identification method based on deep learning
CN113609891A (en) * 2021-06-15 2021-11-05 北京瞭望神州科技有限公司 Ship identification monitoring method and system

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108509954A (en) * 2018-04-23 2018-09-07 合肥湛达智能科技有限公司 A kind of more car plate dynamic identifying methods of real-time traffic scene
CN109086792A (en) * 2018-06-26 2018-12-25 上海理工大学 Based on the fine granularity image classification method for detecting and identifying the network architecture
CN109255286A (en) * 2018-07-21 2019-01-22 哈尔滨工业大学 A kind of quick detection recognition method of unmanned plane optics based on YOLO deep learning network frame
CN109543617A (en) * 2018-11-23 2019-03-29 于兴虎 The detection method of intelligent vehicle movement traffic information based on YOLO target detection technique
CN109815998A (en) * 2019-01-08 2019-05-28 科大国创软件股份有限公司 A kind of AI dress dimension method for inspecting and system based on YOLO algorithm
CN110472467A (en) * 2019-04-08 2019-11-19 江西理工大学 The detection method for transport hub critical object based on YOLO v3
CN110598637A (en) * 2019-09-12 2019-12-20 齐鲁工业大学 Unmanned driving system and method based on vision and deep learning
CN110765865A (en) * 2019-09-18 2020-02-07 北京理工大学 Underwater target detection method based on improved YOLO algorithm
CN110781882A (en) * 2019-09-11 2020-02-11 南京钰质智能科技有限公司 License plate positioning and identifying method based on YOLO model
CN110796168A (en) * 2019-09-26 2020-02-14 江苏大学 Improved YOLOv 3-based vehicle detection method
CN110796186A (en) * 2019-10-22 2020-02-14 华中科技大学无锡研究院 Dry and wet garbage identification and classification method based on improved YOLOv3 network
CN111038888A (en) * 2019-12-26 2020-04-21 上海电力大学 Plastic bottle recycling robot based on machine vision
CN111401148A (en) * 2020-02-27 2020-07-10 江苏大学 Road multi-target detection method based on improved multilevel YO L Ov3
CN111582056A (en) * 2020-04-17 2020-08-25 上善智城(苏州)信息科技有限公司 Automatic detection method for fire-fighting equipment in oil unloading operation site of gas station
CN111612002A (en) * 2020-06-04 2020-09-01 广州市锲致智能技术有限公司 Multi-target object motion tracking method based on neural network
CN111738258A (en) * 2020-06-24 2020-10-02 东方电子股份有限公司 Pointer instrument reading identification method based on robot inspection

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113609891A (en) * 2021-06-15 2021-11-05 北京瞭望神州科技有限公司 Ship identification monitoring method and system
CN113469178A (en) * 2021-07-05 2021-10-01 安徽南瑞继远电网技术有限公司 Electric power meter identification method based on deep learning
CN113469178B (en) * 2021-07-05 2024-03-01 安徽南瑞继远电网技术有限公司 Power meter identification method based on deep learning


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination