CN114821599A - Method for identifying characteristic graphic element in electrical drawing - Google Patents
Info
- Publication number
- CN114821599A (application CN202210427416.6A)
- Authority
- CN
- China
- Prior art keywords
- bitmap image
- extracting
- network
- equipment
- primitives
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
- G06V30/191—Design or setup of recognition systems or techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
- G06V30/19173—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
- G06V30/19007—Matching; Proximity measures
- G06V30/19093—Proximity measures, i.e. similarity or distance measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
- G06V30/191—Design or setup of recognition systems or techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
- G06V30/19187—Graphical models, e.g. Bayesian networks or Markov models
Abstract
The invention relates to a method for identifying characteristic primitives in electrical drawings, comprising the following steps: acquiring a grayscale bitmap image of the electrical drawing; extracting devices from the bitmap image using a YOLOv5 neural network; extracting text labels from the bitmap image using a convolutional recurrent network; and associating the devices with the text labels by selecting the text labels whose distance to a device is within a candidate threshold as candidate labels, then matching device and label according to the device type and the content of the candidate labels. The method facilitates the identification of characteristic primitives in electrical drawings by artificial intelligence.
Description
Technical Field
The invention relates to the technical field of pattern recognition, and in particular to a method for recognizing characteristic primitives in electrical drawings.
Background
In electrical drawings, the characteristic primitives drawn for the same type of equipment are consistent, so using artificial intelligence to identify characteristic primitives in electrical drawings is feasible.
At present, the maintenance of power-grid diagrams and model data is based on the graph model of a model cloud platform, and diagram drawing is done manually. This is inefficient, imposes a heavy maintenance workload, and lacks intelligent technical means; no automated, reliable analysis method for intelligent analysis is currently available. As the power grid grows in scale and complexity, an automatic analysis method based on artificial-intelligence image recognition is urgently needed to meet the development requirements of the smart grid.
Disclosure of Invention
The invention aims to provide a method for identifying characteristic primitives in electrical drawings, so as to facilitate their identification by artificial intelligence.
The technical scheme of the invention is as follows:
A method of identifying characteristic primitives in an electrical drawing, comprising the following steps:
acquiring a grayscale bitmap image of the electrical drawing;
extracting devices from the bitmap image using a YOLOv5 neural network;
extracting text labels from the bitmap image using a convolutional recurrent network;
associating the devices with the text labels: selecting the text labels whose distance to a device is within a candidate threshold as candidate labels, and matching device and label according to the device type and the content of the candidate labels.
Preferably, the YOLOv5 neural network adopts a one-stage structure and consists of four parts: the input end, the Backbone network, the Neck layer, and the Prediction layer.
Further preferably, the Prediction layer uses an anchor-box mechanism to ensure that the inference results are consistent with the labeled training data, implemented mainly through the loss function CIOU_Loss:

CIOU_Loss = 1 - IOU + ρ²(b, b_gt)/c² + αv

where IOU represents the ratio of the overlapping portion of the two rectangular boxes to their union, ρ(b, b_gt) is the distance between the centres of the two boxes, c is the diagonal length of the smallest box enclosing both, v is a parameter measuring the consistency of the aspect ratios, and α = v/(1 - IOU + v).
Preferably, extracting the text labels from the bitmap image using the convolutional recurrent network comprises the following steps:
building a CRNN network model;
training a deep learning network model;
evaluating the trained model;
extracting the text information from the bitmap image using the trained model.
Further preferably, the sample data used to train the deep learning network model is generated by rendering a picture word stock in a specified font, restricted to the vocabulary of power-grid substation terms.
Further preferably, the text labels are classified with a naive Bayes classifier, and each label is tagged with the device type it identifies.
The beneficial effects of the invention are as follows:
1. Compared with conventional image processing, artificial-intelligence image recognition can select and analyze pictures intelligently. The CAD graphic format used in the power system is standardized, each icon pattern is clear, and there is essentially no noise interference, so the drawings are well suited to identifying specific objects with computer-graphics methods.
2. Based on technologies such as machine learning and image recognition, combined with the CIM/G standard for substation wiring diagrams, recognition of the primitives, text, and topology of power-grid substation wiring diagrams and their association with the regulation-cloud model are achieved, improving the intelligence level of wiring-diagram maintenance.
Drawings
FIG. 1 is an architecture diagram of the automatic power-grid CIM/G graph generation system based on CAD, PDF, and other design drawings.
Detailed Description
The present invention is described below through embodiments, with reference to the accompanying drawings, to assist those skilled in the art in understanding and implementing it. Unless otherwise indicated, the embodiments and technical terms below should be understood against the background knowledge of the technical field.
Taking the electrical drawing of a substation as an example, the main electrical equipment of a substation comprises the main transformer and, for the voltage-distribution devices at each level, circuit breakers, isolating switches, current transformers, voltage transformers, lightning arresters, earthing devices, high-voltage fuses, current-limiting reactors, and the like. The reactive-power compensation devices include shunt reactors, shunt capacitors, and the like. The neutral-point grounding devices include arc-suppression coils, grounding transformers, transformer neutral-point small reactors, and the like.
In electrical-drawing identification, a convolutional neural network model can identify the device primitives and labels in a picture; a topological-association module identifies the topological relations in the wiring diagram, so that the positions of the device primitives can be corrected according to those relations; and a CIM/G generation module generates CIM/G labels from a preset CIM/G file label library.
Example 1: a method of identifying characteristic primitives in an electrical drawing, comprising the following steps.
A grayscale bitmap image of the electrical drawing is acquired. If the drawing is saved in DWG or DXF format, it is converted into a bitmap image, and if the converted bitmap is a color image it is further converted to grayscale. If the drawing is stored as a PDF, a bitmap image is extracted from it and likewise converted to grayscale if it is in color.
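The patent does not specify the color-to-grayscale conversion; a minimal sketch under the assumption that the drawing has already been rasterized to an RGB array and that the common BT.601 luminance weighting is used (rasterizing DWG/DXF/PDF itself is outside this sketch):

```python
import numpy as np

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB bitmap to a single-channel grayscale
    image using ITU-R BT.601 luminance weights."""
    weights = np.array([0.299, 0.587, 0.114])
    gray = np.rint(rgb[..., :3] @ weights)  # round to nearest integer level
    return gray.astype(np.uint8)

# White stays white and black stays black after conversion.
white = np.full((1, 1, 3), 255, dtype=np.uint8)
black = np.zeros((1, 1, 3), dtype=np.uint8)
```

Any weighting whose coefficients sum to one would serve here; BT.601 is simply the weighting most raster libraries default to.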
Devices are extracted from the bitmap image using a YOLOv5 neural network. The YOLOv5 network adopts a one-stage structure and consists of four parts: the input end, the Backbone network, the Neck layer, and the Prediction layer.
The Backbone network extracts information from the input elements and generates feature maps for the Neck layer. It consists of a Focus structure and a CSP structure. The Focus structure ensures that, while the element information is downsampled, it is concentrated onto the image channels without loss; the features are then fully extracted by a convolution operation, preserving more complete downsampled element information for subsequent feature extraction. The CSP structure mainly addresses the heavy inference cost caused by repeated gradient information during network derivation: the feature maps are split into two parts and merged through a cross-stage hierarchical structure, reducing computation while preserving accuracy.
The Neck layer fuses the extracted features with an FPN + PAN structure: the FPN conveys strong semantic features top-down, the PAN conveys strong localization features bottom-up, and parameters from different backbone layers are aggregated across the detection layers, further improving feature-extraction capability.
The Prediction layer uses an anchor-box mechanism to ensure that the inference results are consistent with the labeled training data, implemented mainly through the loss function CIOU_Loss:

CIOU_Loss = 1 - IOU + ρ²(b, b_gt)/c² + αv

where IOU represents the ratio of the overlapping portion of the two rectangular boxes to their union, ρ(b, b_gt) is the distance between the centres of the two boxes, c is the diagonal length of the smallest box enclosing both, v is a parameter measuring the consistency of the aspect ratios, and α = v/(1 - IOU + v).
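The printed formula is missing from this text. A minimal Python sketch, assuming the standard CIoU formulation (1 - IoU, plus a centre-distance term, plus an aspect-ratio penalty), which matches the where-clause above:

```python
import math

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def ciou_loss(pred, gt):
    """CIOU_Loss = 1 - IoU + rho^2/c^2 + alpha * v (standard CIoU form).
    Boxes are (x1, y1, x2, y2) with nonzero width and height."""
    i = iou(pred, gt)
    # rho: distance between box centres; c: diagonal of the enclosing box
    px, py = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    gx, gy = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    rho2 = (px - gx) ** 2 + (py - gy) ** 2
    cw = max(pred[2], gt[2]) - min(pred[0], gt[0])
    ch = max(pred[3], gt[3]) - min(pred[1], gt[1])
    c2 = cw ** 2 + ch ** 2
    # v measures aspect-ratio consistency between the two boxes
    v = (4 / math.pi ** 2) * (
        math.atan((gt[2] - gt[0]) / (gt[3] - gt[1]))
        - math.atan((pred[2] - pred[0]) / (pred[3] - pred[1]))) ** 2
    alpha = v / (1 - i + v) if v > 0 else 0.0  # avoid 0/0 for identical boxes
    return 1 - i + rho2 / c2 + alpha * v
```

For identical boxes the loss is zero; for disjoint boxes it exceeds one because the IoU term saturates and the centre-distance term adds on top.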
Text labels are extracted from the bitmap image using a convolutional recurrent network, in the following main steps:
1) build a CRNN network model;
2) train the deep learning network model;
3) evaluate the trained model with a suitable method;
4) extract the text information from the bitmap image using the trained model.
CRNN combines a CNN and an RNN: the CNN extracts features from the bitmap image, the RNN predicts a sequence from them, and a transcription layer produces the final result.
In the CRNN model, the convolutional component is built from the convolution and max-pooling layers of a standard CNN model (with the fully connected layers removed). This component extracts a serialized feature representation from the input picture. All pictures are normalized to the same height before being fed to the network. A sequence of feature vectors is then extracted from the feature maps: each feature vector of the sequence is generated column by column, from left to right, over the feature maps, which means the i-th feature vector is the concatenation of the i-th column of all the maps. In this setup the width of each column is fixed at a single pixel.
On top of the convolutional layers, a deep bidirectional RNN is built. The recurrent layers predict a label distribution y_t for each frame x_t of the feature sequence x = x_1, ..., x_T.
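The column-to-vector construction described above can be sketched with numpy; the (C, H, W) layout of the feature maps is an assumption for illustration:

```python
import numpy as np

def feature_sequence(feature_maps: np.ndarray) -> list:
    """Turn CNN feature maps of shape (C, H, W) into a left-to-right
    sequence of W feature vectors; the t-th vector concatenates the
    t-th column of every map, giving a vector of length C * H."""
    c, h, w = feature_maps.shape
    return [feature_maps[:, :, t].reshape(c * h) for t in range(w)]

maps = np.arange(2 * 3 * 4).reshape(2, 3, 4)  # C=2, H=3, W=4
seq = feature_sequence(maps)                   # sequence length T = W = 4
```

Each of the T = W vectors then serves as one input frame x_t for the recurrent layers.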
During network-model training, the sample data is generated by rendering a picture word stock in a specified font, restricted to power-grid substation vocabulary such as "bus", "main transformer", and "standby transformer", to support the training of the CRNN model.
The training data set is defined by X = {I_i, l_i}, where I_i is a training picture and l_i its ground-truth label sequence. The objective is to minimize the negative log-likelihood of the conditional probability of the ground truth:

O = -Σ_{(I_i, l_i) ∈ X} log p(l_i | y_i)

where y_i is the sequence produced by the recurrent and convolutional layers from I_i.
The devices and text labels are then associated. For each device, the label nearest to it must be found, where the distance between a label and a device is the Euclidean distance between their coordinates:

ρ = √((x2 - x1)² + (y2 - y1)²)

where ρ is the Euclidean distance between the device coordinates (x1, y1) and the label coordinates (x2, y2).
All candidate labels are obtained from the Euclidean distances; for each candidate, the device type is compared with the label content to finally obtain the accurately matched label, which is paired with the device.
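A minimal sketch of the candidate-selection step, assuming devices and labels carry pixel coordinates; the dictionary layout and field names are hypothetical:

```python
import math

def candidate_labels(device, labels, threshold):
    """Return the text labels within `threshold` Euclidean distance of
    `device`, ordered nearest-first; type/content matching would then
    pick the final label from this shortlist."""
    dx, dy = device["xy"]
    candidates = []
    for label in labels:
        lx, ly = label["xy"]
        rho = math.hypot(lx - dx, ly - dy)  # Euclidean distance
        if rho <= threshold:
            candidates.append((rho, label))
    candidates.sort(key=lambda pair: pair[0])
    return [label for _, label in candidates]

device = {"type": "transformer", "xy": (10.0, 10.0)}
labels = [
    {"text": "#1 main transformer", "xy": (13.0, 14.0)},  # rho = 5
    {"text": "#4 bus", "xy": (100.0, 100.0)},             # far away
    {"text": "#2 transformer", "xy": (10.0, 12.0)},       # rho = 2
]
cands = candidate_labels(device, labels, threshold=20.0)
```

The nearest candidate is not automatically accepted: the subsequent type/content comparison can reject it in favor of the next one, which is why the whole ordered shortlist is returned.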
According to the drafting specifications for substation design drawings, the names of different equipment types follow characteristic rules: transformer names include "#1 main transformer", "1# transformer", "2# transformer", and so on, while bus names include "#4", "#5", "#4 bus", "#5 bus", and so on. These names are short texts; the labels are classified with a naive Bayes classifier and tagged with the device type they identify.
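As an illustration of naive Bayes classification of such short names, a tiny character-level classifier, hand-rolled for self-containment; the patent does not specify the feature representation, so character counts with Laplace smoothing are an assumption:

```python
import math
from collections import Counter, defaultdict

class TinyNaiveBayes:
    """Character-level multinomial naive Bayes with Laplace smoothing,
    sufficient for short device-name labels."""

    def fit(self, texts, classes):
        self.class_counts = Counter(classes)            # class priors (unnormalized)
        self.char_counts = defaultdict(Counter)         # per-class character counts
        for text, cls in zip(texts, classes):
            self.char_counts[cls].update(text)
        self.vocab = {ch for text in texts for ch in text}
        return self

    def predict(self, text):
        def log_score(cls):
            counts = self.char_counts[cls]
            total = sum(counts.values())
            score = math.log(self.class_counts[cls])    # log prior (up to a constant)
            for ch in text:
                # Laplace-smoothed per-character likelihood
                score += math.log((counts[ch] + 1) / (total + len(self.vocab)))
            return score
        return max(self.class_counts, key=log_score)

clf = TinyNaiveBayes().fit(
    ["#1 main transformer", "1# transformer", "2# transformer",
     "#4 bus", "#5 bus", "#4 busbar"],
    ["transformer", "transformer", "transformer", "bus", "bus", "bus"],
)
```

A production system would more likely tokenize on words or use an off-the-shelf implementation; the point here is only that a naive Bayes model separates these name patterns with very little training data.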
With the above steps, the substation primitive-recognition function is realized by an artificial-intelligence graphic-recognition analysis method. In addition, a machine learning algorithm can be trained on the primitive features annotated in a sample library to recognize the position, coordinates, and other information of each primitive in the wiring diagram, achieving target detection of primitives. Finally, a machine learning classification algorithm identifies the primitive type at each detected position and thus the type of primary equipment the primitive represents.
The invention has been described in detail above with reference to the figures and examples. It should be understood that the description cannot exhaust all possible embodiments and that the inventive concept is illustrated here as far as practicable. Without departing from that concept and without creative work, a person skilled in the art may combine the technical features of the embodiments, vary specific parameters experimentally, or routinely substitute prior art in the field for the disclosed technical means; the resulting specific embodiments belong to the content implicitly disclosed by the invention.
Claims (6)
1. A method of identifying characteristic primitives in an electrical drawing, comprising the steps of:
acquiring a grayscale bitmap image of the electrical drawing;
extracting devices from the bitmap image using a YOLOv5 neural network;
extracting text labels from the bitmap image using a convolutional recurrent network;
associating the devices with the text labels: selecting the text labels whose distance to a device is within a candidate threshold as candidate labels, and matching device and label according to the device type and the content of the candidate labels.
2. The method for identifying characteristic primitives in an electrical drawing as claimed in claim 1, wherein said YOLOv5 neural network adopts a one-stage structure consisting of four parts: an input end, a Backbone network, a Neck layer, and a Prediction layer.
3. The method for recognizing characteristic primitives in an electrical drawing as claimed in claim 2, wherein said Prediction layer uses an anchor-box mechanism to ensure that the inference results are consistent with the labeled training data, implemented mainly through the loss function CIOU_Loss:

CIOU_Loss = 1 - IOU + ρ²(b, b_gt)/c² + αv

where IOU represents the ratio of the overlapping portion of the two rectangular boxes to their union, ρ(b, b_gt) is the distance between the centres of the two boxes, c is the diagonal length of the smallest box enclosing both, v is a parameter measuring the consistency of the aspect ratios, and α = v/(1 - IOU + v).
4. The method for identifying characteristic primitives in an electrical drawing as claimed in claim 1, wherein extracting the text labels from said bitmap image using the convolutional recurrent network comprises the following steps:
building a CRNN network model;
training a deep learning network model;
evaluating the trained model;
and extracting the text information in the bitmap image by using the trained model.
5. The method of claim 4, wherein the sample data used to train the deep learning network model is generated by rendering a picture word stock in a specified font, restricted to the vocabulary of power-grid substation terms.
6. The method for recognizing characteristic primitives of claim 4, wherein a naive Bayes classifier is used to classify the text labels, and each label is tagged with the device type it identifies.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210427416.6A CN114821599A (en) | 2022-04-21 | 2022-04-21 | Method for identifying characteristic graphic element in electrical drawing |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210427416.6A CN114821599A (en) | 2022-04-21 | 2022-04-21 | Method for identifying characteristic graphic element in electrical drawing |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114821599A true CN114821599A (en) | 2022-07-29 |
Family
ID=82504904
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210427416.6A Pending CN114821599A (en) | 2022-04-21 | 2022-04-21 | Method for identifying characteristic graphic element in electrical drawing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114821599A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116310765A (en) * | 2023-05-23 | 2023-06-23 | 华雁智能科技(集团)股份有限公司 | Electrical wiring graphic primitive identification method |
CN116978052A (en) * | 2023-07-21 | 2023-10-31 | 安徽省交通规划设计研究总院股份有限公司 | Subgraph layout recognition method of bridge design diagram based on improved YOLOv5 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110309807A (en) * | 2019-07-08 | 2019-10-08 | 西北工业大学 | CAD diagram paper intelligent identification Method |
CN112287773A (en) * | 2020-10-10 | 2021-01-29 | 国家电网有限公司 | Primary wiring diagram primitive identification method based on convolutional neural network |
- 2022-04-21: application CN202210427416.6A filed in CN, published as CN114821599A (status: pending)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110309807A (en) * | 2019-07-08 | 2019-10-08 | 西北工业大学 | CAD diagram paper intelligent identification Method |
CN112287773A (en) * | 2020-10-10 | 2021-01-29 | 国家电网有限公司 | Primary wiring diagram primitive identification method based on convolutional neural network |
Non-Patent Citations (2)
Title |
---|
Zhang Hongqun et al., "Ship detection method for remote-sensing images based on YOLOv5", Electronic Measurement Technology, vol. 44, no. 08, 30 April 2021 (2021-04-30), pages 87-92 *
Ma Jingfa, "Scene text detection and recognition based on deep learning", China Masters' Theses Full-text Database, Information Science and Technology, no. 06, 15 June 2018 (2018-06-15), page 4 *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116310765A (en) * | 2023-05-23 | 2023-06-23 | 华雁智能科技(集团)股份有限公司 | Electrical wiring graphic primitive identification method |
CN116310765B (en) * | 2023-05-23 | 2023-09-01 | 华雁智能科技(集团)股份有限公司 | Electrical wiring graphic primitive identification method |
CN116978052A (en) * | 2023-07-21 | 2023-10-31 | 安徽省交通规划设计研究总院股份有限公司 | Subgraph layout recognition method of bridge design diagram based on improved YOLOv5 |
CN116978052B (en) * | 2023-07-21 | 2024-04-09 | 安徽省交通规划设计研究总院股份有限公司 | Subgraph layout recognition method of bridge design diagram based on improved YOLOv5 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||