CN116597436A - Method and device for recognizing characters of nameplate of switch cabinet of power distribution room


Info

Publication number
CN116597436A
Authority
CN
China
Prior art keywords
image
nameplate
text
sample
character
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310679315.2A
Other languages
Chinese (zh)
Inventor
刘秦铭
陈申宇
陈泽涛
苏崇文
王增煜
任杰
芮庆涛
黄海波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd
Original Assignee
Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority to CN202310679315.2A
Publication of CN116597436A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/63 Scene text, e.g. street names
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/14 Image acquisition
    • G06V30/148 Segmentation of character regions
    • G06V30/153 Segmentation of character regions using recognition of characters or words
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/19 Recognition using electronic means
    • G06V30/191 Design or setup of recognition systems or techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
    • G06V30/19147 Obtaining sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04 INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00 Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50 Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)

Abstract

According to the method and device for recognizing the nameplate characters of a switch cabinet of a power distribution room, a switch cabinet image containing a nameplate in the power distribution room is first acquired, and a nameplate detection model, a text box detection model and a character recognition model are determined. The nameplate detection model extracts the nameplate area from the switch cabinet image to obtain a text box image; the text box detection model labels each character in the text box image to obtain a character box image; and the character box image is then input into the character recognition model to obtain the text information it outputs. By applying three models in turn to the switch cabinet image of the power distribution room, the recognition area is narrowed step by step, which eliminates the interference of irrelevant areas, enlarges the relative area of the recognition target, and improves character recognition accuracy.

Description

Method and device for recognizing characters of nameplate of switch cabinet of power distribution room
Technical Field
The application relates to the technical field of image processing, in particular to a method and a device for recognizing characters of a nameplate of a switch cabinet of a power distribution room.
Background
With the continuously growing demand for electric power in production and daily life, the quality requirements on the power supply provided by power supply departments are also rising. The power distribution room is the most important supply node in a power distribution system, and the nameplate of its switch cabinet carries important information such as the equipment model, specification, production date, rated voltage and rated current. Identifying and extracting the text information on switch cabinet nameplates is therefore important for the management and maintenance of the power distribution room.
For a large-scale distribution system, the workload of manually locating and reading switch cabinet nameplates is very large, so deep learning is currently used to train neural network models to recognize the nameplate characters of the switch cabinets in a distribution room, improving both efficiency and accuracy. However, when recognizing switch cabinet images captured by monitoring equipment, character information outside the nameplate area is easily picked up, and characters on the nameplate are easily missed when they are crowded, so character recognition accuracy remains low.
Disclosure of Invention
The application aims to solve at least one of these technical defects, in particular the defect of the prior art that, when recognizing switch cabinet images captured by monitoring equipment, character information outside the nameplate area is easily recognized and crowded characters in the nameplate are missed, resulting in low character recognition accuracy.
The application provides a method for recognizing characters of a nameplate of a switch cabinet of a power distribution room, which comprises the following steps:
acquiring a switch cabinet image containing a nameplate in a power distribution room, and determining a nameplate detection model, a text box detection model and a character recognition model;
extracting a nameplate area from the switch cabinet image through the nameplate detection model to obtain a text box image;
labeling each character in the text box image based on the text box detection model to obtain a character box image;
and inputting the character box image into the character recognition model to obtain the text information output by the character recognition model, and displaying the text information in the switch cabinet image.
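The four steps above can be sketched as a three-stage pipeline. The sketch below is purely illustrative: the three functions are stand-ins for the trained models described in the application, and the "image" is a toy character grid rather than pixel data.

```python
# Hypothetical sketch of the three-stage recognition pipeline described above.
# The model functions are illustrative stand-ins, not the patent's networks.

def detect_nameplate(cabinet_image):
    """Stage 1: crop the nameplate region from the cabinet image."""
    # A real model would predict a bounding box; here we crop a fixed region.
    top, bottom, left, right = 1, 3, 1, 4
    return [row[left:right] for row in cabinet_image[top:bottom]]

def detect_character_boxes(text_box_image):
    """Stage 2: mark one box per character inside the nameplate crop."""
    boxes = []
    for r, row in enumerate(text_box_image):
        for c, cell in enumerate(row):
            if cell != " ":           # a non-empty cell stands in for a character
                boxes.append((r, c, cell))
    return boxes

def recognize_characters(char_boxes):
    """Stage 3: read the characters in position order and join them."""
    return "".join(ch for _, _, ch in sorted(char_boxes))

def recognize_nameplate(cabinet_image):
    plate = detect_nameplate(cabinet_image)
    boxes = detect_character_boxes(plate)
    return recognize_characters(boxes)

# Toy "image": a grid of cells, with the nameplate text in the middle.
image = [
    ["#", "#", "#", "#", "#"],
    ["#", "K", "V", "1", "#"],
    ["#", "0", " ", "A", "#"],
    ["#", "#", "#", "#", "#"],
]
print(recognize_nameplate(image))  # → KV10A
```

Each stage consumes only the output of the previous one, which is how the recognition area is narrowed step by step.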
Optionally, the acquiring the switch cabinet image including the nameplate in the power distribution room includes:
acquiring an initial image of a switch cabinet in a power distribution room through a camera, and carrying out data enhancement on the initial image to obtain a switch cabinet image containing a nameplate;
the data enhancement comprises image screening, brightness adjustment, size adjustment and denoising.
Optionally, the determining the nameplate detection model includes:
constructing an initial nameplate detection model; the initial nameplate detection model comprises an input layer, a backbone network, a path aggregation network and a general detection layer; the path aggregation network comprises a convolution attention module;
inputting a pre-acquired sample switch cabinet image into the backbone network through the input layer, and extracting the feature mapping of the sample switch cabinet image;
obtaining a plurality of feature maps of the sample switch cabinet image through the backbone network according to the feature mapping, and adjusting the channel and spatial position weights of each feature map by means of the convolution attention module to obtain a fusion feature map;
inputting the fusion feature map into the general detection layer for prediction to obtain a predicted text box image output by the general detection layer;
and iteratively training the initial nameplate detection model with a CIoU loss function, with the goal of making the predicted text box image approach the real text box image of the sample switch cabinet image, until the initial nameplate detection model meets a preset training ending condition, thereby obtaining the nameplate detection model.
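The loss named in this step is presumably the standard CIoU bounding-box regression loss (IoU term, center-distance term, aspect-ratio term). A minimal pure-Python sketch for axis-aligned boxes, not the patent's implementation:

```python
import math

# Hedged sketch of the CIoU loss for boxes given as (x1, y1, x2, y2).

def ciou_loss(box, gt):
    x1, y1, x2, y2 = box
    g1, h1, g2, h2 = gt
    # intersection and union areas
    iw = max(0.0, min(x2, g2) - max(x1, g1))
    ih = max(0.0, min(y2, h2) - max(y1, h1))
    inter = iw * ih
    union = (x2 - x1) * (y2 - y1) + (g2 - g1) * (h2 - h1) - inter
    iou = inter / union
    # squared center distance over squared enclosing-box diagonal
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    gx, gy = (g1 + g2) / 2, (h1 + h2) / 2
    rho2 = (cx - gx) ** 2 + (cy - gy) ** 2
    c2 = ((max(x2, g2) - min(x1, g1)) ** 2
          + (max(y2, h2) - min(y1, h1)) ** 2)
    # aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (math.atan((g2 - g1) / (h2 - h1))
                              - math.atan((x2 - x1) / (y2 - y1))) ** 2
    alpha = v / ((1 - iou) + v + 1e-9)
    return 1 - iou + rho2 / c2 + alpha * v

# identical boxes give zero loss; disjoint boxes give a loss above 1
print(round(ciou_loss((0, 0, 2, 2), (0, 0, 2, 2)), 6))  # → 0.0
```

Because the loss is zero only when the predicted and real boxes coincide, minimizing it drives the predicted text box toward the real one, which is exactly the training goal stated above.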
Optionally, the determining the text box detection model includes:
dividing pre-acquired sample text box images into a training set and a test set, and labeling each character in the sample text box images of the training set to obtain a sample label image corresponding to each sample text box image;
taking a sample text box image in the training set as a training sample and the corresponding sample label image as a sample label, and calculating the loss value of the sample text box image in a preset initial text box detection model;
updating the parameters of the initial text box detection model according to the loss values of the sample text box images in the training set to obtain a target text box detection model;
and iteratively training the target text box detection model with the sample text box images in the test set until the target text box detection model meets a preset training ending condition, thereby obtaining the text box detection model.
Optionally, the iterative training of the target text box detection model with the sample text box images in the test set includes:
inputting the sample text box image in the test set into the target text box detection model to obtain a predicted sample character box image which is output by the target text box detection model and contains a plurality of character boxes;
calculating the confidence probability of the sample text box image according to the number of character boxes in the sample character box image;
and calculating a loss value of the sample text box image by taking the sample text box image as a training sample and taking the predicted sample character box image as a sample label, and optimizing the loss value by utilizing the confidence probability so as to update parameters of the target text box detection model according to the optimized loss value.
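The exact confidence formula and the way it optimizes the loss are not given in this excerpt; the sketch below assumes a simple ratio of predicted to labelled character boxes and a down-weighting rule, purely to illustrate the shape of the computation.

```python
# Hedged sketch of the confidence-weighted loss update described above.
# Both formulas are assumptions, not taken from the patent text.

def confidence_probability(num_pred_boxes, num_label_boxes):
    """Confidence from character-box counts: fraction of labelled boxes found."""
    if num_label_boxes == 0:
        return 0.0
    return min(num_pred_boxes, num_label_boxes) / num_label_boxes

def optimized_loss(raw_loss, confidence):
    # Down-weight samples the model already handles well, so parameter
    # updates focus on text boxes where characters were missed.
    return raw_loss * (1.0 - 0.5 * confidence)

conf = confidence_probability(num_pred_boxes=8, num_label_boxes=10)
print(conf, optimized_loss(2.0, conf))  # → 0.8 1.2
```

A sample whose predicted character-box count matches its label keeps only half its loss weight, while a sample with many missed characters keeps nearly all of it.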
Optionally, the determining the character recognition model includes:
inputting a pre-acquired sample character box image into a preset initial character recognition model to obtain the predicted text information output by the initial character recognition model;
training the initial character recognition model with a CTC loss function, with the goal of making the predicted text information approach the real text information of the sample character box image;
and when the initial character recognition model meets a preset training ending condition, taking the trained initial character recognition model as the character recognition model.
Optionally, the initial character recognition model includes a convolutional neural network, a recurrent neural network and a transcription neural network;
the inputting of the pre-acquired sample character box image into the preset initial character recognition model to obtain the predicted text information output by the initial character recognition model includes:
inputting the pre-acquired sample character box image into the convolutional neural network, extracting the image features of the sample character box image, and converting the image features into a feature matrix;
extracting a feature sequence from the feature matrix with the recurrent neural network, and performing deep bidirectional processing on the feature sequence to obtain a character sequence of the sample character box image;
and inputting the character sequence into the transcription neural network to obtain the predicted text information output by the transcription neural network.
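In a CRNN trained with a CTC loss, the transcription stage typically maps the recurrent network's per-timestep outputs to a string by collapsing repeats and dropping blanks. A minimal sketch of that standard CTC greedy decoding rule (the alphabet and input sequence are illustrative, not from the patent):

```python
# Sketch of CTC-style greedy transcription: collapse repeated labels,
# then drop the blank symbol. This is the usual CRNN transcription rule.

BLANK = "-"

def ctc_greedy_decode(timestep_labels):
    out = []
    prev = None
    for ch in timestep_labels:
        if ch != prev and ch != BLANK:
            out.append(ch)
        prev = ch
    return "".join(out)

# e.g. the per-timestep argmax labels for the text "KV10"
print(ctc_greedy_decode(["K", "K", "-", "V", "1", "1", "-", "0"]))  # → KV10
```

The blank between the two "1" timesteps is what allows a genuinely repeated character to survive the collapse step.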
The application also provides a device for recognizing the characters of the nameplate of the switch cabinet of the power distribution room, which comprises the following components:
the model determining module is used for acquiring a switch cabinet image containing a nameplate in the power distribution room and determining a nameplate detection model, a text box detection model and a character recognition model;
the text box extraction module is used for extracting a nameplate area from the switch cabinet image through the nameplate detection model to obtain a text box image;
the character box extraction module is used for labeling each character in the text box image based on the text box detection model to obtain a character box image;
and the character recognition module is used for inputting the character box image into the character recognition model to obtain the text information output by the character recognition model, and displaying the text information in the switch cabinet image.
The present application also provides a storage medium having stored therein computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the method for recognizing characters of a nameplate of a switch cabinet of a power distribution room described in any of the above embodiments.
The present application also provides a computer device comprising: one or more processors, and memory;
the memory has stored therein computer readable instructions which, when executed by the one or more processors, perform the steps of the method for recognizing characters of a nameplate of a switch cabinet of a power distribution room described in any of the above embodiments.
From the above technical solutions, the embodiment of the present application has the following advantages:
According to the method and device for recognizing the nameplate characters of a switch cabinet of a power distribution room, a switch cabinet image containing a nameplate in the power distribution room is first acquired, and a nameplate detection model, a text box detection model and a character recognition model are determined. Extracting the nameplate area from the switch cabinet image with the nameplate detection model to obtain a text box image eliminates the interference of irrelevant image areas and reduces the amount of data in subsequent image processing, thereby improving the accuracy of nameplate character recognition. The text box detection model labels each character in the text box image to obtain a character box image, and the character box image is then input into the character recognition model to obtain the text information it outputs; recognizing characters only after each character in the text box image has been individually labeled reduces the miss rate when characters are crowded. By applying three models in turn to the switch cabinet image, the recognition area is narrowed step by step, which eliminates the interference of irrelevant areas, enlarges the relative area of the recognition target, and improves character recognition accuracy.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the application, and that a person skilled in the art could obtain other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart of a method for recognizing characters of a nameplate of a switch cabinet of a power distribution room, which is provided by an embodiment of the application;
fig. 2 is a schematic structural diagram of a nameplate character recognition device of a switch cabinet of a power distribution room, which is provided by the embodiment of the application;
fig. 3 is a schematic diagram of an internal structure of a computer device according to an embodiment of the present application.
Detailed Description
The following describes the embodiments of the present application clearly and completely with reference to the accompanying drawings. It is apparent that the embodiments described are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
For a large-scale distribution system, the workload of manually locating and reading switch cabinet nameplates is very large, so deep learning is currently used to train neural network models to recognize the nameplate characters of the switch cabinets in a distribution room, improving both efficiency and accuracy. However, when recognizing switch cabinet images captured by monitoring equipment, character information outside the nameplate area is easily picked up, and characters on the nameplate are easily missed when they are crowded, so character recognition accuracy remains low.
Based on the above, the application provides the following technical scheme, which specifically comprises the following steps:
In one embodiment, as shown in fig. 1, which is a schematic flow chart of the method for recognizing characters of a nameplate of a switch cabinet of a power distribution room provided by an embodiment of the application, the method specifically comprises the following steps:
s110: and acquiring a switch cabinet image containing a nameplate in the power distribution room, and determining a nameplate detection model, a text box detection model and a character recognition model.
In this step, when the switch cabinet image is to undergo character recognition, the switch cabinet in the power distribution room can be photographed by a camera to acquire field images, and the switch cabinet images containing a switch cabinet nameplate are screened out from the acquired field images. The models used to detect and recognize the switch cabinet image can then be determined; these comprise a nameplate detection model, a text box detection model and a character recognition model.
The switch cabinet is a cabinet for storing power equipment in a power distribution room, a nameplate is arranged on the cabinet surface of the switch cabinet, and equipment information corresponding to the power equipment in the switch cabinet, such as model, specification, production date, rated voltage, rated current and the like, is recorded on the nameplate.
It can be understood that each model performs a corresponding function in nameplate character recognition: the nameplate detection model detects the nameplate area in the switch cabinet image, the text box detection model detects the character positions within the nameplate area, and the character recognition model recognizes and outputs the characters at those positions. By applying three models in turn to the switch cabinet image of the power distribution room, the recognition area is narrowed step by step, which eliminates the interference of irrelevant areas, enlarges the relative area of the recognition target, and improves character recognition accuracy.
S120: and extracting a nameplate area from the switch cabinet image through a nameplate detection model to obtain a text box image.
In this step, after the switch cabinet image is obtained and the nameplate detection model is determined through S110, the switch cabinet image can be input into the nameplate detection model, so that the model detects the area of the nameplate in the switch cabinet image and crops the switch cabinet image according to the detected area, thereby obtaining and outputting a text box image.
It can be understood that the nameplate detection model is obtained by training a preset initial nameplate detection model, with sample switch cabinet images as training samples and the real text box images corresponding to them as training labels. During training, a sample switch cabinet image is input into the initial nameplate detection model to obtain the predicted text box image it outputs; then, with the goal of making the predicted text box image approach the real text box image of the sample switch cabinet image, the initial nameplate detection model is iteratively trained until it meets a preset training ending condition, thereby obtaining the nameplate detection model. The preset training ending condition can be a loss value threshold or a number of training iterations, and is not limited here.
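The "preset training ending condition" above (a loss threshold or an iteration cap) amounts to a simple stopping rule; a sketch with toy values, not the patent's actual thresholds:

```python
# Illustrative stopping rule: end training when the loss falls below a
# threshold, or when a maximum number of iterations is reached.

def train_until(losses, loss_threshold=0.05, max_iters=100):
    """Iterate over a stream of per-iteration losses, stopping on either rule."""
    for i, loss in enumerate(losses, start=1):
        if loss <= loss_threshold:
            return i, "loss_threshold"
        if i >= max_iters:
            return i, "max_iters"
    return len(losses), "exhausted"

print(train_until([0.9, 0.4, 0.1, 0.04, 0.01]))   # → (4, 'loss_threshold')
print(train_until([0.9] * 10, max_iters=3))       # → (3, 'max_iters')
```

Either rule alone suffices to end training, which is why the text leaves the choice open.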
S130: and labeling each character in the text box image based on the text box detection model to obtain a character box image.
In this step, after the text box detection model and the text box image are determined through S110 and S120, the text box image can be input into the text box detection model, so that the model detects the position of each nameplate character in the text box image and marks a character box on each character according to the detected position, thereby obtaining and outputting a character box image.
It can be understood that the text box detection model is obtained by training a preset initial text box detection model, with sample text box images as training samples and the real character box images corresponding to them as training labels. During training, the sample text box images can be divided into a training set and a test set. The training set is first input into the initial text box detection model to obtain the predicted character box images it outputs; then, with the goal of making each predicted character box image approach the corresponding real character box image, the initial text box detection model is trained to obtain a target text box detection model. The test set is then used to iteratively train the target text box detection model until it meets a preset training ending condition, which can be a loss value threshold or a number of training iterations, and is not limited here.
S140: inputting the character box image into the character recognition model to obtain the text information output by the character recognition model, and displaying the text information in the switch cabinet image.
In this step, after the character recognition model and the character box image are determined through S110 and S130, the character box image can be input into the character recognition model, so that the model recognizes the characters in each character box of the nameplate in turn and combines the recognized characters in sequence to obtain the text information on the nameplate; the text information can then be displayed in the switch cabinet image and output.
It can be understood that the character recognition model is obtained by training a preset initial character recognition model, with sample character box images as training samples and the real text information corresponding to them as training labels. During training, a sample character box image is input into the initial character recognition model to obtain the predicted text information it outputs; then, with the goal of making the predicted text information approach the real text information of the sample character box image, the initial character recognition model is iteratively trained until it meets a preset training ending condition, thereby obtaining the character recognition model. The preset training ending condition can be a loss value threshold or a number of training iterations, and is not limited here.
Further, after the text information of the character box image is recognized, it may be displayed on a corresponding area in the switch cabinet image; this area may be adjacent to the nameplate or within the nameplate, and is not limited here. Furthermore, the font and font size of the displayed text information can be adjusted according to the specific background in the switch cabinet image, so that operation and maintenance personnel can see the equipment information of the corresponding equipment in the switch cabinet at a glance.
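The placement of the displayed text can be reduced to a small layout rule. The sketch below assumes one plausible rule (draw below the nameplate box, fall back to above it near the image edge); the patent does not fix a specific rule.

```python
# Illustrative placement rule for the recognized text overlay.
# Boxes are (x1, y1, x2, y2) in pixel coordinates; the rule is an assumption.

def text_anchor(plate_box, image_h, line_height=20):
    x1, y1, x2, y2 = plate_box
    if y2 + line_height <= image_h:        # room below the nameplate
        return (x1, y2 + line_height)
    return (x1, max(0, y1 - line_height))  # otherwise draw above it

print(text_anchor((100, 50, 300, 120), image_h=480))   # → (100, 140)
print(text_anchor((100, 430, 300, 475), image_h=480))  # → (100, 410)
```

With a drawing library the returned anchor would be passed to a text-drawing call together with a font and size chosen against the local background, as the paragraph above describes.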
In the above embodiment, when the nameplate characters of a switch cabinet of a power distribution room are recognized, the switch cabinet image containing the nameplate in the power distribution room is first obtained, and the nameplate detection model, text box detection model and character recognition model are determined. Extracting the nameplate area from the switch cabinet image with the nameplate detection model to obtain a text box image eliminates the interference of irrelevant image areas and reduces the amount of data in subsequent image processing, thereby improving the accuracy of nameplate character recognition. The text box detection model labels each character in the text box image to obtain a character box image, and the character box image is then input into the character recognition model to obtain the text information it outputs; recognizing characters only after each character in the text box image has been individually labeled reduces the miss rate when characters are crowded. By applying three models in turn to the switch cabinet image, the recognition area is narrowed step by step, which eliminates the interference of irrelevant areas, enlarges the relative area of the recognition target, and improves character recognition accuracy.
In one embodiment, the acquiring, in S110, the switch cabinet image including the nameplate in the power distribution room may include:
s111: the method comprises the steps of collecting an initial image of a switch cabinet in a power distribution room through a camera, and carrying out data enhancement on the initial image to obtain a switch cabinet image containing a nameplate.
In this embodiment, when obtaining the switch cabinet image, can adopt the camera to carry out the multi-angle shooting to the switch cabinet in the electricity distribution room to obtain the initial image of switch cabinet, and carry out data enhancement to the initial image, obtain the switch cabinet image that contains the data plate.
It can be understood that when the images of the switch cabinets are acquired, the cameras can be used for shooting the switch cabinets in the power distribution room in real time, the interval time for image capture is set for the cameras, namely, each time the interval time is set, the cameras automatically capture an initial image, further, after the initial image is acquired, the data enhancement can be carried out on the images to be identified, and the preprocessing process comprises, but is not limited to, image screening, brightness adjustment, size adjustment and denoising.
It should be noted that, because the lighting of the power distribution room and the camera angles vary, the quality of the acquired initial images also varies, so data enhancement may be performed on the acquired images. Image screening refers to removing initial images in which the nameplate was not captured and retaining those that contain the nameplate. Brightness adjustment refers to adjusting the brightness value of the initial image to a preset brightness value. Size adjustment refers to adjusting the size and resolution of the initial image; because the cameras occupy different positions in the power distribution room, the proportion of the photographed switch cabinet within the initial image also differs, so the initial image may be resized according to the proportion of the switch cabinet in it. Denoising refers to reducing noise in a digital image; captured images generally contain noise, which is a major cause of image interference, and denoising may therefore be performed.
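The brightness and size adjustments above can be sketched as follows. This is an illustrative sketch only: the patent does not fix concrete values, so the preset brightness value, target size and function names are assumptions, and nearest-neighbour resizing stands in for whatever interpolation a production pipeline (e.g. OpenCV) would use.

```python
import numpy as np

TARGET_BRIGHTNESS = 128.0  # assumed preset brightness value
TARGET_SIZE = (640, 640)   # assumed model input size

def adjust_brightness(img: np.ndarray, target: float = TARGET_BRIGHTNESS) -> np.ndarray:
    """Shift pixel values so the mean brightness matches the preset value."""
    shift = target - img.mean()
    return np.clip(img.astype(np.float32) + shift, 0, 255).astype(np.uint8)

def resize_nearest(img: np.ndarray, size=TARGET_SIZE) -> np.ndarray:
    """Nearest-neighbour resize of a 2-D image via index mapping."""
    h, w = img.shape[:2]
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return img[rows][:, cols]

img = np.full((480, 640), 100, dtype=np.uint8)  # synthetic under-exposed frame
out = resize_nearest(adjust_brightness(img))
```

In a real pipeline, image screening and denoising would sit before these steps; they are omitted here because they depend on the deployment's cameras and noise model.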
In one embodiment, S110 determines a nameplate detection model, which may include:
S1121: constructing an initial nameplate detection model; the initial nameplate detection model comprises an input layer, a backbone network, a path aggregation network and a general detection layer; the path aggregation network includes a convolution attention module.
S1122: and inputting the pre-acquired sample switch cabinet image into a backbone network through an input layer, and extracting the characteristic mapping of the sample switch cabinet image.
S1123: according to the feature mapping, a plurality of feature images of the sample switch cabinet image are obtained through a backbone network, and the channel and the space position of each feature image are adjusted by utilizing a convolution attention module, so that a fusion feature image is obtained.
S1124: and inputting the fusion feature map into a universal detection layer for prediction to obtain a predicted text box image output by the universal detection layer.
S1125: iteratively training the initial nameplate detection model by using a CIoU Loss function until the initial nameplate detection model meets a preset training ending condition, to obtain the nameplate detection model.
In this embodiment, the initial nameplate detection model can be built based on YOLOv5m. The model comprises an input layer, a backbone network, a path aggregation network and a general detection layer; a convolution attention module can be added to the path aggregation network so that the model focuses more on the channel features of the target object, enhancing its ability to detect small targets. Meanwhile, a CIoU Loss function is adopted as the target loss function of the initial nameplate detection model, which strengthens the regression stability of the target bounding box and improves the accuracy of target prediction. After the initial nameplate detection model is built, it can be trained using pre-acquired sample switch cabinet images and the real text box images of those sample switch cabinet images.
It is understood that YOLOv5m is one model in the YOLOv5 series; the application can also use YOLOv5s, YOLOv5l, YOLOv5x or other models to construct the initial nameplate detection model, without limitation. The convolution attention module in the initial nameplate detection model is an attention mechanism module used in image processing and computer vision tasks; it can adaptively adjust feature maps to improve the performance and accuracy of the model, and mainly comprises channel attention and spatial attention, so that the channel and spatial positions of the feature maps can be adaptively adjusted. The CIoU Loss function is a loss function used in target detection that extends the IoU loss with penalty terms for center-point distance and aspect-ratio consistency, stabilizing bounding-box regression and improving the detection accuracy of the model for small targets.
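A minimal sketch of the channel-plus-spatial attention described above, assuming the standard CBAM ordering (channel attention first, then spatial attention). The shared MLP and the 7x7 convolution of the full module are deliberately omitted, so this only illustrates how per-channel and per-position weights rescale a feature map.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x: np.ndarray) -> np.ndarray:
    # x: (C, H, W); one weight per channel from avg- and max-pooled descriptors
    avg = x.mean(axis=(1, 2))
    mx = x.max(axis=(1, 2))
    w = sigmoid(avg + mx)              # shared-MLP step omitted for brevity
    return x * w[:, None, None]

def spatial_attention(x: np.ndarray) -> np.ndarray:
    # one weight per spatial position from channel-wise avg and max maps
    avg = x.mean(axis=0)
    mx = x.max(axis=0)
    w = sigmoid(avg + mx)              # 7x7 convolution step omitted for brevity
    return x * w[None, :, :]

def cbam(x: np.ndarray) -> np.ndarray:
    """Channel attention followed by spatial attention, as in CBAM."""
    return spatial_attention(channel_attention(x))

feat = np.random.rand(8, 16, 16)       # toy feature map
fused = cbam(feat)
```

Because every attention weight lies in (0, 1), the module rescales rather than replaces the feature responses; the network learns which channels and positions to keep strong.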
In the process of training the initial nameplate detection model, the pre-acquired sample switch cabinet image can first be input into the backbone network through the input layer, so that the backbone network performs slicing and convolution operations on the sample switch cabinet image and extracts and outputs the feature mapping of the target object in the image. The path aggregation network can then obtain a plurality of feature maps of the sample switch cabinet image according to the feature mapping, and the convolution attention module is used to adjust the channels and spatial positions of the feature maps to obtain a plurality of fusion feature maps, which are input into the general detection layer to obtain the predicted text box image output by that layer. Finally, the CIoU Loss function can be used to calculate a loss value between the predicted text box image and the real text box image of the sample switch cabinet image, and the parameters of the initial nameplate detection model are updated according to the loss value, thereby training the initial nameplate detection model.
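The loss computation described above can be illustrated for a single box pair. This sketch follows the usual CIoU definition (IoU term, normalised centre-point distance, aspect-ratio penalty); the exact formula is an assumption, since the patent does not spell it out.

```python
import numpy as np

def ciou_loss(box_p, box_g):
    """CIoU loss between a predicted and a ground-truth box in (x1, y1, x2, y2) form."""
    # intersection over union
    xi1, yi1 = max(box_p[0], box_g[0]), max(box_p[1], box_g[1])
    xi2, yi2 = min(box_p[2], box_g[2]), min(box_p[3], box_g[3])
    inter = max(0.0, xi2 - xi1) * max(0.0, yi2 - yi1)
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_g = (box_g[2] - box_g[0]) * (box_g[3] - box_g[1])
    iou = inter / (area_p + area_g - inter)
    # squared centre distance, normalised by the enclosing box diagonal
    cpx, cpy = (box_p[0] + box_p[2]) / 2, (box_p[1] + box_p[3]) / 2
    cgx, cgy = (box_g[0] + box_g[2]) / 2, (box_g[1] + box_g[3]) / 2
    rho2 = (cpx - cgx) ** 2 + (cpy - cgy) ** 2
    cw = max(box_p[2], box_g[2]) - min(box_p[0], box_g[0])
    ch = max(box_p[3], box_g[3]) - min(box_p[1], box_g[1])
    c2 = cw ** 2 + ch ** 2
    # aspect-ratio consistency term
    v = (4 / np.pi ** 2) * (np.arctan((box_g[2] - box_g[0]) / (box_g[3] - box_g[1]))
                            - np.arctan((box_p[2] - box_p[0]) / (box_p[3] - box_p[1]))) ** 2
    alpha = v / ((1 - iou) + v + 1e-9)
    return 1 - iou + rho2 / c2 + alpha * v

loss_same = ciou_loss((0, 0, 10, 10), (0, 0, 10, 10))  # 0.0 for identical boxes
```

Unlike plain IoU, this loss still gives a useful gradient when the predicted and true boxes do not overlap, which is what "regression stability" refers to above.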
In one embodiment, S110 determines a text box detection model, which may include:
S1131: dividing pre-acquired sample text box images into a training set and a testing set, and labeling each character in the sample text box images in the training set to obtain a sample label image corresponding to each sample text box image.
S1132: and taking the sample text box image in the training set as a training sample, taking the sample label image as a sample label, and calculating the loss value of the sample text box image in a preset initial text box detection model.
S1133: and updating parameters of the initial text box detection model according to the loss value of each sample text box image in the training set to obtain a target text box detection model.
S1134: iteratively training the target text box detection model by using the sample text box images in the test set until the target text box detection model meets a preset training ending condition, to obtain the text box detection model.
In this embodiment, an initial text box detection model can be built based on CRAFT. The model adopts a VGG16 backbone with a UNet-like up-sampling structure. VGG16 is a classical convolutional neural network structure comprising 13 convolutional layers and 3 fully connected layers, which can perform multi-level feature extraction on the input image, improving the detection accuracy and robustness of the model. The UNet-like structure is a commonly used image segmentation network structure that adaptively fuses feature maps of different scales, further improving accuracy and robustness. The CRAFT model applies this UNet-like structure to text detection, so that shallow structural features and deep semantic features can both be retained, improving the model's ability to detect text of different scales and shapes. After the initial text box detection model is built, it is trained using the pre-acquired sample text box images and their real labeled images.
Specifically, in the process of training the initial text box detection model, the pre-collected sample text box images can first be divided into a training set and a test set, where each character in the sample text box images of the training set is labeled to obtain a sample label image corresponding to each sample text box image. Each sample text box image in the training set can then be input into the initial text box detection model to obtain the predicted labeled images output by the model. Next, the target loss function of the initial text box detection model can be used to calculate a loss value between each predicted labeled image and the sample label image of the corresponding sample text box image, so that the parameters of the initial text box detection model are updated according to each loss value, and the target text box detection model is obtained through training.
Further, in the initial stage of model training, the training set comprises labeled images in which the characters have been annotated; these images carry accurate character-box labeling information and can therefore be used directly. Unlabeled images share similar but not identical data characteristics with the labeled images and can provide only limited help at this stage, so after the model has been trained to a certain predictive capability, the unlabeled images can be used for further training, which is equivalent to using the test set for further training of the target text box detection model.
In one embodiment, the iterative training of the target text box detection model using the sample text box images in the test set in S1134 may include:
S1341: inputting the sample text box image in the test set into the target text box detection model to obtain a predicted sample character box image which is output by the target text box detection model and contains a plurality of character boxes.
S1342: and calculating the confidence probability of the sample text box image according to the number of the character boxes in the sample character box image.
S1343: and calculating a loss value of the sample text box image by taking the sample text box image as a training sample and taking the predicted sample character box image as a sample label, and optimizing the loss value by using the confidence probability so as to update the parameters of the target text box detection model according to the optimized loss value.
In this embodiment, when iteratively training the target text box detection model, the sample text box images in the test set may first be input into the target text box detection model in sequence to obtain predicted sample character box images containing a plurality of character boxes, each character box containing one character. Then, according to the number of character boxes in the predicted sample character box image, the confidence probability of the sample text box image may be calculated, and the predicted sample character box image is used as the label image of the sample text box image; that is, the sample text box image serves as the training sample and the predicted sample character box image serves as the sample label for retraining the model.
It can be understood that, because the predicted sample character box image is obtained by prediction after the sample text box image is input into the model, the number and positions of its character boxes are not guaranteed to be accurate, so the difference between the predicted and the actual number of characters can be used to measure the accuracy of the prediction.
Further, to ensure the effectiveness of iterative training, sample text box images and their corresponding sample label images from the training set can be randomly mixed in during training; the ratio of added sample label images to predicted sample character box images may be 1:5 or 1:4, without limitation. According to the application, training the model with imperfectly accurate labeling data reduces the cost and difficulty of data labeling and improves the training efficiency and generalization capability of the model.
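The confidence-weighted loss described in this embodiment can be sketched as follows. The patent gives no concrete confidence formula, so the decay with the character-count gap and both function names are assumptions for illustration.

```python
# Hedged sketch: confidence is assumed to decay with the relative gap between
# the predicted and the expected number of character boxes, and is then used
# to down-weight the loss of a pseudo-labelled sample.

def confidence_probability(n_predicted: int, n_expected: int) -> float:
    """Closer predicted/expected character counts give higher confidence in (0, 1]."""
    gap = abs(n_predicted - n_expected)
    return 1.0 / (1.0 + gap / max(n_expected, 1))

def weighted_loss(raw_loss: float, n_predicted: int, n_expected: int) -> float:
    """Scale the loss on a pseudo-labelled sample by the prediction confidence."""
    return raw_loss * confidence_probability(n_predicted, n_expected)
```

With this shape, a perfect count gives full weight, while a prediction that misses half the characters contributes only about two thirds of its raw loss, limiting the damage from inaccurate pseudo-labels.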
In one embodiment, determining the text recognition model in S110 may include:
S1141: inputting a pre-acquired sample character frame image into a preset initial character recognition model to obtain predicted text information output by the initial character recognition model.
S1142: and training the initial character recognition model by using a CTC loss function by taking the real text information of which the predicted text information approaches to the sample character frame image as a target.
S1143: when the initial character recognition model meets the preset training ending condition, the training-completed initial character recognition model is used as a character recognition model.
In this embodiment, when determining the predicted text information, an initial character recognition model may be constructed based on CRNN and then trained. In the training process, a sample character frame image may first be obtained; the sample character frame image is labeled with its corresponding text information, and the sample label is the real text information of the sample character frame image. After the sample character frame image is input into the initial character recognition model, the predicted text information output by the model can be obtained. The application can then train the initial character recognition model with a CTC loss function, taking as the target that the predicted text information approaches the real text information of the sample character frame image, and when the initial character recognition model meets the preset training ending condition, the trained model is used as the character recognition model.
It can be understood that, when training the initial character recognition model, the model can first be constructed based on CRNN. CRNN is a convolutional recurrent neural network structure mainly used for end-to-end recognition of text sequences of indefinite length, converting text recognition into a sequence learning problem over time steps; it comprises three network structures: a convolutional neural network, a recurrent neural network and a transcription neural network. In the application, a ResNet residual network can be adopted in the convolutional neural network to alleviate the degradation problem of deep networks, and an Attention mechanism structure is added as an improvement, so that the model pays more attention to the important information in the input sequence, improving its performance and effect.
In addition, the application can also adopt the CTC loss function as the target loss function of the initial character recognition model, which maps the input sequence to the output sequence while accounting for the alignment between them. Specifically, the CTC loss function treats the alignment between the input and output sequences as a hidden variable, trains the model by maximizing the probability over alignments, learns the alignment automatically during training, and maps the model output to the correct label sequence.
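The alignment collapse that CTC performs can be illustrated with its standard decoding rule: merge repeated labels, then remove blanks. The function name is illustrative; in practice a library implementation such as `torch.nn.CTCLoss` would compute the training loss itself.

```python
BLANK = 0  # index conventionally reserved for the CTC blank label

def ctc_collapse(path, blank=BLANK):
    """Map a per-time-step label path to its output sequence."""
    out, prev = [], None
    for label in path:
        # keep a label only when it differs from the previous step and is not blank
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return out

# e.g. the alignment [a a - b b b - a] (with '-' as blank) maps to [a b a]
decoded = ctc_collapse([1, 1, 0, 2, 2, 2, 0, 1])
```

Many different time-step paths collapse to the same text, which is why CTC training sums the probability over all alignments of the label sequence rather than requiring one fixed segmentation.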
Furthermore, the application can also preprocess the sample character frame image before inputting it into the initial character recognition model, for example by normalization, sharpening and denoising, so that the sample character frame image is scaled to a suitable size and image clarity is effectively improved, which helps improve the training efficiency of the model.
In one embodiment, the initial character recognition model includes a convolutional neural network, a recurrent neural network and a transcription neural network; S1141, inputting a pre-acquired sample character frame image into a preset initial character recognition model to obtain the predicted text information output by the initial character recognition model, may include:
S1411: inputting a pre-acquired sample character frame image into the convolutional neural network, extracting image features of the sample character frame image, and converting the image features into a feature matrix.
S1412: extracting a feature sequence of the feature matrix by using the recurrent neural network, and performing deep bidirectional processing on the feature sequence to obtain a text sequence of the sample character frame image.
S1413: and inputting the text sequence into a transcription neural network to obtain the predicted text information output by the transcription neural network.
In this embodiment, when the pre-acquired sample character frame image is input into the initial character recognition model, the image may first be input into the convolutional neural network of the model, so that the convolutional neural network extracts the image features of the sample character frame image and converts the extracted features into a feature matrix. The feature matrix may then be input into the recurrent neural network, which extracts the feature sequence of the feature matrix and performs deep bidirectional processing on it to obtain the text sequence of the sample character frame image. Finally, the text sequence may be input into the transcription neural network to obtain the predicted text information output by the transcription neural network.
It can be understood that, after the image features of the sample character frame image are extracted, the features can be divided into different grids, and the feature vectors of the grids are then spliced into a feature matrix; the number of rows and columns of the feature matrix can be adjusted according to actual conditions such as the construction parameters of the model and the training progress, without limitation.
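The grid-splicing step above can be sketched as follows; the shapes are assumptions chosen for illustration, with one grid per width position of the feature map, so each column of the image becomes one time step for the recurrent network.

```python
import numpy as np

# toy CNN output: (channels, height, width); values stand in for learned features
feat_map = np.random.rand(32, 8, 40)
C, H, W = feat_map.shape

# one grid per width position; each grid's features flattened into one row vector
feature_matrix = feat_map.transpose(2, 0, 1).reshape(W, C * H)  # (W, C*H)
```

Each of the W rows then corresponds to a narrow vertical slice of the character frame image, which is the order in which characters appear left to right.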
In addition, when the feature sequence is subjected to deep bidirectional processing, forward and backward processing may be performed on the input feature sequence. Specifically, forward processing handles the input data of each time step one by one from the first time step of the sequence to the last, and backward processing handles the input data of each time step in reverse order from the last time step to the first. Through the interaction and fusion of forward and backward information, the model's ability to capture sequence data and language data is achieved.
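A toy sketch of the deep bidirectional processing described above: a placeholder recurrent cell is run forward and then backward over the feature sequence, and the two hidden states are concatenated per time step. The cell itself is a stand-in; a real model would use bidirectional LSTM/GRU layers.

```python
import numpy as np

def toy_cell(h, x):
    # placeholder recurrent update; a real model would use an LSTM/GRU cell
    return np.tanh(h + x)

def bidirectional(seq: np.ndarray) -> np.ndarray:
    """Run the toy cell forward and backward over a (T, D) sequence."""
    T, D = seq.shape
    fwd, bwd = np.zeros((T, D)), np.zeros((T, D))
    h = np.zeros(D)
    for t in range(T):                 # first time step -> last
        h = toy_cell(h, seq[t])
        fwd[t] = h
    h = np.zeros(D)
    for t in reversed(range(T)):       # last time step -> first
        h = toy_cell(h, seq[t])
        bwd[t] = h
    return np.concatenate([fwd, bwd], axis=1)  # (T, 2D)

features = np.random.randn(6, 4)       # toy feature sequence of 6 time steps
out = bidirectional(features)
```

After concatenation, every time step carries context from both earlier and later positions in the sequence, which is what lets the recognizer use characters on either side when disambiguating a glyph.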
The description of the nameplate character recognition device of the switch cabinet of the power distribution room provided by the embodiment of the application is provided below, and the nameplate character recognition device of the switch cabinet of the power distribution room described below and the nameplate character recognition method of the switch cabinet of the power distribution room described above can be correspondingly referred to each other.
In one embodiment, as shown in fig. 3, fig. 3 is a schematic structural diagram of a nameplate character recognition device of a switch cabinet of a power distribution room, provided by the embodiment of the application; the application also provides a device for recognizing the nameplate words of the switch cabinet of the power distribution room, which can comprise a model determining module 210, a text box extracting module 220, a character box extracting module 230 and a text recognizing module 240, and specifically comprises the following steps:
the model determining module 210 is configured to obtain an image of a switch cabinet including a nameplate in the power distribution room, and determine a nameplate detection model, a text box detection model, and a text recognition model.
And the text box extraction module 220 is used for extracting a nameplate area from the switch cabinet image through the nameplate detection model to obtain a text box image.
And a character frame extraction module 230, configured to label each character in the text frame image based on the text frame detection model, so as to obtain a character frame image.
The text recognition module 240 is configured to input the character box image into the text recognition model, obtain the text information output by the text recognition model, and display the text information in the switch cabinet image.
In the above embodiment, when character recognition is performed on the nameplate of a switch cabinet in a power distribution room, a switch cabinet image containing the nameplate is first acquired, and a nameplate detection model, a text box detection model and a character recognition model are respectively determined. The nameplate detection model extracts the nameplate area from the switch cabinet image to obtain a text box image, which eliminates the interference of irrelevant image areas and reduces the data volume of subsequent image processing, thereby improving the accuracy of nameplate character recognition. The text box detection model labels each character in the text box image to obtain a character box image, and the character box image is then input into the character recognition model to obtain the text information output by the character recognition model; labeling each character in the text box image before recognition reduces the miss rate when characters are crowded. According to the application, three models are used to perform nameplate character recognition on the switch cabinet image of the power distribution room, progressively narrowing the recognition area in the switch cabinet image, so that interference from irrelevant areas is eliminated, the relative area of the recognition target is enlarged, and character recognition accuracy is improved.
In one embodiment, the present application also provides a storage medium having stored therein computer readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the power distribution room switchgear nameplate text identification method as described in any of the above embodiments.
In one embodiment, the present application also provides a computer device having stored therein computer readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the power distribution room switch cabinet nameplate text identification method as set forth in any of the above embodiments.
Schematically, as shown in fig. 3, fig. 3 is a schematic internal structure of a computer device according to an embodiment of the present application, and the computer device 300 may be provided as a server. Referring to FIG. 3, a computer device 300 includes a processing component 302 that further includes one or more processors, and memory resources represented by memory 301, for storing instructions, such as applications, executable by the processing component 302. The application program stored in the memory 301 may include one or more modules each corresponding to a set of instructions. In addition, the processing component 302 is configured to execute instructions to perform the power distribution room cubicle nameplate text identification method of any of the embodiments described above.
The computer device 300 may also include a power supply component 303 configured to perform power management of the computer device 300, a wired or wireless network interface 304 configured to connect the computer device 300 to a network, and an input/output (I/O) interface 305. The computer device 300 may operate based on an operating system stored in the memory 301, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
It will be appreciated by those skilled in the art that the structure shown in FIG. 3 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
Finally, it is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the present specification, each embodiment is described in a progressive manner, and each embodiment focuses on the difference from other embodiments, and may be combined according to needs, and the same similar parts may be referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for recognizing nameplate characters of a switch cabinet of a power distribution room, characterized in that the method comprises:
acquiring a switch cabinet image containing a nameplate in a power distribution room, and determining a nameplate detection model, a text box detection model and a character recognition model;
extracting a nameplate area from the switch cabinet image through the nameplate detection model to obtain a text box image;
labeling each character in the text box image based on the text box detection model to obtain a character box image;
and inputting the character frame image into the character recognition model to obtain the text information output by the character recognition model, and displaying the text information in the switch cabinet image.
2. The method for recognizing the nameplate text of the switch cabinet of the power distribution room according to claim 1, wherein the step of obtaining the switch cabinet image containing the nameplate in the power distribution room comprises the following steps:
acquiring an initial image of a switch cabinet in a power distribution room through a camera, and carrying out data enhancement on the initial image to obtain a switch cabinet image containing a nameplate;
the data enhancement comprises image screening, brightness adjustment, size adjustment and denoising.
3. The method for recognizing nameplate characters of a switch cabinet of a power distribution room according to claim 1, wherein the determining a nameplate detection model comprises:
constructing an initial nameplate detection model; the initial nameplate detection model comprises an input layer, a backbone network, a path aggregation network and a general detection layer; the path aggregation network comprises a convolution attention module;
inputting a sample switch cabinet image obtained in advance into the backbone network through the input layer, and extracting feature mapping of the sample switch cabinet image;
according to the feature mapping, obtaining a plurality of feature maps of the sample switch cabinet image through the backbone network, and adjusting the channel and spatial position of each feature map by using the convolution attention module to obtain a fusion feature map;
inputting the fusion feature map into the universal detection layer for prediction to obtain a predicted text box image output by the universal detection layer;
and taking the predicted text box image approaching the real text box image of the sample switch cabinet image as a target, iteratively training the initial nameplate detection model by using a CIoU Loss function until the initial nameplate detection model meets a preset training ending condition, to obtain the nameplate detection model.
4. The method for recognizing nameplate words of a switch cabinet of a power distribution room according to claim 1, wherein the determining a text box detection model includes:
dividing pre-acquired sample text box images into a training set and a testing set, and labeling each character in the sample text box images in the training set to obtain a sample label image corresponding to each sample text box image;
taking a sample text box image in the training set as a training sample, taking the sample label image as a sample label, and calculating a loss value of the sample text box image in a preset initial text box detection model;
updating parameters of the initial text box detection model according to the loss value of each sample text box image in the training set to obtain a target text box detection model;
and iteratively training the target text box detection model by using the sample text box images in the test set until the target text box detection model meets a preset training ending condition, to obtain the text box detection model.
5. The method for recognizing nameplate text of a switch cabinet in a power distribution room according to claim 4, wherein the iterative training of the target text box detection model by using the sample text box images in the test set comprises:
inputting the sample text box image in the test set into the target text box detection model to obtain a predicted sample character box image which is output by the target text box detection model and contains a plurality of character boxes;
calculating the confidence probability of the sample text box image according to the number of character boxes in the sample character box image;
and calculating a loss value of the sample text box image by taking the sample text box image as a training sample and taking the predicted sample character box image as a sample label, and optimizing the loss value by utilizing the confidence probability so as to update parameters of the target text box detection model according to the optimized loss value.
6. The method for recognizing characters of nameplates of switch cabinets in power distribution rooms according to claim 1, wherein the determining of the character recognition model comprises:
inputting a pre-acquired sample character frame image into a preset initial character recognition model to obtain predicted text information output by the initial character recognition model;
training the initial character recognition model by using a CTC loss function by taking the real text information of the predicted text information approaching to the sample character frame image as a target;
and when the initial character recognition model meets a preset training ending condition, taking the trained initial character recognition model as the character recognition model.
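The CTC loss named in claim 6 scores a target string by summing the probabilities of all frame-level alignments that collapse to it (merging repeated symbols and dropping blanks). The brute-force enumeration below illustrates that definition on a two-frame toy example; it is a pedagogical sketch, not the dynamic-programming implementation a real trainer would use:

```python
import itertools
import math

BLANK = "-"

def collapse(path):
    """CTC collapse rule: merge consecutive repeats, then drop blanks."""
    out, prev = [], None
    for s in path:
        if s != prev and s != BLANK:
            out.append(s)
        prev = s
    return "".join(out)

def ctc_probability(frames, target):
    """P(target | frames) by summing over every path that collapses to target.
    `frames` is a list of dicts mapping symbol -> per-frame probability."""
    symbols = list(frames[0].keys())
    total = 0.0
    for path in itertools.product(symbols, repeat=len(frames)):
        if collapse(path) == target:
            p = 1.0
            for frame, s in zip(frames, path):
                p *= frame[s]
            total += p
    return total

# Two timesteps over the alphabet {"a", blank}: paths (a,a), (a,-), (-,a)
# all collapse to "a".
frames = [{"a": 0.6, BLANK: 0.4}, {"a": 0.5, BLANK: 0.5}]
p = ctc_probability(frames, "a")
ctc_loss = -math.log(p)  # CTC loss is the negative log-probability
```

Minimizing this negative log-probability is exactly the training objective of the claim: it pushes the network's per-frame outputs toward alignments that collapse to the ground-truth text.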
7. The method for recognizing nameplate characters of a switch cabinet of a power distribution room according to claim 6, wherein the initial character recognition model comprises a convolutional neural network, a recurrent neural network and a transcription neural network;
inputting the pre-acquired sample character frame image into a preset initial character recognition model to obtain predicted text information output by the initial character recognition model, wherein the method comprises the following steps:
inputting a pre-acquired sample character frame image into the convolutional neural network, extracting image features of the sample character frame image, and converting the image features into a feature matrix;
extracting a feature sequence from the feature matrix with the recurrent neural network, and performing deep bidirectional processing on the feature sequence to obtain a text sequence of the sample character frame image;
and inputting the text sequence into the transcription neural network to obtain the predicted text information output by the transcription neural network.
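The transcription step of the CRNN-style model in claim 7 is commonly realized as greedy best-path CTC decoding: take the argmax class at each timestep, merge repeats, drop blanks. The sketch below shows that decoding rule on hand-written scores; the class layout (blank at index 0) and all names are assumptions, not details from the patent:

```python
BLANK = 0  # assumed convention: class 0 is the CTC blank

def greedy_ctc_decode(logits, charset):
    """Best-path decoding: argmax per timestep, merge consecutive repeats,
    drop blanks. `logits` is a T x C list of scores; charset[i] is the
    character for class index i + 1 (index 0 being the blank)."""
    best = [max(range(len(frame)), key=frame.__getitem__) for frame in logits]
    decoded, prev = [], BLANK
    for c in best:
        if c != prev and c != BLANK:
            decoded.append(charset[c - 1])
        prev = c
    return "".join(decoded)

# T=5 timesteps over classes [blank, 'K', 'V']:
logits = [
    [0.1, 0.8, 0.1],    # 'K'
    [0.1, 0.7, 0.2],    # 'K' again -> merged with the previous frame
    [0.9, 0.05, 0.05],  # blank -> dropped
    [0.2, 0.1, 0.7],    # 'V'
    [0.6, 0.2, 0.2],    # blank -> dropped
]
text = greedy_ctc_decode(logits, "KV")
```

The repeated 'K' frames collapse to a single character, so the five timesteps decode to the two-character string "KV", which is how the transcription network turns the recurrent layer's per-timestep sequence into final text.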
8. A nameplate character recognition device for a switch cabinet of a power distribution room, characterized by comprising:
the model determining module is used for acquiring a switch cabinet image containing a nameplate in the power distribution room and determining a nameplate detection model, a text box detection model and a character recognition model;
the text box extraction module is used for extracting a nameplate area from the switch cabinet image through the nameplate detection model to obtain a text box image;
the character frame extraction module is used for marking each character in the text box image based on the text box detection model to obtain a character box image;
and the text recognition module is used for inputting the character box image into the character recognition model to obtain the text information output by the character recognition model, and displaying the text information in the switch cabinet image.
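The three modules of claim 8 form a straight pipeline: crop the nameplate, mark the character boxes, recognize the text. The sketch below wires stub callables through that pipeline purely to show the data flow; the class, the stub models, and the sample value "KYN28" are all hypothetical stand-ins, not the patented networks:

```python
# Hypothetical sketch of the claim-8 device as a three-stage pipeline.
class NameplateOCRPipeline:
    def __init__(self, nameplate_detector, textbox_detector, recognizer):
        self.nameplate_detector = nameplate_detector  # model-determining/extraction stage
        self.textbox_detector = textbox_detector      # character frame extraction stage
        self.recognizer = recognizer                  # text recognition stage

    def run(self, cabinet_image):
        nameplate = self.nameplate_detector(cabinet_image)  # extract nameplate region
        char_boxes = self.textbox_detector(nameplate)       # mark each character
        return self.recognizer(char_boxes)                  # decode text information

# Stub models standing in for the trained detection/recognition networks.
pipe = NameplateOCRPipeline(
    nameplate_detector=lambda img: img["nameplate"],
    textbox_detector=lambda plate: list(plate),
    recognizer=lambda boxes: "".join(boxes),
)
text = pipe.run({"nameplate": "KYN28"})
```

Swapping each lambda for a real detector or recognizer would preserve the same module boundaries the claim describes.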
9. A storage medium, characterized in that the storage medium stores computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the method for recognizing nameplate characters of a switch cabinet of a power distribution room according to any one of claims 1 to 7.
10. A computer device, comprising: one or more processors, and memory;
the memory stores computer readable instructions which, when executed by the one or more processors, cause the one or more processors to perform the steps of the method for recognizing nameplate characters of a switch cabinet of a power distribution room according to any one of claims 1 to 7.
CN202310679315.2A 2023-06-08 2023-06-08 Method and device for recognizing characters of nameplate of switch cabinet of power distribution room Pending CN116597436A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310679315.2A CN116597436A (en) 2023-06-08 2023-06-08 Method and device for recognizing characters of nameplate of switch cabinet of power distribution room

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310679315.2A CN116597436A (en) 2023-06-08 2023-06-08 Method and device for recognizing characters of nameplate of switch cabinet of power distribution room

Publications (1)

Publication Number Publication Date
CN116597436A true CN116597436A (en) 2023-08-15

Family

ID=87608212

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310679315.2A Pending CN116597436A (en) 2023-06-08 2023-06-08 Method and device for recognizing characters of nameplate of switch cabinet of power distribution room

Country Status (1)

Country Link
CN (1) CN116597436A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117173646A * 2023-08-17 2023-12-05 金陵科技学院 Highway obstacle detection method, system, electronic device and storage medium
CN116958998A * 2023-09-20 2023-10-27 四川泓宝润业工程技术有限公司 Digital instrument reading identification method based on deep learning
CN116958998B * 2023-09-20 2023-12-26 四川泓宝润业工程技术有限公司 Digital instrument reading identification method based on deep learning
CN117274438A * 2023-11-06 2023-12-22 杭州同花顺数据开发有限公司 Picture translation method and system
CN117274438B * 2023-11-06 2024-02-20 杭州同花顺数据开发有限公司 Picture translation method and system

Similar Documents

Publication Publication Date Title
CN116597436A (en) Method and device for recognizing characters of nameplate of switch cabinet of power distribution room
US10853695B2 (en) Method and system for cell annotation with adaptive incremental learning
CN111797771B (en) Weak supervision video behavior detection method and system based on iterative learning
CN110569843B (en) Intelligent detection and identification method for mine target
CN109359697A (en) Graph image recognition methods and inspection system used in a kind of power equipment inspection
CN112069970B (en) Classroom teaching event analysis method and device
CN111145222A (en) Fire detection method combining smoke movement trend and textural features
CN111461121A (en) Electric meter number identification method based on YOLOv3 network
CN114694130A (en) Method and device for detecting telegraph poles and pole numbers along railway based on deep learning
CN114972880A (en) Label identification method and device, electronic equipment and storage medium
CN116580285B (en) Railway insulator night target identification and detection method
CN110287970B (en) Weak supervision object positioning method based on CAM and covering
CN115546735B (en) System and method for detecting and identifying icing of cooling tower and storage medium
CN116681961A (en) Weak supervision target detection method based on semi-supervision method and noise processing
CN116385465A (en) Image segmentation model construction and image segmentation method, system, equipment and medium
CN115937492A (en) Transformer equipment infrared image identification method based on feature identification
US20230267779A1 (en) Method and system for collecting and monitoring vehicle status information
CN113569650A (en) Unmanned aerial vehicle autonomous inspection positioning method based on electric power tower label identification
CN112733708A (en) Hepatic portal vein detection positioning method and system based on semi-supervised learning
CN117454987B (en) Mine event knowledge graph construction method and device based on event automatic extraction
CN116503674B (en) Small sample image classification method, device and medium based on semantic guidance
Lestary et al. Deep Learning Implementation for Snail Trails Detection in Photovoltaic Module
CN113191148B (en) Rail transit entity identification method based on semi-supervised learning and clustering
CN117291921B (en) Container sporadic damage sample mining and learning method, device, equipment and medium
Park et al. Attention! is recycling artificial neural network effective for maintaining renewable energy efficiency?

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination