CN115482503A - Power transformation abnormal object monitoring method and system based on image AI technology - Google Patents
Power transformation abnormal object monitoring method and system based on image AI technology
- Publication number
- CN115482503A (application number CN202211062504.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- monitoring
- model
- sub
- technology
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a method and a system for monitoring power transformation abnormal objects based on an image AI technology, comprising the steps of obtaining a monitoring image; preprocessing the monitoring image to obtain a preprocessed image; determining, based on the preprocessed image and a reference image, whether the preprocessed image has changed, and if so, acquiring a sub-monitoring image of the changed area; and inputting the sub-monitoring image into a detection model, which outputs an alarm type. By using image segmentation and detection techniques to monitor the station outside patrol periods, the method improves the utilization rate of high-definition video equipment and raises an alarm when an abnormality is detected, so that operation and maintenance personnel can discover hidden dangers in time and handle them, ensuring the safety of equipment and lines in the station.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a system for monitoring a power transformation abnormal object based on an image AI technology.
Background
With the wide application of high-definition video in electric power inspection, in-station inspection systems based on deep learning and machine vision identify and analyze defects inside the substation. Under a schedule of two patrols per day, the equipment still has long idle periods; moreover, video-based target detection occupies substantial computational resources during recognition and cannot cover many scenes simultaneously and continuously.
In view of this, the invention provides a method and a system for monitoring a power transformation abnormal object based on an image AI technology, so as to improve the utilization rate of a high-definition video device in a transformer substation and the detection efficiency and accuracy of an abnormal situation.
Disclosure of Invention
The invention aims to provide a power transformation abnormal object monitoring method based on an image AI technology, which comprises the steps of obtaining a monitoring image; preprocessing the monitoring image to obtain a preprocessed image; determining whether the preprocessed image changes or not based on the preprocessed image and the reference image, and if so, acquiring a sub-monitoring image of a changed area; and inputting the sub-monitoring image into a detection model, and outputting an alarm type by the model.
Further, the acquiring of the sub-monitoring image of the change area includes inputting the preprocessed image and the reference image into a segmentation model, and outputting a segmentation image by the model; the segmentation image is an image of a region with difference in the preprocessing image and the reference image; and controlling the image pickup device to acquire a sub monitoring image including the divided image based on the divided image.
Further, the segmentation model is obtained through training, and the method comprises the steps of obtaining a first training sample group, wherein the first training sample group comprises a sample reference image and a sample monitoring image; acquiring a first label of the first training sample group, wherein the first label is an image of an area with difference between the sample reference image and the sample monitoring image; inputting the first training sample group into an initial segmentation model, and iteratively updating parameters of the initial segmentation model based on the output of the model and the first label to obtain a trained segmentation model.
Further, the initial segmentation model is a change detection network combining a twin (Siamese) neural network with a fully convolutional network (FCN).
Further, the method also comprises the steps of sending out a warning based on the alarm type, combining the segmentation images and outputting the combined segmentation images.
Further, the preprocessing comprises gray processing, noise judgment and removal and/or binarization processing.
Further, the detection model is obtained through training, including obtaining a second training sample; the second training sample comprises an image of an abnormal condition; acquiring a second label of the second training sample, wherein the second label is an alarm type corresponding to an image in an abnormal condition; inputting the second training sample into an initial detection model, and iteratively updating parameters of the initial detection model based on the output of the model and the second label to obtain a trained detection model.
Further, the initial detection model is YOLOV4.
Further, abnormal situations in the substation are analyzed during the non-patrol task of the monitoring equipment.
The invention aims to provide a power transformation abnormal object monitoring system based on an image AI technology, which comprises an acquisition module, a preprocessing module, a sub-monitoring image acquisition module and a determination module; the acquisition module is used for acquiring a monitoring image; the preprocessing module is used for preprocessing the monitoring image to obtain a preprocessed image; the sub-monitoring image acquisition module is used for determining whether the preprocessed image changes or not based on the preprocessed image and the reference image, and acquiring sub-monitoring images of a changed area if the preprocessed image changes; the determining module is used for inputting the sub-monitoring image into the detection model and outputting the alarm type by the model.
The technical scheme of the embodiment of the invention at least has the following advantages and beneficial effects:
some embodiments in this specification monitor large oil-filled equipment in the station, such as transformers (reactors), the main entrances and exits, and patrol passages by a silent monitoring method using image segmentation and detection technology, effectively improving the utilization rate of high-definition video equipment. When an abnormality is detected, an alarm is raised, so that operation and maintenance personnel can discover hidden dangers in time and handle them, ensuring the safety of equipment and lines.
Some embodiments in this specification monitor the inside of the substation by using a segmentation technology of a twin neural network and a change detection network of an FCN and a target detection technology based on a YOLO V4 network, so as to improve the utilization rate of video equipment in the substation and improve the accuracy of abnormal condition detection.
Some embodiments in this specification may improve monitoring efficiency and accuracy of abnormal conditions by segmenting the monitored image to obtain a segmented image of the difference region, and then detecting the segmented image to obtain the alarm type, thereby avoiding processing too much irrelevant data.
Drawings
Fig. 1 is an exemplary flowchart of a method for monitoring an abnormal object of power transformation based on an image AI technique according to some embodiments of the present invention;
FIG. 2 is an exemplary diagram of a trained segmentation model provided by some embodiments of the present invention;
FIG. 3 is an exemplary diagram of training a detection model provided by some embodiments of the invention;
fig. 4 is an exemplary block diagram of a system for monitoring an abnormal object of power transformation based on an image AI technique according to some embodiments of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Fig. 1 is an exemplary flowchart of a method for monitoring an abnormal object of power transformation based on an image AI technique according to some embodiments of the present invention. In some embodiments, the process 100 may be performed by the system 400. As shown in fig. 1, the process 100 includes the following steps:
and step 110, acquiring a monitoring image. In some embodiments, step 110 may be performed by acquisition module 410.
The monitored image may refer to an image obtained by monitoring the target area. The target area may refer to an area that needs to be monitored. For example, the target area may include one or more of the area where large oil-filled equipment such as a transformer is located, a main doorway, an inspection passage, and the like. In some embodiments, the monitored images may be acquired by various image acquisition devices. For example, during the execution of non-patrol tasks, the image acquisition device may monitor the operating state of equipment in the target area, the operating environment of the substation, and the behavior of personnel at entrances and exits at intervals of no more than 2 minutes, and may raise an alarm for abnormal situations.
And step 120, preprocessing the monitoring image to obtain a preprocessed image. In some embodiments, step 120 may be performed by the pre-processing module 420.
In some embodiments, the pre-processing may include, but is not limited to, one or more of grayscale processing, noise determination and removal, binarization processing, and the like.
The gray processing may convert the color image to grayscale by taking a weighted average of the three channels R, G and B. For example, for a picture with pixel coordinates (X, Y), where X ∈ (0, H1) and Y ∈ (0, W1), the abscissa and ordinate are cropped and the image matrix is normalized to obtain a grayscale picture matrix Mat2Gray of width W2 and height H2.
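The weighted-average grayscale conversion can be sketched as follows. This is an illustrative sketch, not the patent's exact implementation: the ITU-R BT.601 channel weights (0.299, 0.587, 0.114) are an assumption, since the patent only states that a weighted average of the three channels is used.

```python
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Weighted average of the R, G, B channels, normalized to [0, 1].

    The BT.601 weights below are an assumption; the patent does not
    specify the weighting coefficients.
    """
    weights = np.array([0.299, 0.587, 0.114])
    gray = rgb.astype(np.float64) @ weights   # (H, W, 3) -> (H, W)
    return gray / 255.0                       # normalized grayscale matrix

# A pure-red 4x4 test image maps to a uniform gray level of 0.299.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[..., 0] = 255
g = to_gray(img)
```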
Noise judgment and removal can be realized with the Canny algorithm, which suppresses noise by filtering and accurately locates image edges. In some embodiments, a dual-threshold method is used to compute a low threshold T_L and a high threshold T_H. If the gradient value of an edge pixel is greater than the high threshold, it is considered a strong edge point; if the gradient value is less than the high threshold but greater than the low threshold, it is marked as a weak edge point; points below the low threshold are suppressed.
T_H = β · T_L
where β is the ratio of the high threshold to the low threshold; the two thresholds are typically kept in a ratio of 2:1 or 3:1.
In some embodiments, to get the most efficient threshold, β =2.25 by contrast in different color spaces.
In some embodiments, a strong-edge image and a weak-edge image may be obtained based on the low threshold T_L and the high threshold T_H, and connectivity analysis is used to link long edges into the final edge image, thereby smoothing and denoising the image.
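The dual-threshold classification of edge pixels can be sketched as below. The function covers only the thresholding step, not the full Canny pipeline (smoothing, gradient computation, non-maximum suppression, connectivity analysis); β = 2.25 is the value the patent reports from comparisons in different color spaces.

```python
import numpy as np

def classify_edges(grad: np.ndarray, t_low: float, beta: float = 2.25):
    """Split gradient magnitudes into strong and weak edge points.

    T_H = beta * T_L, as in the text above.
    """
    t_high = beta * t_low
    strong = grad > t_high                     # kept unconditionally
    weak = (grad > t_low) & (grad <= t_high)   # kept only if linked to a strong edge
    return strong, weak                        # points below t_low are suppressed

grad = np.array([1.0, 3.0, 10.0])
strong, weak = classify_edges(grad, t_low=2.0)  # t_high = 4.5
```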
The binarization process may refer to letting only two colors appear in the image. For example, the gray value of each pixel in the pixel matrix in the picture is 0 or 255, where 0 is black and 255 is white. The processed image only exhibits a black and white effect. In some embodiments, the composition of the image after the binarization process includes only the device and the background that need to be detected.
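A minimal binarization sketch follows; the fixed threshold of 128 is an arbitrary assumption, since the patent does not state how the threshold is chosen.

```python
import numpy as np

def binarize(gray: np.ndarray, thresh: int = 128) -> np.ndarray:
    # Every pixel becomes either 0 (black) or 255 (white).
    return np.where(gray >= thresh, 255, 0).astype(np.uint8)

binary = binarize(np.array([[100, 200], [130, 10]]))
```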
The reference image may refer to an image in which the target region is in a normal state. In some embodiments, the reference image may be obtained by the monitoring device capturing the target area in a normal state. The monitoring apparatus may be various image pickup apparatuses, for example, a camera, a video camera, and the like.
In some embodiments, each image capturing apparatus may be associated with a plurality of other image capturing apparatuses, and after monitoring a change area, the master image capturing apparatus may control the slave image capturing apparatus to acquire a sub monitoring image including the change area. The sub monitor image may refer to an image captured by the image capturing apparatus for a change area.
In some embodiments, one master imaging apparatus may associate a plurality of slave imaging apparatuses, the master imaging apparatus and the slave imaging apparatuses may acquire a monitor image, determine an area where there is a difference when it is determined that the difference between the monitor image and the reference image is greater than a preset threshold, the master imaging apparatus may mark information such as a position, a size, and the like of the area where there is a difference, and then control an appropriate slave imaging apparatus to acquire an image of the area. In some embodiments, a suitable slave imaging device may refer to the imaging device closest to the area where the discrepancy occurs.
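The master camera's choice of "the imaging device closest to the area where the discrepancy occurs" can be sketched as a nearest-idle selection. The dictionary record layout (`pos`, `busy`) is a hypothetical data structure for illustration, not part of the patent.

```python
import math

def pick_slave(cameras: list, region_center: tuple):
    """Return the idle camera nearest to the changed region, or None if all are busy."""
    idle = [c for c in cameras if not c["busy"]]
    if not idle:
        return None
    return min(idle, key=lambda c: math.dist(c["pos"], region_center))

cams = [
    {"id": 1, "pos": (0.0, 0.0), "busy": True},   # closest, but busy
    {"id": 2, "pos": (5.0, 5.0), "busy": False},
    {"id": 3, "pos": (1.0, 1.0), "busy": False},  # nearest idle camera
]
chosen = pick_slave(cams, (0.0, 0.0))
```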
In some embodiments, the pre-processed image and the reference image may be input to a segmentation model, which outputs a segmented image; dividing the image into an image of a region with difference in the preprocessed image and the reference image; based on the divided image, the image pickup apparatus is controlled to acquire a sub monitor image including the divided image.
The segmentation image is an image of a region where there is a difference between the pre-processed image and the reference image. In some embodiments, there may be a plurality of differences between the reference image and the preprocessed image, and the preprocessed image may be segmented along the plurality of differences to obtain a plurality of segmented images; the master image pickup apparatus may control the plurality of slave image pickup apparatuses to acquire images of areas where the plurality of divided images are located. For more on the segmentation model, see fig. 2 and its associated description.
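The idea of locating a changed region can be illustrated with a naive pixel-difference bounding box, independent of the learned segmentation model; the difference threshold of 10 gray levels is an arbitrary assumption.

```python
import numpy as np

def change_bbox(pre: np.ndarray, ref: np.ndarray, thresh: int = 10):
    """Bounding box (r0, r1, c0, c1) of pixels differing by more than thresh."""
    mask = np.abs(pre.astype(int) - ref.astype(int)) > thresh
    if not mask.any():
        return None                       # no change detected
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    return int(rows[0]), int(rows[-1]) + 1, int(cols[0]), int(cols[-1]) + 1

ref = np.zeros((5, 5), dtype=np.uint8)
pre = ref.copy()
pre[1:3, 2:4] = 50                        # simulated foreign object
box = change_bbox(pre, ref)
```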
And 140, inputting the sub-monitoring image into the detection model, and outputting the alarm type by the model. In some embodiments, step 140 may be performed by determination module 440.
The alarm type may refer to the type to which the abnormal condition belongs. The alarm types may include behavior (e.g., improper wearing of safety helmets, crossing/breaking, not wearing long sleeves, etc.), environment (e.g., field fireworks, standing water, foreign objects, small animals, etc.), equipment (e.g., equipment fireworks, oil leakage, equipment deformation, equipment breakage, equipment tilting, foreign object intrusion, etc.). For more on the detection model, see fig. 3 and its associated description.
In some embodiments, a plurality of camera devices may collectively analyze and process the sub-monitoring image to obtain the alarm type. For example, when the slave imaging device is still in a busy state after acquiring the sub monitoring image (e.g., other monitoring images need to be acquired), the slave imaging device may send the acquired sub monitoring image to the master imaging device, and the master imaging device may send the sub monitoring image to the slave imaging device in an idle state for analysis processing based on the idle state of the slave imaging device, so as to obtain the alarm type.
In some embodiments, the method further comprises issuing a warning based on the alarm type, and combining the segmentation images and outputting the combined segmentation images. For example, different alarm types may have different alarm modes (e.g., pop-up window, buzzer, etc.), and the worker may be prompted to handle the abnormal condition by different alarm modes. In some embodiments, the locations of the plurality of segmented images in the monitored image may be identified, and then images at other locations in the monitored image are normalized to output a difference image that retains only difference information. Wherein, the normalization may refer to deleting images in other positions or taking a single color.
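The "normalization" of non-difference regions described above (deleting them or filling them with a single color) can be sketched as masking everything outside the difference regions; the (r0, r1, c0, c1) box format is hypothetical.

```python
import numpy as np

def keep_differences(image: np.ndarray, boxes, fill: int = 0) -> np.ndarray:
    """Return an image that retains only the difference regions."""
    out = np.full_like(image, fill)          # fill other positions with one color
    for r0, r1, c0, c1 in boxes:
        out[r0:r1, c0:c1] = image[r0:r1, c0:c1]
    return out

img = np.full((4, 4), 7, dtype=np.uint8)
masked = keep_differences(img, [(0, 2, 0, 2)])
```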
Some embodiments in this specification improve detection efficiency by grouping the image capturing apparatuses so that each group can detect sub-monitoring images. While the apparatus that acquired a sub-monitoring image is still busy (for example, acquiring further monitoring images), the segmented image can be detected by other idle apparatuses associated with it; in this way, work is transferred from heavily loaded apparatuses to lightly loaded ones, balancing the operating pressure across the devices.
Fig. 2 is an exemplary diagram of a training segmentation model according to some embodiments of the present invention. In some embodiments, the flow 200 illustrated in fig. 2 may be performed by the segmentation module 430. As shown in fig. 2, the process 200 includes the following:
the preprocessed image and the reference image are input into a segmentation model, and the model outputs a segmentation image.
In some embodiments, the segmentation model may be a trained change detection network (ChangeNet) combining a twin neural network with fully convolutional networks (FCNs).
The ChangeNet structure uses ResNet to extract features of the preprocessed image and the reference image, and combines convolution outputs from different levels to produce hierarchical change-localization information. The same network then discriminates the detected changes and outputs labeled, object-level change detection results. The change detection result can be a detection, localization and classification map of the changed regions in the preprocessed image.
In some embodiments, ChangeNet may include a twin neural network (Siamese network) and an FCN. The twin neural network may include a plurality of ResNet residual blocks to extract features of the preprocessed image and the reference image. The FCN may include a plurality of full convolution layers with a convolution kernel size of 1 × 1 to integrate the extracted features. In some embodiments, ChangeNet may also include a combination layer and a classification layer, which add the integrated features together and then classify them.
In some embodiments, for the output of the combination layer, a 1 × 1 convolution kernel may be used to reduce the dimensionality to N, followed by classification with a softmax classifier, yielding a segmented image of size w × h × N, i.e., the difference between the preprocessed image and the reference image, where w and h are the width and height of the input image and N is the number of segmentation images. For more on the preprocessed image, the reference image and the segmented image, see fig. 1 and its associated description.
Some embodiments in this description may capture both coarse and fine information of an object by combining the outputs of different levels of convolutional layers.
In some embodiments, the twin neural network may be a parallel weight-sharing network with the same number and value of parameters.
Some embodiments in this specification improve the accuracy of segmentation images by using a parallel weight sharing network for feature extraction so that the same features can be learned from both the pre-processed image and the reference image.
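The weight-sharing property can be demonstrated with a toy two-branch network in NumPy. The single linear-plus-ReLU layer is a deliberate simplification standing in for the shared ResNet branches: because both branches use the same parameters, identical inputs yield identical features, so the feature difference for an unchanged scene is exactly zero.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))      # ONE weight matrix shared by both branches

def branch(x: np.ndarray) -> np.ndarray:
    # Toy stand-in for a shared ResNet branch: linear map + ReLU.
    return np.maximum(W @ x, 0.0)

ref = rng.normal(size=16)         # features of the reference image
cur = ref.copy()                  # unchanged scene
diff_feat = np.abs(branch(cur) - branch(ref))
```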
A first training sample set is obtained.
The first training sample set may refer to a combination of images used to train the initial segmentation model. The first training sample set may include a sample reference image and a sample monitor image. The sample reference image may refer to an image used for training an initial segmentation model, and the sample monitor image may be a monitor image corresponding to the sample reference image. For example, the sample monitor image may refer to a monitor image that has been segmented and from which a segmented image was obtained. In some embodiments, the historical monitoring data may be processed to obtain a sample reference image. The sample reference image is obtained in a manner consistent with the manner in which the reference image is obtained, and for more details regarding obtaining the sample reference image, see fig. 1 and its associated description.
A first label of the first training sample set is obtained. The first label can refer to the image of the area where the preprocessed sample monitoring image and the sample reference image differ. In some embodiments, the first label may be cropped out manually.
And inputting the first training sample group into the initial segmentation model, and iteratively updating parameters of the initial segmentation model based on the output of the model and the first label to obtain the trained segmentation model.
In some embodiments, the parameters of the initial segmentation model may be obtained by transfer learning, and the initial segmentation model may use ResNet50. The residual block of the initial segmentation model mainly consists of convolution layers, batch normalization (BN) and ReLU activation functions. During training, a loss function can be constructed based on the first label and the output of the initial segmentation model, and the parameters of the initial segmentation model are updated iteratively based on the loss function until the loss function reaches a preset condition, at which point the initial segmentation model is used as the segmentation model. The preset condition may refer to the loss function converging or the number of iterations reaching a threshold, etc. In some embodiments, the loss function may be constructed based on the edge coordinates of the segmented image output by the initial segmentation model and the edge coordinates of the first label.
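The iterative update scheme can be sketched as a generic gradient-descent loop. The sketch below shows only the control flow (update until the loss meets a preset condition or the iteration count reaches a threshold) on a toy quadratic loss, not the patent's actual segmentation loss.

```python
import numpy as np

def train(params, grad_fn, lr=0.1, tol=1e-6, max_iter=1000):
    """Iteratively update params until the gradient norm converges
    (preset condition) or the number of iterations reaches the threshold."""
    for i in range(max_iter):
        g = grad_fn(params)
        if np.linalg.norm(g) < tol:   # convergence reached
            break
        params = params - lr * g
    return params, i

# Toy loss L(p) = p^2 with gradient 2p: the minimum is at p = 0.
final, iters = train(np.array([4.0]), lambda p: 2.0 * p)
```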
FIG. 3 is an exemplary diagram of training a detection model provided by some embodiments of the invention. In some embodiments, the flow 300 illustrated in fig. 3 may be performed by the determination module 440. As shown in fig. 3, the process 300 includes the following:
and inputting the segmented image into a detection model, and outputting an alarm type by the model.
In some embodiments, the detection model may be YOLO V4, and the YOLO V4 obtains the alarm type by performing target detection on the segmented image.
A second training sample is obtained.
The second training sample may refer to an image used to train the detection model. The second training sample includes an image of an abnormal situation. In some embodiments, the second training sample may be obtained by collecting images of various types of abnormal situations in the substation.
A second label of a second training sample is obtained.
The second label may be an alarm type corresponding to an image of an abnormal situation. In some embodiments, the alarm type label can be marked on the image of each type of abnormal condition in the substation in a manual marking mode.
And inputting the second training sample into the initial detection model, and iteratively updating parameters of the initial detection model based on the output of the model and the second label to obtain the trained detection model.
In some embodiments, the BackBone network may be constructed based on the YOLO V4 algorithm to obtain an initial detection model. The BackBone network comprises a CSPDarknet53 module, a Mish activation function and a DropBlock module. During training, a loss function can be constructed based on the output of the initial detection model and the second label, and the parameters of the initial detection model are updated iteratively based on the loss function; when the loss function reaches a preset condition, the initial detection model is used as the detection model. The preset condition may refer to the loss function converging or the number of iterations reaching a threshold, etc.
Fig. 4 is an exemplary block diagram of a system for monitoring an abnormal object of power transformation based on an image AI technique according to some embodiments of the present invention. As shown in fig. 4, the system 400 includes an acquisition module 410, a pre-processing module 420, a sub-monitoring image acquisition module 430, and a determination module 440.
The acquisition module 410 is used to acquire the monitoring image. For more on the acquisition module 410, refer to fig. 1 and its associated description.
The preprocessing module 420 is configured to preprocess the monitored image to obtain a preprocessed image. For more on the preprocessing module 420, see fig. 1 and its associated description.
The sub-monitoring image obtaining module 430 is configured to determine whether the preprocessed image changes based on the preprocessed image and the reference image, and if so, obtain a sub-monitoring image of a changed region. For more on the sub-monitoring image acquisition module 430, refer to fig. 1 and its associated description.
The determining module 440 is configured to input the segmented image into the detection model, and the model outputs an alarm type. For more of the determination module 440, see fig. 1 and its associated description.
In some embodiments, the system 400 may further include an alarm module configured to issue an alarm based on the alarm type and output the merged segmented images. For more of the alarm module, refer to fig. 1 and its associated description.
The present invention has been described in terms of the preferred embodiments, but it is not limited to them. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims (10)
1. A method for monitoring power transformation abnormal objects based on an image AI technology is characterized by comprising
Acquiring a monitoring image;
preprocessing the monitoring image to obtain a preprocessed image;
determining whether the preprocessed image changes or not based on the preprocessed image and the reference image, and if so, acquiring a sub-monitoring image of a changed area;
and inputting the sub-monitoring image into a detection model, and outputting an alarm type by the model.
2. The power transformation abnormal object monitoring method based on image AI technology as claimed in claim 1, wherein the obtaining of the sub-monitoring images of the changed area comprises,
inputting the preprocessed image and the reference image into a segmentation model, and outputting a segmentation image by the model; the segmentation image is an image of a region with difference in the preprocessing image and the reference image;
and controlling the image pickup device to acquire a sub monitoring image including the divided image based on the divided image.
3. The power transformation abnormal object monitoring method based on image AI technology as claimed in claim 2, wherein the segmentation model is obtained by training, comprising,
acquiring a first training sample set, wherein the first training sample set comprises a sample reference image and a sample monitoring image;
acquiring a first label of the first training sample group, wherein the first label is an image of an area with difference between the sample reference image and the sample monitoring image;
inputting the first training sample group into an initial segmentation model, and iteratively updating parameters of the initial segmentation model based on the output of the model and the first label to obtain a trained segmentation model.
4. The method for power transformation abnormal object monitoring based on image AI technology as claimed in claim 3, wherein the initial segmentation model is a change detection network combining a twin neural network with an FCN.
5. The power transformation abnormal object monitoring method based on image AI technology as claimed in claim 1, further comprising issuing an alarm based on the alarm type and merging the segmented images for output.
6. The power transformation abnormal object monitoring method based on image AI technology as claimed in claim 1, wherein the preprocessing comprises grayscale conversion, noise detection and removal, and/or binarization.
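A minimal preprocessing sketch under the assumptions of claim 6: grayscale conversion, a 3x3 median filter as the noise-judgement-and-removal step, and fixed-threshold binarization (the 0.299/0.587/0.114 luma weights are standard; the threshold of 128 is an illustrative choice, not from the patent):

```python
import numpy as np

def preprocess(rgb: np.ndarray, thresh: int = 128) -> np.ndarray:
    """Grayscale -> 3x3 median denoise -> binarization, as in claim 6."""
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # 3x3 median filter: stack the nine shifted views and take the median,
    # which suppresses isolated impulse (salt-and-pepper) noise.
    padded = np.pad(gray, 1, mode="edge")
    h, w = gray.shape
    windows = np.stack([padded[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)])
    denoised = np.median(windows, axis=0)
    return (denoised >= thresh).astype(np.uint8) * 255
```

A single hot pixel survives grayscale conversion but is wiped out by the median step, which is exactly the behaviour wanted before frame differencing.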
7. The power transformation abnormal object monitoring method based on image AI technology as claimed in claim 1, wherein the detection model is obtained by training, comprising:
acquiring a second training sample, the second training sample comprising an image of an abnormal situation;
acquiring a second label for the second training sample, the second label being the alarm type corresponding to the abnormal-situation image;
and inputting the second training sample into an initial detection model, and iteratively updating the parameters of the initial detection model based on the model output and the second label, to obtain the trained detection model.
8. The power transformation abnormal object monitoring method based on image AI technology as claimed in claim 7, wherein the initial detection model is YOLOv4.
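Downstream of the detector, the alarm type can be derived from the detections; a hedged sketch, assuming YOLOv4-style `(class_id, confidence, box)` outputs, a confidence floor of 0.5, and a class-to-alarm mapping that is illustrative rather than taken from the patent:

```python
# Hypothetical class-id -> alarm-type table (not specified in the patent).
ALARM_TYPES = {0: "intruding person", 1: "small animal", 2: "foreign object"}

def alarm_type(detections, floor: float = 0.5):
    """Pick the alarm type from the most confident detection above `floor`.

    `detections` is a list of (class_id, confidence, (x0, y0, x1, y1)) tuples,
    as a YOLO-style detector would emit after non-maximum suppression.
    Returns None when nothing clears the confidence floor (no alarm).
    """
    kept = [d for d in detections if d[1] >= floor]
    if not kept:
        return None
    best = max(kept, key=lambda d: d[1])
    return ALARM_TYPES.get(best[0], "unknown")
```

The confidence floor trades false alarms against missed detections and would be tuned per substation site.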
9. The power transformation abnormal object monitoring method based on image AI technology according to any one of claims 1-8, wherein abnormal situations in the substation are analyzed while the monitoring equipment is not performing a patrol task.
10. A power transformation abnormal object monitoring system based on image AI technology, comprising an acquisition module, a preprocessing module, a sub-monitoring image acquisition module and a determination module, wherein:
the acquisition module is configured to acquire a monitoring image;
the preprocessing module is configured to preprocess the monitoring image to obtain a preprocessed image;
the sub-monitoring image acquisition module is configured to determine, based on the preprocessed image and the reference image, whether the preprocessed image has changed, and if so, to acquire a sub-monitoring image of the changed area;
and the determination module is configured to input the sub-monitoring image into the detection model, the model outputting the alarm type.
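The four modules of claim 10 can be wired as a single pipeline; a minimal sketch in which each module is an injected callable (class and method names are illustrative, not from the patent):

```python
class MonitoringPipeline:
    """Wires claim 10's modules: preprocess -> change detection -> classify."""

    def __init__(self, preprocess, detect_change, classify):
        self.preprocess = preprocess        # preprocessing module
        self.detect_change = detect_change  # sub-monitoring image acquisition module
        self.classify = classify            # determination module

    def run(self, frame, reference):
        pre = self.preprocess(frame)
        sub = self.detect_change(pre, reference)
        if sub is None:                      # no change: nothing to alarm on
            return None
        return self.classify(sub)            # alarm type

# Toy wiring with trivial stand-ins for the three stages:
pipe = MonitoringPipeline(
    preprocess=lambda f: f,
    detect_change=lambda p, r: p if p != r else None,
    classify=lambda s: "foreign object",
)
```

Injecting the stages as callables keeps each module independently replaceable, which mirrors the claim's module decomposition.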
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211062504.7A CN115482503A (en) | 2022-09-01 | 2022-09-01 | Power transformation abnormal object monitoring method and system based on image AI technology |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115482503A (en) | 2022-12-16
Family
ID=84421673
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211062504.7A Pending CN115482503A (en) | 2022-09-01 | 2022-09-01 | Power transformation abnormal object monitoring method and system based on image AI technology |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115482503A (en) |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102254773B1 (en) | Automatic decision and classification system for each defects of building components using image information, and method for the same | |
CN111080598B (en) | Bolt and nut missing detection method for coupler yoke key safety crane | |
CN104483326B (en) | High-voltage line defects of insulator detection method and system based on depth belief network | |
CN106407928B (en) | Transformer composite insulator casing monitoring method and system based on raindrop identification | |
Bu et al. | Crack detection using a texture analysis-based technique for visual bridge inspection | |
CN111080620A (en) | Road disease detection method based on deep learning | |
CN107679495B (en) | Detection method for movable engineering vehicles around power transmission line | |
CN112308826B (en) | Bridge structure surface defect detection method based on convolutional neural network | |
CN112364740B (en) | Unmanned aerial vehicle room monitoring method and system based on computer vision | |
CN109635823B (en) | Method and device for identifying winding disorder rope and engineering machinery | |
CN110493574B (en) | Security monitoring visualization system based on streaming media and AI technology | |
CN111091110A (en) | Wearing identification method of reflective vest based on artificial intelligence | |
CN114648714A (en) | YOLO-based workshop normative behavior monitoring method | |
CN115065798A (en) | Big data-based video analysis monitoring system | |
CN113808084A (en) | Model-fused online tobacco bale surface mildew detection method and system | |
CN113179389A (en) | System and method for identifying crane jib of power transmission line dangerous vehicle | |
CN113673614B (en) | Metro tunnel foreign matter intrusion detection device and method based on machine vision | |
CN114997279A (en) | Construction worker dangerous area intrusion detection method based on improved Yolov5 model | |
CN114155472A (en) | Method, device and equipment for detecting abnormal state of factory scene empty face protection equipment | |
CN117585553A (en) | Elevator abnormality detection method, elevator abnormality detection device, computer equipment and storage medium | |
CN112241707A (en) | Wind-powered electricity generation field intelligence video identification device | |
CN115482503A (en) | Power transformation abnormal object monitoring method and system based on image AI technology | |
CN115830701A (en) | Human violation behavior prediction method based on small sample learning | |
CN112150453B (en) | Automatic detection method for breakage fault of bolster spring of railway wagon | |
CN114648738A (en) | Image identification system and method based on Internet of things and edge calculation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||