CN118365973B - Multi-feature information fusion-based hybrid line state evaluation method and system - Google Patents
- Publication number: CN118365973B (application CN202410796214.8A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The application relates to the field of power systems, and in particular to a hybrid line state evaluation method and system based on multi-feature information fusion, wherein the method comprises the following steps: collecting a first image of a hybrid line, and performing feature extraction on the first image to obtain geometric feature data, texture feature data and connection feature data; collecting a second image of the hybrid line, and performing feature extraction on the second image to obtain line inclination feature data; collecting a third image of the hybrid line, and performing feature extraction on the third image to obtain heat distribution feature data; fusing the obtained feature data to obtain a multi-feature fusion vector; and inputting the multi-feature fusion vector into a multi-feature information fusion hybrid line state evaluation model to evaluate the hybrid line state, so as to obtain a hybrid line state evaluation result. By this method, the reliability and accuracy of the hybrid line state evaluation result are improved.
Description
Technical Field
The application relates to the field of power systems, and in particular to a hybrid line state evaluation method and system based on multi-feature information fusion.
Background
With the rapid development of smart grid technology, information technology and intelligent operation and maintenance technology, the state data of power transmission and transformation equipment increasingly exhibit big-data characteristics such as large volume, many types and rapid growth. How to make efficient and comprehensive use of multi-source heterogeneous data, deeply mine the associations between the various data and the equipment state, and perform fine-grained state evaluation of hybrid lines under different operating conditions has become an urgent problem. Improving the reliability and accuracy of hybrid line state evaluation is therefore the problem to be solved.
Disclosure of Invention
The application mainly aims to provide a hybrid line state evaluation method and system based on multi-feature information fusion, aiming at improving the accuracy of hybrid line state evaluation.
In order to achieve the above object, the present application provides a hybrid line state evaluation method based on multi-feature information fusion, the method comprising:
Collecting a first image of a mixed frame line, and carrying out feature extraction on the first image to obtain geometric feature data, texture feature data and connection feature data;
Collecting a second image of the mixed frame line, and carrying out feature extraction on the second image to obtain line inclination feature data;
collecting a third image of the mixed frame line, and carrying out feature extraction on the third image to obtain heat distribution feature data;
fusing the geometric feature data, the texture feature data, the connection feature data, the line inclination feature data and the heat distribution feature data to obtain a multi-feature fusion vector;
And constructing a multi-feature information fusion mixed-frame line state evaluation model, and inputting the multi-feature information fusion vector into the multi-feature information fusion mixed-frame line state evaluation model to evaluate the mixed-frame line state, so as to obtain a mixed-frame line state evaluation result.
Further, the constructing a multi-feature information fusion hybrid line state evaluation model, inputting the multi-feature information fusion vector into the multi-feature information fusion hybrid line state evaluation model to evaluate the hybrid line state, and obtaining a hybrid line state evaluation result specifically comprises:
Acquiring historical multi-feature information fusion vectors, and marking each fusion feature vector as one of three states A/T/E, which respectively represent a normal state, an early-warning state and a fault state;
Dividing the historical multi-feature information fusion vector set into a training set and a test set, and constructing a decision tree based on the Gini index and the information gain rate of the probability distribution of the training set;
Inputting the test set into the decision tree and calculating the prediction accuracy;
Pruning the decision tree when the prediction accuracy is lower than a preset threshold, and generating a multi-feature information fusion hybrid line state evaluation model;
Obtaining a current multi-feature information fusion vector, inputting the current multi-feature information fusion vector into the hybrid line state evaluation model for evaluation, classifying the state type according to the training rules, and outputting the current state of the hybrid line, namely normal/early warning/fault, based on the feature data reflected in the current multi-feature information fusion vector.
Further, the feature extraction is performed on the first image to obtain geometric feature data, which specifically includes:
Denoising the first image by using a Gaussian filter to obtain a fourth image;
Performing binarization processing on the fourth image to obtain a binarized image corresponding to the fourth image;
performing edge extraction on the binarized image to obtain edge information in the binarized image, wherein the edge information is a coordinate point set of the edge of the binarized image;
and acquiring geometric feature data according to the edge information.
Further, the feature extraction is performed on the first image to obtain texture feature data, which specifically includes:
Dividing the first image into a plurality of sub-images, obtaining gradient directions and gradient amplitudes corresponding to all pixel points in each sub-image, constructing a gradient histogram corresponding to each sub-image based on the gradient directions and the gradient amplitudes, and carrying out normalization processing on the gradient histogram to obtain a normalized gradient histogram;
and connecting the normalized gradient histograms corresponding to each sub-image to obtain an HOG texture feature vector of the first image, and taking the HOG texture feature vector as texture feature data.
Further, the acquiring a second image of the hybrid line, and extracting features of the second image to obtain line inclination feature data specifically includes:
performing image correction processing on the second image to obtain a fifth image;
extracting line data points from the fifth image to obtain a line data point set;
Performing least-squares straight-line fitting on the line data point set to obtain a line fitting straight line;
And obtaining line inclination characteristic data according to the line fitting straight line.
Further, the acquiring a third image of the mixed frame line, and extracting features of the third image to obtain heat distribution feature data specifically includes:
Shooting the mixed-frame line to be evaluated based on the infrared thermal imager to obtain a third image;
Acquiring first temperature values corresponding to all pixel points in the third image; generating a temperature matrix of the third image based on the first temperature values;
And extracting abnormal temperature points from the temperature matrix to obtain abnormal temperature points, and determining heat distribution characteristic data of the third image based on the abnormal temperature points.
Further, the fusing the geometric feature data, the texture feature data, the connection feature data, the line inclination feature data and the thermal distribution feature data to obtain a multi-feature fusion vector specifically includes:
Converting the geometric feature data, the texture feature data, the connection feature data, the line tilt feature data and the thermal distribution feature data into a geometric feature vector, a texture feature vector, a connection feature vector, a line tilt feature vector and a thermal distribution feature vector, respectively, based on a convolutional neural network;
Respectively giving weights to the geometric feature vector, the texture feature vector, the connection feature vector, the line inclination feature vector and the thermal distribution feature vector to obtain corresponding weight coefficients;
Multiplying the geometric feature vector, the texture feature vector, the connection feature vector, the line inclination feature vector and the thermal distribution feature vector with the corresponding weight coefficients to obtain corresponding weighted feature vectors;
and linearly adding the geometric feature vector, the texture feature vector, the connection feature vector, the line inclination feature vector and the weighted feature vector corresponding to the thermal distribution feature vector to obtain a multi-feature fusion vector.
A hybrid overhead line state assessment system based on multi-feature information fusion, the system comprising:
The first image feature extraction module is used for collecting a first image of the mixed frame line, and carrying out feature extraction on the first image to obtain geometric feature data, texture feature data and connection feature data;
The second image feature extraction module is used for collecting a second image of the mixed frame line, and extracting features of the second image to obtain line inclination feature data;
the third image feature extraction module is used for collecting a third image of the mixed frame line, and carrying out feature extraction on the third image to obtain heat distribution feature data;
The multi-feature data fusion vector module is used for fusing the geometric feature data, the texture feature data, the connection feature data, the line inclination feature data and the heat distribution feature data to obtain a multi-feature fusion vector;
The mixed line state evaluation module is used for constructing a mixed line state evaluation model with multi-feature information fusion, inputting the multi-feature information fusion vector into the mixed line state evaluation model with multi-feature information fusion for mixed line state evaluation, and obtaining a mixed line state evaluation result.
The application also provides a computer device, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the steps of any mixed frame line state evaluation method based on multi-feature information fusion when executing the computer program.
The present application also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of any of the above-described hybrid line state assessment methods based on multi-feature information fusion.
The application provides a hybrid line state evaluation method and system based on multi-feature information fusion, wherein the method collects a first image of a hybrid line, and performs feature extraction on the first image to obtain geometric feature data, texture feature data and connection feature data; collects a second image of the hybrid line, and performs feature extraction on the second image to obtain line inclination feature data; collects a third image of the hybrid line, and performs feature extraction on the third image to obtain heat distribution feature data; fuses the geometric feature data, the texture feature data, the connection feature data, the line inclination feature data and the heat distribution feature data to obtain a multi-feature fusion vector; and constructs a multi-feature information fusion hybrid line state evaluation model, and inputs the multi-feature fusion vector into the model to evaluate the hybrid line state, so as to obtain a hybrid line state evaluation result. By this method, the reliability and accuracy of the hybrid line state evaluation result are improved.
Drawings
FIG. 1 is a flow chart of an embodiment of a hybrid line state evaluation method based on multi-feature information fusion according to the present application;
FIG. 2 is a flow chart of an embodiment of a hybrid line state evaluation method based on multi-feature information fusion according to the present application;
FIG. 3 is a flow chart of an embodiment of a hybrid line state evaluation method based on multi-feature information fusion according to the present application;
FIG. 4 is a flow chart of an embodiment of a hybrid line state evaluation method based on multi-feature information fusion according to the present application;
FIG. 5 is a flow chart of an embodiment of a hybrid line state evaluation method based on multi-feature information fusion according to the present application;
FIG. 6 is a flow chart of an embodiment of a hybrid line state evaluation method based on multi-feature information fusion according to the present application;
FIG. 7 is a flow chart of an embodiment of a hybrid line state evaluation method based on multi-feature information fusion according to the present application;
FIG. 8 is a schematic diagram illustrating an embodiment of a hybrid line state evaluation system based on multi-feature information fusion according to the present application;
FIG. 9 is a schematic block diagram illustrating the construction of an embodiment of a computer device according to the present application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
Referring to fig. 1, an embodiment of the present application provides a hybrid line state evaluation method based on multi-feature information fusion, which includes steps S10-S50, and details of each step of the hybrid line state evaluation method based on multi-feature information fusion are as follows.
S10, collecting a first image of a hybrid line, and performing feature extraction on the first image to obtain geometric feature data, texture feature data and connection feature data;
In this embodiment, feature extraction is performed on the first image to obtain geometric feature data, which specifically includes: denoising the first image by using a Gaussian filter to obtain a fourth image; performing binarization processing on the fourth image to obtain a binarized image corresponding to the fourth image; performing edge extraction on the binarized image to obtain edge information in the binarized image, wherein the edge information is a coordinate point set of the edge of the binarized image; and acquiring geometric feature data according to the edge information. Extracting features of the first image to obtain texture feature data, wherein the method specifically comprises the following steps: dividing the first image into a plurality of sub-images, obtaining gradient directions and gradient amplitudes corresponding to all pixel points in each sub-image, constructing a gradient histogram corresponding to each sub-image based on the gradient directions and the gradient amplitudes, and carrying out normalization processing on the gradient histogram to obtain a normalized gradient histogram; and connecting the normalized gradient histograms corresponding to each sub-image to obtain an HOG texture feature vector of the first image, and taking the HOG texture feature vector as texture feature data.
S20, acquiring a second image of the mixed frame line, and carrying out feature extraction on the second image to obtain line inclination feature data;
In this embodiment, image correction processing is performed on the second image to obtain a fifth image; line data points are extracted from the fifth image to obtain a line data point set; least-squares straight-line fitting is performed on the line data point set to obtain a line fitting straight line; and line inclination feature data are obtained according to the line fitting straight line.
S30, collecting a third image of the mixed frame line, and carrying out feature extraction on the third image to obtain heat distribution feature data;
In the embodiment, shooting is performed on the mixed-frame line to be evaluated based on the infrared thermal imager to obtain a third image; acquiring first temperature values corresponding to all pixel points in the third image; generating a temperature matrix of the third image based on the first temperature values; and extracting abnormal temperature points from the temperature matrix to obtain abnormal temperature points, and determining heat distribution characteristic data of the third image based on the abnormal temperature points.
S40, fusing the geometric feature data, the texture feature data, the connection feature data, the line inclination feature data and the heat distribution feature data to obtain a multi-feature fusion vector;
In this embodiment, the geometric feature data, the texture feature data, the connection feature data, the line tilt feature data, and the thermal distribution feature data are respectively converted into a geometric feature vector, a texture feature vector, a connection feature vector, a line tilt feature vector, and a thermal distribution feature vector based on a convolutional neural network; respectively giving weights to the geometric feature vector, the texture feature vector, the connection feature vector, the line inclination feature vector and the thermal distribution feature vector to obtain corresponding weight coefficients; multiplying the geometric feature vector, the texture feature vector, the connection feature vector, the line inclination feature vector and the thermal distribution feature vector with the corresponding weight coefficients to obtain corresponding weighted feature vectors; and linearly adding the geometric feature vector, the texture feature vector, the connection feature vector, the line inclination feature vector and the weighted feature vector corresponding to the thermal distribution feature vector to obtain a multi-feature fusion vector.
S50, constructing a multi-feature information fusion mixed-frame line state evaluation model, and inputting the multi-feature information fusion vector into the multi-feature information fusion mixed-frame line state evaluation model to evaluate the mixed-frame line state, so as to obtain a mixed-frame line state evaluation result.
In this embodiment, historical multi-feature information fusion vectors are obtained, and each fusion feature vector is marked as one of three states A/T/E, which respectively represent a normal state, an early-warning state and a fault state; the historical multi-feature information fusion vector set is divided into a training set and a test set, and a decision tree is constructed based on the Gini index and the information gain rate of the probability distribution of the training set; the test set is input into the decision tree and the prediction accuracy is calculated; the decision tree is pruned when the prediction accuracy is lower than a preset threshold, and a multi-feature information fusion hybrid line state evaluation model is generated; a current multi-feature information fusion vector is obtained and input into the hybrid line state evaluation model for evaluation, the state type is classified according to the training rules, and the hybrid line state evaluation model outputs the current state of the hybrid line, namely normal/early warning/fault, based on the feature data reflected in the current multi-feature information fusion vector.
In one embodiment, the constructing a multi-feature information fusion hybrid line state evaluation model, inputting the multi-feature information fusion vector into the multi-feature information fusion hybrid line state evaluation model to perform hybrid line state evaluation, and obtaining a hybrid line state evaluation result specifically includes: acquiring historical multi-feature information fusion vectors, and marking each fusion feature vector as one of three states A/T/E, which respectively represent a normal state, an early-warning state and a fault state; dividing the historical multi-feature information fusion vector set into a training set and a test set, and constructing a decision tree based on the Gini index and the information gain rate of the probability distribution of the training set; inputting the test set into the decision tree and calculating the prediction accuracy; pruning the decision tree when the prediction accuracy is lower than a preset threshold, and generating the multi-feature information fusion hybrid line state evaluation model; and obtaining a current multi-feature information fusion vector, inputting it into the hybrid line state evaluation model for evaluation, classifying the state type according to the training rules, and outputting the current state of the hybrid line, namely normal/early warning/fault, based on the feature data reflected in the current multi-feature information fusion vector.
In this embodiment, specifically, the complete set of historical multi-feature information fusion vectors is used as the root node, the feature attributes screened from the historical multi-feature information fusion vectors are used as internal nodes, and the classification results are used as leaf nodes, so as to generate the decision tree model. The feature attribute with the maximum information gain is selected as the screening feature attribute, and the information gain is determined from the information entropy, which is calculated according to the following formula: E(S) = −∑_{m=1}^{L} γm·log₂(γm), where E(S) is the information entropy of the known historical multi-feature information fusion vector set S, γm is the proportion of samples of the m-th category in the sample data set S, and L is the number of categories in the sample data set S. Each fusion feature vector is marked as one of three categories A/T/E, representing a normal state, an early-warning state and a fault state respectively; the marking is based on known operation data and fault cases, which ensures the accuracy of the labels. Assuming that the test set contains 100 samples which are predicted with the decision tree model, and the predictions of 85 samples are correct while those of 15 samples are wrong, the prediction accuracy of the decision tree model on the test set is 85%, which is higher than the preset threshold; in that case, splitting proceeds from the root node according to the data features until all leaf nodes meet the stopping condition. If the accuracy is lower than the preset threshold, splitting is stopped and the tree is pruned, and the multi-feature information fusion hybrid line state evaluation model is generated. A current multi-feature information fusion vector is then obtained and input into the hybrid line state evaluation model for evaluation, state classification is performed according to the training rules, and the hybrid line state evaluation model outputs the current state of the hybrid line, namely normal/early warning/fault, based on the feature data reflected in the current multi-feature information fusion vector.
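By way of illustration only, the training, testing and pruning procedure above can be sketched with scikit-learn as follows; the variable names, the 70/30 split and the accuracy threshold are assumptions, and scikit-learn's Gini criterion stands in for the combined Gini-index / information-gain-rate construction described in the embodiment.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def build_state_evaluation_model(X: np.ndarray, y: np.ndarray, threshold: float = 0.85):
    """X: historical multi-feature fusion vectors; y: labels in {'A', 'T', 'E'}."""
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    model = DecisionTreeClassifier(criterion="gini", random_state=0)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    if acc < threshold:
        # prune by sweeping cost-complexity alphas and keeping the best test accuracy
        path = model.cost_complexity_pruning_path(X_train, y_train)
        best_acc, best_model = acc, model
        for alpha in path.ccp_alphas:
            pruned = DecisionTreeClassifier(criterion="gini", ccp_alpha=alpha, random_state=0)
            pruned.fit(X_train, y_train)
            pruned_acc = accuracy_score(y_test, pruned.predict(X_test))
            if pruned_acc > best_acc:
                best_acc, best_model = pruned_acc, pruned
        model = best_model
    return model

# Usage: model.predict(current_fusion_vector.reshape(1, -1)) returns 'A', 'T' or 'E'
```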
In one embodiment, the feature extraction of the first image to obtain geometric feature data specifically includes: denoising the first image by using a Gaussian filter to obtain a fourth image; performing binarization processing on the fourth image to obtain a binarized image corresponding to the fourth image; performing edge extraction on the binarized image to obtain edge information in the binarized image, wherein the edge information is a coordinate point set of the edge of the binarized image; and acquiring geometric feature data according to the edge information.
In this embodiment, specifically, the fourth image is converted into a gray image, so that each pixel has only one gray value, the gray image is binarized using the global threshold method, the pixel threshold is set to 128, the pixels greater than 128 are set to 255, the pixels are represented as white, and the pixels less than or equal to 128 are set to 0, the pixels are represented as black. A black and white binarized image is obtained, wherein the white part represents the object and the black part represents the background. And processing the black-and-white binary image by using a Canny edge detection algorithm to extract the boundary between the object and the background in the image. The Canny algorithm detects strong edges in the image and displays them in the new image, resulting in white edge lines on a black background. The outline of the edge is found and the length of the edge is calculated. And traversing all the outlines, calculating the length of each outline by using a cv2.arcLength () function, and finally obtaining the total length of the edges in the fourth image as characteristic data to be output.
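By way of illustration only, the geometric-feature steps above can be sketched in Python with OpenCV as follows; the Gaussian kernel size and the Canny thresholds are assumptions not specified in this embodiment, and the OpenCV 4.x return signature of findContours is assumed.

```python
import cv2

def extract_geometric_feature(first_image_path: str) -> float:
    """Total edge length of the hybrid line, following the embodiment above."""
    img = cv2.imread(first_image_path)
    denoised = cv2.GaussianBlur(img, (5, 5), 0)                    # fourth image (denoised)
    gray = cv2.cvtColor(denoised, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)   # global threshold at 128
    edges = cv2.Canny(binary, 100, 200)                            # Canny edge detection
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    # traverse all contours and accumulate their lengths as the geometric feature value
    return sum(cv2.arcLength(contour, False) for contour in contours)
```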
In one embodiment, the feature extraction of the first image to obtain texture feature data specifically includes: dividing the first image into a plurality of sub-images, obtaining gradient directions and gradient amplitudes corresponding to all pixel points in each sub-image, constructing a gradient histogram corresponding to each sub-image based on the gradient directions and the gradient amplitudes, and carrying out normalization processing on the gradient histogram to obtain a normalized gradient histogram; and connecting the normalized gradient histograms corresponding to each sub-image to obtain an HOG texture feature vector of the first image, and taking the HOG texture feature vector as texture feature data.
In this embodiment, the first image is converted into a grayscale image, because the HOG method works better on grayscale images. The first image may then undergo some preprocessing, such as resizing and noise removal. Calculating the gradient: gradients are computed for the preprocessed first image, typically using the Sobel operator to obtain the horizontal and vertical gradients of the image. Calculating the gradient histogram: the first image is divided into 8 x 8 cells, and the direction and magnitude of the gradient are accumulated in each cell to form a histogram; in this embodiment, the gradient direction is divided into 9 direction bins, and the cumulative gradient magnitude in each bin is counted. Block normalization: the image is divided into larger blocks, each containing a plurality of cells, and within each block the histograms of all cells are normalized. Splicing the feature vector: the normalized histograms of all blocks are concatenated into one feature vector, which is the HOG feature.
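A minimal sketch of this HOG computation is given below; scikit-image's hog function is used as a convenient stand-in for the manual cell/block computation described above, which is an assumption rather than part of the embodiment.

```python
import cv2
from skimage.feature import hog

def extract_hog_texture_feature(first_image_path: str):
    """HOG texture feature vector with 9 orientation bins and 8x8 cells, as in the embodiment."""
    gray = cv2.imread(first_image_path, cv2.IMREAD_GRAYSCALE)
    return hog(gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys", feature_vector=True)
```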
Specifically, in another embodiment, feature extraction is performed on the first image to obtain texture feature data as follows: the first image is divided into a plurality of sub-images, first LBP values corresponding to all pixel points in each sub-image are obtained, a histogram corresponding to each sub-image is calculated based on the first LBP values, and the histogram is normalized to obtain a normalized histogram; the normalized histograms corresponding to each sub-image are then concatenated to obtain the LBP texture feature vector of the first image, which is taken as the texture feature data. Specifically, a first pixel point is selected in each sub-image, the first gray values of the 8 pixel points adjacent to the first pixel point are acquired, and each first gray value is compared with the second gray value corresponding to the first pixel point; if the first gray value is larger than the second gray value, the position of the corresponding pixel point is marked as 1, otherwise it is marked as 0. Based on this operation, the binary digits corresponding to the 8 pixel points surrounding the central pixel point in the 3x3 neighborhood are obtained, and from these 8 binary digits the LBP value of the central pixel point of the window, namely the first LBP value corresponding to the first pixel point, is obtained; the operation is repeated until the first LBP values corresponding to all pixel points in each sub-image are obtained. In one embodiment, the histogram corresponding to each sub-image is calculated based on the first LBP values; specifically, the probability of each first LBP value is counted over all pixel points in the sub-image, and the histogram corresponding to the sub-image is generated from these probabilities.
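A minimal sketch of this alternative LBP embodiment follows; the 4x4 sub-image grid and the use of scikit-image's local_binary_pattern are illustrative assumptions.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def extract_lbp_texture_feature(first_image_path: str, grid: int = 4) -> np.ndarray:
    """Concatenated, normalized LBP histograms of the sub-images of the first image."""
    gray = cv2.imread(first_image_path, cv2.IMREAD_GRAYSCALE)
    h, w = gray.shape
    histograms = []
    for i in range(grid):                                   # split into grid x grid sub-images
        for j in range(grid):
            sub = gray[i * h // grid:(i + 1) * h // grid,
                       j * w // grid:(j + 1) * w // grid]
            lbp = local_binary_pattern(sub, P=8, R=1, method="default")   # 8-neighbour LBP
            hist, _ = np.histogram(lbp, bins=256, range=(0, 256))
            histograms.append(hist / (hist.sum() + 1e-9))   # normalized histogram
    return np.concatenate(histograms)                       # LBP texture feature vector
```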
In one embodiment, the acquiring a second image of the hybrid line and extracting features from the second image to obtain line inclination feature data specifically includes: performing image correction processing on the second image to obtain a fifth image; extracting line data points from the fifth image to obtain a line data point set; performing least-squares straight-line fitting on the line data point set to obtain a line fitting straight line; and obtaining line inclination feature data according to the line fitting straight line.
In this embodiment, the second image is converted into a gray-scale image, and a Canny edge detection algorithm is then applied to detect the outline of the line, so as to obtain the edge information of the line in the second image. After edge detection, the outline of the line in the second image is obtained; the outline is traversed and the pixel points on it are extracted, represented as [(x1, y1), (x2, y2), …, (xn, yn)], and least-squares fitting is applied to the extracted line data points, so that the fitted straight-line equation of the line is obtained, expressed as y = mx + b. The slope m, the intercept b and other information are calculated from the fitted straight-line equation and describe the inclination of the line.
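The line-inclination computation above can be sketched as follows, assuming the corrected fifth image is already available as a grayscale array; the Canny thresholds are illustrative, and the inclination angle is derived from the fitted slope m.

```python
import cv2
import numpy as np

def extract_line_inclination_feature(fifth_image_gray: np.ndarray):
    """Least-squares line fit y = m*x + b over the edge pixels of the corrected image."""
    edges = cv2.Canny(fifth_image_gray, 100, 200)
    ys, xs = np.nonzero(edges)                 # coordinates of the extracted line data points
    m, b = np.polyfit(xs, ys, deg=1)           # least-squares straight-line fit
    angle_deg = np.degrees(np.arctan(m))       # inclination angle derived from the slope
    return m, b, angle_deg
```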
In one embodiment, the acquiring a third image of the hybrid line and extracting features of the third image to obtain heat distribution feature data specifically includes: shooting the hybrid line to be evaluated with the infrared thermal imager to obtain a third image; acquiring the first temperature values corresponding to all pixel points in the third image; generating a temperature matrix of the third image based on the first temperature values; and extracting abnormal temperature points from the temperature matrix, and determining the heat distribution feature data of the third image based on the abnormal temperature points.
In this embodiment, specifically, the hybrid line to be evaluated is photographed with the infrared thermal imager to obtain a third image of 320x240 pixels; the pixel values of the third image are converted into temperature values according to the parameters and calibration information of the thermal imager, the temperature value corresponding to each pixel point is extracted, a 320x240 matrix is created, and all elements are initialized to 0. The temperature values corresponding to the extracted pixel points are filled into the corresponding positions of the matrix according to the pixel order of the third image. More specifically, assume the following partial temperature values are obtained: the first pixel (0, 0) in the image corresponds to 25 °C, the second pixel (0, 1) to 26 °C, the third pixel (0, 2) to 27 °C, and so on. A 320x240 temperature matrix is finally obtained, in which each element represents the temperature value of the corresponding pixel point in the image. Each pixel point in the temperature matrix is traversed, and for each pixel it is checked whether its temperature lies outside the set normal range; in this embodiment, a temperature below 30 °C or above 40 °C is regarded as abnormal. If the temperature is abnormal, the pixel point is marked as an abnormal temperature point; the traversal continues until all pixel points have been inspected and the abnormal temperature points are obtained. Cluster analysis, such as K-means clustering, is then performed on the abnormal temperature points, dividing them into different clusters, each of which represents the heat distribution of one line or part of a line. The shape, size and position of each cluster are further analysed to determine the thermal distribution feature data of the third image.
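By way of illustration, the abnormal-point extraction and clustering above can be sketched as follows, assuming the 320x240 temperature matrix has already been built; the number of clusters passed to K-means is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_thermal_feature(temperature_matrix: np.ndarray, n_clusters: int = 3):
    """Cluster abnormal temperature points (below 30 degC or above 40 degC) and summarise each cluster."""
    rows, cols = np.nonzero((temperature_matrix < 30.0) | (temperature_matrix > 40.0))
    points = np.column_stack([rows, cols])                  # abnormal temperature points
    if len(points) < n_clusters:
        return []                                           # too few abnormal points to cluster
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(points)
    features = []
    for k in range(n_clusters):
        cluster = points[labels == k]
        features.append((len(cluster), cluster.mean(axis=0)))   # size and centre of each cluster
    return features
```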
In one embodiment, the fusing the geometric feature data, the texture feature data, the connection feature data, the line inclination feature data, and the thermal distribution feature data to obtain a multi-feature fusion vector specifically includes: converting the geometric feature data, the texture feature data, the connection feature data, the line tilt feature data and the thermal distribution feature data into a geometric feature vector, a texture feature vector, a connection feature vector, a line tilt feature vector and a thermal distribution feature vector, respectively, based on a convolutional neural network; respectively giving weights to the geometric feature vector, the texture feature vector, the connection feature vector, the line inclination feature vector and the thermal distribution feature vector to obtain corresponding weight coefficients; multiplying the geometric feature vector, the texture feature vector, the connection feature vector, the line inclination feature vector and the thermal distribution feature vector with the corresponding weight coefficients to obtain corresponding weighted feature vectors; and linearly adding the geometric feature vector, the texture feature vector, the connection feature vector, the line inclination feature vector and the weighted feature vector corresponding to the thermal distribution feature vector to obtain a multi-feature fusion vector.
In this embodiment, specifically, suppose there are two feature vectors A and B, representing the geometric feature vector and the texture feature vector respectively. Feature vector A is [0.2, 0.3, 0.5] and feature vector B is [0.1, 0.4, 0.6]; a weight is assigned to each feature vector, assumed to be 0.7 and 0.3 respectively, and feature fusion is performed by the weighted-average method as follows: feature vector A is multiplied by the weight 0.7, giving [0.2*0.7, 0.3*0.7, 0.5*0.7] = [0.14, 0.21, 0.35]; feature vector B is multiplied by the weight 0.3, giving [0.1*0.3, 0.4*0.3, 0.6*0.3] = [0.03, 0.12, 0.18]; the two weighted results are added to obtain the fused feature vector: [0.14+0.03, 0.21+0.12, 0.35+0.18] = [0.17, 0.33, 0.53]. The fused feature vector [0.17, 0.33, 0.53] is thus obtained, and the multi-feature fusion vector is constructed in the same way from all of the weighted feature vectors.
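The weighted fusion of this embodiment can be sketched as follows; the weights and the assumption that all feature vectors have the same length are illustrative.

```python
import numpy as np

def fuse_features(feature_vectors, weights):
    """Weighted linear addition of the per-modality feature vectors into one multi-feature fusion vector."""
    return np.sum([w * np.asarray(v) for w, v in zip(weights, feature_vectors)], axis=0)

# Example from the embodiment: vectors A and B with weights 0.7 and 0.3
fused = fuse_features([[0.2, 0.3, 0.5], [0.1, 0.4, 0.6]], [0.7, 0.3])
# fused -> array([0.17, 0.33, 0.53])
```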
Referring to fig. 8, the present application provides a hybrid overhead line state assessment system based on multi-feature information fusion, the system comprising:
The first image feature extraction module 10 is configured to collect a first image of the hybrid line, and perform feature extraction on the first image to obtain geometric feature data, texture feature data and connection feature data;
The second image feature extraction module 20 is configured to collect a second image of the hybrid line, and perform feature extraction on the second image to obtain line inclination feature data;
A third image feature extraction module 30, configured to collect a third image of the hybrid line, and perform feature extraction on the third image to obtain heat distribution feature data;
A multi-feature data fusion vector module 40, configured to fuse the geometric feature data, the texture feature data, the connection feature data, the line inclination feature data, and the thermal distribution feature data to obtain a multi-feature fusion vector;
The mixed line state evaluation module 50 is configured to construct a mixed line state evaluation model with multi-feature information fusion, input the multi-feature information fusion vector into the mixed line state evaluation model with multi-feature information fusion, and perform mixed line state evaluation to obtain a mixed line state evaluation result.
As described above, it can be understood that each component of the multi-feature information fusion hybrid line state evaluation system provided in the present application can implement the functions of any of the multi-feature information fusion-based hybrid line state evaluation methods described above.
In one embodiment, the first image feature extraction module 10 further comprises performing:
Denoising the first image by using a Gaussian filter to obtain a fourth image;
Performing binarization processing on the fourth image to obtain a binarized image corresponding to the fourth image;
performing edge extraction on the binarized image to obtain edge information in the binarized image, wherein the edge information is a coordinate point set of the edge of the binarized image;
and acquiring geometric feature data according to the edge information.
In one embodiment, the first image feature extraction module 10 further comprises performing:
Dividing the first image into a plurality of sub-images, obtaining gradient directions and gradient amplitudes corresponding to all pixel points in each sub-image, constructing a gradient histogram corresponding to each sub-image based on the gradient directions and the gradient amplitudes, and carrying out normalization processing on the gradient histogram to obtain a normalized gradient histogram;
and connecting the normalized gradient histograms corresponding to each sub-image to obtain an HOG texture feature vector of the first image, and taking the HOG texture feature vector as texture feature data.
In one embodiment, the second image feature extraction module 20 further comprises performing:
performing image correction processing on the second image to obtain a fifth image;
extracting line data points from the fifth image to obtain a line data point set;
Performing least-squares straight-line fitting on the line data point set to obtain a line fitting straight line;
And obtaining line inclination characteristic data according to the line fitting straight line.
In one embodiment, the third image feature extraction module 30 further comprises performing:
Shooting the mixed-frame line to be evaluated based on the infrared thermal imager to obtain a third image;
Acquiring first temperature values corresponding to all pixel points in the third image; generating a temperature matrix of the third image based on the first temperature values;
And extracting abnormal temperature points from the temperature matrix to obtain abnormal temperature points, and determining heat distribution characteristic data of the third image based on the abnormal temperature points.
In one embodiment, the multi-feature data fusion vector module 40 further comprises performing:
Converting the geometric feature data, the texture feature data, the connection feature data, the line tilt feature data and the thermal distribution feature data into a geometric feature vector, a texture feature vector, a connection feature vector, a line tilt feature vector and a thermal distribution feature vector, respectively, based on a convolutional neural network;
Respectively giving weights to the geometric feature vector, the texture feature vector, the connection feature vector, the line inclination feature vector and the thermal distribution feature vector to obtain corresponding weight coefficients;
Multiplying the geometric feature vector, the texture feature vector, the connection feature vector, the line inclination feature vector and the thermal distribution feature vector with the corresponding weight coefficients to obtain corresponding weighted feature vectors;
and linearly adding the geometric feature vector, the texture feature vector, the connection feature vector, the line inclination feature vector and the weighted feature vector corresponding to the thermal distribution feature vector to obtain a multi-feature fusion vector.
In one embodiment, the hybrid line state assessment module 50 further includes performing:
Acquiring historical multi-feature information fusion vectors, and marking each fusion feature vector as one of three states A/T/E, which respectively represent a normal state, an early-warning state and a fault state;
Dividing the historical multi-feature information fusion vector set into a training set and a test set, and constructing a decision tree based on the Gini index and the information gain rate of the probability distribution of the training set;
Inputting the test set into the decision tree and calculating the prediction accuracy;
Pruning the decision tree when the prediction accuracy is lower than a preset threshold, and generating a multi-feature information fusion hybrid line state evaluation model;
Obtaining a current multi-feature information fusion vector, inputting the current multi-feature information fusion vector into the hybrid line state evaluation model for evaluation, classifying the state type according to the training rules, and outputting the current state of the hybrid line, namely normal/early warning/fault, based on the feature data reflected in the current multi-feature information fusion vector.
Referring to fig. 9, a computer device is further provided in an embodiment of the present application, and its internal structure may be as shown in fig. 9. The computer device includes a processor, a memory, a network interface, a display device and an input device connected by a system bus. The network interface of the computer device is used for communicating with an external terminal through a network connection. The display device of the computer device is used for displaying the interactive page. The input device of the computer device is used for receiving input from a user. The processor of the computer device is used to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium, which stores an operating system, a computer program and a database. The database of the computer device is used for storing the original data. The computer program, when executed by the processor, implements the hybrid line state evaluation method based on multi-feature information fusion.
The processor executes the hybrid line state evaluation method based on multi-feature information fusion, which comprises the following steps: collecting a first image of a hybrid line, and performing feature extraction on the first image to obtain geometric feature data, texture feature data and connection feature data; collecting a second image of the hybrid line, and performing feature extraction on the second image to obtain line inclination feature data; collecting a third image of the hybrid line, and performing feature extraction on the third image to obtain heat distribution feature data; fusing the geometric feature data, the texture feature data, the connection feature data, the line inclination feature data and the heat distribution feature data to obtain a multi-feature fusion vector; and constructing a multi-feature information fusion hybrid line state evaluation model, and inputting the multi-feature fusion vector into the model to evaluate the hybrid line state, so as to obtain a hybrid line state evaluation result. By this method, the reliability and accuracy of the hybrid line state evaluation result are improved.
The application also provides a computer readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the hybrid line state evaluation method based on multi-feature information fusion, comprising the following steps: collecting a first image of a hybrid line, and performing feature extraction on the first image to obtain geometric feature data, texture feature data and connection feature data; collecting a second image of the hybrid line, and performing feature extraction on the second image to obtain line inclination feature data; collecting a third image of the hybrid line, and performing feature extraction on the third image to obtain heat distribution feature data; fusing the geometric feature data, the texture feature data, the connection feature data, the line inclination feature data and the heat distribution feature data to obtain a multi-feature fusion vector; and constructing a multi-feature information fusion hybrid line state evaluation model, and inputting the multi-feature fusion vector into the model to evaluate the hybrid line state, so as to obtain a hybrid line state evaluation result. By this method, the reliability and accuracy of the hybrid line state evaluation result are improved.
Those skilled in the art will appreciate that all or part of the above-described methods may be implemented by a computer program stored on a non-transitory computer readable storage medium which, when executed, may include the steps of the embodiments of the methods described above. Any reference to memory, storage, a database or another medium used in the embodiments provided by the present application may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, apparatus, article, or method that comprises that element.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the scope of the application, and all equivalent structures or equivalent processes using the descriptions and drawings of the present application or directly or indirectly applied to other related technical fields are included in the scope of the application.
Claims (9)
1. A hybrid line state evaluation method based on multi-feature information fusion, characterized by comprising the following steps:
Collecting a first image of a mixed frame line, and carrying out feature extraction on the first image to obtain geometric feature data, texture feature data and connection feature data;
Collecting a second image of the mixed frame line, and carrying out feature extraction on the second image to obtain line inclination feature data;
collecting a third image of the mixed frame line, and carrying out feature extraction on the third image to obtain heat distribution feature data;
fusing the geometric feature data, the texture feature data, the connection feature data, the line inclination feature data and the heat distribution feature data to obtain a multi-feature fusion vector;
Constructing a multi-feature information fusion mixed-frame line state evaluation model, and inputting the multi-feature information fusion vector into the multi-feature information fusion mixed-frame line state evaluation model to evaluate the mixed-frame line state to obtain a mixed-frame line state evaluation result;
the construction of a multi-feature information fusion mixed-frame line state evaluation model, the input of the multi-feature information fusion vector into the multi-feature information fusion mixed-frame line state evaluation model for mixed-frame line state evaluation, and the acquisition of mixed-frame line state evaluation results specifically comprises the following steps:
Acquiring historical multi-feature information fusion vectors, and marking each fusion feature vector as one of three states A/T/E, which respectively represent a normal state, an early-warning state and a fault state;
Dividing the historical multi-feature information fusion vector set into a training set and a test set, and constructing a decision tree based on the Gini index and the information gain rate of the probability distribution of the training set;
Inputting the test set into the decision tree and calculating the prediction accuracy;
Pruning the decision tree when the prediction accuracy is lower than a preset threshold, and generating a multi-feature information fusion hybrid line state evaluation model;
Obtaining a current multi-feature information fusion vector, inputting the current multi-feature information fusion vector into the hybrid line state evaluation model for evaluation, classifying the state type according to the training rules, and outputting the current state of the hybrid line, namely normal/early warning/fault, based on the feature data reflected in the current multi-feature information fusion vector.
2. The method for evaluating the state of the hybrid line based on multi-feature information fusion according to claim 1, wherein the feature extraction is performed on the first image to obtain geometric feature data, specifically comprising:
Denoising the first image by using a Gaussian filter to obtain a fourth image;
Performing binarization processing on the fourth image to obtain a binarized image corresponding to the fourth image;
performing edge extraction on the binarized image to obtain edge information in the binarized image, wherein the edge information is a coordinate point set of the edge of the binarized image;
and acquiring geometric feature data according to the edge information.
3. The method for evaluating the state of a hybrid line based on multi-feature information fusion according to claim 1, wherein the feature extraction is performed on the first image to obtain texture feature data, specifically comprising:
dividing the first image into a plurality of sub-images, and acquiring gradient directions and gradient amplitudes corresponding to all pixel points in each sub-image;
constructing a gradient histogram corresponding to each sub-image based on the gradient direction and the gradient amplitude, and carrying out normalization processing on the gradient histogram to obtain a normalized gradient histogram;
and connecting the normalized gradient histograms corresponding to each sub-image to obtain an HOG texture feature vector of the first image, and taking the HOG texture feature vector as texture feature data.
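The HOG texture feature of claim 3 can be sketched directly in NumPy so that each step (sub-image division, gradient histogram construction, normalization, concatenation) is visible; the 8x8 cell size and 9 orientation bins are conventional HOG defaults assumed here rather than values taken from the patent.

```python
import numpy as np

def hog_texture_vector(image: np.ndarray, cell: int = 8, bins: int = 9) -> np.ndarray:
    img = image.astype(np.float32)
    # Gradient magnitude and direction for every pixel
    gy, gx = np.gradient(img)
    magnitude = np.hypot(gx, gy)
    direction = np.rad2deg(np.arctan2(gy, gx)) % 180

    h, w = img.shape
    histograms = []
    # Divide the image into cell x cell sub-images
    for r in range(0, h - cell + 1, cell):
        for c in range(0, w - cell + 1, cell):
            mag = magnitude[r:r + cell, c:c + cell].ravel()
            ang = direction[r:r + cell, c:c + cell].ravel()
            # Gradient histogram weighted by magnitude, then L2 normalization
            hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
            hist = hist / (np.linalg.norm(hist) + 1e-6)
            histograms.append(hist)
    # Connect the normalized histograms into one HOG texture feature vector
    return np.concatenate(histograms)
```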
4. The method for evaluating the state of the hybrid line based on multi-feature information fusion according to claim 1, wherein the step of acquiring the second image of the hybrid line, and performing feature extraction on the second image to obtain line inclination feature data, comprises the following steps:
performing image correction processing on the second image to obtain a fifth image;
extracting line data points from the fifth image to obtain a line data point set;
Performing least square fitting straight line processing on the line data point set to obtain a line fitting straight line;
And obtaining line inclination characteristic data according to the line fitting straight line.
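A rough sketch of claim 4 is given below; histogram equalization stands in for the unspecified image-correction step, Canny edge pixels stand in for the line data points, and numpy.polyfit performs the least-squares straight-line fit. All parameter values are assumptions.

```python
import cv2
import numpy as np

# Second image of the hybrid line (path is a placeholder)
second_image = cv2.imread("hybrid_line_span.jpg", cv2.IMREAD_GRAYSCALE)

# Image correction -> fifth image (histogram equalization assumed here)
fifth_image = cv2.equalizeHist(second_image)

# Line data point set: edge pixels along the conductor (Canny assumed)
edges = cv2.Canny(fifth_image, 50, 150)
rows, cols = np.nonzero(edges)

# Least-squares straight-line fit: row = k * col + b
k, b = np.polyfit(cols, rows, deg=1)

# Line inclination feature data: slope, intercept and angle of the fitted line
inclination_deg = float(np.degrees(np.arctan(k)))
line_inclination_features = np.array([k, b, inclination_deg])
```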
5. The method for evaluating the state of the hybrid line based on multi-feature information fusion according to claim 1, wherein the step of acquiring a third image of the hybrid line and performing feature extraction on the third image to obtain heat distribution feature data specifically comprises:
capturing the hybrid line to be evaluated with an infrared thermal imager to obtain the third image;
Acquiring first temperature values corresponding to all pixel points in the third image;
generating a temperature matrix of the third image based on the first temperature values;
and extracting abnormal temperature points from the temperature matrix, and determining the heat distribution feature data of the third image based on the abnormal temperature points.
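The heat distribution feature of claim 5 might be sketched as below, assuming a calibrated temperature matrix is already available (e.g. exported from the thermal imager); the 3-sigma rule used to flag abnormal temperature points and the chosen summary statistics are assumptions.

```python
import numpy as np

# Temperature matrix of the third image: one first temperature value per pixel
# (the .npy file and its calibration to degrees Celsius are assumed)
temperature_matrix = np.load("thermal_frame.npy")

mean_t = temperature_matrix.mean()
std_t = temperature_matrix.std()

# Abnormal temperature points: pixels far above the frame mean (3-sigma rule assumed)
abnormal_mask = temperature_matrix > mean_t + 3 * std_t
abnormal_points = np.column_stack(np.nonzero(abnormal_mask))

# Heat distribution feature data: hot-spot count, peak temperature, frame statistics
peak_t = temperature_matrix[abnormal_mask].max() if abnormal_mask.any() else mean_t
heat_features = np.array([abnormal_points.shape[0], peak_t, mean_t, std_t])
```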
6. The method for evaluating the hybrid line state based on multi-feature information fusion according to claim 1, wherein fusing the geometric feature data, the texture feature data, the connection feature data, the line inclination feature data and the heat distribution feature data to obtain a multi-feature fusion vector specifically comprises:
converting the geometric feature data, the texture feature data, the connection feature data, the line inclination feature data and the heat distribution feature data into a geometric feature vector, a texture feature vector, a connection feature vector, a line inclination feature vector and a heat distribution feature vector, respectively, based on a convolutional neural network;
assigning a weight coefficient to each of the geometric feature vector, the texture feature vector, the connection feature vector, the line inclination feature vector and the heat distribution feature vector;
multiplying each feature vector by its corresponding weight coefficient to obtain the corresponding weighted feature vectors;
and linearly adding the weighted feature vectors corresponding to the geometric feature vector, the texture feature vector, the connection feature vector, the line inclination feature vector and the heat distribution feature vector to obtain the multi-feature fusion vector.
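A minimal sketch of the weighted fusion in claim 6 is shown below; random linear projections stand in for the convolutional neural network that maps each feature type to a common embedding dimension, and the dimension (128) and the weight coefficients are assumed values.

```python
import numpy as np

DIM = 128  # shared embedding dimension (assumed)

def embed(feature_data: np.ndarray, seed: int) -> np.ndarray:
    """Stand-in for the CNN that maps raw feature data to a fixed-length vector."""
    projection = np.random.default_rng(seed).standard_normal((DIM, feature_data.size))
    return projection @ feature_data.ravel()

# Raw feature data from claims 2-5 (placeholder arrays of differing lengths)
raw_features = {
    "geometric":   np.ones(5),
    "texture":     np.ones(36),
    "connection":  np.ones(4),
    "inclination": np.ones(3),
    "heat":        np.ones(4),
}

# One weight coefficient per feature type (values are assumptions)
weights = {"geometric": 0.25, "texture": 0.20, "connection": 0.20,
           "inclination": 0.15, "heat": 0.20}

# Multiply each embedded feature vector by its weight coefficient and
# linearly add the weighted vectors to obtain the multi-feature fusion vector
fusion_vector = sum(weights[name] * embed(data, seed=i)
                    for i, (name, data) in enumerate(raw_features.items()))
print(fusion_vector.shape)   # (128,)
```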
7. A hybrid line state evaluation system based on multi-feature information fusion, for implementing the evaluation method according to any one of claims 1 to 6, wherein the system comprises:
the first image feature extraction module is used for collecting a first image of the hybrid line, and carrying out feature extraction on the first image to obtain geometric feature data, texture feature data and connection feature data;
the second image feature extraction module is used for collecting a second image of the hybrid line, and carrying out feature extraction on the second image to obtain line inclination feature data;
the third image feature extraction module is used for collecting a third image of the hybrid line, and carrying out feature extraction on the third image to obtain heat distribution feature data;
the multi-feature fusion vector module is used for fusing the geometric feature data, the texture feature data, the connection feature data, the line inclination feature data and the heat distribution feature data to obtain a multi-feature fusion vector;
the hybrid line state evaluation module is used for constructing a multi-feature information fusion hybrid line state evaluation model, inputting the multi-feature fusion vector into the model for hybrid line state evaluation, and obtaining a hybrid line state evaluation result.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the hybrid line state evaluation method based on multi-feature information fusion according to any one of claims 1 to 6.
9. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the hybrid line state evaluation method based on multi-feature information fusion of any one of claims 1 to 6.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410796214.8A (CN118365973B) | 2024-06-20 | 2024-06-20 | Multi-feature information fusion-based hybrid line state evaluation method and system |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN118365973A | 2024-07-19 |
| CN118365973B | 2024-08-23 |

Family ID: 91885257
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202410796214.8A (Active) | Multi-feature information fusion-based hybrid line state evaluation method and system | 2024-06-20 | 2024-06-20 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN118365973B (en) |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |