CN116245809A - Equipment detection model construction method, device, computer equipment and storage medium - Google Patents
- Publication number: CN116245809A
- Application number: CN202211720363.3A
- Authority: CN (China)
- Prior art keywords: data, target, enhancement, infrared image, feature
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/0004—Industrial image inspection
- G06N3/08—Neural networks; learning methods
- G06V10/44—Local feature extraction by analysis of parts of the pattern (edges, contours, connected components)
- G06V10/761—Proximity, similarity or dissimilarity measures
- G06V10/806—Fusion of extracted features
- G06V10/82—Image or video recognition using neural networks
- G06T2207/10048—Infrared image (acquisition modality)
- G06T2207/20081—Training; learning
- G06T2207/20084—Artificial neural networks [ANN]
- Y04S10/50—Systems or methods supporting power network operation or management, involving interaction with the load-side end user
Abstract
The application relates to a device detection model construction method, an apparatus, a computer device, and a storage medium. The method comprises the following steps: acquiring device operation data of a target device, the operation data including infrared image data; obtaining feature data for each dimension according to the morphological features of the target object in the infrared image data; sequentially determining each dimension's feature data as the target feature data and determining a corresponding weight matrix according to it; fusing the feature data of each dimension with the corresponding weight matrix to generate each sample data set; training a data enhancement model on each sample set to obtain the enhancement models and the corresponding enhancement data generated during training; determining the target enhancement data according to the similarity between each model's enhancement data and the infrared image data; fusing the target enhancement data with the infrared image data to obtain training sample data; and training a neural network model on the training sample data to obtain a device detection model. By adopting the method, the detection efficiency of the device detection model can be effectively improved.
Description
Technical Field
The present disclosure relates to the field of computer technology, and in particular to a device detection model construction method, an apparatus, a computer device, and a storage medium.
Background
With the development of computer technology, methods that use it to detect and evaluate the operating state of power equipment are increasingly widely applied. In particular, analyzing massive amounts of infrared image data of power equipment to complete thermal-imaging analysis of its operating state is an important research direction.
In the prior art, the operating state of power equipment is judged mainly by manually analyzing its infrared images, which makes equipment detection inefficient.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a device detection model construction method, apparatus, computer device, and storage medium, which can effectively improve the detection efficiency of a device detection model.
A device detection model construction method comprises the following steps:
acquiring equipment operation data of target equipment, wherein the equipment operation data comprises infrared image data;
according to morphological characteristics of a target object of the infrared image data, obtaining characteristic data of each dimension of the infrared image data;
sequentially determining each dimension's characteristic data as target characteristic data, and determining a corresponding weight matrix according to the target characteristic data;
respectively fusing the feature data of each dimension with the corresponding weight matrix to generate each sample data, and respectively training the data enhancement model according to each sample data to obtain each data enhancement model and the corresponding enhancement data generated in the training process;
determining target enhancement data according to the similarity between the enhancement data corresponding to each data enhancement model and the infrared image data;
and fusing the target enhancement data with the infrared image data to obtain training sample data, training a neural network model according to the training sample data to obtain a device detection model, wherein the device detection model is used for determining the running state of the target device according to the device running data of the target device.
In one embodiment, before obtaining the feature data of each dimension of the infrared image data according to the morphological feature of the target object of the infrared image data, the method further comprises:
acquiring brightness values of all pixel points of infrared image data;
and determining boundary characteristics corresponding to the target object of the infrared image data according to the brightness value.
In one embodiment, obtaining feature data of each dimension of the infrared image data according to morphological features of a target object of the infrared image data includes:
Determining the position feature, the size feature and the shape feature of the target object according to the boundary feature corresponding to the target object;
and obtaining characteristic data of each dimension of the infrared image data according to the position characteristic, the size characteristic and the shape characteristic of the target object.
In one embodiment, feature data of each dimension is fused with a corresponding weight matrix to generate each sample data, and data enhancement models are trained according to each sample data to obtain each data enhancement model and corresponding enhancement data generated in the training process, including:
acquiring a first weight and a second weight, wherein the first weight is greater than the second weight;
sequentially determining the feature data of each dimension as target feature data, and determining other feature data in the feature data of each dimension as reference feature data;
fusing the target feature data with the first weight to obtain a target fusion item;
fusing the reference characteristic data with the second weight to obtain a reference fusion item;
obtaining target sample data according to the target fusion item and the reference fusion item;
training a data enhancement model according to the target sample data to obtain the target data enhancement model and corresponding enhancement data generated in the training process.
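As an illustrative sketch of the weighting scheme above (a larger first weight for the target dimension, a smaller second weight for the reference dimensions), the following may help; the function name, the dictionary layout, and the 0.7/0.3 values are hypothetical and not taken from the patent:

```python
def build_sample(features, target_dim, w_target=0.7, w_ref=0.3):
    """Fuse per-dimension feature data: the target dimension gets the larger
    first weight, every other (reference) dimension gets the smaller second."""
    assert w_target > w_ref  # the first weight must exceed the second
    return {
        dim: [w_target * x if dim == target_dim else w_ref * x for x in vec]
        for dim, vec in features.items()
    }
```

Cycling `target_dim` over every dimension yields one sample set per dimension, matching the "sequentially determining" step.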
In one embodiment, feature data of each dimension is fused with a corresponding weight matrix to generate each sample data, and data enhancement models are trained according to each sample data to obtain each data enhancement model and corresponding enhancement data generated in the training process, including:
sequentially taking each sample data set as the input of the generator of a generative adversarial network to generate data to be discriminated;
inputting the data to be discriminated into the discriminator of the generative adversarial network to obtain a discrimination result;
and when the discrimination result meets a preset condition, stopping training, determining the current generative adversarial network as a target data enhancement model, and determining the current data to be discriminated as the corresponding enhancement data.
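The stop-when-the-preset-condition-is-met loop above can be illustrated with a deliberately simplified toy (this is not a real generative adversarial network and not the patent's implementation; the function name, the score formula, and the tolerance are all hypothetical):

```python
def train_enhancer(real, sample, steps=200, lr=0.1, tol=0.05):
    """Toy stand-in for adversarial training: the 'generator' learns an offset
    so its output mean matches the real data mean, and a stand-in
    'discriminator' score approaches 0.5 (cannot tell real from fake) as the
    gap closes. Training stops once the preset condition is met."""
    real_mean = sum(real) / len(real)
    offset = 0.0
    for _ in range(steps):
        fake = [x + offset for x in sample]       # enhancement data candidate
        gap = real_mean - sum(fake) / len(fake)
        score = 0.5 - min(abs(gap), 1.0) / 2      # 0.5 == indistinguishable
        if score >= 0.5 - tol:                    # preset stopping condition
            return fake
        offset += lr * gap                        # 'generator' update step
    return fake
```

A real implementation would alternate true generator and discriminator updates; only the stopping logic is the point here.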
In one embodiment, determining target enhancement data according to the similarity of enhancement data corresponding to each data enhancement model and infrared image data includes:
respectively calculating the similarity of each enhancement data and the infrared image data on each dimension characteristic data;
weighting and fusing the similarity of the feature data of each dimension to obtain target similarity;
and determining target enhancement data according to the target similarity.
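The three steps above can be sketched as follows (function names and the per-dimension weights are hypothetical; the per-dimension similarities are assumed precomputed):

```python
def target_similarity(per_dim_sims, weights):
    """Weighted fusion of per-dimension similarities into one target score."""
    return sum(w * s for w, s in zip(weights, per_dim_sims))

def pick_target_enhancement(candidates, weights):
    """candidates: mapping from enhancement-data name to its list of
    per-dimension similarities with the infrared image data. Returns the
    name whose fused (target) similarity is largest."""
    return max(candidates, key=lambda n: target_similarity(candidates[n], weights))
```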
In one embodiment, fusing the target enhancement data with the infrared image data to obtain training sample data, and training the neural network model according to the training sample data to obtain the device detection model includes:
Carrying out weighted fusion on the target enhancement data and the infrared image data to obtain training sample data;
and inputting training sample data into a convolutional neural network, and training to obtain a device detection model.
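A minimal sketch of the weighted fusion in this embodiment, assuming both inputs are same-shaped arrays; the function name and the `alpha` weight are hypothetical:

```python
import numpy as np

def fuse_training_samples(enhanced, infrared, alpha=0.5):
    """Weighted fusion of target enhancement data with the original infrared
    image data to form one training sample."""
    enhanced = np.asarray(enhanced, dtype=float)
    infrared = np.asarray(infrared, dtype=float)
    return alpha * enhanced + (1.0 - alpha) * infrared
```

The fused result would then be fed to the convolutional neural network for training.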
A device detection model construction apparatus, comprising:
an acquisition module, configured to acquire device operation data of a target device, wherein the device operation data includes infrared image data;
the feature extraction module is used for obtaining feature data of each dimension of the infrared image data according to morphological features of a target object of the infrared image data;
the data enhancement module is used for sequentially determining each dimension characteristic data as target characteristic data and determining a corresponding weight matrix according to the target characteristic data; respectively fusing the feature data of each dimension with the corresponding weight matrix to generate each sample data, and respectively training the data enhancement model according to each sample data to obtain each data enhancement model and the corresponding enhancement data generated in the training process; determining target enhancement data according to the similarity between the enhancement data corresponding to each data enhancement model and the infrared image data;
the model training module is used for fusing the target enhancement data with the infrared image data to obtain training sample data, training the neural network model according to the training sample data to obtain a device detection model, and determining the running state of the target device according to the device running data of the target device.
A computer device, comprising a memory storing a computer program and a processor that, when executing the computer program, performs the following steps:
acquiring equipment operation data of target equipment, wherein the equipment operation data comprises infrared image data;
according to morphological characteristics of a target object of the infrared image data, obtaining characteristic data of each dimension of the infrared image data;
sequentially determining each dimension characteristic data as target characteristic data, and determining a corresponding weight matrix according to the target characteristic data;
respectively fusing the feature data of each dimension with the corresponding weight matrix to generate each sample data, and respectively training the data enhancement model according to each sample data to obtain each data enhancement model and the corresponding enhancement data generated in the training process;
determining target enhancement data according to the similarity between the enhancement data corresponding to each data enhancement model and the infrared image data;
and fusing the target enhancement data with the infrared image data to obtain training sample data, training a neural network model according to the training sample data to obtain a device detection model, wherein the device detection model is used for determining the running state of the target device according to the device running data of the target device.
A computer-readable storage medium storing a computer program that, when executed by a processor, performs the following steps:
acquiring equipment operation data of target equipment, wherein the equipment operation data comprises infrared image data;
according to morphological characteristics of a target object of the infrared image data, obtaining characteristic data of each dimension of the infrared image data;
sequentially determining each dimension characteristic data as target characteristic data, and determining a corresponding weight matrix according to the target characteristic data;
respectively fusing the feature data of each dimension with the corresponding weight matrix to generate each sample data, and respectively training the data enhancement model according to each sample data to obtain each data enhancement model and the corresponding enhancement data generated in the training process;
determining target enhancement data according to the similarity between the enhancement data corresponding to each data enhancement model and the infrared image data;
and fusing the target enhancement data with the infrared image data to obtain training sample data, training a neural network model according to the training sample data to obtain a device detection model, wherein the device detection model is used for determining the running state of the target device according to the device running data of the target device.
In the above method, device operation data of the target device is acquired; feature data for each dimension of the infrared image data is obtained from the morphological features of the target object; each dimension's feature data is sequentially determined as the target feature data, and a corresponding weight matrix is determined from it; the feature data of each dimension is fused with its weight matrix to generate each sample data set; a data enhancement model is trained on each sample set, yielding the enhancement models and the enhancement data generated during training; the target enhancement data is selected according to the similarity between each model's enhancement data and the infrared image data; and the target enhancement data is fused with the infrared image data to obtain training sample data, on which the neural network model is trained into the device detection model. In this way, data enhancement is performed from the per-dimension feature data of a small amount of collected infrared image data, the target enhancement data is determined through correlation analysis of each set of enhancement data against the original infrared image data, and the fused training samples yield a device detection model that effectively improves detection efficiency.
Drawings
FIG. 1 is an application environment diagram of a device detection model building method in one embodiment;
FIG. 2 is a flow diagram of a method of device detection model construction in one embodiment;
FIG. 3 is a flow diagram of determining boundary features of a target object in one embodiment;
FIG. 4 is a flow diagram of determining dimension feature data in one embodiment;
FIG. 5 is a flow diagram of generating enhanced data in one embodiment;
FIG. 6 is a flow diagram of generating enhanced data in one embodiment;
FIG. 7 is a flow diagram of determining target enhancement data in one embodiment;
FIG. 8 is a flow diagram of a device detection model in one embodiment;
FIG. 9 is a block diagram of a device detection model building apparatus in one embodiment;
FIG. 10 is an internal block diagram of a computer device in one embodiment;
FIG. 11 is an internal block diagram of a tree model recursion unit in one embodiment;
FIG. 12 is a flow diagram of a device detection model construction process in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The method for constructing the device detection model can be applied to the application environment shown in fig. 1. As shown in fig. 1, the computer device 102 acquires the device operation data of the target device; obtains feature data for each dimension of the infrared image data according to the morphological features of the target object; sequentially determines each dimension's feature data as the target feature data and determines a corresponding weight matrix according to it; fuses the feature data of each dimension with the corresponding weight matrix to generate each sample data set; trains a data enhancement model on each sample set to obtain the enhancement models and the enhancement data generated during training; determines the target enhancement data according to the similarity between each model's enhancement data and the infrared image data; fuses the target enhancement data with the infrared image data to obtain training sample data; and trains a neural network model on the training sample data to obtain the device detection model. The computer device 102 may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer, smart camera, smart watch, ring, or other portable wearable device.
In one embodiment, as shown in fig. 2, a device detection model construction method is provided, and the method is applied to the computer device 102 in fig. 1 for illustration, and includes the following steps:
in step S202, device operation data of the target device is acquired, where the device operation data includes infrared image data.
The device operation data is operating characteristic data generated by the power equipment during operation; it represents the operating state of the equipment and includes infrared image data.
Step S204, according to the morphological characteristics of the target object of the infrared image data, the characteristic data of each dimension of the infrared image data are obtained.
The target object in the infrared image data arises because, when power equipment operates, different parts and regions have different temperatures and therefore display different brightness in the infrared image. When some part of the equipment runs at a higher temperature, a bright spot appears in the corresponding infrared image data; the position, size, and shape of each bright spot constitute the morphological features of the target object.
Specifically, after the computer equipment acquires the infrared image data of the power equipment, feature extraction is performed on the position, the size and the shape of a target object of the infrared image data, so that feature data of each dimension of the infrared image data are obtained.
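As an illustrative sketch of this feature extraction step, assuming a single bright spot selected by a simple brightness threshold (the function name and the concrete feature definitions are hypothetical, not the patent's):

```python
import numpy as np

def spot_features(img, thresh):
    """Extract crude position / size / shape features of one bright spot
    (pixels brighter than `thresh`) in an infrared image."""
    ys, xs = np.nonzero(np.asarray(img) > thresh)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    return {
        "position": (float(ys.mean()), float(xs.mean())),  # centroid
        "size": int(ys.size),                              # pixel area
        "shape": float(h) / float(w),                      # bounding-box aspect
    }
```

Each entry corresponds to one dimension of the feature data used in the subsequent steps.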
Step S206, determining each dimension characteristic data as target characteristic data in turn, and determining a corresponding weight matrix according to the target characteristic data.
Specifically, the computer device sequentially determines each dimension characteristic data as target characteristic data, and takes other dimension characteristic data except the target characteristic data as reference characteristic data, and gives a larger weight to the target characteristic data and a smaller weight to the reference characteristic data.
Step S208, the feature data of each dimension are respectively fused with the corresponding weight matrix to generate each sample data, and the data enhancement model is trained according to each sample data to obtain each data enhancement model and the corresponding enhancement data generated in the training process.
Specifically, the computer device sequentially assigns the larger weight to each dimension's feature data as described in the previous step, fuses the feature data of each dimension with the corresponding weight matrix to generate each sample data set, and trains a data enhancement model on each sample set to obtain the enhancement models and the corresponding enhancement data generated during training. The data enhancement model may be trained as a generative adversarial network.
Step S210, determining target enhancement data according to the similarity between the enhancement data corresponding to each data enhancement model and the infrared image data.
Specifically, the computer device calculates the similarity between the enhancement data corresponding to each data enhancement model and the infrared image data. The correlation between each set of enhancement data and the infrared image data can be analyzed using, for example, the Pearson correlation coefficient or a gray-scale correlation coefficient, and the enhancement data with the largest correlation coefficient with the infrared image data is taken as the target enhancement data.
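For the Pearson option mentioned here, a minimal sketch over flattened images (the function name is hypothetical):

```python
import numpy as np

def pearson_similarity(a, b):
    """Pearson correlation coefficient between two images, flattened to 1-D."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    return float(np.corrcoef(a, b)[0, 1])
```

The enhancement data maximizing this score against the infrared image would be chosen as the target enhancement data.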
Step S212, fusing the target enhancement data with the infrared image data to obtain training sample data, training the neural network model according to the training sample data to obtain a device detection model, wherein the device detection model is used for determining the operation state of the target device according to the device operation data of the target device.
In this embodiment, the device operation data of the target device is acquired; feature data for each dimension of the infrared image data is obtained from the morphological features of the target object; each dimension's feature data is sequentially determined as the target feature data, and a corresponding weight matrix is determined from it; the feature data of each dimension is fused with its weight matrix to generate each sample data set; a data enhancement model is trained on each sample set to obtain the enhancement models and the enhancement data generated during training; the target enhancement data is determined according to the similarity between each model's enhancement data and the infrared image data; and the target enhancement data is fused with the infrared image data to obtain training sample data, on which the neural network model is trained to obtain the device detection model. Data enhancement is thus performed from the per-dimension feature data of a small amount of collected infrared image data, the target enhancement data is determined by correlation analysis against the original infrared image data, and the fused training samples yield a detection model that effectively improves detection efficiency.
In one embodiment, as shown in fig. 3, before obtaining feature data of each dimension of the infrared image data according to the morphological feature of the target object of the infrared image data, the method further includes:
step S302, brightness values of all pixel points of the infrared image data are obtained.
The brightness value of each pixel of the infrared image data represents the operating temperature of the power equipment at that position: a larger brightness value indicates a higher temperature, and a smaller one a lower temperature.
Step S304, determining boundary characteristics corresponding to the target object of the infrared image data according to the brightness value.
Specifically, the computer device determines the brightness value of each pixel point in the infrared image data as described above, and delimits with a smooth curve the region in which the brightness value changes faster than a preset threshold, thereby obtaining the boundary features corresponding to the target object of the infrared image data.
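As a concrete illustration of this delineation step, the following numpy sketch flags pixels whose brightness changes faster than the preset threshold. The gradient-magnitude measure and the threshold value are assumptions for illustration; the patent does not fix either choice.

```python
import numpy as np

def boundary_mask(ir_image, grad_threshold):
    """Mark pixels where the brightness value changes faster than a
    preset threshold -- a minimal stand-in for the smooth-curve
    delineation of the boundary features described in the text."""
    gy, gx = np.gradient(ir_image.astype(float))
    grad_mag = np.hypot(gx, gy)          # brightness change speed per pixel
    return grad_mag > grad_threshold     # True on candidate boundary pixels

# toy infrared frame: a hot 2x2 target on a cool background
frame = np.zeros((6, 6))
frame[2:4, 2:4] = 200.0
mask = boundary_mask(frame, grad_threshold=50.0)
```

The boolean mask can then be traced or smoothed into the boundary curve used by the later feature-extraction steps.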
In this embodiment, the brightness value of each pixel point of the infrared image data is obtained, the boundary feature corresponding to the target object of the infrared image data is determined according to the brightness value, the boundary feature of the target object is effectively and accurately defined according to the brightness value of each pixel point, and the accuracy of determining the position of the target object is effectively improved.
In one embodiment, as shown in fig. 4, according to morphological characteristics of a target object of the infrared image data, feature data of each dimension of the infrared image data is obtained, including:
step S402, determining the position feature, the size feature and the shape feature of the target object according to the boundary feature corresponding to the target object.
Specifically, the computer device determines the position feature, the size feature and the shape feature of the target object according to the boundary feature of the target object, and generates a corresponding position feature vector, a corresponding size feature vector and a corresponding shape feature vector.
And step S404, obtaining characteristic data of each dimension of the infrared image data according to the position characteristic, the size characteristic and the shape characteristic of the target object.
In this embodiment, the position feature, the size feature and the shape feature of the target object are determined according to the boundary feature corresponding to the target object, and the feature data of each dimension of the infrared image data is obtained according to the position feature, the size feature and the shape feature of the target object, so that the feature data of each dimension of the target object in the infrared image data is determined according to the boundary feature of the target object, and the reliability of the feature data is improved.
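The decomposition into position, size and shape features can be sketched as below. The concrete descriptors (centroid, pixel area, aspect ratio) are illustrative assumptions; the patent only requires that each dimension yield its own feature vector.

```python
import numpy as np

def morphological_features(object_mask):
    """Derive position, size and shape feature vectors from the
    boundary-derived object region (an assumed decomposition)."""
    ys, xs = np.nonzero(object_mask)
    position = np.array([ys.mean(), xs.mean()])       # centroid as position feature
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    size = np.array([float(object_mask.sum())])       # pixel area as size feature
    shape = np.array([height / width])                # aspect ratio as shape feature
    return {"position": position, "size": size, "shape": shape}

mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 3:7] = True        # a 3x4 rectangular target region
feats = morphological_features(mask)
```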
In one embodiment, as shown in fig. 5, feature data of each dimension is fused with a corresponding weight matrix to generate each sample data, and a data enhancement model is trained according to each sample data to obtain each data enhancement model and corresponding enhancement data generated in the training process, including:
step S502, a first weight and a second weight are obtained, wherein the first weight is greater than the second weight.
Step S504, determining feature data of each dimension as target feature data, and determining other feature data in the feature data of each dimension as reference feature data.
And step S506, fusing the target feature data with the first weight to obtain a target fusion item.
Specifically, the computer device performs weighted fusion on the target feature data and the first weight to obtain a target fusion item, and the fusion mode includes, but is not limited to, adding, subtracting, multiplying, dividing, squaring and the like.
And step S508, fusing the reference characteristic data with the second weight to obtain a reference fusion item.
Specifically, the computer device performs weighted fusion on the reference feature data and the second weight to obtain a reference fusion item, wherein the fusion mode includes, but is not limited to, adding, subtracting, multiplying, dividing, squaring and the like.
Step S510, obtaining target sample data according to the target fusion item and the reference fusion item.
Specifically, the computer equipment determines a target fusion item and a reference fusion item according to the steps, and then carries out weighted fusion on the target fusion item and the reference fusion item to obtain target sample data.
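Steps S502 to S510 can be sketched as follows. The weight values and the choice of addition as the fusion mode are assumptions; the patent allows other fusion modes (subtraction, multiplication, division, squaring, etc.), requiring only that the first weight exceed the second.

```python
import numpy as np

def build_samples(dim_features, w_first=0.7, w_second=0.3):
    """For each dimension in turn, emphasise it with the larger first
    weight (target fusion item) and down-weight the remaining
    dimensions with the second weight (reference fusion item), then
    combine the two items by addition to form target sample data."""
    samples = []
    for i, target in enumerate(dim_features):
        target_term = w_first * target                       # target fusion item
        reference = [f for j, f in enumerate(dim_features) if j != i]
        reference_term = w_second * np.mean(reference, axis=0)
        samples.append(target_term + reference_term)         # target sample data
    return samples

features = [np.array([1.0, 0.0]),    # position feature vector
            np.array([0.0, 2.0]),    # size feature vector
            np.array([3.0, 3.0])]    # shape feature vector
samples = build_samples(features)
```

Each element of `samples` would then drive the training of one data enhancement model in step S512.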
Step S512, training the data enhancement model according to the target sample data to obtain the target data enhancement model and corresponding enhancement data generated in the training process.
In this embodiment, a first weight and a second weight are obtained, with the first weight greater than the second weight; the feature data of each dimension is determined in turn as the target feature data, and the remaining feature data is determined as reference feature data; the target feature data is fused with the first weight to obtain a target fusion item, and the reference feature data is fused with the second weight to obtain a reference fusion item; target sample data is obtained from the target fusion item and the reference fusion item; and the data enhancement model is trained on the target sample data to obtain the target data enhancement model and the corresponding enhancement data generated in the training process. This effectively remedies the shortage of infrared image data of the power equipment, realizes data enhancement, and improves the reliability of the sample data.
In one embodiment, as shown in fig. 6, the feature data of each dimension is fused with a corresponding weight matrix to generate each sample data, and the data enhancement model is trained according to each sample data to obtain each data enhancement model and the corresponding enhancement data generated in the training process, which includes:
step S602, each sample data is sequentially used as input to the generator of a generative adversarial network, and data to be authenticated is generated.
Specifically, the computer device sequentially takes each sample data generated in the previous steps as input to the generator of the generative adversarial network; the generator learns from the infrared image data of the electric device and generates image data to be authenticated.
In step S604, the data to be authenticated is input to the discriminator of the generative adversarial network, and an authentication result is obtained.
Specifically, the computer device inputs the data to be authenticated generated in the previous step into the discriminator for authentication. When the authentication result does not pass, the generator continues to produce new data to be authenticated, which is authenticated again until the result passes; the data that passes authentication is then used as enhancement data.
In step S606, when the authentication result meets the preset condition, training is stopped, the current generative adversarial network is determined to be the target data enhancement model, and the current data to be authenticated is determined to be the corresponding enhancement data.
In this embodiment, each sample data is sequentially used as input to the generator of a generative adversarial network to produce data to be authenticated; the data to be authenticated is input into the discriminator of the network to obtain an authentication result; and when the authentication result meets the preset condition, training is stopped, the current generative adversarial network is determined to be the target data enhancement model, and the current data to be authenticated is determined to be the corresponding enhancement data. This effectively exploits the generative adversarial network's capacity for autonomous learning and autonomous discrimination, making the generated enhancement data more reliable.
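The generate-until-the-discriminator-passes loop of steps S602 to S606 can be sketched with toy stand-ins. Here the generator and discriminator are placeholder functions (a real implementation would use trained GAN networks), and the distance-based pass criterion is an assumed form of the "preset condition".

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(sample, noise):
    # placeholder generator: perturbs the sample data to propose
    # candidate enhancement data (a trained network in practice)
    return sample + 0.1 * noise

def discriminator(candidate, real):
    # placeholder discriminator: "authentication passes" when the
    # candidate is close enough to the real infrared statistics
    return np.linalg.norm(candidate - real) < 0.5

def enhance(sample, real, max_steps=1000):
    """Regenerate candidates until one passes authentication, then
    return it as the enhancement data (step S606)."""
    for _ in range(max_steps):
        candidate = generator(sample, rng.standard_normal(sample.shape))
        if discriminator(candidate, real):
            return candidate
    raise RuntimeError("no candidate passed authentication")

real = np.array([1.0, 2.0])
enh = enhance(np.array([1.05, 1.9]), real)
```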
In one embodiment, as shown in fig. 7, determining target enhancement data according to the similarity between the enhancement data corresponding to each data enhancement model and the infrared image data includes:
step S702, the similarity between each enhancement data and the infrared image data on each dimension feature data is calculated.
The enhancement data and the infrared image data are of the same type; that is, the enhancement data is also infrared image data, and both have feature data in every dimension.
Specifically, the computer device calculates the similarity of the enhancement data and the characteristic data of the infrared image data in each dimension respectively to obtain each similarity.
Step S704, carrying out weighted fusion on the similarity of each dimension characteristic data to obtain the target similarity.
Specifically, the computer device performs weighted fusion on the similarity between the enhancement data determined in the previous step and each dimension characteristic data of the infrared image data to obtain a target similarity, and the fusion modes include, but are not limited to, adding, subtracting, multiplying, dividing, squaring and the like.
Step S706, determining target enhancement data according to the target similarity.
Specifically, the computer device determines the similarity of each enhancement data and the infrared image data in each dimension according to the previous steps, and determines the enhancement data corresponding to the maximum similarity as target enhancement data.
In this embodiment, the similarity of each enhancement data and the infrared image data on each dimension feature data is calculated respectively, the similarity on each dimension feature data is weighted and fused to obtain a target similarity, and finally the target enhancement data is determined according to the target similarity, so that the target enhancement data is determined according to the similarity of the feature data of the enhancement data on each dimension and the feature data of the infrared image data, and the accuracy of determining the target enhancement data is improved effectively.
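Steps S702 to S706 can be sketched as below. Cosine similarity and equal dimension weights are illustrative assumptions; the patent specifies only a per-dimension similarity followed by weighted fusion and an argmax.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def pick_target_enhancement(enh_list, ir_features, dim_weights):
    """Score every enhancement sample by the weighted fusion of its
    per-dimension similarity with the original infrared features, and
    keep the most similar one as the target enhancement data."""
    scores = []
    for enh in enh_list:
        sims = [cosine(e, r) for e, r in zip(enh, ir_features)]
        scores.append(float(np.dot(dim_weights, sims)))   # target similarity
    best = int(np.argmax(scores))
    return best, scores

ir = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]         # two feature dimensions
candidates = [
    [np.array([0.9, 0.1]), np.array([0.1, 0.9])],         # close to the infrared features
    [np.array([0.0, 1.0]), np.array([1.0, 0.0])],         # orthogonal to them
]
best, scores = pick_target_enhancement(candidates, ir, dim_weights=[0.5, 0.5])
```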
In one embodiment, as shown in fig. 8, fusing the target enhancement data with the infrared image data to obtain training sample data, and training the neural network model according to the training sample data to obtain the device detection model, including:
step S802, carrying out weighted fusion on the target enhancement data and the infrared image data to obtain training sample data.
Step S804, training sample data are input into a convolutional neural network, and a device detection model is obtained through training.
In this embodiment, the target enhancement data and the infrared image data are weighted and fused to obtain training sample data, the training sample data are input into the convolutional neural network, the training is performed to obtain the equipment detection model, the sample data are efficiently expanded by weighted and fused the target enhancement data with high similarity to the infrared image data and the infrared image data, and then the training of the model is performed, so that the semantic information of the sample data is fully mined, and the reliability of the model is effectively improved.
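The weighted fusion of step S802 can be sketched as follows; the mixing weight `alpha` and the decision to keep both originals alongside the fused sample are assumptions for illustration.

```python
import numpy as np

def fuse_training_samples(target_enhancement, ir_image, alpha=0.5):
    """Weighted fusion of the selected target enhancement data with the
    original infrared image to expand the training set."""
    fused = alpha * target_enhancement + (1 - alpha) * ir_image
    # the expanded set keeps the originals plus the fused sample
    return [ir_image, target_enhancement, fused]

ir = np.full((4, 4), 100.0)
enh = np.full((4, 4), 140.0)
train_set = fuse_training_samples(enh, ir)
```

The resulting `train_set` would then be fed to the convolutional neural network in step S804.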
The device detection model construction method provided by the application can be applied to a device detection scene. Specifically, the application of the device detection model construction method in this scene is as follows:
Step 1: and (5) data acquisition and simulation.
1) Acquisition: the relevant data of infrared power equipment can be extracted from a video sequence shot by a fixed-position infrared-visible dual-mode camera, or collected by a dual-mode camera carried on an unmanned aerial vehicle, so no special requirement is imposed on the geographical scene in which the power equipment is located. The data can also be acquired incrementally, continuously expanding the data volume so that the model can be continuously updated and optimized. When applied to a target detection task, labeling the required ROI regions of the infrared and visible images can follow a manual plus semi-AI scheme: a portion of the data is labeled manually, and the remainder is labeled using semi-supervised AI model training techniques.
2) Simulation and enhancement:
To address the scarcity of infrared weak and small target data, the invention constructs a batch of simulation data, on the basis of the actually collected data, to support the training and performance verification of the subsequent algorithm.
An infrared power equipment image is composed of three parts: background B, target T and noise N. The target simulation process can therefore be described as embedding a small infrared target into a background image. In an actual scene, the transition region between the edge of the weak and small infrared target and the background changes relatively gently due to factors such as defocus and blurring. To obtain a smooth infrared image, the target embedding process needs to find a suitable mask, which is calculated as follows
where x_0, y_0 is the center position of the target on the image plane; f_D is the generated image; f_B is the background image; and f̄_T is the normalized target of dimension m×n, whose calculation follows formula 3.
r is the maximum gray value of the generated target. When r is smaller than the gray level of the background region where the target is located, the target is invisible; to guarantee the visibility of the target, r is calculated as shown in formula 4.
Considering that the infrared power image may take different forms due to atmospheric blurring, defocus, motion in the image plane and the like, data enhancement can be actively performed on the same target image during simulation, so that the simulated target has sufficient diversity and is closer to a real target. The data enhancement of the target is therefore divided into two steps: the first step obtains a blur kernel according to the parameters, and the second step obtains a degraded target image by convolving the target image with the blur kernel.
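The embed-then-degrade pipeline can be sketched as below. The pixel-wise-max embedding rule and the uniform blur kernel are simplifying assumptions standing in for the mask of formula 2 and the parameter-derived blur kernel, whose exact forms the extracted text does not preserve.

```python
import numpy as np

def embed_target(background, target_patch, center, r):
    """Embed a small normalized target into the background at `center`,
    scaled by the maximum gray value r; taking the pixel-wise max keeps
    the target visible over the background (a simplified mask)."""
    out = background.copy()
    m, n = target_patch.shape
    y0, x0 = center
    ys = slice(y0 - m // 2, y0 - m // 2 + m)
    xs = slice(x0 - n // 2, x0 - n // 2 + n)
    out[ys, xs] = np.maximum(out[ys, xs], r * target_patch)
    return out

def degrade(image, kernel):
    """Second enhancement step: convolve with a blur kernel to mimic
    defocus/motion blur (valid-mode 2-D convolution, no padding)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

bg = np.full((9, 9), 30.0)
target = np.array([[0.2, 0.8, 0.2],
                   [0.8, 1.0, 0.8],
                   [0.2, 0.8, 0.2]])         # normalized m x n target
frame = embed_target(bg, target, center=(4, 4), r=120.0)
blurred = degrade(frame, np.full((3, 3), 1 / 9.0))
```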
Step 2: building a network generation controller
In the target detection task, infrared power equipment imagery differs greatly from public image data, making transfer learning difficult. The design mainly adopts a Block-level encapsulated network structure, that is, part of the network structure is encapsulated in modular form; EANet and ResNet are selected.
The residual network (ResNet) is a network structure proposed in 2015 for the ImageNet image classification task. It has been widely applied to visual tasks as a backbone; by introducing the residual module it alleviates the vanishing-gradient phenomenon in training and enhances the expressive capacity of the model. Here we focus on EANet. EANet (External Attention) is an external attention mechanism structure that sets up two learnable shared memory units to capture the spatial position dependency between pixels, specifically expressed as the following formula 5 and formula 6:
A = Norm(F·M_k^T)   equation 5
F_out = A·M_v   equation 6
where M_k and M_v are two different memory units used to replace the k and v of the traditional attention mechanism, thereby increasing the expressive capacity of the model; A is the attention matrix; F is the input feature map.
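External attention per equations 5 and 6 can be sketched as below. For simplicity the normalization Norm is taken here as a single softmax over the memory slots; EANet's published form uses a double normalization, so this is an assumption of the sketch.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def external_attention(F, Mk, Mv):
    """External attention: the attention matrix A is computed against a
    learnable external memory Mk (instead of keys derived from F), then
    applied to a second memory Mv. Shapes: F (n, d), Mk (s, d), Mv (s, d)."""
    A = softmax(F @ Mk.T, axis=-1)     # equation 5: attention over memory slots
    return A @ Mv                      # equation 6: F_out = A * Mv

rng = np.random.default_rng(1)
F = rng.standard_normal((6, 4))        # six pixels, four channels
Mk = rng.standard_normal((3, 4))       # two shared learnable memory units
Mv = rng.standard_normal((3, 4))
F_out = external_attention(F, Mk, Mv)
```

In a trained network Mk and Mv would be learned parameters shared across all positions, which is what makes the mechanism linear in the number of pixels.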
In the self-structure search process, a flexible RNN is used: a recurrent neural network serves as the controller that generates the structural hyperparameters of the neural network. For example, when a Block is used as the building block, an initial network structure is generated; the RNN generates the hyperparameters as a token sequence, and a convolutional neural network can be generated according to the number and arrangement of Block 1 and Block 2, among other choices. If the number of layers exceeds a certain value, the structure search stops automatically. Once the controller RNN completes the generation of an architecture, the generated target detection neural network is trained on the training set. At loss convergence, the accuracy of the network on the validation set is recorded. Based on the accuracy obtained by training, the parameters θ_c of the RNN are optimized in reverse; the policy gradient method used for optimization is described in step 3.
Step 3: reinforcement learning training
The token sequence predicted by the RNN can be regarded as the list of actions a_{1:T} for designing the sub-network architecture. After training converges, the network achieves an accuracy R on the validation dataset. This accuracy R is used in the design of the reward signal to train the controller using reinforcement learning theory. To search for the optimal structure, the controller is required to maximize its expected reward J(θ_c); the specific calculation process is shown in the following formulas 7 to 9:
J(θ_c) = E_{P(a_{1:T}; θ_c)}[R]   equation 7
Since the reward signal R is not differentiable, a policy gradient method is needed to iteratively update θ_c:
∇_{θ_c}J(θ_c) = Σ_{t=1}^{T} E_{P(a_{1:T}; θ_c)}[∇_{θ_c} log P(a_t | a_{(t-1):1}; θ_c)·R]   equation 8
The empirical approximation formula is:
(1/m) Σ_{k=1}^{m} Σ_{t=1}^{T} ∇_{θ_c} log P(a_t | a_{(t-1):1}; θ_c)·R_k   equation 9
where m is the number of architectures sampled in one batch and R_k is the validation accuracy of the k-th architecture.
In neural architecture search, each update of the controller parameters θ_c corresponds to training one sub-network to convergence. Because training a sub-network may take hours, we use distributed training and asynchronous parameter updates to speed up the learning process of the controller. Here we choose a parameter-server scheme: assume there are s shard parameter servers storing the shared parameters of k controller replicas. Each controller replica samples m different sub-architectures that are trained in parallel. The controller then computes the gradient from the results of the minibatch of m architectures at convergence and sends it to the parameter server to update the weights of all controller replicas. In our implementation, convergence is reached when each sub-network has been trained more than a certain number of times.
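The empirical policy-gradient update of formula 9 can be sketched as follows. The per-step gradients of log P are supplied pre-computed as toy vectors, and the learning rate is an assumed value; a real controller would backpropagate through the RNN to obtain them.

```python
import numpy as np

def reinforce_update(log_prob_grads, rewards, theta, lr=0.01):
    """Empirical REINFORCE estimate for the controller: average over m
    sampled architectures of (sum over steps of grad log P) * reward.
    `log_prob_grads[k][t]` is the gradient of log P(a_t | a_<t; theta)
    for sample k, already evaluated."""
    m = len(rewards)
    grad = np.zeros_like(theta)
    for k in range(m):
        grad += sum(log_prob_grads[k]) * rewards[k]
    grad /= m                          # (1/m) sum_k sum_t grad log P * R_k
    return theta + lr * grad           # gradient ascent on expected reward

theta = np.zeros(3)
g = [[np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])],  # architecture 1
     [np.array([0.0, 0.0, 1.0])]]                             # architecture 2
new_theta = reinforce_update(g, rewards=[1.0, 0.5], theta=theta)
```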
Step 5: block selection attention mechanism to add jumper connections
In the above steps, we defined a ResNet with a residual compensation structure, but the search space between Blocks has no skip connections. To enable the RNN to predict such connections, we use a set-selection type of attention built on the attention mechanism. At layer N, we add an anchor point with N-1 content-based sigmoids to indicate which of the previous layers need to be connected.
Each sigmoid is a function of the current hidden state and the previous hidden states of the RNN:
P(layer j is an input to layer i) = sigmoid(v^T·tanh(W_prev·h_j + W_curr·h_i))   equation 10
where h_j represents the hidden state of the controller at the anchor point of the j-th layer, with j ranging from 0 to N-1. We then sample from these sigmoids to determine which previous layers are used as inputs to the current layer. The matrices W_prev and W_curr and the vector v are trainable parameters. Since these connections are also defined by probability distributions, the reinforcement learning method remains applicable.
In our framework, if a layer has multiple input layers, all the input layers are concatenated along the depth dimension. Skip connections can cause "compilation failures": one layer may be incompatible with another, or a layer may lack any input or output.
To avoid these problems, we employ three simple techniques. First, if a layer is not connected to any input layer, the image is used as its input. Second, at the final layer, we take all layer outputs that have not been connected, concatenate them, and send the final hidden state to the classifier. Third, if the input layers to be concatenated have different sizes, we pad the smaller layers with zeros so that the concatenated layers have the same size, as shown in fig. 12.
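The third technique, zero-padding followed by depth concatenation, can be sketched as below; padding on the bottom/right edges is an assumed convention.

```python
import numpy as np

def concat_with_padding(layers):
    """Zero-pad smaller feature maps so all inputs share the same
    spatial size, then concatenate along the depth axis (H, W, C)."""
    h = max(a.shape[0] for a in layers)
    w = max(a.shape[1] for a in layers)
    padded = []
    for a in layers:
        pad_h, pad_w = h - a.shape[0], w - a.shape[1]
        padded.append(np.pad(a, ((0, pad_h), (0, pad_w), (0, 0))))
    return np.concatenate(padded, axis=2)   # concatenation in the depth dimension

a = np.ones((4, 4, 2))       # feature map from one input layer
b = np.ones((2, 3, 1))       # smaller feature map from a skip connection
merged = concat_with_padding([a, b])
```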
Step 6: generation of LSTM-like cyclic unit microstructures
Between Blocks that are connected to each other, the link structure between them must additionally be defined. Here we refer to the method of the LSTM. Assume that at each time step t, the controller needs to find a function h_t that takes x_t and h_{t-1} as inputs. The simplest method is to let h_t = tanh(W_1·x_t + W_2·h_{t-1}).
The computation of the LSTM-like recurrent unit can be generalized as taking x_t and h_{t-1} as inputs and producing h_t as the final output. The controller RNN needs to label each node in the tree with a combination method (addition, element-wise multiplication, etc.) and an activation function (tanh, sigmoid, etc.) to combine the two inputs and produce one output. The two outputs are then fed as inputs to the next node in the tree. To allow the controller RNN to select these methods and functions, the nodes in the tree are indexed in order so that the controller RNN can visit each node one by one and mark the required hyperparameters; the specific calculation steps are shown in equations 11-15 below:
The controller predicts Add and Tanh for tree index 0, which means we need to calculate:
a_0 = tanh(W_1·x_t + W_2·h_{t-1})   equation 11
The controller predicts ElemMult and ReLU for tree index 1, meaning we need to calculate:
a_1 = ReLU((W_3·x_t) ⊙ (W_4·h_{t-1}))   equation 12
The controller predicts the second element of the "unit index" as 0, and the elements in the "cell object" as Add and ReLU, meaning we need to calculate:
a_0^new = ReLU(a_0 + c_{t-1})   equation 13
The controller predicts ElemMult and Sigmoid for tree index 2, meaning we need to calculate:
a_2 = sigmoid(a_0^new ⊙ a_1)   equation 14
Since the maximum index in the tree is 2, h_t is set to a_2.
The controller RNN predicts the first element of the "unit index" to be 1, which means c_t should be set to the pre-activation output of the tree at index 1, namely:
c_t = (W_3·x_t) ⊙ (W_4·h_{t-1})   equation 15
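The worked example of equations 11-15 can be executed directly. The weight matrices here are identity matrices purely for illustration; in the searched cell they would be learned parameters.

```python
import numpy as np

def nas_cell(x_t, h_prev, c_prev, W1, W2, W3, W4):
    """One step of the predicted LSTM-like cell: Add+Tanh at tree
    index 0, ElemMult+ReLU at tree index 1, cell inject Add+ReLU with
    c_{t-1}, ElemMult+Sigmoid at tree index 2."""
    a0 = np.tanh(W1 @ x_t + W2 @ h_prev)            # equation 11
    pre1 = (W3 @ x_t) * (W4 @ h_prev)               # pre-activation at tree index 1
    a1 = np.maximum(pre1, 0.0)                      # equation 12: ReLU
    a0_new = np.maximum(a0 + c_prev, 0.0)           # equation 13: Add + ReLU inject
    h_t = 1.0 / (1.0 + np.exp(-(a0_new * a1)))      # equation 14: ElemMult + Sigmoid; h_t = a_2
    c_t = pre1                                      # equation 15
    return h_t, c_t

x = np.array([0.5, -0.5])
h = np.array([0.1, 0.2])
c0 = np.zeros(2)
I = np.eye(2)
h_t, c_t = nas_cell(x, h, c0, I, I, I, I)
```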
Step 7: data driven generation of self-searching network model
The actually collected infrared power data and the simulated data are mixed and fed into the model for training; after convergence, a self-generated target detection network is obtained, as shown in fig. 11.
According to the equipment detection model construction method, equipment operation data of the target equipment are obtained, feature data of each dimension of the infrared image data are obtained according to morphological features of the target object of the infrared image data, the feature data of each dimension are sequentially determined to be the target feature data, a corresponding weight matrix is determined according to the target feature data, the feature data of each dimension are respectively fused with the corresponding weight matrix to generate each sample data, the data enhancement model is respectively trained according to each sample data to obtain each data enhancement model and corresponding enhancement data generated in the training process, the target enhancement data are determined according to similarity of the enhancement data corresponding to each data enhancement model and the infrared image data, the target enhancement data are fused with the infrared image data to obtain training sample data, and a neural network model is trained according to the training sample data to obtain an equipment detection model.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are sequentially shown as indicated by arrows, these steps are not necessarily sequentially performed in the order indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of steps or a plurality of stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the steps or stages is not necessarily performed sequentially, but may be performed alternately or alternately with at least some of the other steps or stages.
In one embodiment, as shown in fig. 9, there is provided a device detection model building apparatus, which may employ a software module or a hardware module, or a combination of both, as a part of a computer device, and specifically includes: an acquisition module 902, a feature extraction module 904, a data enhancement module 906, a model training module 908, wherein:
An acquiring module 902, configured to acquire device operation data of a target device, where the device operation data includes infrared image data;
the feature extraction module 904 is configured to obtain feature data of each dimension of the infrared image data according to morphological features of a target object of the infrared image data;
the data enhancement module 906 is configured to sequentially determine each dimension feature data as target feature data, and determine a corresponding weight matrix according to the target feature data; respectively fusing the feature data of each dimension with the corresponding weight matrix to generate each sample data, and respectively training the data enhancement model according to each sample data to obtain each data enhancement model and the corresponding enhancement data generated in the training process; determining target enhancement data according to the similarity between the enhancement data corresponding to each data enhancement model and the infrared image data;
the model training module 908 is configured to fuse the target enhancement data with the infrared image data to obtain training sample data, and train the neural network model according to the training sample data to obtain a device detection model, where the device detection model is configured to determine an operation state of the target device according to the device operation data of the target device.
According to the device detection model construction device, device operation data of the target device are obtained, feature data of each dimension of the infrared image data are obtained according to morphological features of a target object of the infrared image data, the feature data of each dimension are sequentially determined to be the target feature data, a corresponding weight matrix is determined according to the target feature data, the feature data of each dimension are respectively fused with the corresponding weight matrix to generate each sample data, a data enhancement model is respectively trained according to each sample data to obtain each data enhancement model and corresponding enhancement data generated in a training process, the target enhancement data are determined according to similarity of the enhancement data corresponding to each data enhancement model and the infrared image data, the target enhancement data are fused with the infrared image data to obtain training sample data, and a device detection model is obtained according to training sample data.
In one embodiment, the feature extraction module 904 is further configured to obtain brightness values of respective pixels of the infrared image data; and determining boundary characteristics corresponding to the target object of the infrared image data according to the brightness value.
In one embodiment, the feature extraction module 904 is further configured to determine a position feature, a size feature, and a shape feature of the target object according to the boundary feature corresponding to the target object; and obtaining characteristic data of each dimension of the infrared image data according to the position characteristic, the size characteristic and the shape characteristic of the target object.
In one embodiment, the data enhancement module 906 is further configured to obtain a first weight and a second weight, where the first weight is greater than the second weight; sequentially determining the feature data of each dimension as target feature data, and determining other feature data in the feature data of each dimension as reference feature data; fusing the target feature data with the first weight to obtain a target fusion item; fusing the reference characteristic data with the second weight to obtain a reference fusion item; obtaining target sample data according to the target fusion item and the reference fusion item; training a data enhancement model according to the target sample data to obtain the target data enhancement model and corresponding enhancement data generated in the training process.
In one embodiment, the data enhancement module 906 is further configured to sequentially use each sample data as input to the generator of a generative adversarial network and generate data to be authenticated; input the data to be authenticated into the discriminator of the generative adversarial network to obtain an authentication result; and, when the authentication result meets the preset condition, stop training, determine the current generative adversarial network to be the target data enhancement model, and determine the current data to be authenticated to be the corresponding enhancement data.
In one embodiment, the data enhancement module 906 is further configured to calculate a similarity between each enhancement data and the infrared image data in each dimension feature data; weighting and fusing the similarity of the feature data of each dimension to obtain target similarity; and determining target enhancement data according to the target similarity.
In one embodiment, the model training module 908 is further configured to perform weighted fusion on the target enhancement data and the infrared image data to obtain training sample data; and inputting training sample data into a convolutional neural network, and training to obtain a device detection model.
For specific limitations of the device detection model construction means, reference may be made to the above limitations of the device detection model construction method, and no further description is given here. The respective modules in the above-described device detection model construction apparatus may be implemented in whole or in part by software, hardware, and a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, the internal structure of which may be as shown in FIG. 10. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a device detection model building method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in FIG. 10 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the present application is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is also provided, comprising a memory and a processor, the memory storing a computer program, and the processor implementing the steps of the method embodiments described above when executing the computer program.
In one embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
In one embodiment, a computer program product or computer program is provided that includes computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the steps in the above-described method embodiments.
Those skilled in the art will appreciate that all or part of the processes of the methods described above may be implemented by a computer program stored on a non-volatile computer-readable storage medium; when executed, the program may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, or the like. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification.
The above embodiments merely represent several implementations of the present application, and their description is relatively specific and detailed, but they are not to be construed as limiting the scope of the patent application. It should be noted that those skilled in the art could make various modifications and improvements without departing from the concept of the present application, all of which fall within the scope of protection of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.
Claims (10)
1. A method for building a device detection model, the method comprising:
acquiring device operation data of a target device, wherein the device operation data includes infrared image data;
obtaining feature data of each dimension of the infrared image data according to morphological features of a target object of the infrared image data;
sequentially determining the feature data of each dimension as target feature data, and determining a corresponding weight matrix according to the target feature data;
respectively fusing the feature data of each dimension with the corresponding weight matrix to generate each sample data, and respectively training a data enhancement model according to each sample data to obtain each data enhancement model and the corresponding enhancement data generated in the training process;
determining target enhancement data according to the similarity between the enhancement data corresponding to each data enhancement model and the infrared image data;
and fusing the target enhancement data with the infrared image data to obtain training sample data, and training a neural network model according to the training sample data to obtain a device detection model, wherein the device detection model is used for determining the running state of the target device according to the device operation data of the target device.
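As a concrete illustration of the weighting step recited in claim 1, the sketch below builds a diagonal weight matrix from the target feature data and fuses it with the features by matrix multiplication. The diagonal construction and the `emphasis` factor are assumptions for illustration only; the application does not specify how the weight matrix is derived from the target feature data:

```python
import numpy as np

def weighted_feature_sample(feature_vec, emphasis=2.0):
    """Illustrative sketch of claim 1's fusion step: a diagonal weight
    matrix derived from the target feature data (here, proportional to
    each component's relative magnitude) is multiplied with the feature
    vector to produce one sample. The construction is an assumption."""
    scale = np.abs(feature_vec) / (np.abs(feature_vec).max() + 1e-9)
    weight_matrix = np.diag(1.0 + emphasis * scale)   # larger features get larger weights
    return weight_matrix @ feature_vec                 # fused sample data
```

A call such as `weighted_feature_sample(np.array([0.0, 3.0]))` amplifies the dominant feature while leaving zero-valued features unchanged.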
2. The method according to claim 1, wherein before obtaining feature data of each dimension of the infrared image data according to morphological features of the target object of the infrared image data, the method further comprises:
acquiring brightness values of all pixel points of the infrared image data;
and determining boundary characteristics corresponding to the target object of the infrared image data according to the brightness value.
3. The method according to claim 2, wherein the obtaining feature data of each dimension of the infrared image data according to the morphological feature of the target object of the infrared image data includes:
determining the position feature, the size feature and the shape feature of the target object according to the boundary feature corresponding to the target object;
and obtaining feature data of each dimension of the infrared image data according to the position feature, the size feature and the shape feature of the target object.
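The brightness-threshold boundary extraction and the position, size, and shape features of claims 2 and 3 can be sketched as follows. The threshold value and the particular feature definitions (centroid for position, pixel area for size, aspect ratio for shape) are illustrative assumptions, not taken from the application:

```python
import numpy as np

def extract_morphological_features(ir_image, threshold=0.5):
    """Sketch of claims 2-3: threshold the brightness values to find the
    target object's boundary, then derive position, size, and shape
    features. The threshold and feature definitions are assumptions."""
    mask = ir_image >= threshold                 # boundary region via brightness
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None                              # no target object found
    position = (float(ys.mean()), float(xs.mean()))   # centroid of the region
    size = int(mask.sum())                            # area in pixels
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    shape = float(height) / float(width)              # aspect ratio as shape feature
    return {"position": position, "size": size, "shape": shape}
```

For a normalized infrared frame, a bright 2×4 block yields `size == 8` and `shape == 0.5`.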
4. The method according to claim 1, wherein the fusing the feature data of each dimension with the corresponding weight matrix to generate each sample data, and training the data enhancement model according to each sample data to obtain each data enhancement model and the corresponding enhancement data generated in the training process, respectively, includes:
acquiring a first weight and a second weight, wherein the first weight is greater than the second weight;
sequentially determining the feature data of each dimension as target feature data, and determining the other feature data in the feature data of each dimension as reference feature data;
fusing the target feature data with the first weight to obtain a target fusion item;
fusing the reference feature data with the second weight to obtain a reference fusion item;
obtaining target sample data according to the target fusion item and the reference fusion item;
and training a data enhancement model according to the target sample data to obtain a target data enhancement model and enhancement data corresponding to the target data enhancement model in the training process.
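A minimal sketch of claim 4's two-weight fusion, assuming scalar weights with the first weight larger than the second; the weight values themselves (0.8 and 0.2) are illustrative assumptions:

```python
import numpy as np

def build_sample(features, target_key, w_target=0.8, w_ref=0.2):
    """Sketch of claim 4: the target dimension is fused with the larger
    first weight (target fusion item), every other dimension with the
    smaller second weight (reference fusion items), and the items are
    summed into the target sample data. Weight values are assumptions."""
    assert w_target > w_ref                       # claim 4: first weight > second weight
    target_item = w_target * features[target_key]
    ref_items = sum(w_ref * v for k, v in features.items() if k != target_key)
    return target_item + ref_items
```

Iterating `target_key` over every dimension, as claim 4 recites, yields one sample per dimension, each emphasizing a different feature.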
5. The method according to claim 1, wherein the fusing the feature data of each dimension with the corresponding weight matrix to generate each sample data, and training the data enhancement model according to each sample data to obtain each data enhancement model and the corresponding enhancement data generated in the training process, respectively, includes:
sequentially taking each sample data as input of a generator of a generative adversarial network to generate data to be discriminated;
inputting the data to be discriminated into a discriminator of the generative adversarial network to obtain a discrimination result;
and stopping training when the discrimination result meets a preset condition, determining the current generative adversarial network as a target data enhancement model, and determining the current data to be discriminated as the corresponding enhancement data.
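The training loop of claim 5 is that of a generative adversarial network. The toy one-dimensional sketch below shows only the structure: a linear generator produces candidate enhancement data, a logistic discriminator scores it against the real sample data, and training stops once the discriminator's mean score on generated data is near 0.5 (standing in for the claim's "preset condition"). The model sizes, learning rate, and stopping tolerance are all assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_enhancement_gan(sample_data, steps=500, lr=0.1, tol=0.1, seed=0):
    """Toy GAN sketch of claim 5: linear generator, logistic
    discriminator, early stop when the discrimination result on
    generated data is within `tol` of 0.5. Illustrative only."""
    rng = np.random.default_rng(seed)
    g_w, g_b = rng.normal(), 0.0          # generator parameters: z -> g_w*z + g_b
    d_w, d_b = rng.normal(), 0.0          # discriminator parameters (logistic)
    for _ in range(steps):
        z = rng.normal(size=sample_data.shape)
        fake = g_w * z + g_b                        # data to be discriminated
        # discriminator ascent on log D(real) + log(1 - D(fake))
        d_real = sigmoid(d_w * sample_data + d_b)
        d_fake = sigmoid(d_w * fake + d_b)
        d_w += lr * ((1 - d_real) * sample_data - d_fake * fake).mean()
        d_b += lr * ((1 - d_real) - d_fake).mean()
        # generator ascent on log D(fake)
        d_fake = sigmoid(d_w * (g_w * z + g_b) + d_b)
        g_w += lr * ((1 - d_fake) * d_w * z).mean()
        g_b += lr * ((1 - d_fake) * d_w).mean()
        if abs(d_fake.mean() - 0.5) < tol:          # preset stopping condition
            break
    # the trained generator now produces the corresponding enhancement data
    return g_w * rng.normal(size=sample_data.shape) + g_b
```

In practice each per-dimension sample from claim 4 would train its own such network, giving one enhancement-data candidate per dimension.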
6. The method according to claim 1, wherein the determining target enhancement data according to the similarity between the enhancement data corresponding to each data enhancement model and the infrared image data includes:
respectively calculating, for the feature data of each dimension, the similarity between each enhancement data and the infrared image data;
performing weighted fusion on the similarities of the feature data of each dimension to obtain a target similarity;
and determining the target enhancement data according to the target similarity.
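A sketch of claim 6's selection step, assuming cosine similarity per feature dimension and fixed fusion weights; both the similarity measure and the weight values are assumptions, since the application does not name them:

```python
import numpy as np

def select_target_enhancement(candidates, reference_feats, dim_weights):
    """Sketch of claim 6: per-dimension similarity between each
    candidate's feature data and the infrared image's feature data,
    weighted and summed into a target similarity; the best-scoring
    candidate becomes the target enhancement data."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    best_name, best_score = None, -np.inf
    for name, feats in candidates.items():
        score = sum(dim_weights[d] * cos(feats[d], reference_feats[d])
                    for d in reference_feats)        # weighted fusion of similarities
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score
```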
7. The method according to claim 1, wherein the fusing the target enhancement data with the infrared image data to obtain training sample data, and training a neural network model according to the training sample data to obtain a device detection model, includes:
carrying out weighted fusion on the target enhancement data and the infrared image data to obtain training sample data;
and inputting the training sample data into a convolutional neural network and training it to obtain the device detection model.
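The weighted-fusion half of claim 7 can be sketched as below; the `alpha` weight and the choice to keep the original images alongside the fused ones are assumptions. The subsequent convolutional-network training would be done in a deep learning framework and is omitted here:

```python
import numpy as np

def make_training_samples(enhancement, ir_images, alpha=0.3):
    """Sketch of claim 7's fusion step: weighted fusion of the target
    enhancement data with the original infrared images, then pooling
    originals and fused samples into one training set. `alpha` and the
    pooling choice are illustrative assumptions."""
    fused = alpha * enhancement + (1.0 - alpha) * ir_images
    return np.concatenate([ir_images, fused], axis=0)   # originals + fused samples
```

The resulting array would then be fed, with its labels, to a convolutional neural network to train the device detection model.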
8. A device detection model construction apparatus, characterized in that the apparatus comprises:
an acquisition module, used for acquiring device operation data of a target device, wherein the device operation data includes infrared image data;
the feature extraction module is used for obtaining feature data of each dimension of the infrared image data according to morphological features of the target object of the infrared image data;
the data enhancement module is used for sequentially determining the feature data of each dimension as target feature data and determining a corresponding weight matrix according to the target feature data; respectively fusing the feature data of each dimension with the corresponding weight matrix to generate each sample data, and respectively training a data enhancement model according to each sample data to obtain each data enhancement model and the corresponding enhancement data generated in the training process; and determining target enhancement data according to the similarity between the enhancement data corresponding to each data enhancement model and the infrared image data;
the model training module is used for fusing the target enhancement data with the infrared image data to obtain training sample data, and training a neural network model according to the training sample data to obtain a device detection model, the device detection model being used for determining the running state of the target device according to the device operation data of the target device.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
10. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211720363.3A CN116245809A (en) | 2022-12-30 | 2022-12-30 | Equipment detection model construction method, device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116245809A (en) | 2023-06-09 |
Family
ID=86628797
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211720363.3A Pending CN116245809A (en) | 2022-12-30 | 2022-12-30 | Equipment detection model construction method, device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116245809A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||