CN111024147A - Component mounting detection method and device based on CNNs, electronic equipment and storage medium - Google Patents

Info

Publication number: CN111024147A (legal status: pending)
Application number: CN201911370517.9A
Original language: Chinese (zh)
Inventors: 吴文烨, 陈烨, 张文祥, 吴勇, 林封世, 吴勇明, 郭宏记
Applicant and current assignee: Dianeng Technology Hangzhou Co Ltd

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01D: MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D 21/00: Measuring or testing not otherwise provided for
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods

Abstract

The invention provides a component mounting detection method based on CNNs, comprising the following steps: (a) acquiring sampling data at a preset frequency, the sampling data corresponding to various installation states of an element to be tested; (b) segmenting the sampling data according to a preset rule to filter out noise signals; (c) manually classifying the segmented sampling data to obtain data to be imported; (d) importing the data to be imported into a convolutional neural network architecture for model training, forming a corresponding model file; (e) downloading the trained model file to a detection engine; and (f) acquiring sampling data again, the detection engine returning a corresponding detection result according to the newly acquired sampling data and the model file. The installation detection method achieves high-performance, fast recognition and timely response. The invention also provides a component mounting detection device based on CNNs, an electronic device, and a storage medium.

Description

Component mounting detection method and device based on CNNs, electronic equipment and storage medium
Technical Field
The present invention relates to the field of data detection technologies, and in particular to a component mounting detection method and apparatus based on convolutional neural networks (CNNs), an electronic device, and a storage medium.
Background
In the field of assembly and production, the installation of each component on a production line generally needs to be effectively monitored and diagnosed in order to reduce or eliminate loss and damage caused by components that are not installed in place. The current common detection mode is manual inspection. However, given the large number of inspections required, manual inspection is inefficient and cannot respond in time.
Disclosure of Invention
In view of the above, it is desirable to provide a CNNs-based component mounting detection method, apparatus, electronic device, and computer-readable storage medium capable of fast recognition and timely response.
The embodiment of the invention provides a component mounting detection method based on CNNs, which comprises the following steps:
(a) acquiring sampling data at a preset frequency, wherein the sampling data correspond to various installation states of an element to be tested;
(b) segmenting the sampling data according to a preset rule to filter out noise signals;
(c) manually classifying the segmented sampling data to obtain data to be imported;
(d) importing the data to be imported into a convolutional neural network architecture for model training, and forming a corresponding model file, wherein the convolutional neural network architecture comprises convolutional layers and full-connection layers, the convolutional layers are used for extracting features from the data to be imported, and the full-connection layers are used for performing multi-element classification on the extracted features;
(e) downloading the trained model file to a detection engine; and
(f) acquiring sampling data again, the detection engine returning a corresponding detection result according to the newly acquired sampling data and the model file.
Preferably, the sampling data at least includes a current magnitude and/or a current duration.
Preferably, the convolutional neural network architecture further includes a max-pooling layer for reducing the output dimension of the convolutional layer, and a flatten layer for converting the multidimensional feature array into a one-dimensional vector of the same size before the features are output to the fully-connected layer.
Preferably, before performing step (c), the method further comprises:
performing a preliminary classification of the segmented sampling data based on an empirical algorithm to obtain transition data.
Preferably, the method further comprises outputting a corresponding alarm or prompt signal according to the detection result.
Preferably, before performing the model training of step (d), the method further includes preprocessing the data to be imported, the preprocessing including at least:
normalizing the length of the data to be imported to a preset length by interpolation scaling and padding; and
adjusting the number of positive samples and the number of negative samples in the data to be imported so as to bias the training toward the negative samples.
Preferably, adjusting the number of positive samples and the number of negative samples in the data to be imported includes:
generating neighboring samples from the existing negative samples by oversampling, so as to increase the number of negative samples; or
adjusting the weights of the positive and negative sample data according to their distribution in the data to be imported, so as to bias the training toward the negative sample data.
Preferably, before performing step (e), the method further comprises optimizing the training model, the optimization comprising:
monitoring the training process using checkpoints and early stopping, and automatically selecting the best model obtained during training;
pruning and quantizing the default (FP32) model to FP16 or INT8; and/or
using the sampling data re-acquired in step (f) as training samples and importing them into the convolutional neural network architecture again to optimize the model file.
An embodiment of the present invention also provides a component mounting detection apparatus based on CNNs, the apparatus including:
the acquisition module is used for acquiring sampling data at a preset frequency, wherein the sampling data correspond to various installation states of the element to be detected;
the segmentation module is used for segmenting the sampling data according to a preset rule so as to filter out noise signals;
the first classification module is used for manually classifying the segmented sampling data to obtain data to be imported;
the training module is used for importing the data to be imported into a convolutional neural network architecture for model training and forming a corresponding model file, wherein the convolutional neural network architecture comprises a convolutional layer and a full-connection layer, the convolutional layer is used for extracting features from the data to be imported, and the full-connection layer is used for performing multi-element classification on the extracted features;
a downloading module for downloading the trained model file to a detection engine, the detection engine returning a corresponding detection result according to the re-acquired sampling data and the model file when the acquisition module acquires sampling data again.
As a preferred scheme, the device further comprises a prompt module, which is used for outputting a corresponding alarm signal or prompt signal according to the detection result.
As a preferable scheme, the apparatus further includes a second classification module, configured to perform primary classification on the segmented sample data based on an empirical algorithm to obtain transition data, and transmit the transition data to the first classification module.
Preferably, the apparatus further comprises a preprocessing module configured to:
normalize the length of the data to be imported to a preset length by interpolation scaling and padding; and
adjust the number of positive samples and the number of negative samples in the data to be imported so as to bias the training toward the negative samples.
Preferably, adjusting the number of positive samples and the number of negative samples in the data to be imported includes:
generating neighboring samples from the existing negative samples by oversampling, so as to increase the number of negative samples; or
adjusting the weights of the positive and negative sample data according to their distribution in the data to be imported, so as to bias the training toward the negative sample data.
Preferably, the apparatus further comprises an optimization module configured to:
monitor the training process using checkpoints and early stopping, and automatically select the best model obtained during training;
prune and quantize the default (FP32) model to FP16 or INT8; and/or
use the re-acquired sampling data as training samples, importing them into the convolutional neural network architecture again to optimize the model file.
An embodiment of the present invention also provides an electronic device, including:
a processor; and
a memory in which a convolutional neural network architecture is stored, the memory further storing computer program instructions which, when executed by the processor, perform the convolutional-neural-network-based component mounting detection method described above.
Preferably, the electronic device is any one of a smart phone, a tablet computer, a laptop computer, and a desktop computer, or a server.
Embodiments of the present invention also provide a computer-readable storage medium having a computer program stored thereon, where the computer program is executed by a processor to implement the convolutional neural network-based component mounting detection method described above.
The element installation detection method, the device, the electronic equipment and the storage medium can analyze and identify the sampling data through the convolutional neural network, further detect the installation condition of the element to be detected, and achieve the purposes of high-performance quick identification and timely response.
Drawings
Fig. 1 is a flowchart of a component mounting inspection method based on CNNs in a preferred embodiment of the present invention.
Fig. 2 is a flowchart illustrating sub-steps of step S5 shown in fig. 1.
Fig. 3 is a flowchart illustrating sub-steps of step S7 shown in fig. 1.
Fig. 4 is a schematic view of an application scenario of the component mounting detection method based on CNNs shown in fig. 1.
Fig. 5 is a functional block diagram of a component mounting inspection apparatus based on CNNs in a preferred embodiment of the present invention.
Fig. 6 is a functional block diagram of an electronic device in a preferred embodiment of the present invention.
Description of the main elements
Element installation detection device based on convolutional neural network 100
Acquisition module 101
Cutting module 102
First classification module 103
Second classification module 104
Training module 105
Download module 106
Prompt module 107
Pre-processing module 108
Optimization module 109
Electronic device 200
Memory device 201
Processor 202
Computer program 203
The following detailed description will further illustrate the invention in conjunction with the above-described figures.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a component mounting detection method based on convolutional neural networks (CNNs) according to a preferred embodiment of the present invention. The method is used to detect the mounting of electronic components. It will be understood that, depending on requirements, the order of the steps in the flowchart may be changed and certain steps may be omitted. In this embodiment, the method is described using an electronic component as an example.
Step S1, obtaining sampling data at a preset frequency, where the sampling data corresponds to various mounting states of the device to be tested.
It will be appreciated that, in this embodiment, the screws may be installed by an electric screwdriver. Accordingly, in step S1, the sampling data of the screw may be obtained from the electric screwdriver at the preset frequency.
It is to be understood that, in the present embodiment, the sampling data is current data. Of course, in other embodiments, the sampling data is not limited to current data, but may also be voltage data or other types of data.
It is understood that, in the present embodiment, the sampling data correspond to the various installation states of the screws. For example, the sampling data may respectively represent normal (OK) screw driving, idling of the electric screwdriver, screw tilting, screw roughening, and the like.
It is understood that, in the present embodiment, when step S1 is executed, the sampling data may be collected at the preset frequency during operation of the electric screwdriver. That is, the preset frequency may be matched to the work cycle of the electric screwdriver. Of course, in other embodiments, the preset frequency may be set according to the specific situation.
Step S2, the sampled data is segmented according to a preset rule to filter out noise signals.
It is understood that, in this embodiment, the preset rule may be set according to the current magnitude and/or current duration in the sampling data. For example, the rule may be: start collecting when the current values of a preset number of consecutive samples (e.g., three samples) are all greater than a preset value (e.g., 50 milliamperes (mA)), and stop collecting when the current values of a preset number of consecutive samples (e.g., three samples) are all less than that preset value, thereby completing the slicing of the sampling data and effectively filtering out noise signals. Of course, in other embodiments, the preset rule may be set according to the specific situation.
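The slicing rule above can be sketched as follows. The threshold (50 mA) and the window of three consecutive samples follow the example in the text; the function and variable names are illustrative assumptions, not part of the patent.

```python
def segment_samples(currents_ma, threshold_ma=50.0, window=3):
    """Split a raw current trace (mA) into segments, dropping noise.

    A segment opens once `window` consecutive samples exceed the threshold
    and closes once `window` consecutive samples fall below it.
    """
    segments = []
    active = None   # segment being collected, or None
    run = 0         # consecutive below-threshold samples inside a segment
    buffer = []     # above-threshold samples seen before a segment opens
    for value in currents_ma:
        if active is None:
            if value > threshold_ma:
                buffer.append(value)
                if len(buffer) >= window:
                    active, buffer = buffer, []   # start the segment
            else:
                buffer = []                       # isolated noise, discard
        else:
            active.append(value)
            if value < threshold_ma:
                run += 1
                if run >= window:
                    # Close the segment, trimming the trailing low run.
                    segments.append(active[:-window])
                    active, run = None, 0
            else:
                run = 0
    if active is not None:
        segments.append(active)                   # trace ended mid-segment
    return segments
```

With the example thresholds, a brief spike of one or two samples never opens a segment, which is how the noise filtering described above falls out of the rule.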
Step S4, manually classifying the segmented sampling data to obtain data to be imported.
It should be understood that in step S4, manual classification means judging the current waveform against the actual screw-driving situation, such as screw driven OK, screwdriver idling, screw tilting, screw roughening, and so on, and classifying the sampled data accordingly. For example, the sampled data may be divided into a normal state and several abnormal states (e.g., idle, error). It is understood that, in the present embodiment, after the sampling data are divided into normal and abnormal states according to the current waveforms, the data of each current waveform, such as the current magnitude and/or the current duration, can be extracted as the data to be imported. That is, the data to be imported include the current magnitude and/or current duration for the normal state as well as for the several abnormal states.
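The extraction of current magnitude and current duration from a classified waveform can be sketched as below. The function name, the choice of peak current as the "magnitude", and the sampling-interval parameter are illustrative assumptions.

```python
def extract_features(segment, dt_s=1.0):
    """Summarize one current-waveform segment as data to be imported.

    segment: list of current samples (mA) for one screw-driving event
    dt_s: assumed sampling interval in seconds
    """
    return {
        "peak_ma": max(segment),            # current magnitude
        "duration_s": len(segment) * dt_s,  # current duration
    }
```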
Step S6, importing the data to be imported into a convolutional neural network architecture for model training, and forming a corresponding model file. It is understood that convolutional neural networks (CNNs) are a class of feedforward neural networks that contain convolution computations and have a deep structure, and are among the representative algorithms of deep learning. Convolutional neural networks have a representation learning capability and can perform shift-invariant classification of input information according to their hierarchical structure; they are therefore also called shift-invariant artificial neural networks (SIANN).
In this embodiment, the convolutional neural network architecture includes a convolutional layer and a fully-connected layer, and the convolutional layer is used to extract features from the data to be imported. And the full connection layer is used for performing multi-element classification on the extracted features.
It is understood that, in the present embodiment, the convolutional neural network architecture further includes a max-pooling layer and a flatten layer. The max-pooling layer reduces the output dimension of the convolutional layer, and the flatten layer converts the multidimensional feature array into a one-dimensional vector of the same size before the features are output to the fully-connected layer. The one-dimensional vector may be implemented as an array.
Of course, in other embodiments, when both the positive sample data and the negative sample data are already one-dimensional vectors, the flatten layer may be omitted.
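The convolution, max-pooling, flatten, fully-connected pipeline can be illustrated with a minimal pure-Python forward pass over a single-channel 1-D current trace. All layer sizes, weights, and function names here are illustrative assumptions; a real implementation would use a deep-learning framework and learn the weights in training.

```python
def conv1d(x, kernel):
    """Valid 1-D convolution (correlation) over a single-channel trace."""
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k))
            for i in range(len(x) - k + 1)]

def max_pool1d(x, size=2):
    """Non-overlapping max pooling; reduces the output dimension."""
    return [max(x[i:i + size]) for i in range(0, len(x) - size + 1, size)]

def dense(x, weights, bias):
    """Fully-connected layer: one score per class (multi-class output)."""
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def forward(trace, kernel, weights, bias):
    features = conv1d(trace, kernel)   # convolutional layer: extract features
    pooled = max_pool1d(features)      # max-pooling layer: shrink the features
    flat = list(pooled)                # flatten: one channel is already 1-D
    return dense(flat, weights, bias)  # fully-connected layer: class scores
```

As the last line of `forward` shows, when the pooled features are already a one-dimensional list, the flatten step is a no-op, matching the remark above that the flatten layer may be omitted for one-dimensional inputs.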
Step S8, downloading the trained model file to a detection engine.
Step S9, repeating step S1 to acquire sampling data again; the detection engine returns a corresponding detection result according to the newly acquired sampling data and the model file.
It is understood that in other embodiments, the method further includes a preliminary classification step, i.e., step S3, before performing step S4.
Step S3, performing a preliminary classification of the segmented sampling data based on an empirical algorithm to obtain transition data.
It will be appreciated that the empirical algorithm varies from scenario to scenario. In this embodiment, i.e., the electric-screwdriver application scenario, the empirical algorithm may preliminarily classify the segmented sampling data according to the current values of the electric screwdriver. For example, when a peak above a preset threshold occurs twice in the current trace, it can be preliminarily classified as "OK"; when such a peak occurs only once, it can be preliminarily classified as "idle"; other current conditions are preliminarily classified as "NG".
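The peak-counting rule just described can be sketched as follows. The definition of a "peak" as a rising crossing of the threshold is an illustrative assumption, as are the function and variable names.

```python
def classify_by_peaks(segment, threshold):
    """Empirical pre-classification: 2 peaks -> OK, 1 -> idle, else NG."""
    peaks = 0
    above = False
    for value in segment:
        if value > threshold and not above:
            peaks += 1          # a rising crossing of the threshold = one peak
            above = True
        elif value <= threshold:
            above = False
    if peaks == 2:
        return "OK"
    if peaks == 1:
        return "idle"
    return "NG"
```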
Obviously, in this embodiment, the empirical algorithm provides a preliminary classification of the data, which helps reduce the workload of the secondary manual classification in step S4. Moreover, performing the secondary manual classification on the results of the preliminary classification effectively corrects errors made in the preliminary classification.
It is understood that, in other embodiments, before performing step S6, the method further includes a step of preprocessing the sample data to be imported, i.e., step S5.
Step S5, preprocessing the data to be imported.
It is understood that, referring to fig. 2, step S5 specifically includes the following sub-steps:
Sub-step S51, normalizing the length of the data to be imported to a preset length by interpolation scaling and padding.
It can be understood that the collected data differ in length; therefore, before model training, the data must be normalized to a suitable preset length, which effectively improves the efficiency of model training.
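Sub-step S51 can be sketched as below: traces shorter than the preset length are padded at the tail, and longer traces are rescaled by linear interpolation. The preset length, the pad value, and all names are illustrative assumptions.

```python
def normalize_length(trace, target_len, pad_value=0.0):
    """Bring a variable-length trace to target_len by padding or interpolation."""
    n = len(trace)
    if n == 0:
        return [pad_value] * target_len
    if n < target_len:
        return list(trace) + [pad_value] * (target_len - n)  # pad the tail
    if n == target_len:
        return list(trace)
    # Linear interpolation onto target_len evenly spaced positions.
    out = []
    for i in range(target_len):
        pos = i * (n - 1) / (target_len - 1)
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        frac = pos - lo
        out.append(trace[lo] * (1 - frac) + trace[hi] * frac)
    return out
```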
Sub-step S52, adjusting the number of positive samples and the number of negative samples in the data to be imported so as to bias the training toward the negative samples.
It is to be understood that, in this embodiment, a positive sample corresponds to the screw-driven-OK case, while a negative sample corresponds to abnormal cases such as screwdriver idling, screw tilting, and screw roughening.
It will be appreciated that, in the collected data, the number of positive samples is generally much greater than the number of negative samples. If the data to be imported were trained on directly, the negative samples would be undertrained and the recognition rate would be low; therefore, the numbers of positive and negative samples in the data to be imported need to be adjusted to compensate for the data imbalance.
It is understood that, in the present embodiment, the number of positive samples and the number of negative samples in the data to be imported may be adjusted in either of the following ways.
Method 1: using oversampling, generate neighboring samples from the existing negative samples to increase the number of negative samples.
Method 2: adjust the weights of the positive and negative sample data according to their distribution in the data to be imported, so as to bias the training toward the negative sample data.
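The two methods above can be sketched as follows. Method 1 synthesizes new negative samples by interpolating between an existing negative sample and another one, a simplified SMOTE-style step; Method 2 derives per-class weights inversely proportional to class frequency. The specific weight formula and all names are illustrative assumptions.

```python
import random

def oversample_negatives(negatives, n_new, rng=random):
    """Method 1: create n_new synthetic samples near existing negatives."""
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(negatives, 2)     # pick two existing negatives
        t = rng.random()                    # interpolation factor in [0, 1)
        synthetic.append([ai + t * (bi - ai) for ai, bi in zip(a, b)])
    return synthetic

def class_weights(labels):
    """Method 2: weight each class by total / (n_classes * count)."""
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    total, k = len(labels), len(counts)
    return {y: total / (k * c) for y, c in counts.items()}
```

With eight positive and two negative labels, `class_weights` gives the rare negative class a weight four times that of the positive class, which is the sense in which training is "biased toward the negative samples".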
It is understood that in other embodiments, the method further comprises a step of optimizing the model file, i.e. step S7, before performing step S8, i.e. before downloading the trained model file to the detection engine.
Step S7, optimizing the training model.
It is understood that, in this embodiment, referring to fig. 3, step S7 specifically includes the following sub-steps:
Sub-step S71, monitoring the training process using checkpoints and early stopping, and automatically selecting the best model obtained during training.
It is understood that, in this embodiment, the model training performed in step S6 is essentially a standard neural network training procedure, using a cross-entropy loss function and the RMSProp optimizer for gradient descent. The training data (e.g., the collected sample data) are split proportionally into a training set and a validation set. During training, each step on the training set yields a temporary model, i.e., a checkpoint. Evaluating this temporary model on the validation set yields the checkpoint's loss value. The checkpoint is saved whenever the current step achieves a smaller loss value than the previous best. When no improvement is obtained for N consecutive steps, the most recent saved checkpoint is adopted as the final model. This whole procedure is called early stopping.
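The checkpoint-and-early-stopping loop described above can be sketched as follows. Here `train_step` and `val_loss` stand in for the real training and validation code, and all names are illustrative assumptions.

```python
def train_with_early_stopping(train_step, val_loss, max_steps, patience):
    """Keep the best validation checkpoint; stop after `patience` bad steps.

    train_step(step) -> temporary model (a checkpoint) after one step
    val_loss(model)  -> loss of that checkpoint on the validation set
    """
    best_loss = float("inf")
    best_checkpoint = None
    bad_steps = 0
    for step in range(max_steps):
        model = train_step(step)        # one training step -> temporary model
        loss = val_loss(model)          # evaluate on the validation set
        if loss < best_loss:
            best_loss, best_checkpoint = loss, model   # save the checkpoint
            bad_steps = 0
        else:
            bad_steps += 1
            if bad_steps >= patience:   # no improvement for N steps: stop,
                break                   # keeping the last saved checkpoint
    return best_checkpoint, best_loss
```

In a real framework this logic corresponds to standard checkpoint and early-stopping callbacks; the sketch only shows the selection rule the text describes.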
Sub-step S72, pruning and quantizing the default (FP32) model to FP16 or INT8.
It is understood that in sub-step S72 the default model may be pruned and quantized to FP16 or INT8 with tools such as OpenVINO or TensorRT. Compared with the default FP32 model, this yields better performance on low-compute devices such as the Raspberry Pi.
Sub-step S73, using the sampling data re-acquired in step S9 as training samples and importing them into the convolutional neural network architecture again to optimize the model file.
It can be understood that, in sub-step S73, importing the newly acquired sampling data as training samples increases the number of positive or negative samples in the data to be imported, that is, it effectively enlarges the training set, thereby optimizing the model file or producing a better-performing one.
It is understood that in other embodiments, the method further comprises step S10.
Step S10, outputting a corresponding alarm or prompt signal according to the detection result.
That is, different measures may be taken according to the detection result. For example, in this embodiment, the method can detect whether an operator's screw installation is abnormal and raise a warning on the production line, reducing quality problems caused by abnormal screw installation. The method can also monitor the operating condition of the production line's electric screwdrivers and issue early warnings for equipment abnormalities. Of course, the method may also continuously collect each operator's operation records to assess individual skill.
It is understood that, in other embodiments, because different screwdriver models, and different torque settings of the same model, produce different current-sequence characteristics, the above steps S1-S7 may be repeated as required to generate different model files.
Referring to fig. 4, the method for detecting the installation of the component will be further described by taking the detection of the installation of the screw as an example.
First, when a screw is installed with an electric screwdriver, the electric screwdriver collects the screw's sampling data. The sampling data of the screw are then acquired from the electric screwdriver at a preset frequency; they correspond to the various installation states of the screw, such as "OK", "idle", and "error". The sampling data are segmented according to a preset rule to filter out noise signals. The segmented sampling data undergo a preliminary classification followed by a secondary manual classification to obtain the corresponding data to be imported. In the preliminary classification, the collected sampling data and their waveforms are divided, based on an empirical algorithm, into a normal state and several abnormal states such as idle and error. Because the accuracy of the preliminary classification is low, a secondary manual classification is performed; generally, an experienced user can identify and classify the type from the waveform plots generated by the acquisition procedure. Next, the data to be imported are fed into the convolutional neural network architecture for model training, forming a corresponding model file, and the trained model file is downloaded to a detection engine. The screw's sampling data are then acquired again, the detection engine returns a corresponding detection result according to the newly acquired sampling data and the model file, and a corresponding alarm or prompt signal is output to other systems, such as a server.
Under normal circumstances, since there are far more normal data than abnormal data, the raw data usually require automatic weighting or other processing, such as training after data normalization.
It is understood that referring to fig. 5, another embodiment of the present invention further provides a device 100 for detecting component mounting based on Convolutional Neural Networks (CNNs). The apparatus 100 includes an obtaining module 101, a cutting module 102, a first classification module 103, a training module 105, and a downloading module 106.
The obtaining module 101 is configured to obtain sampling data at a preset frequency, where the sampling data corresponds to various mounting states of the device to be tested.
It will be appreciated that, in this embodiment, the screws may be installed by an electric screwdriver. Therefore, the acquiring module 101 may acquire the sampling data of the screw from the electric screwdriver at the preset frequency.
It is to be understood that, in the present embodiment, the sampling data is current data. Of course, in other embodiments, the sampling data is not limited to current data, but may also be voltage data or other types of data.
It is understood that, in the present embodiment, the sampling data correspond to the various installation states of the screws. For example, the sampling data may respectively represent normal (OK) screw driving, idling of the electric screwdriver, screw tilting, screw roughening, and the like.
It can be understood that, in the present embodiment, the sampling data may be collected at the preset frequency during the operation of the electric screwdriver. That is, the preset frequency may coincide with a workflow cycle of the electric screwdriver. Of course, in other embodiments, the preset frequency may also be set according to specific situations.
The slicing module 102 is configured to slice the sampling data according to a preset rule to filter out a noise signal.
It is understood that, in this embodiment, the preset rule may be set according to the current magnitude and/or current duration in the sampling data. For example, the rule may be: start collecting when the current values of a preset number of consecutive samples (e.g., three samples) are all greater than a preset value (e.g., 50 milliamperes (mA)), and stop collecting when the current values of a preset number of consecutive samples (e.g., three samples) are all less than that preset value, thereby completing the slicing of the sampling data and effectively filtering out noise signals. Of course, in other embodiments, the preset rule may be set according to the specific situation.
The first classification module 103 is configured to manually classify the segmented sampling data to obtain the data to be imported.
It is understood that manual classification means that an operator judges each current waveform according to the actual screw-driving condition, such as screw driven OK, idling of the screwdriver, screw tilting, screw thread stripping, and the like, and labels the sampled data accordingly. For example, the sampled data may be divided into a normal state and several abnormal states (e.g., idle, error, etc.).
It is understood that, in the present embodiment, after the sampling data are divided into normal and abnormal states according to the current waveforms, the data of each current waveform, such as the current magnitude and/or the current duration, can be extracted as the data to be imported. That is, the data to be imported include the current magnitude and/or current duration for the normal state as well as for the several abnormal states.
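The extraction of the current magnitude and duration from one sliced waveform can be sketched as below; the function name and the sampling-period parameter are assumptions, not names from the disclosure.

```python
def extract_features(waveform_ma, sample_period_s):
    """Return (peak current in mA, duration in seconds) of one sliced
    current waveform, given the sampling period in seconds."""
    peak_ma = max(waveform_ma)                     # current magnitude
    duration_s = len(waveform_ma) * sample_period_s  # current duration
    return peak_ma, duration_s
```

For a 100 Hz sampling frequency the sampling period would be 0.01 s, so a 3-sample slice lasts about 0.03 s.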
The training module 105 is configured to import the data to be imported into a convolutional neural network architecture for model training, and form a corresponding model file.
It is understood that Convolutional Neural Networks (CNNs) are a class of feedforward neural networks that contain convolution calculations and have a deep structure, and are one of the representative algorithms of deep learning. Convolutional neural networks have a representation learning capability and can perform shift-invariant classification of input information according to their hierarchical structure; they are therefore also called Shift-Invariant Artificial Neural Networks (SIANN).
In this embodiment, the convolutional neural network architecture includes a convolutional layer and a fully-connected layer. The convolutional layer is used to extract features from the data to be imported, and the fully-connected layer is used to perform multi-class classification on the extracted features.
It is understood that, in the present embodiment, the convolutional neural network architecture further includes a max pooling layer and a flatten layer. The max pooling layer is used to reduce the output dimension of the convolutional layer, and the flatten layer is used to convert the multi-dimensional array into a one-dimensional vector of the same number of elements before the features are output to the fully-connected layer. The one-dimensional vector may be an array.
Of course, in other embodiments, when the positive-sample data and the negative-sample data are already one-dimensional vectors, the flatten layer may be omitted.
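As an illustration only, the layer sequence described above (1-D convolution, max pooling, flattening, and a fully-connected layer with softmax for multi-class classification) can be sketched as a NumPy forward pass. All layer sizes, kernel counts, weight values, and the four class labels are assumptions, not values from the disclosure.

```python
import numpy as np

def conv1d(x, kernels):
    """1-D convolution with ReLU. x: (length,), kernels: (n_k, k_len)."""
    k_len = kernels.shape[1]
    windows = np.stack([x[i:i + k_len] for i in range(len(x) - k_len + 1)])
    return np.maximum(windows @ kernels.T, 0.0)      # (steps, n_k)

def max_pool(feat, size=2):
    """Max pooling over non-overlapping windows along the time axis."""
    steps = feat.shape[0] // size
    return feat[:steps * size].reshape(steps, size, -1).max(axis=1)

def forward(x, kernels, w_fc, b_fc):
    flat = max_pool(conv1d(x, kernels)).ravel()      # flatten layer
    logits = flat @ w_fc + b_fc                      # fully-connected layer
    e = np.exp(logits - logits.max())
    return e / e.sum()                               # softmax probabilities

rng = np.random.default_rng(0)
x = rng.random(16)                    # one sliced current waveform, 16 samples
kernels = rng.standard_normal((4, 3)) # 4 kernels of width 3
# conv output: (16-3+1)=14 steps x 4 kernels; pooled: 7 x 4 -> 28 flat features
w_fc = rng.standard_normal((28, 4))   # 4 assumed classes: OK/idle/tilt/stripped
b_fc = np.zeros(4)
probs = forward(x, kernels, w_fc, b_fc)
```

Because the waveform input is one-dimensional, the flatten step here merely concatenates the pooled feature maps, matching the remark that the flatten layer can be omitted when the data are already one-dimensional vectors.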
The downloading module 106 is used for downloading the trained model file to a detection engine. Thus, when the sample data is acquired again, the detection engine can return a corresponding detection result according to the acquired sample data and the model file.
It is understood that in other embodiments, the apparatus 100 further comprises a second classification module 104, a hinting module 107, a pre-processing module 108, and an optimization module 109.
The second classification module 104 is configured to perform a preliminary classification of the segmented sampling data based on an empirical algorithm to obtain transition data.
It will be appreciated that the empirical algorithm varies from scenario to scenario. In this embodiment, namely the application scenario of the electric screwdriver, the empirical algorithm may preliminarily classify the segmented sampling data according to the current values of the electric screwdriver. For example, when a peak above a certain preset threshold occurs twice in the current waveform, the sample can be preliminarily classified as "OK"; when such a peak occurs only once, it can be preliminarily classified as "idle"; all other current conditions are preliminarily classified as "NG".
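The peak-counting rule above can be sketched as follows. The function name, the threshold value, and the treatment of a "peak" as a rising crossing of the threshold are assumed details not specified in the text.

```python
def classify_by_peaks(waveform_ma, threshold=200.0):
    """Preliminary (empirical) classification of one current waveform:
    count excursions above the threshold and map 2 -> OK, 1 -> idle,
    anything else -> NG."""
    peaks = 0
    above = False
    for s in waveform_ma:
        if s > threshold and not above:
            peaks += 1          # a new excursion above the threshold begins
        above = s > threshold
    if peaks == 2:
        return "OK"
    if peaks == 1:
        return "idle"
    return "NG"
```

Such a hand-written rule is cheap to run, which is why it is useful as a first pass before the manual (secondary) classification.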
Obviously, in this embodiment, the empirical algorithm performs a preliminary classification of the data, which helps reduce the workload of the secondary manual classification in the first classification module 103. Meanwhile, since the secondary manual classification is carried out on the result of the preliminary classification, errors of the preliminary classification can be effectively corrected.
The prompt module 107 is configured to output a corresponding alarm signal or a prompt signal according to the detection result.
That is, different measures may be taken according to the detection result. For example, in this embodiment, the device may be used to detect whether an operator installs screws abnormally on the production line and to give a warning, so as to reduce quality problems caused by abnormal screw installation. The device may also be used to monitor the running condition of the electric screwdrivers on the production line and to give an early warning for abnormal equipment conditions. Of course, the device may also continuously collect the operation records of each operator to assess each person's skill.
The preprocessing module 108 is configured to preprocess the data to be imported. In this embodiment, the data to be imported may be preprocessed in the following ways.
Mode 1: normalizing the length of the data to be imported to a preset length by interpolation scaling and padding.
It can be understood that the collected data differ in length; therefore, before model training is performed, the data need to be normalized to a suitable preset length, which effectively improves the efficiency of model training.
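One possible sketch of this length normalization, assuming linear interpolation for the resampling and zero-padding for very short slices (neither detail is specified in the text):

```python
import numpy as np

def normalize_length(waveform, target_len=128, min_len=4):
    """Bring one sliced waveform to a preset length: interpolation
    scaling for normal slices, zero-padding for very short ones."""
    w = np.asarray(waveform, dtype=float)
    if len(w) < min_len:                        # too short to stretch: pad
        return np.pad(w, (0, target_len - len(w)))
    old = np.linspace(0.0, 1.0, num=len(w))     # original sample positions
    new = np.linspace(0.0, 1.0, num=target_len) # target sample positions
    return np.interp(new, old, w)               # interpolation scaling
```

With a fixed `target_len`, every slice maps to the same input shape, which is what the convolutional architecture above requires.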
Mode 2: adjusting the number of positive samples and the number of negative samples in the data to be imported, so as to bias training toward the negative samples.
It is to be understood that, in the present embodiment, a positive sample can be understood to correspond to a screw being driven in correctly (OK), and a negative sample to abnormal conditions such as idling of the screwdriver, screw tilting, screw thread stripping, and the like.
It will be appreciated that, in this embodiment, the number of positive samples in the collected data is generally much greater than the number of negative samples. Therefore, if the data to be imported were trained on directly, the training of the negative samples might be insufficient and the recognition rate low, so the number of positive samples and the number of negative samples in the data to be imported need to be adjusted to compensate for the imbalance of the data.
It is understood that, in the present embodiment, the number of positive samples and the number of negative samples in the data to be imported may be adjusted in the following manner.
Mode 1: generating adjacent samples based on the existing negative samples by oversampling, so as to increase the number of negative samples.
Mode 2: adjusting the weights of the positive-sample data and the negative-sample data according to the distribution of positive and negative samples in the data to be imported, so as to bias the training toward the negative-sample data.
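The two adjustment modes can be sketched as below. The neighbour-averaging oversampling (a SMOTE-like simplification) and the inverse-frequency class weights are assumed details; the disclosure does not specify how the adjacent samples or the weights are computed.

```python
import random

def oversample_negatives(negatives, target_count, seed=0):
    """Mode 1 sketch: synthesize extra negative samples as the average
    of two randomly chosen existing negatives ("adjacent samples")."""
    rng = random.Random(seed)
    out = list(negatives)
    while len(out) < target_count:
        a, b = rng.sample(negatives, 2)
        out.append([(x + y) / 2 for x, y in zip(a, b)])
    return out

def class_weights(n_pos, n_neg):
    """Mode 2 sketch: weights inversely proportional to class frequency,
    so the rarer negative class carries more weight in the loss."""
    total = n_pos + n_neg
    return {"pos": total / (2 * n_pos), "neg": total / (2 * n_neg)}
```

With 90 positives and 10 negatives, for example, the negative class receives a weight of 5.0 versus roughly 0.56 for the positive class, biasing training toward the negatives as intended.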
The optimization module 109 is configured to optimize the training model. Specifically, the optimization module 109 can optimize the training model in the following manner.
Mode 1: monitoring the training process by means of checkpoints and early stopping, and automatically selecting the best model obtained during training.
It is understood that, in the present embodiment, the model training performed by the training module 105 is essentially a standard neural network training process, using a cross-entropy loss function and the RMSProp optimization algorithm for gradient descent. The training data (e.g., the collected sampling data) are split proportionally into a training set and a validation set. During training, the training set is used, and each step yields a temporary model, namely a checkpoint. By evaluating this temporary model on the validation set, the loss value of the checkpoint is obtained. The checkpoint is saved when the current step achieves a smaller loss value than the previous best. When no improvement is obtained for N consecutive steps, the most recently saved checkpoint is adopted as the final model. This whole procedure is called early stopping.
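The checkpoint and early-stopping procedure can be sketched framework-agnostically as follows; `train_step` and `val_loss` are stand-ins for the actual RMSProp training step and the validation-set evaluation, and the names are assumptions.

```python
def train_with_early_stopping(train_step, val_loss, max_steps=100, patience=5):
    """Keep the checkpoint with the lowest validation loss; stop after
    `patience` consecutive steps without improvement."""
    best_loss = float("inf")
    best_checkpoint = None
    steps_without_improvement = 0
    for step in range(max_steps):
        model = train_step(step)       # one training step -> temporary model
        loss = val_loss(model)         # evaluate the checkpoint
        if loss < best_loss:           # save the better checkpoint
            best_loss, best_checkpoint = loss, model
            steps_without_improvement = 0
        else:
            steps_without_improvement += 1
            if steps_without_improvement >= patience:
                break                  # early stopping: adopt last saved model
    return best_checkpoint, best_loss
```

The same behavior is offered out of the box by, for example, Keras's `ModelCheckpoint` and `EarlyStopping` callbacks.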
Mode 2: quantizing the default model (FP32) to FP16 or INT8.
It is understood that, in Mode 2, the default FP32 model can be quantized to FP16 or INT8 by tools such as OpenVINO or TensorRT. With this step, better performance can be achieved on low-compute devices such as the Raspberry Pi relative to the default FP32 model.
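As a toy NumPy illustration of the precision-reduction idea only (a real deployment would use the OpenVINO or TensorRT toolchains, whose APIs are not reproduced here): casting FP32 weights to FP16 halves their memory footprint at a small accuracy cost.

```python
import numpy as np

w32 = np.float32([0.123456789, -1.23456789, 3.14159265])  # FP32 weights
w16 = w32.astype(np.float16)                # "quantized" FP16 copy
size_ratio = w16.nbytes / w32.nbytes        # 0.5: half the storage
max_err = float(np.abs(w32 - w16.astype(np.float32)).max())  # small loss
```

INT8 quantization goes further by mapping weights to 8-bit integers with a scale factor, which the mentioned tools calibrate on sample data.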
Mode 3: importing newly acquired sampling data into the convolutional neural network architecture again as training samples, so as to optimize the model file.
In Mode 3, the newly acquired sampling data are used as training samples and imported into the convolutional neural network architecture again, which increases the number of positive or negative samples in the data to be imported; that is, the training samples are effectively augmented, so that a model file with a better effect is obtained.
It is understood that, referring to fig. 6, another embodiment of the invention further provides an electronic device 200. The electronic device 200 comprises a memory 201, a processor 202 and a computer program 203 stored in the memory 201 and executable on the processor 202.
The electronic device 200 may be any one of a smart phone, a tablet computer, a laptop computer, an embedded computer, a desktop computer, a server, and the like. Those skilled in the art will appreciate that the schematic diagram is merely an example of the electronic device 200 and does not constitute a limitation of it; the electronic device 200 may include more or fewer components than those shown, combine some components, or have different components.
The processor 202 is configured to execute the computer program 203 to implement the steps in the above-mentioned embodiments of the method for detecting component mounting based on Convolutional Neural Networks (CNNs), such as the steps S1-S10 shown in fig. 1. Alternatively, when the processor 202 executes the computer program 203, the functions of the modules/units in the above-mentioned Convolutional Neural Network (CNNs) -based component mounting detection apparatus 100 embodiment are implemented, for example, the obtaining module 101, the segmenting module 102, the first classifying module 103, the second classifying module 104, the training module 105, the downloading module 106, the prompting module 107, the preprocessing module 108 and the optimizing module 109 in fig. 5.
Illustratively, the computer program 203 may be partitioned into one or more modules/units that are stored in the memory 201 and executed by the processor 202 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments describing the execution process of the computer program 203 in the electronic device 200. For example, the computer program 203 may be divided into the acquisition module 101, the segmentation module 102, the first classification module 103, the second classification module 104, the training module 105, the download module 106, the prompt module 107, the pre-processing module 108, and the optimization module 109 in fig. 5.
The processor 202 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor 202 may be any conventional processor. The processor 202 is the control center of the electronic device 200 and connects the various parts of the entire electronic device 200 using various interfaces and lines.
The memory 201 may be used to store the computer program 203 and/or the modules/units. The processor 202 implements the various functions of the electronic device 200 by running or executing the computer program and/or the modules/units stored in the memory 201 and by invoking the data stored in the memory 201. The memory 201 may mainly include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like. The data storage area may store data created according to the use of the electronic device 200 (such as video data, audio data, a phonebook, etc.), and the like. Further, the memory 201 may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
The integrated modules/units of the electronic device 200, if implemented in the form of software functional modules and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium and which, when executed by a processor, can implement the steps of the above-described method embodiments. The computer program comprises computer program code, which may be in the form of source code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, and a software distribution medium. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
In the embodiments provided in the present invention, it should be understood that the disclosed electronic device and method can be implemented in other ways. For example, the above-described embodiments of the electronic device are merely illustrative, and for example, the division of the modules is only one logical functional division, and there may be other divisions when the actual implementation is performed.
In addition, each functional module in each embodiment of the present invention may be integrated into the same processing module, or each module may exist alone physically, or two or more modules may be integrated into the same module. The integrated module can be realized in a hardware form, and can also be realized in a form of hardware and a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is to be understood that the word "comprising" does not exclude other modules or steps, and the singular does not exclude the plural. Several modules or electronic devices recited in the electronic device claims may also be implemented by one and the same module or electronic device by means of software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (17)

1. A component mounting detection method based on Convolutional Neural Networks (CNNs), comprising:
(a) acquiring sampling data at a preset frequency, wherein the sampling data correspond to various installation states of an element to be tested;
(b) segmenting the sampling data according to a preset rule to filter out noise signals;
(c) manually classifying the segmented sampling data to obtain data to be imported;
(d) importing the data to be imported into a convolutional neural network architecture for model training, and forming a corresponding model file, wherein the convolutional neural network architecture comprises convolutional layers and full-connection layers, the convolutional layers are used for extracting features from the data to be imported, and the full-connection layers are used for performing multi-element classification on the extracted features;
(e) downloading the trained model file to a detection engine; and
(f) acquiring sampling data again, and returning, by the detection engine, a corresponding detection result according to the newly acquired sampling data and the model file.
2. The method of claim 1, wherein: the sampled data includes at least a current magnitude and/or a current duration.
3. The method of claim 1, wherein: the convolutional neural network architecture further comprises a max pooling layer and a flatten layer, wherein the max pooling layer is used for reducing the output dimension of the convolutional layer, and the flatten layer is used for converting the multi-dimensional array into a one-dimensional vector of the same number of elements before the features are output to the fully-connected layer.
4. The method of claim 1, wherein: before performing step (c), the method further comprises:
and carrying out primary classification on the segmented sampling data based on an empirical algorithm to obtain transition data.
5. The method of claim 1, wherein: the method also comprises the step of outputting a corresponding alarm signal or a corresponding prompt signal according to the detection result.
6. The method of claim 1, wherein: before performing the model training of step (d), the method further includes a step of preprocessing the sample data to be imported, the preprocessing step at least including:
regulating the length of the data to be imported to a preset length by an interpolation scaling and filling method; and
adjusting the number of positive samples and the number of negative samples in the data to be imported, so as to bias training toward the negative samples.
7. The method of claim 6, wherein: adjusting the number of positive samples and the number of negative samples in the data to be imported includes:
generating adjacent samples according to the existing negative samples by using an oversampling method so as to increase the number of the negative samples; or
adjusting the weights of the positive-sample data and the negative-sample data according to the distribution of positive and negative samples in the data to be imported, so as to bias the training toward the negative-sample data.
8. The method of claim 1, wherein: before performing step (e), the method further comprises the step of optimizing the training model, the optimizing step comprising:
monitoring the training process by means of checkpoints and early stopping, and automatically selecting the best model obtained during training;
quantizing the default model (FP32) to FP16 or INT8; and/or
taking the sampling data obtained again in step (f) as training samples, and importing them into the convolutional neural network architecture again to optimize the model file.
9. A Convolutional Neural Network (CNNs) based component mounting detection apparatus, the apparatus comprising:
the acquisition module is used for acquiring sampling data at a preset frequency, wherein the sampling data correspond to various installation states of the element to be detected;
the segmentation module is used for segmenting the sampling data according to a preset rule so as to filter out noise signals;
the first classification module is used for manually classifying the segmented sampling data to obtain data to be imported;
the training module is used for importing the data to be imported into a convolutional neural network architecture for model training and forming a corresponding model file, wherein the convolutional neural network architecture comprises a convolutional layer and a full-connection layer, the convolutional layer is used for extracting features from the data to be imported, and the full-connection layer is used for performing multi-element classification on the extracted features;
and the downloading module is used for downloading the trained model file to a detection engine, and the detection engine is used for returning a corresponding detection result according to the re-acquired sampling data and the model file when the acquisition module acquires the sampling data again.
10. The apparatus of claim 9, wherein: the device also comprises a prompt module used for outputting a corresponding alarm signal or a prompt signal according to the detection result.
11. The apparatus of claim 9, wherein: the device further comprises a second classification module, wherein the second classification module is used for carrying out primary classification on the segmented sampling data based on an empirical algorithm so as to obtain transition data, and the transition data are transmitted to the first classification module.
12. The apparatus of claim 9, wherein: the apparatus also includes a pre-processing module to:
regulating the length of the data to be imported to a preset length by an interpolation scaling and filling method; and
adjusting the number of positive samples and the number of negative samples in the data to be imported, so as to bias training toward the negative samples.
13. The apparatus of claim 12, wherein: the adjusting the number of positive samples and the number of negative samples in the data to be imported includes:
generating adjacent samples according to the existing negative samples by using an oversampling method so as to increase the number of the negative samples; or
adjusting the weights of the positive-sample data and the negative-sample data according to the distribution of positive and negative samples in the data to be imported, so as to bias the training toward the negative-sample data.
14. The apparatus of claim 9, wherein: the apparatus further comprises an optimization module to:
monitoring the training process by means of checkpoints and early stopping, and automatically selecting the best model obtained during training;
quantizing the default model (FP32) to FP16 or INT8; and/or
taking the newly obtained sampling data as training samples, and importing them into the convolutional neural network architecture again to optimize the model file.
15. An electronic device, characterized in that the electronic device comprises:
a processor; and
a memory having a convolutional neural network architecture stored therein, the memory further having computer program instructions stored therein that are executed by the processor and that perform the convolutional neural network-based component mounting detection method of any one of claims 1-8.
16. The electronic device of claim 15, wherein: the electronic device is any one of a smart phone, a tablet computer, a laptop computer, an embedded computer, a desktop computer, or a server.
17. A computer-readable storage medium having stored thereon a computer program, characterized in that: the computer program, when executed by a processor, implements the convolutional neural network-based component mounting detection method of any one of claims 1-8.
CN201911370517.9A 2019-12-26 2019-12-26 Component mounting detection method and device based on CNNs, electronic equipment and storage medium Pending CN111024147A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911370517.9A CN111024147A (en) 2019-12-26 2019-12-26 Component mounting detection method and device based on CNNs, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911370517.9A CN111024147A (en) 2019-12-26 2019-12-26 Component mounting detection method and device based on CNNs, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111024147A true CN111024147A (en) 2020-04-17

Family

ID=70214920

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911370517.9A Pending CN111024147A (en) 2019-12-26 2019-12-26 Component mounting detection method and device based on CNNs, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111024147A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112597821A (en) * 2020-12-11 2021-04-02 齐鲁工业大学 Mechanical arm action identification method, system, terminal and storage medium
CN115017945A (en) * 2022-05-24 2022-09-06 南京林业大学 Mechanical fault diagnosis method and system based on enhanced convolutional neural network
WO2023232403A1 (en) * 2022-05-30 2023-12-07 British Telecommunications Public Limited Company Automated equipment installation verification

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4361945A (en) * 1978-06-02 1982-12-07 Rockwell International Corporation Tension control of fasteners
US20030040871A1 (en) * 2001-08-24 2003-02-27 Siegel Robert P. Intelligent assembly systems and methods
CN101359228A (en) * 2008-09-09 2009-02-04 张东来 Electromagnetic control element failure diagnosis method and device based on variation of electric current at drive end
CN102335872A (en) * 2011-09-14 2012-02-01 桂林电子科技大学 Artificial neural network-based method and device for automatically trimming grinding wheel of grinding machine
CN106295601A (en) * 2016-08-18 2017-01-04 合肥工业大学 A kind of Safe belt detection method of improvement
CN106323633A (en) * 2016-10-28 2017-01-11 华中科技大学 Diagnosis method for feed shaft assembly faults based on instruction domain analysis
CN106612088A (en) * 2015-10-19 2017-05-03 发那科株式会社 Machine learning apparatus and method, correction value computation apparatus and motor driving apparatus
CN106874957A (en) * 2017-02-27 2017-06-20 苏州大学 A kind of Fault Diagnosis of Roller Bearings
CN106960214A (en) * 2017-02-17 2017-07-18 北京维弦科技有限责任公司 Object identification method based on image
CN107742130A (en) * 2017-10-25 2018-02-27 西南交通大学 High iron catenary based on deep learning supports device fastener failure diagnostic method
CN108466218A (en) * 2018-03-09 2018-08-31 黄山市星河机器人有限公司 Numerical control electric formula torque detects the detection method of spanner and bolt tightening torque value
CN108919228A (en) * 2018-09-25 2018-11-30 鲁东大学 one-dimensional radar data processing method and system
CN109084826A (en) * 2018-07-06 2018-12-25 同济大学 A kind of Intelligent Sensing System for prognostic and health management
CN109842331A (en) * 2017-11-29 2019-06-04 余姚伯傲精工工贸有限公司 Electric tool
JP2019087021A (en) * 2017-11-07 2019-06-06 株式会社豊田中央研究所 Convolutional neural network device and its manufacturing method
CN110175369A (en) * 2019-04-30 2019-08-27 南京邮电大学 A kind of gear method for predicting residual useful life based on two-dimensional convolution neural network
CN110516659A (en) * 2019-09-10 2019-11-29 哈工大机器人(山东)智能装备研究院 The recognition methods of ball-screw catagen phase, device, equipment and storage medium
WO2019239786A1 (en) * 2018-06-12 2019-12-19 オムロン株式会社 Detection method and detection device configuration method

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
刘增良: "Selected Papers on Fuzzy Technology and Neural Network Technology (5)", 31 January 2001, Beihang University Press *
卢旭锦 et al.: "An electric screwdriver device with automatic screw feeding", Journal of Southern Vocational Education *
惠飞菲 et al.: "A tunnel fire detection method based on lidar technology", China Transportation Informatization *
李邦协: "Practical Handbook of Electric Tools", 31 May 2001, China Machine Press *
赵晓平 et al.: "A multi-task deep-learning-based multi-fault diagnosis method for gearboxes", Journal of Vibration and Shock *
赵耀霞 et al.: "Complex components based on convolutional neural networks", Acta Electronica Sinica *
郑树泉 et al.: "Industrial Intelligence Technology and Applications", 31 December 2018, Shanghai Scientific and Technical Publishers *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112597821A (en) * 2020-12-11 2021-04-02 齐鲁工业大学 Mechanical arm action identification method, system, terminal and storage medium
CN115017945A (en) * 2022-05-24 2022-09-06 南京林业大学 Mechanical fault diagnosis method and system based on enhanced convolutional neural network
WO2023232403A1 (en) * 2022-05-30 2023-12-07 British Telecommunications Public Limited Company Automated equipment installation verification

Similar Documents

Publication Publication Date Title
CN111931868B (en) Time series data abnormality detection method and device
CN111024147A (en) Component mounting detection method and device based on CNNs, electronic equipment and storage medium
CN112287986B (en) Image processing method, device, equipment and readable storage medium
CN111444072B (en) Abnormality identification method and device for client, computer equipment and storage medium
CN112114986A (en) Data anomaly identification method and device, server and storage medium
US11579208B2 (en) Method, apparatus and device for evaluating the state of a distribution transformer, and a medium and a program
CN111626360B (en) Method, apparatus, device and storage medium for detecting boiler fault type
CN114254673A (en) Denoising countermeasure self-encoder-based spectrum anomaly detection method
CN114764774A (en) Defect detection method, device, electronic equipment and computer readable storage medium
CN110985425A (en) Information detection method, electronic equipment and computer readable storage medium
CN115879354A (en) Abnormality detection system, abnormality detection method, electronic device, and storage medium
CN112445687A (en) Blocking detection method of computing equipment and related device
CN113518058B (en) Abnormal login behavior detection method and device, storage medium and computer equipment
US20210089886A1 (en) Method for processing data based on neural networks trained by different methods and device applying method
CN109086207B (en) Page response fault analysis method, computer readable storage medium and terminal device
CN114764867A (en) Fan fault diagnosis system and method based on image main feature extraction and application
CN112968968B (en) Internet of things equipment flow fingerprint identification method and device based on unsupervised clustering
CN115996133B (en) Industrial control network behavior detection method and related device
CN116863957B (en) Method, device, equipment and storage medium for identifying operation state of industrial equipment
CN116150666B (en) Energy storage system fault detection method and device and intelligent terminal
CN112069359A (en) Method for dynamically filtering abnormal data from snapshot object comparison results
CN114329060A (en) Method and system for automatically generating multiple labels of video frame based on neural network model
CN117349620A (en) Application method and system for predicting safety state of nuclear power plant equipment
CN113190844A (en) Detection method, related method and related device
CN116757870A (en) Intelligent energy monitoring data processing method and system for energy Internet of things

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination