CN116109897A - Robot fish sensor fault detection method and device based on spatial-domain image fusion - Google Patents

Robot fish sensor fault detection method and device based on spatial-domain image fusion

Info

Publication number
CN116109897A
CN116109897A (application CN202310394870.0A)
Authority
CN
China
Prior art keywords
fusion
robot fish
fault
sensor
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310394870.0A
Other languages
Chinese (zh)
Other versions
CN116109897B (en)
Inventor
邓赛
范绪青
范俊峰
吴正兴
周超
谭民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science
Priority to CN202310394870.0A
Publication of CN116109897A
Application granted
Publication of CN116109897B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/803Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of input or preprocessed data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/80Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in fisheries management
    • Y02A40/81Aquaculture, e.g. of fish

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

The application provides a robot fish sensor fault detection method and device based on spatial-domain image fusion, relating to the field of artificial intelligence. The method comprises the following steps: generating a fused picture based on a first fusion coefficient and sensor data of the robot fish; and inputting the fused picture into a convolutional neural network model and obtaining the fault detection result output by the convolutional neural network model. The method and device convert the one-dimensional time-series signal of the robot fish sensor into a two-dimensional picture and determine the optimal picture fusion coefficient through repeated iterations of the gray wolf algorithm, thereby realizing high-accuracy fault detection for the robot fish sensor.

Description

Robot fish sensor fault detection method and device based on spatial-domain image fusion
Technical Field
The application relates to the field of artificial intelligence, and in particular to a robot fish sensor fault detection method and device based on spatial-domain image fusion.
Background
Compared with traditional underwater propellers, robot fish have the advantages of low noise, high maneuverability, and high efficiency. Robot fish usually work in unstructured, harsh environments such as rapid currents and narrow spaces, which increase the probability of faults; in particular, the sensors arranged on the surface of the fish body are prone to faults caused by collisions with the surroundings. In addition, the mechanical structure of the robot fish is complex and its control system is strongly nonlinear, so a fault in a subsystem can cause a chain reaction and have a destructive influence on the whole system.
Limited by the constraints of the robot fish's space, mass, and power consumption, the redundancy of the sensors configured inside the robot fish is low. In particular, the depth sensor is prone to failure in turbid water due to unexpected situations such as blockage by impurities or collisions of the robot fish, so fault detection using depth sensor data is of great practical significance. The depth sensor collects time-series signals; processing these signals directly can capture short-term correlations, but global correlations are difficult to extract, which leads to low fault detection accuracy.
Disclosure of Invention
The embodiments of the present application provide a robot fish sensor fault detection method and device based on spatial-domain image fusion, which are used to solve the technical problem of low fault detection accuracy of robot fish sensors in the related art.
In a first aspect, an embodiment of the present application provides a robot fish sensor fault detection method based on spatial-domain image fusion, including:
generating a fusion picture based on the first fusion coefficient and sensor data of the robot fish;
inputting the fusion picture into a convolutional neural network model, and obtaining a fault detection result output by the convolutional neural network model;
the first fusion coefficient is a fusion coefficient determined when the number of iterations of the gray wolf algorithm reaches a preset value or the fitness function value of the gray wolf algorithm is larger than a preset threshold value, and the convolutional neural network model is obtained after training based on training sample data and training sample data labels.
In some embodiments, the method further comprises:
acquiring an initial value of the gray wolf algorithm, and determining a second fusion coefficient by using the gray wolf algorithm; the second fusion coefficient is a fusion coefficient determined in the iterative process of the gray wolf algorithm;
training the convolutional neural network model based on the second fusion coefficient, training sample data and training sample data labels;
and stopping iteration under the condition that the iteration times reach a preset value or the fitness function value of the gray wolf algorithm is larger than a preset threshold value, and determining the first fusion coefficient and the model parameters of the convolutional neural network model.
In some embodiments, the method further comprises:
acquiring sample data and sample data labels; dividing the sample data into training sample data and test sample data according to a preset proportion, and dividing verification sample data from the training sample data according to a preset proportion;
in each iteration of the gray wolf algorithm, verifying the detection accuracy of the convolutional neural network model by using the verification sample data, and taking the detection accuracy as the fitness function value of the gray wolf algorithm.
In some embodiments, the generating a fused picture based on the first fusion coefficient and sensor data of the robotic fish includes:
converting the sensor data of the robot fish into a Gramian angular summation field picture and a Gramian angular difference field picture;
and fusing the Gramian angular summation field picture and the Gramian angular difference field picture into a fused picture based on the first fusion coefficient.
In some embodiments, the converting the sensor data of the robot fish into a Gramian angular summation field picture and a Gramian angular difference field picture comprises:
converting sensor data of the robot fish into data under a polar coordinate system;
and determining a Gramian angular summation field picture and a Gramian angular difference field picture based on the data in the polar coordinate system.
In some embodiments, the calculation formula for fusing the Gramian angular summation field picture and the Gramian angular difference field picture into a fused picture based on the first fusion coefficient is:

$\mathrm{GAFF} = \lambda \cdot \mathrm{GASF} + (1 - \lambda) \cdot \mathrm{GADF}$

where $\mathrm{GAFF}$ is the pixel value of the fused picture, $\lambda$ is the first fusion coefficient, $\mathrm{GASF}$ is the pixel value of the Gramian angular summation field picture, and $\mathrm{GADF}$ is the pixel value of the Gramian angular difference field picture.
In some embodiments, the method further comprises:
and carrying out data augmentation on the sensor data of the robot fish by utilizing a sliding window method.
In some embodiments, the method further comprises:
and carrying out normalization processing on the sensor data of the robot fish.
In some embodiments, the fault detection results include no fault, no output fault, drift fault, intermittent fault, output constant fault, and jump fault.
In a second aspect, an embodiment of the present application further provides a robot fish sensor fault detection device based on spatial-domain image fusion, including:
the first generation module is used for generating a fusion picture based on the first fusion coefficient and sensor data of the robot fish;
the first acquisition module is used for inputting the fusion picture into a convolutional neural network model and acquiring a fault detection result output by the convolutional neural network model;
the first fusion coefficient is a fusion coefficient determined when the number of iterations of the gray wolf algorithm reaches a preset value or the fitness function value of the gray wolf algorithm is larger than a preset threshold value, and the convolutional neural network model is obtained after training based on training sample data and training sample data labels.
In some embodiments, the apparatus further comprises:
the first determining module is used for acquiring an initial value of a gray wolf algorithm and determining a second fusion coefficient by using the gray wolf algorithm; the second fusion coefficient is a fusion coefficient determined in the iterative process of the gray wolf algorithm;
The first training module is used for training the convolutional neural network model based on the second fusion coefficient, training sample data and training sample data labels;
and the second determining module is used for stopping iteration under the condition that the iteration times reach a preset value or the fitness function value of the gray wolf algorithm is larger than a preset threshold value, and determining the first fusion coefficient and the model parameters of the convolutional neural network model.
In some embodiments, the apparatus further comprises:
the first dividing module is used for acquiring sample data and sample data labels, dividing the sample data into training sample data and test sample data according to a preset proportion, and dividing verification sample data from the training sample data according to a preset proportion;
the first detection module is used for verifying the detection accuracy of the convolutional neural network model by using the verification sample data in each iteration of the gray wolf algorithm, and taking the detection accuracy as the fitness function value of the gray wolf algorithm.
In some embodiments, the first generation module comprises a first conversion sub-module, a first fusion sub-module, wherein:
the first conversion sub-module is used for converting the sensor data of the robot fish into a Gramian angular summation field picture and a Gramian angular difference field picture;
the first fusion sub-module is used for fusing the Gramian angular summation field picture and the Gramian angular difference field picture into a fused picture based on the first fusion coefficient.
In some embodiments, the first acquisition module comprises a second conversion sub-module, a first determination sub-module, wherein:
the second conversion sub-module is used for converting sensor data of the robot fish into data under a polar coordinate system;
the first determination sub-module is used for determining a Gramian angular summation field picture and a Gramian angular difference field picture based on the data in the polar coordinate system.
In some embodiments, the calculation formula for fusing the Gramian angular summation field picture and the Gramian angular difference field picture into a fused picture based on the first fusion coefficient is:

$\mathrm{GAFF} = \lambda \cdot \mathrm{GASF} + (1 - \lambda) \cdot \mathrm{GADF}$

where $\mathrm{GAFF}$ is the pixel value of the fused picture, $\lambda$ is the first fusion coefficient, $\mathrm{GASF}$ is the pixel value of the Gramian angular summation field picture, and $\mathrm{GADF}$ is the pixel value of the Gramian angular difference field picture.
In some embodiments, the apparatus further comprises:
and the first amplification module is used for carrying out data amplification on the sensor data of the robot fish by utilizing a sliding window method.
In some embodiments, the apparatus further comprises:
and the first processing module is used for carrying out normalization processing on the sensor data of the robot fish.
In some embodiments, the fault detection results include no fault, no output fault, drift fault, intermittent fault, output constant fault, and jump fault.
In a third aspect, an embodiment of the present application further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements any one of the above robot fish sensor fault detection methods based on spatial-domain image fusion.
In a fourth aspect, an embodiment of the present application further provides a non-transitory computer-readable storage medium having stored thereon a computer program that, when executed by a processor, implements any one of the above robot fish sensor fault detection methods based on spatial-domain image fusion.
In a fifth aspect, an embodiment of the present application further provides a computer program product, including a computer program, where the computer program, when executed by a processor, implements any one of the above robot fish sensor fault detection methods based on spatial-domain image fusion.
According to the robot fish sensor fault detection method and device based on spatial-domain image fusion provided in the embodiments of the present application, the one-dimensional time-series signals of the robot fish sensor are converted into two-dimensional pictures, and the optimal picture fusion coefficient is determined through repeated iterations of the gray wolf algorithm, thereby realizing high-accuracy fault detection for the robot fish sensor.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the related art, the drawings required in the description of the embodiments or the related art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained from these drawings by a person skilled in the art without inventive effort.
Fig. 1 is a flow chart of a robot fish sensor fault detection method based on airspace image fusion provided in an embodiment of the present application;
fig. 2 is a schematic diagram of fault detection of a robot fish based on airspace image fusion according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a fault detection device of a robot fish sensor based on airspace image fusion provided in an embodiment of the present application;
fig. 4 is a schematic entity structure of an electronic device according to an embodiment of the present application.
Detailed Description
For the fault detection problem, researchers have proposed a fault detection method combining spatial-domain fusion and a convolutional neural network model for power grid ground faults, realizing fault localization. However, the fusion coefficient there is fixed at 0.5, the coefficient cannot be adjusted for different platforms, and whether 0.5 is the optimal fusion coefficient requires further study. In addition, there is little research on fault detection for robot fish sensors, and intelligent and accurate fault detection for robot fish sensors is a necessary means for the safe and reliable operation of robot fish.
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
Fig. 1 is a schematic flow chart of a robot fish sensor fault detection method based on spatial-domain image fusion provided in an embodiment of the present application. As shown in fig. 1, the robot fish sensor fault detection method based on spatial-domain image fusion includes:
and step 101, generating a fusion picture based on the first fusion coefficient and sensor data of the robot fish.
Specifically, the sensor data of the robot fish may be a one-dimensional time-series signal acquired by a robot fish sensor. The one-dimensional time-series signal is converted into two-dimensional pictures by the Gramian angular field method, generating a Gramian angular summation field (GASF) picture and a Gramian angular difference field (GADF) picture. The first fusion coefficient is the optimal fusion coefficient, determined based on the gray wolf algorithm and a convolutional neural network (CNN) model, for fusing the GASF map and the GADF map to generate the fused picture.
For example, the sensor data of the robot fish may be depth sensor data of the robot fish.
For another example, the sensor data of the robot fish may be speed sensor data of the robot fish.
Step 102, inputting the fused picture into a convolutional neural network model, and obtaining the fault detection result output by the convolutional neural network model;
the first fusion coefficient is a fusion coefficient determined when the number of iterations of the gray wolf algorithm reaches a preset value or the fitness function value of the gray wolf algorithm is larger than a preset threshold value, and the convolutional neural network model is obtained after training based on training sample data and training sample data labels.
Specifically, a CNN model is built; the GASF maps and GADF maps are divided into a training set and a test set according to a certain proportion, and a verification set is further divided from the training set according to a certain proportion. The gray wolf algorithm is used to search for the fusion coefficient of the GASF map and the GADF map, and the fused pictures are obtained based on this fusion coefficient. The parameters of the CNN model are adjusted with the fused pictures of the training set to obtain a trained CNN model; the trained CNN model is tested with the fused pictures of the verification set, and the verification-set accuracy is taken as the fitness function, where a larger fitness means more accurate fault detection. When the number of iterations reaches the maximum or the fitness function value of the gray wolf algorithm is larger than a preset threshold value, the iteration of the gray wolf algorithm stops, and the optimal fusion coefficient and the parameters of the CNN model are obtained, yielding the trained CNN model. The fused pictures are input into the trained CNN model, and the various fault detection results output by the CNN model are obtained. The maximum number of iterations is a preset iteration count.
The training sample data are two-dimensional pictures converted from the one-dimensional time-series signals of the sensor, and the training sample data labels are the various fault detection results of the robot fish, such as no fault, no output fault, drift fault, intermittent fault, output constant fault, and jump fault, which are not specifically limited here.
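To make the two steps above concrete, the following is a minimal Python sketch of the detection pipeline. It is an illustration rather than the patented implementation: the helper functions to_gasf and to_gadf are hypothetical stand-ins for the Gramian angular field conversion described later, and PyTorch is assumed only because the patent does not name a framework.

```python
# Hypothetical inference sketch: 1-D sensor signal -> fused picture -> fault class.
import numpy as np
import torch

def detect_fault(signal: np.ndarray, lam: float, cnn: torch.nn.Module) -> int:
    gasf = to_gasf(signal)                    # 2-D GASF picture (hypothetical helper)
    gadf = to_gadf(signal)                    # 2-D GADF picture (hypothetical helper)
    fused = lam * gasf + (1.0 - lam) * gadf   # weighted spatial-domain fusion
    x = torch.as_tensor(fused, dtype=torch.float32)[None, None]  # shape (N, C, H, W)
    cnn.eval()
    with torch.no_grad():
        logits = cnn(x)
    return int(logits.argmax(dim=1))          # index of the predicted fault class
```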
According to the robot fish sensor fault detection method based on spatial-domain image fusion provided in the embodiments of the present application, the one-dimensional time-series signals of the robot fish sensor are converted into two-dimensional pictures, and the optimal picture fusion coefficient is determined through repeated iterations of the gray wolf algorithm, thereby realizing high-accuracy fault detection for the robot fish sensor.
In some embodiments, the method further comprises:
acquiring an initial value of the gray wolf algorithm, and determining a second fusion coefficient by using the gray wolf algorithm; the second fusion coefficient is a fusion coefficient determined in the iterative process of the gray wolf algorithm;
training the convolutional neural network model based on the second fusion coefficient, training sample data and training sample data labels;
and stopping iteration under the condition that the iteration times reach a preset value or the fitness function value of the gray wolf algorithm is larger than a preset threshold value, and determining the first fusion coefficient and the model parameters of the convolutional neural network model.
Specifically, the gray wolf algorithm is used to find the optimal value of the fusion coefficient of the GASF map and the GADF map. The population size and positions, the first three wolves with the best fitness, and the related parameters are initialized, and the current wolf positions and related parameters are updated in each iteration to determine the second fusion coefficient. The CNN model is trained according to the training sample data, the training sample data labels, and the fusion coefficient of each iteration; the trained CNN model is tested with the fused pictures of the verification set, and the verification-set accuracy is taken as the fitness function, where a larger fitness means more accurate fault detection. When the number of iterations reaches the maximum or the fitness function value of the gray wolf algorithm is larger than a preset threshold value, the iteration stops, and the optimal value of the fusion coefficient and the model parameters of the CNN model are determined.
For example, if the maximum number of iterations has not been reached, the gray wolf algorithm continues to search for the optimal parameters; when the maximum number of iterations is reached, the optimization ends and the optimal fusion coefficient is determined.
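The interplay between the gray wolf iterations and the CNN training can be sketched as follows. This is a hypothetical orchestration under stated assumptions: build_fused_dataset, train_cnn, and fitness are stand-ins for steps described in this document (fitness is sketched at step S504 below), and gwo_step is sketched after the hunting equations below.

```python
# Hypothetical orchestration sketch, not the patented implementation:
# each wolf position is a candidate fusion coefficient lambda in [0, 1].
import numpy as np

def search_fusion_coefficient(gasf, gadf, labels, val_x, val_y,
                              n_wolves=8, max_iter=20, fit_threshold=0.99):
    rng = np.random.default_rng(0)
    lam = rng.random(n_wolves)                    # initial wolf positions
    best_lam, best_fit = float(lam[0]), -np.inf
    for t in range(max_iter):
        # fitness of a wolf = verification accuracy of the CNN trained on
        # pictures fused with that wolf's coefficient
        fits = np.array([
            fitness(train_cnn(build_fused_dataset(gasf, gadf, l), labels),
                    val_x, val_y)
            for l in lam
        ])
        order = np.argsort(fits)[::-1]
        alpha, beta, delta = lam[order[0]], lam[order[1]], lam[order[2]]
        if fits[order[0]] > best_fit:
            best_lam, best_fit = float(alpha), float(fits[order[0]])
        if best_fit > fit_threshold:              # fitness stopping criterion
            break
        a = 2.0 * (1.0 - t / max_iter)            # convergence factor: 2 -> 0
        lam = np.clip(gwo_step(lam, alpha, beta, delta, a, rng), 0.0, 1.0)
    return best_lam, best_fit
```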
According to the robot fish sensor fault detection method based on spatial-domain image fusion provided in the embodiments of the present application, the optimal picture fusion coefficient is determined through repeated iterations of the gray wolf algorithm together with the convolutional neural network model, realizing high-accuracy fault detection for the robot fish sensor.
In some embodiments, the method further comprises:
acquiring sample data and sample data labels; dividing the sample data into training sample data and test sample data according to a preset proportion, and dividing verification sample data from the training sample data according to a preset proportion;
in each iteration of the gray wolf algorithm, verifying the detection accuracy of the convolutional neural network model by using the verification sample data, and taking the detection accuracy as the fitness function value of the gray wolf algorithm.
Specifically, the training sample data are two-dimensional pictures converted from the one-dimensional time-series signals of the sensor, which may be GASF pictures and GADF pictures, and the training sample data labels are the various fault detection results of the robot fish. The two-dimensional pictures and fault detection results are divided into a training set and a test set according to a certain proportion, and a verification set is divided from the training set according to a certain proportion. The gray wolf algorithm is used to search for the fusion coefficient of the GASF map and the GADF map, and the fused pictures are obtained based on this fusion coefficient. The parameters of the CNN model are adjusted with the fused pictures of the training set to obtain a trained CNN model; the trained CNN model is tested with the fused pictures of the verification set, and the verification-set accuracy is taken as the fitness function of the fusion coefficient, where a larger fitness means more accurate fault detection. The verification set is the verification sample data, i.e., the part of the training set reserved for testing.
For example, for a small sample set (on the order of tens of thousands), the usual partition ratio is 60% training set, 20% verification set, and 20% test set.
For another example, for a large sample set (over a million samples), if there are 1,000,000 pieces of data, 10,000 may be reserved for the verification set and 10,000 for the test set.
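A minimal sketch of such a split, assuming scikit-learn is available (the patent itself prescribes only the ratios):

```python
# 60/20/20 split: hold out 20% for test, then 25% of the rest for verification.
from sklearn.model_selection import train_test_split

def split_dataset(images, labels, seed=0):
    x_tmp, x_test, y_tmp, y_test = train_test_split(
        images, labels, test_size=0.20, random_state=seed)
    x_train, x_val, y_train, y_val = train_test_split(
        x_tmp, y_tmp, test_size=0.25, random_state=seed)
    return (x_train, y_train), (x_val, y_val), (x_test, y_test)
```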
According to the robot fish sensor fault detection method based on spatial-domain image fusion provided in the embodiments of the present application, the fault detection accuracy is judged by the fitness function obtained on the verification set, so that the optimal picture fusion coefficient is determined and high-accuracy fault detection for the robot fish sensor is realized.
In some embodiments, the generating a fused picture based on the first fusion coefficient and sensor data of the robotic fish includes:
converting the sensor data of the robot fish into a Gramian angular summation field picture and a Gramian angular difference field picture;
and fusing the Gramian angular summation field picture and the Gramian angular difference field picture into a fused picture based on the first fusion coefficient.
Specifically, a data acquisition platform may be built to acquire the sensor data of the robot fish and label them. The labels here may be the various fault detection results, such as no fault, no output fault, drift fault, intermittent fault, output constant fault, and jump fault, which are not specifically limited here. The acquired data are then expanded by the sliding window method. The one-dimensional time-series signals of the sensor are converted into two-dimensional pictures, which may be GASF pictures and GADF pictures, by the Gramian angular field method. Finally, the GASF map and the GADF map are fused into a fused picture according to the fusion coefficient and the weighted fusion method: specifically, the GASF map may be taken as the foreground and the GADF map as the background, and each pixel is fused by weighting.
According to the robot fish sensor fault detection method based on spatial-domain image fusion provided in the embodiments of the present application, the one-dimensional time-series signals of the robot fish sensor are converted into two-dimensional pictures, and the resulting two-dimensional pictures are fused by weighting based on the fusion coefficient to obtain the fused picture, so that spatial-domain features can be extracted and high-accuracy fault detection for the robot fish sensor is realized.
In some embodiments, the converting the sensor data of the robot fish into a Gramian angular summation field picture and a Gramian angular difference field picture comprises:
converting sensor data of the robot fish into data under a polar coordinate system;
and determining a Gramian angular summation field picture and a Gramian angular difference field picture based on the data in the polar coordinate system.
Specifically, the sensor data of the robot fish may be expanded by the sliding window method. The one-dimensional time-series signal of the robot fish sensor is normalized to a preset range, and the normalized data in the rectangular coordinate system are converted into data in the polar coordinate system. To mine the correlation between data points, a GASF map and a GADF map are generated, respectively.
According to the robot fish sensor fault detection method based on spatial-domain image fusion provided in the embodiments of the present application, the one-dimensional time-series signals of the robot fish sensor are converted into two-dimensional pictures, so that spatial-domain features can be extracted and high-accuracy fault detection for the robot fish sensor is realized.
In some embodiments, the calculation formula for fusing the Gramian angular summation field picture and the Gramian angular difference field picture into a fused picture based on the first fusion coefficient is:

$\mathrm{GAFF} = \lambda \cdot \mathrm{GASF} + (1 - \lambda) \cdot \mathrm{GADF}$

where $\mathrm{GAFF}$ is the pixel value of the fused picture, $\lambda$ is the first fusion coefficient, $\mathrm{GASF}$ is the pixel value of the Gramian angular summation field picture, and $\mathrm{GADF}$ is the pixel value of the Gramian angular difference field picture.
Specifically, the one-dimensional time-series signal of the sensor is converted into two-dimensional pictures, which may be GASF and GADF pictures, by the Gramian angular field method. Finally, the GASF map and the GADF map are fused into a fused picture according to the fusion coefficient and the weighted fusion method: specifically, the GASF map may be taken as the foreground and the GADF map as the background, and each pixel is fused by weighting.
According to the robot fish sensor fault detection method based on spatial-domain image fusion provided in the embodiments of the present application, the one-dimensional time-series signals of the robot fish sensor are converted into two-dimensional pictures, and the resulting two-dimensional pictures are fused by weighting based on the fusion coefficient to obtain the fused picture, so that spatial-domain features can be extracted and high-accuracy fault detection for the robot fish sensor is realized.
In some embodiments, the method further comprises:
And carrying out data augmentation on the sensor data of the robot fish by utilizing a sliding window method.
Specifically, a data acquisition platform can be built to acquire sensor data of the robot fish and tag the sensor data. And then expanding the acquired data by a sliding window method.
For example, a window of length M may be used to intercept the signal to obtain sub-signals; each slide has length N, adjacent sub-signals overlap by length M − N, and each sub-signal is taken as a new data sample.
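A minimal sketch of this augmentation, with window length m and stride n as in the example above:

```python
import numpy as np

def sliding_window(signal: np.ndarray, m: int, n: int) -> np.ndarray:
    """Cut a 1-D signal into sub-signals of length m with stride n;
    adjacent sub-signals overlap by m - n samples."""
    starts = range(0, len(signal) - m + 1, n)
    return np.stack([signal[s:s + m] for s in starts])
```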
According to the robot fish sensor fault detection method based on spatial-domain image fusion provided in the embodiments of the present application, data augmentation is performed on the one-dimensional time-series signals of the robot fish sensor by the sliding window method, which increases the sample data available in a limited time and improves the accuracy of robot fish sensor fault detection.
In some embodiments, the method further comprises:
and carrying out normalization processing on the sensor data of the robot fish.
Specifically, the sensor data of the robot fish may be expanded by the sliding window method. The one-dimensional time-series signal of the robot fish sensor is normalized to a preset range, and the normalized data in the rectangular coordinate system are converted into data in the polar coordinate system. To mine the correlation between data points, a GASF map and a GADF map are generated, respectively.
For example, the one-dimensional time-series signal of the robot fish sensor may be normalized to the range [-1, 1].
According to the robot fish sensor fault detection method based on spatial-domain image fusion provided in the embodiments of the present application, the one-dimensional time-series signals of the robot fish sensor are converted into two-dimensional pictures, and the optimal picture fusion coefficient is determined based on the gray wolf algorithm and the convolutional neural network model, realizing high-accuracy fault detection for the robot fish sensor.
In some embodiments, the fault detection results include no fault, no output fault, drift fault, intermittent fault, output constant fault, and jump fault.
Specifically, the fault detection result can be used as a training sample data tag for training and testing the CNN model according to the corresponding relation between the sensor signal and the robot fish fault detection result.
The fault detection results may vary with the specific sensor of the robot fish, and may include, but are not limited to, no fault, no output fault, drift fault, intermittent fault, output constant fault, and jump fault.
According to the robot fish sensor fault detection method based on spatial-domain image fusion provided in the embodiments of the present application, the one-dimensional time-series signals of the robot fish sensor are converted into two-dimensional pictures, and the optimal picture fusion coefficient is determined through repeated iterations of the gray wolf algorithm, thereby realizing high-accuracy fault detection for the robot fish sensor.
It should be noted that, in the embodiments provided in the present application, the gray wolf algorithm may also be called the gray wolf optimization algorithm or the gray wolf optimizer, and the terms robot fish and biomimetic robot fish are used interchangeably.
The method in the above embodiment will be further described below with specific examples.
Fig. 2 is a schematic diagram of robot fish fault detection based on spatial-domain image fusion provided in an embodiment of the present application. As shown in fig. 2, the robot fish fault detection method based on spatial-domain image fusion provided in the embodiment of the present application includes the following steps:
and S100, constructing a data acquisition platform, acquiring depth sensor data of the robot fish, and adding a label.
Specifically, the depth sensor data of the collected robot fish can be divided into six types, and the corresponding labels are no fault, no output fault, drift fault, intermittent fault, output constant fault and jump fault. The HC-12 wireless transmission module can be used for communication, and a data acquisition platform is built. The upper computer sends an operation instruction, the robot fish moves according to the instruction, and sensor data are recorded in the SD card. After the operation is finished, the lower computer transmits the data to the upper computer, and the upper computer receives the data and stores the data in the database.
Step S200, expanding the acquired data by utilizing a sliding window method.
Specifically, a window of length M may be used to intercept the signal to obtain sub-signals; each slide has length N, adjacent sub-signals overlap by length M − N, and each sub-signal is taken as a new data sample.
Step S300, converting the one-dimensional time-series signal of the sensor into two-dimensional pictures by the Gramian angular field method, generating a GASF map and a GADF map.
In particular, the sensor one-dimensional time-series signal may be denoted as $S = \{s_1, s_2, \dots, s_n\}$. First, the signal $S$ is normalized to the range $[-1, 1]$ as follows:

$\tilde{s}_i = \dfrac{(s_i - \max(S)) + (s_i - \min(S))}{\max(S) - \min(S)}$

where $\tilde{s}_i$ is the normalized value of the $i$-th value in the original signal.

Then, the normalized data in the rectangular coordinate system are converted into the polar coordinate system as follows:

$\phi_i = \arccos(\tilde{s}_i), \qquad r_i = \dfrac{t_i}{N}$

where $\phi_i$ is the angle in the polar coordinate system, whose value is the inverse cosine of the normalized value, and $r_i$ is the radius, whose value is the normalized timestamp $t_i$.

To mine the correlation between data points, the generated Gramian angular summation field matrix is:

$\mathrm{GASF} = \left[\cos(\phi_i + \phi_j)\right] = \tilde{S}^{\mathsf{T}}\tilde{S} - \left(\sqrt{I - \tilde{S}^{2}}\right)^{\mathsf{T}}\sqrt{I - \tilde{S}^{2}}$

and the generated Gramian angular difference field matrix is:

$\mathrm{GADF} = \left[\sin(\phi_i - \phi_j)\right] = \left(\sqrt{I - \tilde{S}^{2}}\right)^{\mathsf{T}}\tilde{S} - \tilde{S}^{\mathsf{T}}\sqrt{I - \tilde{S}^{2}}$

where $I$ is a row vector of ones of length $n$, the superscript $\mathsf{T}$ denotes the matrix transpose, and $\tilde{S}$ is the normalized sensor time-series signal.
Step S400 and step S401, dividing the picture data set into a training set and a test set, and dividing a verification set from the training set according to a preset proportion.
Specifically, the picture data set may be divided into a training set, a verification set and a test set according to a preset ratio, such as 6:2:2.
Step S500, initializing the population size and positions of the gray wolves, the first three wolves α, β, δ with the best fitness, and the parameters a, A and C.
Specifically, the gray wolf algorithm may be used to find the optimal value of the spatial-domain picture fusion coefficient $\lambda$. The gray wolf algorithm simulates the leadership hierarchy and hunting behavior of gray wolves in nature. The pack has four leadership levels: in descending order of fitness, the α wolf, the β wolf, the δ wolf, and finally the ω wolves. Hunting is a collective behavior, and the algorithm simulates its three main steps: encircling the prey, hunting, and attacking the prey.
The encircling behavior of the gray wolf is expressed as:

$\vec{D} = \left|\vec{C} \cdot \vec{X}_p(t) - \vec{X}(t)\right|$

$\vec{X}(t+1) = \vec{X}_p(t) - \vec{A} \cdot \vec{D}$

where $t$ is the current number of iterations, $\vec{D}$ is the distance between the individual and the prey, $\vec{X}(t+1)$ is the position of the gray wolf at time $t+1$, $\vec{A}$ and $\vec{C}$ are coefficient vectors, and $\vec{X}_p$ and $\vec{X}$ are the position of the prey and the position of the gray wolf. $\vec{A}$ and $\vec{C}$ are calculated as follows:

$\vec{A} = 2\vec{a} \cdot \vec{r}_1 - \vec{a}, \qquad \vec{C} = 2\vec{r}_2$

where $\vec{a}$ is a convergence factor that decreases linearly from 2 to 0 with the number of iterations, and $\vec{r}_1$ and $\vec{r}_2$ are random numbers in $[0, 1]$.
The mathematical model of the gray wolf hunt is:

$\vec{D}_\alpha = \left|\vec{C}_1 \cdot \vec{X}_\alpha - \vec{X}\right|, \quad \vec{D}_\beta = \left|\vec{C}_2 \cdot \vec{X}_\beta - \vec{X}\right|, \quad \vec{D}_\delta = \left|\vec{C}_3 \cdot \vec{X}_\delta - \vec{X}\right|$

where $\vec{D}_\alpha$, $\vec{D}_\beta$ and $\vec{D}_\delta$ are the distances between the individual gray wolf and α, β, δ, $\vec{C}_1$, $\vec{C}_2$ and $\vec{C}_3$ are random vectors, and $\vec{X}_\alpha$, $\vec{X}_\beta$ and $\vec{X}_\delta$ are the current positions of α, β, δ.

$\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1 \cdot \vec{D}_\alpha, \quad \vec{X}_2 = \vec{X}_\beta - \vec{A}_2 \cdot \vec{D}_\beta, \quad \vec{X}_3 = \vec{X}_\delta - \vec{A}_3 \cdot \vec{D}_\delta$

$\vec{X}(t+1) = \dfrac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3}$

where $\vec{X}_1$, $\vec{X}_2$ and $\vec{X}_3$ give the step size and direction of an ω wolf in the pack toward α, β and δ, and $\vec{A}_1$, $\vec{A}_2$ and $\vec{A}_3$ are random vectors.
When the prey stops moving, the gray wolves attack it. Since the value of $\vec{a}$ decreases continuously, the fluctuation range of $\vec{A}$ shrinks correspondingly; when $\left|\vec{A}\right| < 1$, the wolves attack the prey and converge to the optimum, which may be a local optimum.
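Taken together, the encircling and hunting equations reduce to a compact update rule. A minimal Python sketch for the scalar case used here, where each wolf's position is a candidate fusion coefficient:

```python
import numpy as np

def gwo_step(positions, alpha, beta, delta, a, rng):
    """One gray-wolf update: each wolf moves to the mean of X1, X2, X3."""
    new_positions = np.empty_like(positions)
    for i, x in enumerate(positions):
        x_new = 0.0
        for leader in (alpha, beta, delta):
            A = 2.0 * a * rng.random() - a     # A = 2*a*r1 - a
            C = 2.0 * rng.random()             # C = 2*r2
            D = abs(C * leader - x)            # distance to alpha/beta/delta
            x_new += (leader - A * D) / 3.0    # contribution of X1/X2/X3
        new_positions[i] = x_new
    return new_positions
```

In each iteration, the convergence factor a is decreased linearly, e.g. a = 2 * (1 - t / max_iter), so the search shifts from exploration to attack.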
Step S501, updating the current position $\vec{X}(t+1)$ of each gray wolf, and updating the parameters $a$, $A$ and $C$.
Step S502, fusing the GASF diagram and the GADF diagram by using a weighted fusion method to generate a fused picture.
Specifically, the GASF map may be taken as the foreground and the GADF map as the background, and each pixel is fused by weighting, with the mathematical expression:

$\mathrm{GAFF} = \lambda \cdot \mathrm{GASF} + (1 - \lambda) \cdot \mathrm{GADF}$

where GAFF is the fused picture and $\lambda$ is the fusion coefficient.
Step S503, a CNN model is built, and model parameters are adjusted by training set data.
Specifically, a batch of pictures of size (256, 256) may be input as a 4-dimensional tensor. After the first convolution and pooling layer the spatial size is reduced to (128, 128); after the second convolution and pooling the number of channels increases to 8 and the size is reduced to (64, 64); after the third convolution and pooling the number of channels increases to 16 and the size is reduced to (32, 32). The convolved image features are flattened into a vector and connected to a fully connected layer of 1024 dimensions, and the sensor data are classified into 6 classes through a Softmax function to realize fault diagnosis.
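A minimal PyTorch sketch matching the layer sizes described above. The framework, the 3×3 kernels, and the first-layer width of 4 channels are assumptions, since the text states only the feature-map sizes and the channel counts 8 and 16:

```python
import torch.nn as nn

class FaultCNN(nn.Module):
    """Three conv/pool stages: 256->128->64->32, channels 4->8->16, then FC-1024."""
    def __init__(self, num_classes: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 4, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(4, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 32 * 32, 1024), nn.ReLU(),
            nn.Linear(1024, num_classes),  # Softmax is applied in the loss/at inference
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```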
Step S504, calculating the fitness of all gray wolves by using the verification set data.
Specifically, the verification-set pictures, after passing through the weighted fusion process, are input into the trained CNN model for testing; the verification-set accuracy is taken as the fitness function, and a larger fitness indicates a better fault detection effect.
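In code, this fitness evaluation reduces to a few lines; a sketch assuming the verification set is held as PyTorch tensors:

```python
import torch

def fitness(cnn, val_images, val_labels):
    """Verification-set accuracy used as the GWO fitness value."""
    cnn.eval()
    with torch.no_grad():
        preds = cnn(val_images).argmax(dim=1)
    return (preds == val_labels).float().mean().item()
```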
Step S505, updating the wolves α, β, δ with the best fitness.
Specifically, according to the fitness of the wolves, the wolves with the first three fitness values are selected as alpha, beta and delta wolves in each iteration, and the positions and the fitness values of the wolves are updated and stored.
Step S506, judging whether the maximum number of iterations has been reached; if not, returning to S501 to iterate again until the maximum number of iterations is reached.
Specifically, if the maximum number of iterations has not been reached, the process returns to S501 and the search for the optimal parameters continues. If the maximum number of iterations has been reached, the optimization ends: the position of the α wolf is the optimal fusion coefficient, and the fitness of the α wolf is the highest fault diagnosis accuracy. The iteration count and accuracy can be visualized; the accuracy of the model improves continuously as the number of iterations increases, finally reaching a highest accuracy of 98.61%.
Step S507, outputting the optimal fusion coefficient $\lambda$.
Specifically, when the maximum number of iterations is reached, the position of the α wolf is the optimal fusion coefficient and the fitness of the α wolf is the highest fault diagnosis accuracy; the value of the α wolf's position is output as the optimal fusion coefficient $\lambda$.
Step S600, fusing the GASF map and the GADF map of the test set by the weighted fusion method according to the optimal fusion coefficient $\lambda$, generating the fused pictures.
Specifically, the GASF map may be taken as the foreground and the GADF map as the background, and each pixel is fused by weighting, with the mathematical expression:

$\mathrm{GAFF} = \lambda \cdot \mathrm{GASF} + (1 - \lambda) \cdot \mathrm{GADF}$

where GAFF is the fused picture and $\lambda$ is the optimal fusion coefficient.
Step S601, inputting a fused picture of the test set to the trained CNN model.
Step S602, outputting the fault diagnosis results.
Specifically, the error rate of the CNN model may be determined according to the labels of the test set data and the output fault diagnosis results.
According to the embodiments provided in the present application, the one-dimensional time-series signal is converted into two-dimensional pictures through the Gramian angular field, the GASF picture and the GADF picture are fused by the weighted fusion method, and the fused picture is input into the CNN model for picture classification, realizing fault diagnosis of the robot fish sensor. By adopting the gray wolf algorithm with the verification-set fault diagnosis accuracy as the fitness function, the fusion coefficient is optimized, the optimal fusion coefficient and the corresponding fault diagnosis accuracy are obtained, and high-accuracy fault diagnosis for the robot fish sensor is realized.
Fig. 3 is a schematic structural diagram of a robot fish sensor fault detection device based on spatial-domain image fusion provided in an embodiment of the present application. As shown in fig. 3, the robot fish sensor fault detection device based on spatial-domain image fusion provided in the embodiment of the present application includes a first generation module 301 and a first acquisition module 302, where:
a first generation module 301, configured to generate a fused picture based on the first fusion coefficient and sensor data of the robot fish;
the first obtaining module 302 is configured to input the fused picture to a convolutional neural network model, and obtain a fault detection result output by the convolutional neural network model;
the first fusion coefficient is a fusion coefficient determined when the number of iterations of the gray wolf algorithm reaches a preset value or the fitness function value of the gray wolf algorithm is larger than a preset threshold value, and the convolutional neural network model is obtained after training based on training sample data and training sample data labels.
In some embodiments, the apparatus further comprises:
the first determining module is used for acquiring an initial value of a gray wolf algorithm and determining a second fusion coefficient by using the gray wolf algorithm; the second fusion coefficient is a fusion coefficient determined in the iterative process of the gray wolf algorithm;
The first training module is used for training the convolutional neural network model based on the second fusion coefficient, training sample data and training sample data labels;
and the second determining module is used for stopping iteration under the condition that the iteration times reach a preset value or the fitness function value of the gray wolf algorithm is larger than a preset threshold value, and determining the first fusion coefficient and the model parameters of the convolutional neural network model.
In some embodiments, the apparatus further comprises:
the first dividing module is used for acquiring sample data and sample data labels, dividing the sample data into training sample data and test sample data according to a preset proportion, and dividing verification sample data from the training sample data according to a preset proportion;
the first detection module is used for verifying the detection accuracy of the convolutional neural network model by using the verification sample data in each iteration of the gray wolf algorithm, and taking the detection accuracy as the fitness function value of the gray wolf algorithm.
In some embodiments, the first generation module comprises a first conversion sub-module, a first fusion sub-module, wherein:
the first conversion sub-module is used for converting the sensor data of the robot fish into a Gramian angular summation field picture and a Gramian angular difference field picture;
the first fusion sub-module is used for fusing the Gramian angular summation field picture and the Gramian angular difference field picture into a fused picture based on the first fusion coefficient.
In some embodiments, the first acquisition module comprises a second conversion sub-module, a first determination sub-module, wherein:
the second conversion sub-module is used for converting sensor data of the robot fish into data under a polar coordinate system;
the first determination sub-module is used for determining a Gramian angular summation field picture and a Gramian angular difference field picture based on the data in the polar coordinate system.
In some embodiments, the calculation formula for fusing the Gramian angular summation field picture and the Gramian angular difference field picture into a fused picture based on the first fusion coefficient is:

$\mathrm{GAFF} = \lambda \cdot \mathrm{GASF} + (1 - \lambda) \cdot \mathrm{GADF}$

where $\mathrm{GAFF}$ is the pixel value of the fused picture, $\lambda$ is the first fusion coefficient, $\mathrm{GASF}$ is the pixel value of the Gramian angular summation field picture, and $\mathrm{GADF}$ is the pixel value of the Gramian angular difference field picture.
In some embodiments, the apparatus further comprises:
and the first augmentation module is used for carrying out data augmentation on the sensor data of the robot fish by utilizing a sliding window method.
In some embodiments, the apparatus further comprises:
and the first processing module is used for carrying out normalization processing on the sensor data of the robot fish.
In some embodiments, the fault detection results include no fault, no output fault, drift fault, intermittent fault, output constant fault, and jump fault.
Specifically, the robot fish sensor fault detection device based on spatial-domain image fusion provided in the embodiments of the present application can implement all the method steps of the above embodiments of the robot fish sensor fault detection method based on spatial-domain image fusion and achieve the same technical effects; the parts and beneficial effects identical to those of the method embodiments are not described again here.
Fig. 4 is a schematic diagram of the physical structure of an electronic device provided in an embodiment of the present application. As shown in fig. 4, the electronic device may include: a processor 410, a communication interface (Communications Interface) 420, a memory 430, and a communication bus 440, where the processor 410, the communication interface 420, and the memory 430 communicate with each other via the communication bus 440. The processor 410 may invoke logic instructions in the memory 430 to perform the robot fish sensor fault detection method based on spatial-domain image fusion, the method including:
generating a fusion picture based on the first fusion coefficient and sensor data of the robot fish;
Inputting the fusion picture into a convolutional neural network model, and obtaining a fault detection result output by the convolutional neural network model;
the first fusion coefficient is a fusion coefficient determined when the number of iterations of the gray wolf algorithm reaches a preset value or the fitness function value of the gray wolf algorithm is larger than a preset threshold value, and the convolutional neural network model is obtained after training based on training sample data and training sample data labels.
Further, the logic instructions in the memory 430 described above may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
In some embodiments, the method further comprises:
acquiring an initial value of the gray wolf algorithm, and determining a second fusion coefficient by using the gray wolf algorithm; the second fusion coefficient is a fusion coefficient determined in the iterative process of the gray wolf algorithm;
training the convolutional neural network model based on the second fusion coefficient, training sample data and training sample data labels;
and stopping iteration under the condition that the iteration times reach a preset value or the fitness function value of the gray wolf algorithm is larger than a preset threshold value, and determining the first fusion coefficient and the model parameters of the convolutional neural network model.
In some embodiments, the method further comprises:
acquiring sample data and sample data labels; dividing the sample data into training sample data and test sample data according to a preset proportion, and dividing verification sample data from the training sample data according to a preset proportion;
in each iteration of the gray wolf algorithm, verifying the detection accuracy of the convolutional neural network model by using the verification sample data, and taking the detection accuracy as the fitness function value of the gray wolf algorithm.
In some embodiments, the generating a fused picture based on the first fusion coefficient and sensor data of the robotic fish includes:
converting the sensor data of the robot fish into a Gramian angular summation field picture and a Gramian angular difference field picture;
and fusing the Gramian angular summation field picture and the Gramian angular difference field picture into a fused picture based on the first fusion coefficient.
In some embodiments, the converting the sensor data of the robot fish into a Gramian angular summation field picture and a Gramian angular difference field picture comprises:
converting sensor data of the robot fish into data under a polar coordinate system;
and determining a Grami angle and field picture and a Grami angle difference field picture based on the data in the polar coordinate system.
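The standard Gramian angular field construction can be sketched as follows, assuming the textbook GASF/GADF definitions are what this embodiment intends; rescaling to [-1, 1] makes the polar-angle arccos well defined:

```python
import numpy as np

def to_gramian_fields(series):
    """Turn a 1-D sensor window into GASF and GADF images."""
    s = np.asarray(series, dtype=float)
    s = 2.0 * (s - s.min()) / (s.max() - s.min() + 1e-12) - 1.0  # rescale to [-1, 1]
    phi = np.arccos(np.clip(s, -1.0, 1.0))                       # polar angle
    gasf = np.cos(phi[:, None] + phi[None, :])                   # angular summation field
    gadf = np.sin(phi[:, None] - phi[None, :])                   # angular difference field
    return gasf, gadf
```

A window of length n thus yields two n-by-n images, which the fusion formula below combines pixel by pixel.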
In some embodiments, the calculation formula for fusing the GASF picture and the GADF picture into a fused picture based on the first fusion coefficient is:

$$X_{\text{fused}} = \lambda \, X_{\text{GASF}} + (1 - \lambda)\, X_{\text{GADF}}$$

where $X_{\text{fused}}$ is the pixel value of the fused picture, $\lambda$ is the first fusion coefficient, $X_{\text{GASF}}$ is the pixel value of the GASF picture, and $X_{\text{GADF}}$ is the pixel value of the GADF picture.
In some embodiments, the method further comprises:
and performing data augmentation on the sensor data of the robot fish by using a sliding window method.
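A minimal sliding-window sketch follows; the window length and stride are illustrative values the text does not fix:

```python
import numpy as np

def sliding_windows(signal, window=64, stride=8):
    """Cut one sensor record into overlapping windows to enlarge the sample set."""
    signal = np.asarray(signal, dtype=float)
    if len(signal) < window:
        raise ValueError("record shorter than one window")
    n = 1 + (len(signal) - window) // stride
    return np.stack([signal[i * stride : i * stride + window] for i in range(n)])
```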
In some embodiments, the method further comprises:
and performing normalization on the sensor data of the robot fish.
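One common reading of this step is min-max normalization to [-1, 1], which is also the input range the Gramian construction above requires; the exact scheme is an assumption of this sketch:

```python
import numpy as np

def normalize(x):
    """Min-max rescale a sensor record to [-1, 1]."""
    x = np.asarray(x, dtype=float)
    return 2.0 * (x - x.min()) / (x.max() - x.min() + 1e-12) - 1.0
```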
In some embodiments, the fault detection result includes: no fault, no-output fault, drift fault, intermittent fault, constant-output fault, and kick fault.
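For reference, the six result classes named above could be encoded as follows; the integer ordering is an assumption, not something the text specifies:

```python
from enum import IntEnum

class FaultClass(IntEnum):
    """Fault detection result classes listed in the text (order assumed)."""
    NO_FAULT = 0
    NO_OUTPUT = 1
    DRIFT = 2
    INTERMITTENT = 3
    CONSTANT_OUTPUT = 4
    KICK = 5
```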
Specifically, the electronic device provided in this embodiment of the present application can implement all the method steps of the method embodiments in which the execution subject is an electronic device, and can achieve the same technical effects; the parts identical to those of the method embodiments, and their beneficial effects, are not described in detail again herein.
In another aspect, the present invention also provides a computer program product. The computer program product includes a computer program, which can be stored on a non-transitory computer-readable storage medium; when the computer program is executed by a processor, the computer can perform the robot fish sensor fault detection method based on spatial domain image fusion provided by the methods above, the method comprising:
generating a fused picture based on the first fusion coefficient and sensor data of the robot fish;
inputting the fused picture into a convolutional neural network model, and obtaining a fault detection result output by the convolutional neural network model;
wherein the first fusion coefficient is the fusion coefficient determined when the number of iterations of the grey wolf algorithm reaches a preset value or the fitness function value of the grey wolf algorithm becomes greater than a preset threshold, and the convolutional neural network model is obtained by training on training sample data and training sample data labels.
In still another aspect, the present invention further provides a non-transitory computer-readable storage medium having a computer program stored thereon; when executed by a processor, the computer program implements the robot fish sensor fault detection method based on spatial domain image fusion provided by the methods above, the method comprising:
generating a fused picture based on the first fusion coefficient and sensor data of the robot fish;
inputting the fused picture into a convolutional neural network model, and obtaining a fault detection result output by the convolutional neural network model;
wherein the first fusion coefficient is the fusion coefficient determined when the number of iterations of the grey wolf algorithm reaches a preset value or the fitness function value of the grey wolf algorithm becomes greater than a preset threshold, and the convolutional neural network model is obtained by training on training sample data and training sample data labels.
The apparatus embodiments described above are merely illustrative, in which the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the present invention without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general hardware platform, or, of course, by means of hardware. Based on this understanding, the foregoing technical solution, in essence, or the part contributing to the prior art, may be embodied in the form of a software product, which may be stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk, or an optical disk, and which includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in each embodiment or in some parts of the embodiments.
In addition, it should be noted that the terms "first", "second", and the like in the embodiments of the present application are used to distinguish between similar objects and are not intended to describe a particular order or sequence. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein. Moreover, "first" and "second" usually distinguish objects of one type and do not limit the number of objects; for example, the first object may be one or more than one.
In the embodiments of the present application, the term "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, both A and B exist, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
The term "plurality" in the embodiments of the present application means two or more, and other adjectives are similar thereto.
The term "determining B based on a" in the present application means that a is a factor to be considered in determining B. Not limited to "B can be determined based on A alone", it should also include: "B based on A and C", "B based on A, C and E", "C based on A, further B based on C", etc. Additionally, a may be included as a condition for determining B, for example, "when a satisfies a first condition, B is determined using a first method"; for another example, "when a satisfies the second condition, B" is determined, etc.; for another example, "when a satisfies the third condition, B" is determined based on the first parameter, and the like. Of course, a may be a condition in which a is a factor for determining B, for example, "when a satisfies the first condition, C is determined using the first method, and B is further determined based on C", or the like.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A robot fish sensor fault detection method based on spatial domain image fusion, characterized by comprising:
generating a fused picture based on a first fusion coefficient and sensor data of the robot fish; and
inputting the fused picture into a convolutional neural network model, and obtaining a fault detection result output by the convolutional neural network model;
wherein the first fusion coefficient is the fusion coefficient determined when the number of iterations of the grey wolf algorithm reaches a preset value or the fitness function value of the grey wolf algorithm becomes greater than a preset threshold, and the convolutional neural network model is obtained by training on training sample data and training sample data labels.
2. The robot fish sensor fault detection method based on spatial domain image fusion according to claim 1, further comprising:
acquiring an initial value of the grey wolf algorithm, and determining a second fusion coefficient by using the grey wolf algorithm, the second fusion coefficient being a fusion coefficient determined during the iterative process of the grey wolf algorithm;
training the convolutional neural network model based on the second fusion coefficient, the training sample data, and the training sample data labels; and
stopping the iteration when the number of iterations reaches the preset value or the fitness function value of the grey wolf algorithm becomes greater than the preset threshold, thereby determining the first fusion coefficient and the model parameters of the convolutional neural network model.
3. The robot fish sensor fault detection method based on spatial domain image fusion according to claim 2, further comprising:
acquiring sample data and sample data labels; dividing the sample data into training sample data and test sample data according to a preset proportion, and further splitting validation sample data from the training sample data according to a preset proportion; and
in each iteration of the grey wolf algorithm, verifying the detection accuracy of the convolutional neural network model using the validation sample data, and taking the detection accuracy as the fitness function value of the grey wolf algorithm.
4. The robot fish sensor fault detection method based on spatial domain image fusion according to claim 1, wherein the generating a fused picture based on the first fusion coefficient and sensor data of the robot fish comprises:
converting the sensor data of the robot fish into a Gramian angular summation field (GASF) picture and a Gramian angular difference field (GADF) picture; and
fusing the GASF picture and the GADF picture into a fused picture based on the first fusion coefficient.
5. The robot fish sensor fault detection method based on spatial domain image fusion according to claim 4, wherein the converting the sensor data of the robot fish into a GASF picture and a GADF picture comprises:
converting the sensor data of the robot fish into data in a polar coordinate system; and
determining the GASF picture and the GADF picture based on the data in the polar coordinate system.
6. The robot fish sensor fault detection method based on spatial domain image fusion according to claim 4, wherein the calculation formula for fusing the GASF picture and the GADF picture into a fused picture based on the first fusion coefficient is:

$$X_{\text{fused}} = \lambda \, X_{\text{GASF}} + (1 - \lambda)\, X_{\text{GADF}}$$

where $X_{\text{fused}}$ is the pixel value of the fused picture, $\lambda$ is the first fusion coefficient, $X_{\text{GASF}}$ is the pixel value of the GASF picture, and $X_{\text{GADF}}$ is the pixel value of the GADF picture.
7. The robot fish sensor fault detection method based on spatial domain image fusion according to claim 4, further comprising:
performing data augmentation on the sensor data of the robot fish by using a sliding window method.
8. The robot fish sensor fault detection method based on spatial domain image fusion according to claim 4, further comprising:
performing normalization on the sensor data of the robot fish.
9. The robot fish sensor fault detection method based on spatial domain image fusion according to any one of claims 1 to 8, wherein the fault detection result comprises: no fault, no-output fault, drift fault, intermittent fault, constant-output fault, and kick fault.
10. A robot fish sensor fault detection device based on spatial domain image fusion, characterized by comprising:
a first generation module, configured to generate a fused picture based on a first fusion coefficient and sensor data of the robot fish; and
a first acquisition module, configured to input the fused picture into a convolutional neural network model and obtain a fault detection result output by the convolutional neural network model;
wherein the first fusion coefficient is the fusion coefficient determined when the number of iterations of the grey wolf algorithm reaches a preset value or the fitness function value of the grey wolf algorithm becomes greater than a preset threshold, and the convolutional neural network model is obtained by training on training sample data and training sample data labels.
CN202310394870.0A 2023-04-14 2023-04-14 Robot fish sensor fault detection method and device based on airspace image fusion Active CN116109897B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310394870.0A CN116109897B (en) 2023-04-14 2023-04-14 Robot fish sensor fault detection method and device based on airspace image fusion

Publications (2)

Publication Number Publication Date
CN116109897A true CN116109897A (en) 2023-05-12
CN116109897B CN116109897B (en) 2023-08-15

Family

ID=86265904

Country Status (1)

Country Link
CN (1) CN116109897B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112184762A (en) * 2020-09-05 2021-01-05 天津城建大学 Gray wolf optimization particle filter target tracking algorithm based on feature fusion
CN113923104A (en) * 2021-12-07 2022-01-11 南京信息工程大学 Network fault diagnosis method, equipment and storage medium based on wavelet neural network
CN114818774A (en) * 2022-03-15 2022-07-29 南京航空航天大学 Intelligent gearbox fault diagnosis method based on multi-channel self-calibration convolutional neural network
CN115575125A (en) * 2022-09-21 2023-01-06 三峡大学 Bearing fault diagnosis method based on GADF-GAN-AVOA-CNN
CN115761398A (en) * 2022-10-31 2023-03-07 重庆邮电大学 Bearing fault diagnosis method based on lightweight neural network and dimension expansion
CN115688049A (en) * 2022-11-02 2023-02-03 重庆邮电大学 Data fusion method, device and storage medium based on improved GWOO optimized BPNN
CN115760380A (en) * 2022-12-09 2023-03-07 百维金科(上海)信息科技有限公司 Enterprise credit assessment method and system integrating electricity utilization information

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MOJTABA MIRZAEI et al.: "MEMS gyroscope fault detection and elimination for an underwater robot using the combination of smooth switching and dynamic redundancy method", Microelectronics Reliability *
XU Gaofei et al.: "Adaptive fault diagnosis of underwater robot propulsion system", Ship Science and Technology, vol. 42, no. 6 *

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant