CN111582399A - Multi-sensor information fusion method for sterilization robot - Google Patents

Multi-sensor information fusion method for sterilization robot

Info

Publication number
CN111582399A
Authority
CN
China
Prior art keywords
layer
sensor
neural network
sterilization robot
fusion
Prior art date
Legal status
Granted
Application number
CN202010410470.0A
Other languages
Chinese (zh)
Other versions
CN111582399B (en)
Inventor
赵安妮
韩贵东
马志刚
王旭
Current Assignee
Jilin Province Senxiang Science And Technology Co ltd
Original Assignee
Jilin Province Senxiang Science And Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Jilin Province Senxiang Science And Technology Co ltd filed Critical Jilin Province Senxiang Science And Technology Co ltd
Priority to CN202010410470.0A
Publication of CN111582399A
Application granted
Publication of CN111582399B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01D MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D21/00 Measuring or testing not otherwise provided for
    • G01D21/02 Measuring two or more variables by means not covered by a single other subclass
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to a multi-sensor information fusion method oriented toward a sterilization robot, and aims to solve the problems that arise when fusing multi-sensor information for a sterilization robot in the medical field: high uncertainty, inability to accurately determine the operating parameters of the sterilization robot in different scenes, and poor autonomy. The invention comprises the following steps: generating a target classification training data set; acquiring feature information by processing the multi-source signals with a hybrid deep neural network; establishing a decision fusion model based on a voting evidence theory; and training to obtain a sterilization robot operating parameter model. By combining a deep neural network with an improved evidence theory method, the invention organically fuses the visual signals and the sensor signals of the sterilization robot, eliminates possible redundancy and contradiction among the multi-sensor information, makes the sources complement each other, reduces the uncertainty of the multi-sensor information, and thereby outputs the operating parameters of the sterilization robot more accurately.

Description

Multi-sensor information fusion method for sterilization robot
Technical Field
The invention relates to an information fusion method, in particular to a sterilization robot-oriented hybrid deep neural network multi-sensor information fusion method.
Background
A sterilization robot uses a mobile robot as a carrier, is applied to medical sterilization scenarios, and automatically, efficiently and accurately sterilizes an indoor space along a set route. Using the high-voltage ionization polarization light-emitting principle of a pulsed xenon lamp, the sterilization robot emits high-intensity ultraviolet-band light that efficiently and quickly inactivates various microorganisms on object surfaces and in the air, thereby achieving disinfection and sterilization.
At present, high-risk cross-infection areas such as infectious disease isolation rooms, sickrooms and operating rooms need regular sterilization and disinfection, but this work is mainly done manually. Traditional indoor sterilization and disinfection suffers from drawbacks such as high labor consumption, dead corners, secondary pollution, uncontrollable concentration and high cost. A sterilization medical robot can sterilize more quickly and efficiently, takes over much of the heavy work of medical workers, is resistant to infection, can work in various high-risk scenarios, and does not occupy medical resources.
The sterilization robot needs to make different judgments according to different scenes such as a ward, a consulting room and an operating room, and according to various environmental factors such as gas, temperature and humidity. The sensor system is the robot's sensing organ: almost all robots are equipped with sensors, which feed the information the robot needs into its control system. As an intelligent information processing technique, multi-sensor information fusion plays a self-evident role. The common methods for multi-sensor data fusion can be roughly divided into two main categories, random methods and artificial intelligence methods. Different levels of information fusion correspond to different algorithms, including weighted average fusion, Kalman filtering, Bayes estimation, statistical decision theory, probability theory, fuzzy logic reasoning, artificial neural networks, D-S evidence theory, and the like. The data fusion process is as follows: each sensor senses and processes independently to generate target data, and after all sensors have generated their target data, the main processor performs data fusion.
With traditional fusion methods, when the multi-sensor information of a sterilization robot in the medical field is fused, redundancy and contradiction exist among the sensor readings, uncertainty is high, the operating parameters of the sterilization robot in different scenes cannot be accurately determined, autonomy is poor, excessive manual intervention and control are needed, and the effect still needs improvement.
Disclosure of Invention
In order to solve the above technical problems, the invention further improves multi-sensor information fusion by combining a deep neural network with an improved evidence theory method and improves the application effect of multi-sensor information fusion on the sterilization robot. Applying multi-sensor information fusion to problems such as detection, tracking and target identification for the sterilization robot enhances the survivability of the system, improves the reliability and robustness of the whole system, strengthens the credibility of the data, improves precision, expands the temporal and spatial coverage of the system, and increases the real-time performance and information utilization rate of the system. The invention provides a sterilization robot-oriented multi-sensor information fusion method, which comprises the following steps:
step one, generating a target classification training data set:
(1) data acquisition: putting the sterilization robot into different scenes, acquiring a current visual image through a camera of the robot, and acquiring a corresponding data value through a sensor of the sterilization robot; the scene comprises an operating room, a ward, a consulting room or other medical places or places needing sterilization treatment; the sensors of the sterilization robot include a humidity sensor, a temperature sensor, a gas sensor, and the like.
(2) A/D conversion: converting the analog value acquired by the sensor into a usable digital signal to complete the A/D conversion process;
(3) manually labeling the data set: in order to train the neural network, after data acquisition, labels are manually marked according to different scenes and are used for representing the operating parameters of the sterilization robot in the current scene;
(4) equalizing samples: before training, the number of samples of different labels is made to reach a ratio of 1:1 as much as possible;
step two, acquiring characteristic information by processing a mixed deep neural network fusing multi-source signals:
the method for acquiring the feature information of the 1x10 feature vector representation image by using the deep neural network is as follows:
after the matrix signal of the image is input, firstly, a convolution layer with the step size of 2 and the convolution kernel size of 7 x 7 is passed through, wherein the computation process of the convolution layer is represented by the following formula:
Z^(l,p) = W^(l,p) ⊗ X^(l-1) + b^(l,p)
wherein Z^(l,p) denotes the p-th feature map output by the l-th convolutional layer, X^(l-1) is the input feature map from layer l-1, ⊗ denotes the convolution operation, and W^(l,p) and b^(l,p) are the convolution kernel of the l-th layer and its bias;
after one convolution, a pooling operation is performed through a max pooling layer with a step size of 2; after the max pooling layer, 8 convolutional layers with 3x3 convolution kernels are adopted to further extract features, with channel numbers of 64, 128, 256 and 256 and step sizes of 1, 2, 1, 2 and 1 respectively, the convolutional layers with step size 2 replacing pooling layers to compress the size of the feature map; in order to prevent problems such as gradient vanishing during deep neural network training and at the same time to reuse the features of the previous layers, a skip connection is added every two convolutional layers, where the starting position of the skip connection represents the output x of the previous layer and the ending position represents the input y of the next layer, calculated as:
y=F(x)+Wx
where F (x) represents the output of the two-layer convolution, and W represents the convolution operation with a convolution kernel size of 1x1, which is used to adjust the channel dimension and width and height of matrix x to ensure the same size as F (x); finally, adding F (x) and Wx pixel by pixel to obtain the input y of the next layer; after passing through 8 convolutional layers, the features are reduced to one-dimensional vectors with the size of 1x512 by one global boosting layer; finally, obtaining a 1x10 vector through a full connection layer to represent the characteristic information of the matrix signal;
The method of using BP neural networks to convert each signal from the different sensors into a 1x2 feature vector representing the feature information of the sensor signals is as follows:
The humidity sensor, the temperature sensor and the gas sensor each correspond to their own BP neural network, and each BP neural network consists of three parts: an input layer, a hidden layer and an output layer, where the hidden layer consists of a plurality of equal nodes; the BP neural network executes the following process:
(1) network initialization: initial values are set for the connection weight vectors and threshold vectors, satisfying a normal distribution with mean 0 and variance 1; hyperparameter values such as the number of hidden nodes, the step size and the momentum factor are set manually based on experience, with the number of nodes usually within 1000, the step size usually set to 0.1, and the momentum factor usually between 0 and 1;
(2) providing input sample data;
(3) calculating the output value of the hidden layer unit: calculating by adopting an s-shaped function;
(4) calculating an output value of the output unit: calculating by adopting an s-shaped function;
the formula of the s-type function is as follows:
f(x) = 1/(1 + e^(-x))
nonlinear characteristics are introduced into the BP neural network, and each signal is converted into a feature vector of 1x2, which represents the feature information of the sensor signal.
Step three, establishing a decision fusion model based on a voting evidence theory:
based on 2-path voting evidence theory fusion, if multiple paths are adopted, 2-path fusion is adopted for multiple times; the 2-path voting evidence theory fusion process is as follows:
m(A) is the confidence level of evidence A, p(A) is the voting credibility of evidence A, m(B) is the confidence level of evidence B, and p(B) is the voting credibility of evidence B; the fusion result of A and B is:
m(C) = p(A)m(A) · p(B)m(B) / (1 - K)
K = Σ_{A∩B=∅} p(A)m(A) · p(B)m(B)
wherein K represents a normalization factor reflecting the degree of evidence conflict; c represents fusion evidence of A, B, and m (C) represents credibility of the fusion evidence C.
Inputting the eigenvectors of the matrix signals and the eigenvectors of the variable signals obtained in the step two into a decision fusion model of a voting evidence theory to obtain final output;
step four, training to obtain a sterilization robot operation parameter model:
A visual signal is acquired through the camera of the sterilization robot and input as the matrix signal, the various sensor signals are input as variable signals, the operation parameters of the sterilization robot are manually labeled, and end-to-end training is performed to obtain the final model.
The invention has the beneficial effects that:
the invention utilizes the deep neural network to combine the improved evidence theory method, has proposed the mixed deep neural network multisensor information fusion method facing to sterilizing the medical robot, can combine the local data resources that a plurality of homogeneous or heterogeneous sensors that distribute in different positions of sterilizing the robot provide, visual signal and sensor signal of the sterilizing robot, combine the improvement evidence theory method through the deep learning to combine together organically, use the computer technology to analyze it, eliminate the redundancy and contradiction that may exist between the multisensor information, complement, reduce its uncertainty, and then the operating parameter of the more accurate output sterilizing robot;
the deep neural network can effectively distinguish the scene where the robot is located according to the video signal, and the scene recognition by using the deep learning method has the following advantages: firstly, the CNN can automatically extract features containing more semantic and structural information from an input image, and the features become more discriminative after nonlinear transformation in a network structure; secondly, the depth hierarchy can better explain the spatial distribution in the scene;
the voting evidence theory method integrates prior knowledge through voting, can achieve better decision fusion effect, solves the problem of synthesizing two contradictory evidences from the aspect of probability statistics by using the voting method, has good mathematical properties and engineering application, and can improve the accuracy of the whole system;
the robot can achieve end-to-end on the whole, does not need manual intervention in an environment with sufficient marking data, can learn satisfactory effects, can autonomously determine operation parameters in a complex indoor environment, reduces labor cost and reduces harm to disinfection operators.
Drawings
FIG. 1 is a schematic diagram of a hybrid deep neural network multi-sensor information fusion method of the present invention.
Detailed Description
The invention provides a sterilization robot-oriented multi-sensor information fusion method, which comprises the following steps:
step one, generating a target classification training data set:
(1) data acquisition: original information to be detected in the target environment is acquired using multi-source sensors. The sterilization robot is placed in different scenes, the current visual image is acquired through the robot's camera, and corresponding data values are acquired through the humidity, temperature and gas sensors of the sterilization robot; the scenes include an operating room, a ward, a consulting room, or other medical places or places needing sterilization treatment;
(2) A/D conversion: because the original information collected by a sensor is usually a non-electrical quantity such as temperature, before data processing the data value obtained by the sensor is converted into a usable digital signal (the visual image obtained by the camera does not need conversion), completing the A/D conversion process;
(3) manually labeling the data set: in order to train the neural networks, labels are manually assigned after data acquisition for the different scenes and are used to represent the operating parameters of the sterilization robot in the current scene; the operating parameters include working voltage, working time, illumination frequency, robot angle and robot travel route parameters, and differ from scene to scene;
(4) equalizing samples: before training, the number of samples of different labels is made to reach a ratio of 1:1 as much as possible;
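To illustrate step one, the following sketch shows one way the collected, A/D-converted and labeled samples could be organized and roughly balanced toward a 1:1 label ratio before training. It is not part of the patent text; the field names and the simple undersampling strategy are assumptions made only for illustration.

```python
import random
from collections import defaultdict

def balance_samples(samples):
    """Roughly equalize the number of samples per label (toward a 1:1 ratio).

    Each sample is assumed (hypothetically) to be a dict such as:
      {"image": <HxWx3 array>,
       "sensors": {"temperature": 22.5, "humidity": 0.40, "gas": 0.01},
       "label": "ward_profile_3"}
    """
    by_label = defaultdict(list)
    for s in samples:
        by_label[s["label"]].append(s)

    # Undersample every class to the size of the smallest one.
    target = min(len(group) for group in by_label.values())
    balanced = []
    for group in by_label.values():
        balanced.extend(random.sample(group, target))
    random.shuffle(balanced)
    return balanced
```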
step two, acquiring characteristic information by processing a mixed deep neural network fusing multi-source signals:
the hybrid deep neural network can synchronously process the matrix signal and the univariate signal.
The method for acquiring the characteristic information of the image by using the deep neural network comprises the following steps:
the deep neural network is a network structure with multilayer perceptrons, local connection and weight sharing, and has stronger fault-tolerant, learning and parallel processing capabilities, thereby reducing the complexity of a network model and the number of network connection weights.
After the matrix signal is input, firstly, a convolutional layer with the step size of 2 and the convolutional kernel size of 7 × 7 is passed through, wherein the convolutional layer calculation process is expressed by the following formula:
Z^(l,p) = W^(l,p) ⊗ X^(l-1) + b^(l,p)
wherein Z^(l,p) denotes the p-th feature map output by the l-th convolutional layer, X^(l-1) is the input feature map from layer l-1, ⊗ denotes the convolution operation, and W^(l,p) and b^(l,p) are the convolution kernel of the l-th layer and its bias;
after one convolution, pooling operation is carried out through a max Pooling layer with the step length of 2, and the pooling layer can be used for achieving the purposes of reducing dimension, realizing nonlinearity, expanding a sensing field and realizing invariance (translation invariance, rotation invariance and scale invariance). After the max boosting layer, 8 convolution layers with convolution kernels of 3x3 are adopted to further extract features, the number of channels is 64, 128, 256 and 256 respectively, the step length is 1, 2, 1, 2 and 1 respectively, wherein the convolution layer with the step length of 2 is used for replacing a pooling layer to compress the size of the graph, and the dimension can be reduced and the calculation amount can be reduced while the features are extracted. In order to prevent the problems of gradient disappearance and the like in deep neural network training and to recycle the features of the previous layer, skip connections are added every second convolution layer, as shown by the arcs in fig. 1, wherein there are 3 skip connections. The starting position of each arc line represents the output x of the previous layer, the ending position represents the input y of the next layer, and the calculation mode is adopted:
y=F(x)+Wx
where F(x) represents the output of the two convolutional layers, and W represents a convolution operation with a 1x1 convolution kernel, used to adjust the channel dimension, width and height of the matrix x to ensure the same size as F(x) (if x already has the same size as F(x), no convolution operation is needed); finally, F(x) and Wx are added pixel by pixel to obtain the input y of the next layer; after passing through the 8 convolutional layers, the features are reduced to a one-dimensional vector of size 1x512 by a global pooling layer; finally, a 1x10 vector is obtained through a fully connected layer to represent the feature information of the matrix signal;
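The image branch described above can be sketched in PyTorch as follows. This is a hedged illustration rather than the patented network: the stated elements (a 7x7 stride-2 convolution, max pooling, eight 3x3 convolutional layers arranged as four pairs with y = F(x) + Wx skip connections, global pooling to a 1x512 vector, and a fully connected layer producing a 1x10 vector) are reproduced, but the per-layer channel counts (64/128/256/512 here) and the exact placement of the skip connections are assumptions, since the text does not fully specify them.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection y = F(x) + Wx.

    W is a 1x1 convolution used only when the channel count or spatial size
    of x differs from F(x), as described in the text."""
    def __init__(self, in_ch, out_ch, stride):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1),
        )
        self.w = (nn.Identity() if in_ch == out_ch and stride == 1
                  else nn.Conv2d(in_ch, out_ch, 1, stride=stride))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.f(x) + self.w(x))

class ImageBranch(nn.Module):
    """7x7 stride-2 conv, max pooling, 8 conv layers (4 residual blocks),
    global pooling to 1x512, fully connected layer to a 1x10 feature vector."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),
        )
        # Assumed channel/stride progression; the patent text abbreviates these values.
        self.blocks = nn.Sequential(
            ResBlock(64, 64, stride=1),
            ResBlock(64, 128, stride=2),
            ResBlock(128, 256, stride=2),
            ResBlock(256, 512, stride=2),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)   # global pooling -> 1x512
        self.fc = nn.Linear(512, 10)          # 1x10 feature vector

    def forward(self, img):
        x = self.blocks(self.stem(img))
        x = self.pool(x).flatten(1)
        return self.fc(x)
```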
the method for acquiring the characteristic information of the sensor by using the BP neural network comprises the following steps:
as shown in the figure, the humidity sensor, the temperature sensor and the gas sensor respectively correspond to corresponding BP neural networks, and each BP neural network consists of three parts: the BP neural network comprises an input layer, a hidden layer and an output layer, wherein the hidden layer consists of a plurality of equal nodes, and the BP neural network executes the following processes:
(1) network initialization: initial values are set for the connection weight vectors and threshold vectors, satisfying a normal distribution with mean 0 and variance 1; hyperparameter values such as the number of hidden nodes, the step size and the momentum factor are set manually based on experience, with the number of nodes usually within 1000, the step size usually set to 0.1, and the momentum factor usually between 0 and 1;
(2) providing input sample data;
(3) calculating the output value of the hidden layer unit: calculating by adopting an s-shaped function;
(4) calculating an output value of the output unit: calculating by adopting an s-shaped function;
wherein the formula of the s-type function is as follows:
f(x) = 1/(1 + e^(-x))
introducing nonlinear characteristics into a BP neural network, converting each signal into a feature vector of 1x2, and representing the feature information of the sensor signal;
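A minimal sketch of one per-sensor BP branch is given below, assuming a single scalar input per sensor and a hidden layer of 16 nodes; both are illustrative choices, not values fixed by the text. Weights and thresholds are initialized from N(0, 1) and sigmoid activations are used for the hidden and output layers, as described above; the training (back-propagation) step is omitted for brevity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SensorBPBranch:
    """One BP neural network per sensor: input layer -> hidden layer -> 1x2 output."""
    def __init__(self, n_hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        # Weights and thresholds initialized from N(0, 1), as in step (1).
        self.w1 = rng.standard_normal((1, n_hidden))
        self.b1 = rng.standard_normal(n_hidden)
        self.w2 = rng.standard_normal((n_hidden, 2))
        self.b2 = rng.standard_normal(2)

    def forward(self, value):
        """Map a single scalar sensor reading to a 1x2 feature vector."""
        h = sigmoid(np.array([[value]]) @ self.w1 + self.b1)   # hidden layer, sigmoid
        return sigmoid(h @ self.w2 + self.b2)                  # output layer, sigmoid

# Example: one branch per sensor type.
temperature_branch = SensorBPBranch()
feature_t = temperature_branch.forward(22.5)   # -> array of shape (1, 2)
```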
For example, when the sterilization robot is placed in a ward, the robot acquires a visual image (matrix signal) of the current scene through its camera, acquires non-electrical signals through the humidity sensor, temperature sensor and gas sensor, and converts them into three univariate signals through A/D conversion; the matrix signal is converted into a 1x10 vector by the deep neural network, and the three univariate signals are converted into three 1x2 vectors by three BP neural networks respectively. These four vectors represent the feature information captured by the sterilization robot in the current scene.
Step three, establishing a decision fusion model based on a voting evidence theory:
in order to effectively fuse multi-source evidence information of a system to be recognized and improve the accuracy of pattern recognition, a decision fusion model based on a voting evidence theory is provided. The method is based on the fact that evidences from different sources have different reliability on the final judgment result, the accuracy of the evidence on the result judgment is converted into voting coefficients, and the uncertainty of the evidence in the pattern recognition process can be weakened to the maximum extent after voting fusion, so that the uncertainty of the pattern recognition is reduced theoretically.
Based on 2-path voting evidence theory fusion, if multiple paths are adopted, 2-path fusion is adopted for multiple times; the 2-path voting evidence theory fusion process is as follows:
m(A) is the confidence level of evidence A, p(A) is the voting credibility of evidence A, m(B) is the confidence level of evidence B, and p(B) is the voting credibility of evidence B; p(A) and p(B) are set manually by a domain expert according to how relevant the corresponding sensor type is to the final output result.
The fusion result of A, B is:
m(C) = p(A)m(A) · p(B)m(B) / (1 - K)
K = Σ_{A∩B=∅} p(A)m(A) · p(B)m(B)
wherein K represents a normalization factor reflecting the degree of evidence conflict; c represents fusion evidence of A, B, and m (C) represents credibility of the fusion evidence C.
Inputting the eigenvectors of the matrix signals and the eigenvectors of the variable signals obtained in the step two into a decision fusion model of a voting evidence theory to obtain final output, and outputting operation parameters of the sterilization robot in the scene;
for example: the output result type is the light frequency, and the domain expert sets the temperature voting credibility to 0.7 and the humidity voting credibility to 0.2. And the input of the temperature and humidity sensor is connected with a BP network, the result is obtained by forward propagation, and then the third step is executed to obtain the voting evidence theory fusion result and output the illumination frequency parameter.
Step four, training to obtain a sterilization robot operation parameter model:
and acquiring a visual signal as a matrix signal input through a camera of the sterilization robot, inputting various sensor signals as variable signals, manually marking operation parameters of the sterilization robot, and performing end-to-end training to obtain a model. The sterilization robot can automatically acquire operation parameters in different environments according to the trained model so as to automatically operate.

Claims (4)

1. A sterilization robot-oriented multi-sensor information fusion method is characterized by comprising the following steps:
step one, generating a target classification training data set:
(1) data acquisition: placing the sterilization robot in different scenes, acquiring a current visual image through a camera of the robot, and acquiring a corresponding sensor data value through a sensor of the sterilization robot;
(2) A/D conversion: converting the data value obtained by the sensor into a digital signal which can be used;
(3) manually labeling the data set: after data are collected, labels are manually marked according to different scenes and used for representing the operation parameters of the sterilization robot in the current scene;
(4) equalizing samples: before training, the number of samples of different labels is made to reach a ratio of 1:1 as much as possible;
step two, acquiring characteristic information by processing a mixed deep neural network fusing multi-source signals:
using a deep neural network to acquire a 1x10 feature vector representing the feature information of the image;
converting each signal of different sensors into a feature vector of 1x2 by using a BP neural network, and representing the feature information of the sensor signals;
step three, establishing a decision fusion model based on a voting evidence theory:
inputting the eigenvectors of the matrix signals and the eigenvectors of the variable signals obtained in the step two into a decision fusion model of a voting evidence theory to obtain final output, and outputting the operating parameters of the sterilization robot;
step four, training to obtain a sterilization robot operation parameter model:
A visual signal is acquired through the camera of the sterilization robot and input as the matrix signal, the various sensor signals are input as variable signals, the operation parameters of the sterilization robot are manually labeled, and end-to-end training is performed to obtain the final model.
2. The multi-sensor information fusion method for the sterilization robot according to claim 1, wherein: in the first step, the scene comprises an operating room, a ward, a consulting room or other medical places or places needing sterilization treatment; the sensor of the sterilization robot comprises a humidity sensor, a temperature sensor and a gas sensor; the operation parameters of the sterilization robot comprise working voltage, working time, illumination frequency, robot angle and robot travel route parameters.
3. The multi-sensor information fusion method for the sterilization robot according to claim 1, wherein: in the second step, the method for acquiring the characteristic information of the image by using the deep neural network comprises the following steps:
after the matrix signal of the image is input, firstly, a convolution layer with the step size of 2 and the convolution kernel size of 7 x 7 is passed through, wherein the computation process of the convolution layer is represented by the following formula:
Z^(l,p) = Σ_d W^(l,p,d) ⊗ X^(l-1,d) + b^(l,p)
wherein Z^(l,p) denotes the p-th feature map output by the l-th convolutional layer, X^(l-1,d) is the d-th channel of the input feature map from layer l-1, ⊗ denotes the convolution operation, and W^(l,p,d) and b^(l,p) are the convolution kernel of the l-th layer and its bias;
after one convolution, a pooling operation is performed through a max pooling layer with a step size of 2; after the max pooling layer, 8 convolutional layers with 3x3 convolution kernels are adopted to further extract features, with channel numbers of 64, 128, 256 and 256 and step sizes of 1, 2, 1, 2 and 1 respectively, the convolutional layers with step size 2 replacing pooling layers to compress the size of the feature map; in order to prevent problems such as gradient vanishing during deep neural network training and at the same time to reuse the features of the previous layers, a skip connection is added every two convolutional layers, where the starting position of the skip connection represents the output x of the previous layer and the ending position represents the input y of the next layer, calculated as:
y=F(x)+Wx
where F (x) represents the output of the two-layer convolution, and W represents the convolution operation with a convolution kernel size of 1x1, which is used to adjust the channel dimension and width and height of matrix x to ensure the same size as F (x); finally, adding F (x) and Wx pixel by pixel to obtain the input y of the next layer; after passing through 8 convolutional layers, the features are reduced to one-dimensional vectors with the size of 1x512 by one global boosting layer; finally, obtaining a 1x10 vector through a full connection layer to represent the characteristic information of the matrix signal;
the method for acquiring the characteristic information of the sensor by using the BP neural network comprises the following steps;
each sensor corresponds to its own BP neural network, and each BP neural network consists of three parts: an input layer, a hidden layer and an output layer, where the hidden layer consists of a plurality of equal nodes; the BP neural network executes the following process:
(1) initializing the network: setting initial values for the connection weight vectors and threshold vectors, and setting the number of hidden nodes, the step size and the momentum factor;
(2) Providing input sample data;
(3) calculating the output value of the hidden layer unit, using a sigmoid (s-type) function;
(4) calculating the output value of the output unit, using a sigmoid (s-type) function;
The formula of the s-type function is as follows:
f(x) = 1/(1 + e^(-x))
nonlinear characteristics are introduced into the BP neural network, and each signal is converted into a feature vector of 1x2 to represent feature information of the signal.
4. The multi-sensor information fusion method for the sterilization robot according to claim 1, wherein: in step three, establishing a decision fusion model based on a voting evidence theory: the fusion is based on 2-path voting evidence theory fusion, and if multiple paths are adopted, 2-path fusion is applied multiple times; the 2-path voting evidence theory fusion process is as follows: m(A) is the confidence level of evidence A, p(A) is the voting credibility of evidence A, m(B) is the confidence level of evidence B, and p(B) is the voting credibility of evidence B; the fusion result of A and B is:
m(C) = p(A)m(A) · p(B)m(B) / (1 - K)
K = Σ_{A∩B=∅} p(A)m(A) · p(B)m(B)
wherein K represents a normalization factor reflecting the degree of evidence conflict; c represents fusion evidence of A, B, and m (C) represents credibility of the fusion evidence C.
CN202010410470.0A 2020-05-15 2020-05-15 Multi-sensor information fusion method for sterilization robot Active CN111582399B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010410470.0A CN111582399B (en) 2020-05-15 2020-05-15 Multi-sensor information fusion method for sterilization robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010410470.0A CN111582399B (en) 2020-05-15 2020-05-15 Multi-sensor information fusion method for sterilization robot

Publications (2)

Publication Number Publication Date
CN111582399A true CN111582399A (en) 2020-08-25
CN111582399B CN111582399B (en) 2023-07-18

Family

ID=72110922

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010410470.0A Active CN111582399B (en) 2020-05-15 2020-05-15 Multi-sensor information fusion method for sterilization robot

Country Status (1)

Country Link
CN (1) CN111582399B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113082268A (en) * 2021-03-12 2021-07-09 浙江创力电子股份有限公司 Handheld sterilizer of networking based on 4G
CN113283516A (en) * 2021-06-01 2021-08-20 西北工业大学 Multi-sensor data fusion method based on reinforcement learning and D-S evidence theory
CN117574314A (en) * 2023-11-28 2024-02-20 东风柳州汽车有限公司 Information fusion method, device and equipment of sensor and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107135281A (en) * 2017-03-13 2017-09-05 国家计算机网络与信息安全管理中心 A kind of IP regions category feature extracting method merged based on multi-data source
CN107145878A (en) * 2017-06-01 2017-09-08 重庆邮电大学 Old man's anomaly detection method based on deep learning
CN110008843A (en) * 2019-03-11 2019-07-12 武汉环宇智行科技有限公司 Combine cognitive approach and system based on the vehicle target of cloud and image data
CN110045672A (en) * 2018-01-15 2019-07-23 中冶长天国际工程有限责任公司 A kind of method for diagnosing faults and device of belt feeder
CN110569779A (en) * 2019-08-28 2019-12-13 西北工业大学 Pedestrian attribute identification method based on pedestrian local and overall attribute joint learning
CN110866887A (en) * 2019-11-04 2020-03-06 深圳市唯特视科技有限公司 Target situation fusion sensing method and system based on multiple sensors
CN111000684A (en) * 2019-09-27 2020-04-14 张兴利 Paper diaper external monitoring system and monitoring method based on multi-sensor fusion
CN111209434A (en) * 2020-01-09 2020-05-29 国网江苏省电力有限公司徐州供电分公司 Substation equipment inspection system and method based on multi-source heterogeneous data fusion
CN111311945A (en) * 2020-02-20 2020-06-19 南京航空航天大学 Driving decision system and method fusing vision and sensor information
CN112947147A (en) * 2021-01-27 2021-06-11 上海大学 Fire-fighting robot based on multi-sensor and cloud platform algorithm

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107135281A (en) * 2017-03-13 2017-09-05 国家计算机网络与信息安全管理中心 A kind of IP regions category feature extracting method merged based on multi-data source
CN107145878A (en) * 2017-06-01 2017-09-08 重庆邮电大学 Old man's anomaly detection method based on deep learning
CN110045672A (en) * 2018-01-15 2019-07-23 中冶长天国际工程有限责任公司 A kind of method for diagnosing faults and device of belt feeder
CN110008843A (en) * 2019-03-11 2019-07-12 武汉环宇智行科技有限公司 Combine cognitive approach and system based on the vehicle target of cloud and image data
CN110569779A (en) * 2019-08-28 2019-12-13 西北工业大学 Pedestrian attribute identification method based on pedestrian local and overall attribute joint learning
CN111000684A (en) * 2019-09-27 2020-04-14 张兴利 Paper diaper external monitoring system and monitoring method based on multi-sensor fusion
CN110866887A (en) * 2019-11-04 2020-03-06 深圳市唯特视科技有限公司 Target situation fusion sensing method and system based on multiple sensors
CN111209434A (en) * 2020-01-09 2020-05-29 国网江苏省电力有限公司徐州供电分公司 Substation equipment inspection system and method based on multi-source heterogeneous data fusion
CN111311945A (en) * 2020-02-20 2020-06-19 南京航空航天大学 Driving decision system and method fusing vision and sensor information
CN112947147A (en) * 2021-01-27 2021-06-11 上海大学 Fire-fighting robot based on multi-sensor and cloud platform algorithm

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
刘宝升; 陈利飞; 刘艳; 刘庭; 陈健: "Non-contact detection method for the live status of transmission lines based on evidence theory", Engineering Journal of Wuhan University, no. 01 *
张池平 et al.: "An information fusion algorithm based on neural networks and evidence theory", Computer Engineering and Applications, no. 01, pages 1-2 *
张立智 et al.: "Research on compound fault diagnosis of gearboxes combining CNN and D-S evidence theory", Mechanical Science and Technology for Aerospace Engineering, no. 010 *
李彬; 马国胜; 李剑: "Application of D-S evidence theory in highway vehicle recognition", China Instrumentation, no. 04 *
赵小川 et al.: "A review of research on robot multi-sensor information fusion", Transducer and Microsystem Technologies, no. 08 *
高源: "Research on multi-sensor information fusion and its applications", Industrial Innovation Research, no. 08, pages 168-173 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113082268A (en) * 2021-03-12 2021-07-09 浙江创力电子股份有限公司 Handheld sterilizer of networking based on 4G
CN113283516A (en) * 2021-06-01 2021-08-20 西北工业大学 Multi-sensor data fusion method based on reinforcement learning and D-S evidence theory
CN113283516B (en) * 2021-06-01 2023-02-28 西北工业大学 Multi-sensor data fusion method based on reinforcement learning and D-S evidence theory
CN117574314A (en) * 2023-11-28 2024-02-20 东风柳州汽车有限公司 Information fusion method, device and equipment of sensor and storage medium

Also Published As

Publication number Publication date
CN111582399B (en) 2023-07-18

Similar Documents

Publication Publication Date Title
CN111582399B (en) Multi-sensor information fusion method for sterilization robot
CN106951923B (en) Robot three-dimensional shape recognition method based on multi-view information fusion
CN112395945A (en) Graph volume behavior identification method and device based on skeletal joint points
CN111291809B (en) Processing device, method and storage medium
CN110378281A (en) Group Activity recognition method based on pseudo- 3D convolutional neural networks
CN111461063B (en) Behavior identification method based on graph convolution and capsule neural network
CN114638954B (en) Training method of point cloud segmentation model, point cloud data segmentation method and related device
CN114613013A (en) End-to-end human behavior recognition method and model based on skeleton nodes
CN111881802B (en) Traffic police gesture recognition method based on double-branch space-time graph convolutional network
Jin et al. Target recognition of industrial robots using machine vision in 5G environment
Lin et al. Posting techniques in indoor environments based on deep learning for intelligent building lighting system
CN116246338B (en) Behavior recognition method based on graph convolution and transducer composite neural network
Owoyemi et al. Learning human motion intention with 3D convolutional neural network
Shahbaz et al. Convolutional neural network based foreground segmentation for video surveillance systems
Huang et al. A multiclass boosting approach for integrating weak classifiers in parking space detection
Wang et al. Occupancy detection based on spiking neural networks for green building automation systems
CN112613405B (en) Method for recognizing actions at any visual angle
CN113255514A (en) Behavior identification method based on local scene perception graph convolutional network
Botzheim et al. Growing neural gas for information extraction in gesture recognition and reproduction of robot partners
CN113111721B (en) Human behavior intelligent identification method based on multi-unmanned aerial vehicle visual angle image data driving
Heikkonen et al. Self-organizing maps for visually guided collision-free navigation
CN111627064B (en) Pedestrian interaction friendly monocular obstacle avoidance method
Garov et al. Application and Some Fundamental Study of GNN In Forecasting
Ghimire et al. A study on deep learning architecture and their applications
Murao et al. Incremental State Acquisition for Q-learning by Adaptive Gaussian soft-max neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant