CN111582399B - Multi-sensor information fusion method for sterilization robot - Google Patents

Multi-sensor information fusion method for sterilization robot

Info

Publication number
CN111582399B
CN111582399B (application CN202010410470.0A)
Authority
CN
China
Prior art keywords
layer
sensor
sterilization robot
robot
evidence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010410470.0A
Other languages
Chinese (zh)
Other versions
CN111582399A (en)
Inventor
赵安妮 (Zhao Anni)
韩贵东 (Han Guidong)
马志刚 (Ma Zhigang)
王旭 (Wang Xu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin Province Senxiang Science And Technology Co ltd
Original Assignee
Jilin Province Senxiang Science And Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin Province Senxiang Science And Technology Co ltd filed Critical Jilin Province Senxiang Science And Technology Co ltd
Priority to CN202010410470.0A priority Critical patent/CN111582399B/en
Publication of CN111582399A publication Critical patent/CN111582399A/en
Application granted granted Critical
Publication of CN111582399B publication Critical patent/CN111582399B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01DMEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D21/00Measuring or testing not otherwise provided for
    • G01D21/02Measuring two or more variables by means not covered by a single other subclass
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to a multi-sensor information fusion method for a sterilization robot, and aims to solve the problems that arise when the multi-sensor information of a sterilization robot in the medical field is fused: high uncertainty, inability to determine the robot's operation parameters accurately in different scenes, and poor autonomy. The invention comprises the following steps: generating a target classification training data set; acquiring feature information by processing the data with a hybrid deep neural network that fuses multi-source signals; establishing a decision fusion model based on a voting evidence theory; and training to obtain a sterilization robot operation parameter model. By combining a deep neural network with an improved evidence theory method, the invention organically fuses the visual signal and the sensor signals of the sterilization robot, eliminates possible redundancy and contradiction among the multi-sensor information, makes the information complementary, and reduces its uncertainty, so that the operation parameters of the sterilization robot are output more accurately.

Description

Multi-sensor information fusion method for sterilization robot
Technical Field
The invention relates to an information fusion method, in particular to a hybrid deep neural network multi-sensor information fusion method for a sterilization robot.
Background
A sterilization robot uses a robot as a carrier in medical sterilization scenes and automatically, efficiently and accurately performs indoor sterilization along a set route. Using the high-pressure ionization and polarization luminescence principle of a pulsed xenon lamp, it emits high-intensity ultraviolet light that rapidly and effectively inactivates various microorganisms on object surfaces and in the air, thereby achieving sterilization.
At present, high-risk cross-infection areas such as infectious-disease isolation rooms, wards and operating rooms need to be sterilized and disinfected regularly, but this work is mainly done manually. Traditional indoor sterilization and disinfection suffers from high labor consumption, dead corners, secondary pollution, uncontrollable concentration and high cost. A sterilization medical robot can sterilize faster and more efficiently, relieving medical staff of much of this heavy work; it is not afraid of infection, can work in various high-risk scenes, and does not occupy medical resources.
The sterilization robot must make different judgments according to different scenes, such as wards, consulting rooms and operating rooms, and according to environmental factors such as gas, temperature and humidity. The sensor system is the robot's sensing organ: almost all robots are equipped with sensors, which feed the information the robot needs into its control system. As an intelligent information processing technology, multi-sensor information fusion is of self-evident importance. Common methods for multi-sensor data fusion can be broadly divided into two main categories: stochastic methods and artificial-intelligence methods. Different levels of information fusion correspond to different algorithms, including weighted-average fusion, Kalman filtering, Bayesian estimation, statistical decision theory, probability-theory methods, fuzzy logic reasoning, artificial neural networks, D-S evidence theory and the like. The data fusion process is as follows: each sensor is processed independently to generate target data, each sensor perceives independently, and after all sensors have generated their target data, the main processor performs data fusion.
With traditional fusion methods, when the multi-sensor information of a sterilization robot in the medical field is processed and fused, redundancy and contradiction exist between the sensor readings, uncertainty is high, the operation parameters of the sterilization robot in different scenes cannot be determined accurately, autonomy is poor, excessive manual intervention and control are required, and the effect still needs improvement.
Disclosure of Invention
To solve the above technical problems, the invention combines a deep neural network with an improved evidence theory method to further develop multi-sensor information fusion technology and improve its application to a sterilization robot. Multi-sensor information fusion is used to handle detection, tracking and target identification for the sterilization robot, so as to enhance the survivability of the system, improve the reliability and robustness of the whole system, strengthen the credibility of the data, improve precision, extend the temporal and spatial coverage of the system, and improve its real-time performance and information utilization. The invention provides a multi-sensor information fusion method for a sterilization robot, which comprises the following steps:
step one, generating a target classification training data set:
(1) Data acquisition: the sterilization robot is placed in different scenes, the current visual image is acquired through the robot's camera, and corresponding data values are acquired through the sensors of the sterilization robot; the scenes comprise an operating room, a ward, a clinic or other medical places or places needing sterilization; the sensors of the sterilization robot comprise a humidity sensor, a temperature sensor, a gas sensor and the like.
(2) A/D conversion: the analog values acquired by the sensors are converted into usable digital signals, completing the A/D conversion process;
(3) Manual labeling of the dataset: to train the neural network, after data acquisition, labels are manually assigned to the different scenes to represent the operation parameters of the sterilization robot in the current scene;
(4) Sample equalization: before training, the numbers of samples with different labels are brought as close to a 1:1 ratio as possible;
step two, obtaining feature information by processing with a hybrid deep neural network that fuses multi-source signals:
the method for acquiring the characteristic information of the 1x10 characteristic vector representation image by using the deep neural network comprises the following steps:
after the matrix signal of the image is input, it first passes through a convolution layer with a stride of 2 and a convolution kernel size of 7x7, where the computation of the convolution layer is expressed by the following formula:
Z^(l,p) = Σ_d W^(l,p,d) ⊗ X^(l-1,d) + b^(l,p)
where Z^(l,p) denotes the p-th output feature map of the l-th convolution layer, X^(l-1) denotes the input feature maps of layer l-1, and W^(l,p) and b^(l,p) denote the p-th convolution kernel of layer l and its bias;
after this convolution, a pooling operation is performed by a max pooling layer with a stride of 2; after the max pooling layer, 8 convolution layers with 3x3 kernels further extract features, their channel numbers being 64, 128, 256 and 256 and their strides 1, 2, 1, 2 and 1 respectively, where the convolution layers with stride 2 replace pooling layers to compress the size of the feature map; to prevent problems such as vanishing gradients during deep network training, and to reuse the features of earlier layers, a skip connection is added across every two convolution layers, whose starting position represents the output x of the previous layer and whose ending position represents the input y of the next layer, computed as:
y=F(x)+Wx
where F(x) denotes the output of the two convolution layers spanned by the connection, and W denotes a convolution with a 1x1 kernel used to adjust the channel dimension, width and height of the matrix x so that its size matches F(x); finally, F(x) and Wx are added element-wise to obtain the input y of the next layer; after the 8 convolution layers, the features are reduced by a global pooling layer into a one-dimensional vector of dimension 1x512; finally, a 1x10 vector is obtained through a fully connected layer and represents the feature information of the matrix signal;
A BP neural network is used to convert each sensor signal into a 1x2 feature vector representing the feature information of that signal, as follows:
The humidity sensor, temperature sensor and gas sensor each correspond to their own BP neural network, and each BP neural network consists of three parts: an input layer, a hidden layer and an output layer, where the hidden layer consists of several nodes. The BP neural network is executed as follows:
(1) Network initialization: the initial values of the connection weight vectors and threshold vectors are set to follow a normal distribution with mean 0 and variance 1; hyperparameter values such as the number of hidden nodes, the step size and the momentum factor are set manually based on experience, with the number of nodes usually below 1000, the step size generally set to 0.1, and the momentum factor generally between 0 and 1;
(2) Providing input sample data;
(3) Calculating the output values of the hidden layer units, using the sigmoid (S-type) function;
(4) Calculating the output values of the output units, using the sigmoid function;
The sigmoid function is given by:
f(x) = 1 / (1 + e^(-x))
The sigmoid introduces a nonlinearity into the BP neural network; each signal is converted into a 1x2 feature vector representing the feature information of the sensor signal.
Step three, establishing a decision fusion model based on voting evidence theory:
based on 2 paths of voting evidence theory fusion, if a plurality of paths are adopted, 2 paths of fusion are adopted for a plurality of times; the 2-path voting evidence theory fusion process is as follows:
m(A) is the credibility of evidence A, p(A) is the voting credibility of evidence A, m(B) is the credibility of evidence B, and p(B) is the voting credibility of evidence B; the fusion result of A and B is:
where K denotes the normalization factor and reflects the degree of conflict between the evidence; C denotes the evidence obtained by fusing A and B, and m(C) denotes the credibility of the fused evidence C.
The feature vector of the matrix signal and the feature vectors of the variable signals obtained in step two are input into the decision fusion model of the voting evidence theory to obtain the final output;
Step four, training to obtain the sterilization robot operation parameter model:
The visual signal obtained through the camera of the sterilization robot is input as the matrix signal, the various sensor signals are input as variable signals, the operation parameters of the sterilization robot are labeled manually, and end-to-end training is carried out to obtain the final model.
The invention has the beneficial effects that:
the invention utilizes a deep neural network to combine an improved evidence theory method, provides a mixed deep neural network multi-sensor information fusion method for a sterilization medical robot, can integrate local data resources provided by a plurality of similar or dissimilar sensors distributed at different positions of the sterilization robot, organically fuses visual signals of the sterilization robot and sensor signals by combining deep learning with the improved evidence theory method, adopts a computer technology to analyze the visual signals and the sensor signals, eliminates redundancy and contradiction possibly existing between multi-sensor information, complements the redundancy and contradiction, reduces the uncertainty of the information, and further outputs the operation parameters of the sterilization robot more accurately;
the deep neural network can effectively distinguish the scene the robot is in from the video signal, and using deep learning for scene recognition has the following advantages: first, a CNN can automatically extract features containing rich semantic and structural information from the input image, and these features become more discriminative after the nonlinear transformations in the network; second, the deep hierarchical structure can better capture the spatial layout of the scene;
the voting evidence theory method integrates prior knowledge through voting and achieves a better decision fusion effect; it resolves the problem of combining two contradictory pieces of evidence from the perspective of probability and statistics, has good mathematical properties and engineering applicability, and can improve the accuracy of the whole system;
the invention is end-to-end as a whole: given sufficient labeled data, it can learn satisfactory behavior without manual intervention and can autonomously determine operation parameters in complex indoor environments, thereby reducing labor costs and reducing harm to disinfection operators.
Drawings
Fig. 1 is a schematic diagram of a hybrid deep neural network multi-sensor information fusion method according to the present invention.
Detailed Description
The invention provides a multi-sensor information fusion method for a sterilization robot, which comprises the following steps:
step one, generating a target classification training data set:
(1) Data acquisition: the original information to be detected in the target environment is collected with multi-source sensors. The sterilization robot is placed in different scenes, the current visual image is acquired through the robot's camera, and corresponding data values are acquired through the humidity, temperature and gas sensors of the sterilization robot; the scenes comprise an operating room, a ward, a clinic or other medical places or places needing sterilization;
(2) A/D conversion: because the original information collected by the sensors is often a non-electrical quantity such as temperature, the data values obtained from the sensors are converted into usable digital signals before data processing (the visual image obtained by the camera is not converted), completing the A/D conversion process;
(3) Manual labeling of the dataset: to train the neural network, after data acquisition, labels are manually assigned to the different scenes to represent the operation parameters of the sterilization robot in the current scene; the operation parameters comprise working voltage, working time, illumination frequency, robot angle and robot route parameters, and they differ from scene to scene;
(4) Sample equalization: before training, the numbers of samples with different labels are brought as close to a 1:1 ratio as possible;
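As a hypothetical illustration only (not part of the patent), the 1:1 equalization can be approximated by randomly oversampling the minority labels until every label has as many samples as the largest class; the helper below and its names are assumptions.

    import random
    from collections import defaultdict

    def balance_samples(samples):
        """Randomly oversample so every label has as many samples as the largest class.

        samples: list of (image, sensor_values, label) tuples.
        """
        by_label = defaultdict(list)
        for image, sensor_values, label in samples:
            by_label[label].append((image, sensor_values, label))
        target = max(len(group) for group in by_label.values())
        balanced = []
        for group in by_label.values():
            balanced.extend(group)
            # duplicate randomly chosen samples until the class reaches the target size
            balanced.extend(random.choices(group, k=target - len(group)))
        random.shuffle(balanced)
        return balanced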
step two, obtaining feature information by processing with a hybrid deep neural network that fuses multi-source signals:
the hybrid deep neural network is capable of processing the matrix signal and the univariate signal simultaneously.
The feature information of the image is acquired with the deep neural network as follows:
the deep neural network is a network structure with multi-layer perceptron, local connection and weight sharing, and has stronger fault tolerance, learning and parallel processing capabilities, so that the complexity of a network model and the number of network connection weights are reduced.
After the matrix signal is input, it first passes through a convolution layer with a stride of 2 and a convolution kernel size of 7x7, where the computation of the convolution layer is expressed by the following formula:
Z^(l,p) = Σ_d W^(l,p,d) ⊗ X^(l-1,d) + b^(l,p)
where Z^(l,p) denotes the p-th output feature map of the l-th convolution layer, X^(l-1) denotes the input feature maps of layer l-1, and W^(l,p) and b^(l,p) denote the p-th convolution kernel of layer l and its bias;
After this convolution, a pooling operation is performed by a max pooling layer with a stride of 2; the pooling layer reduces dimensionality, enlarges the receptive field, introduces nonlinearity and provides invariance (translation, rotation and scale invariance). After the max pooling layer, 8 convolution layers with 3x3 kernels further extract features; their channel numbers are 64, 128, 256 and 256 and their strides are 1, 2, 1, 2 and 1, respectively, where the convolution layers with stride 2 replace pooling layers to compress the size of the feature map, reducing dimensionality and computation while extracting features. To prevent problems such as vanishing gradients during deep network training, and to reuse the features of earlier layers, a skip connection is added across every two convolution layers, shown as the arcs in Fig. 1, of which there are 3. The starting position of each arc represents the output x of the previous layer and the ending position represents the input y of the next layer, computed as:
y=F(x)+Wx
where F(x) denotes the output of the two convolution layers spanned by the connection, and W denotes a convolution with a 1x1 kernel used to adjust the channel dimension, width and height of the matrix x so that its size matches F(x) (if x already has the same size as F(x), no convolution is needed); finally, F(x) and Wx are added element-wise to obtain the input y of the next layer; after the 8 convolution layers, the features are reduced by a global pooling layer into a one-dimensional vector of dimension 1x512; finally, a 1x10 vector is obtained through a fully connected layer and represents the feature information of the matrix signal;
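For illustration only, the image branch described above can be sketched in PyTorch roughly as follows; the channel/stride listing in the text is abbreviated, so the block schedule below (chosen to end in 512 channels so that global pooling yields a 1x512 vector), the pooling kernel size, and names such as ImageBranch and ResidualBlock are assumptions rather than the exact network.

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        """Two 3x3 convolutions with a skip connection y = F(x) + Wx (W is a 1x1 conv when shapes differ)."""
        def __init__(self, in_ch, out_ch, stride):
            super().__init__()
            self.f = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1),
            )
            # 1x1 convolution W adjusts channels/size of x when it differs from F(x)
            self.w = (nn.Identity() if in_ch == out_ch and stride == 1
                      else nn.Conv2d(in_ch, out_ch, 1, stride=stride))

        def forward(self, x):
            return torch.relu(self.f(x) + self.w(x))

    class ImageBranch(nn.Module):
        """Matrix-signal branch: 7x7 conv (stride 2) -> max pool -> 8 conv layers (4 residual blocks)
        -> global pooling -> fully connected layer -> 1x10 feature vector."""
        def __init__(self):
            super().__init__()
            self.stem = nn.Sequential(
                nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(inplace=True),
                nn.MaxPool2d(3, stride=2, padding=1),   # kernel size assumed
            )
            # assumed channel/stride schedule for the 8 convolution layers
            self.blocks = nn.Sequential(
                ResidualBlock(64, 128, stride=1),
                ResidualBlock(128, 256, stride=2),
                ResidualBlock(256, 256, stride=1),
                ResidualBlock(256, 512, stride=2),
            )
            self.pool = nn.AdaptiveAvgPool2d(1)   # global pooling to a 1x512 vector
            self.fc = nn.Linear(512, 10)          # 1x10 feature vector of the matrix signal

        def forward(self, x):
            h = self.blocks(self.stem(x))
            return self.fc(self.pool(h).flatten(1))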
the method for acquiring the characteristic information of the sensor by using the BP neural network comprises the following steps:
as shown in the figure, the humidity sensor, the temperature sensor and the gas sensor correspond to respective BP neural networks, and each BP neural network is composed of three parts: the input layer, the hidden layer and the output layer, wherein the hidden layer is composed of a plurality of equal nodes, and the BP neural network execution flow is as follows:
(1) Network initialization: the initial values of the connection weight vectors and threshold vectors are set to follow a normal distribution with mean 0 and variance 1; hyperparameter values such as the number of hidden nodes, the step size and the momentum factor are set manually based on experience, with the number of nodes usually below 1000, the step size generally set to 0.1, and the momentum factor generally between 0 and 1;
(2) Providing input sample data;
(3) Calculating the output values of the hidden layer units, using the sigmoid (S-type) function;
(4) Calculating the output values of the output units, using the sigmoid function;
wherein the sigmoid function is given by:
f(x) = 1 / (1 + e^(-x))
The sigmoid introduces a nonlinearity into the BP neural network, and each signal is converted into a 1x2 feature vector representing the feature information of the sensor signal;
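As a minimal sketch (not the patent's implementation), one sensor's BP network forward pass might look as follows, assuming a single scalar input after A/D conversion, weights and thresholds drawn from a normal distribution with mean 0 and variance 1, and sigmoid activations; the class name SensorBPNetwork and the hidden size are hypothetical.

    import numpy as np

    def sigmoid(z):
        """S-type (sigmoid) activation used by both the hidden and output layers."""
        return 1.0 / (1.0 + np.exp(-z))

    class SensorBPNetwork:
        """One BP network per sensor: input layer -> hidden layer -> 1x2 output feature vector."""
        def __init__(self, n_hidden=16, seed=0):
            rng = np.random.default_rng(seed)
            # weights and thresholds initialised from a normal distribution with mean 0 and variance 1
            self.w1 = rng.normal(0.0, 1.0, size=(1, n_hidden))
            self.b1 = rng.normal(0.0, 1.0, size=n_hidden)
            self.w2 = rng.normal(0.0, 1.0, size=(n_hidden, 2))
            self.b2 = rng.normal(0.0, 1.0, size=2)

        def forward(self, x):
            """x: scalar sensor reading after A/D conversion; returns a 1x2 feature vector."""
            hidden = sigmoid(np.array([[x]]) @ self.w1 + self.b1)
            return sigmoid(hidden @ self.w2 + self.b2)

For instance, SensorBPNetwork().forward(23.5) would yield the 1x2 feature vector for one temperature reading; backpropagation training with the step size and momentum factor mentioned above is omitted from the sketch.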
for example, putting the sterilization robot into a ward, acquiring a visual image (matrix signal) of a current scene by a camera, acquiring a non-electric signal by a humidity sensor, a temperature sensor and a gas sensor, and converting the non-electric signal into three univariate signals by A/D (analog to digital) conversion; the matrix signals are converted into vectors of 1x10 through a deep neural network, and three univariate signals are respectively converted into vectors of three 1x2 through three BP neural networks. The four vectors represent the characteristic information captured by the sterilization robot in the current scene.
Step three, establishing a decision fusion model based on voting evidence theory:
in order to effectively fuse multi-source evidence information of a system to be identified and improve accuracy of pattern identification, a decision fusion model based on voting evidence theory is provided. According to the method, based on the fact that the evidence from different sources has different reliability on the final judging result, the accuracy of judging the result by each evidence is converted into the voting coefficient, and the uncertainty of each evidence in the mode identification process can be furthest weakened after voting fusion, so that the uncertainty of the mode identification is reduced theoretically.
The fusion is based on a two-way voting evidence theory; if more than two evidence sources are present, the two-way fusion is applied repeatedly. The two-way voting evidence theory fusion process is as follows:
m(A) is the credibility of evidence A, p(A) is the voting credibility of evidence A, m(B) is the credibility of evidence B, and p(B) is the voting credibility of evidence B; p(A) and p(B) are set manually by domain experts according to how relevant the sensor type is to the final output.
The fusion result of A and B is:
where K denotes the normalization factor and reflects the degree of conflict between the evidence; C denotes the evidence obtained by fusing A and B, and m(C) denotes the credibility of the fused evidence C.
The feature vector of the matrix signal and the feature vectors of the variable signals obtained in step two are input into the decision fusion model of the voting evidence theory to obtain the final output, namely the operation parameters of the sterilization robot in this scene;
for example: the output result type is illumination frequency, and the field expert sets the temperature voting reliability to 0.7 and the humidity voting reliability to 0.2. And the temperature and humidity sensor input is connected with the BP network, the result is obtained through forward propagation, the third step is executed, the voting evidence theory fusion result is obtained, and the illumination frequency parameter is output.
Step four, training to obtain the sterilization robot operation parameter model:
The visual signal obtained through the camera of the sterilization robot is input as the matrix signal, the various sensor signals are input as variable signals, the operation parameters of the sterilization robot are labeled manually, and end-to-end training is carried out to obtain the model. Using the trained model, the sterilization robot can automatically determine its operation parameters in different environments and thus operate autonomously.
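Purely as an illustration of the end-to-end training idea (not the patent's exact procedure), a joint supervised training loop might look as follows, assuming the image branch and sensor branches are PyTorch modules and a small decision head stands in for the fusion step during training; all names, the MSE loss and the optimizer settings are assumptions.

    import torch
    import torch.nn as nn

    def train(image_net, sensor_nets, head, loader, epochs=10):
        """Jointly train the image branch, the per-sensor branches and a decision head against
        the manually labeled operation parameters; loader yields (image, sensor_values, target_params)."""
        params = list(image_net.parameters()) + list(head.parameters())
        for net in sensor_nets:
            params += list(net.parameters())
        opt = torch.optim.SGD(params, lr=0.1, momentum=0.9)  # step size 0.1 as mentioned above
        loss_fn = nn.MSELoss()  # regression against the labeled operation parameters
        for _ in range(epochs):
            for image, sensor_values, target_params in loader:
                feats = [image_net(image)]                       # 1x10 image feature vector
                feats += [net(v) for net, v in zip(sensor_nets, sensor_values)]  # 1x2 sensor vectors
                pred = head(torch.cat(feats, dim=1))             # decision head standing in for fusion
                loss = loss_fn(pred, target_params)
                opt.zero_grad(); loss.backward(); opt.step()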

Claims (1)

1. The multi-sensor information fusion method for the sterilization robot is characterized by comprising the following steps of:
step one, generating a target classification training data set:
(1) Data acquisition: the sterilization robot is placed in different scenes, the current visual image is acquired through the robot's camera, and corresponding sensor data values are acquired through the sensors of the sterilization robot; the scenes comprise an operating room, a ward, a clinic or other medical places or places needing sterilization; the sensors of the sterilization robot comprise a humidity sensor, a temperature sensor and a gas sensor; the operation parameters of the sterilization robot comprise working voltage, working time, illumination frequency, robot angle and robot route parameters;
(2) A/D conversion: the data values acquired by the sensors are converted into usable digital signals;
(3) Manual labeling of the dataset: after data acquisition, labels are manually assigned to the different scenes to represent the operation parameters of the sterilization robot in the current scene;
(4) Sample equalization: before training, the numbers of samples with different labels are brought as close to a 1:1 ratio as possible;
step two, obtaining feature information by processing with a hybrid deep neural network that fuses multi-source signals:
the method for acquiring the characteristic information of the image by using the deep neural network comprises the following steps:
after the matrix signal of the image is input, it first passes through a convolution layer with a stride of 2 and a convolution kernel size of 7x7, where the computation of the convolution layer is expressed by the following formula:
Z^(l,p) = Σ_d W^(l,p,d) ⊗ X^(l-1,d) + b^(l,p)
where Z^(l,p) denotes the p-th output feature map of the l-th convolution layer, X^(l-1) denotes the input feature maps of layer l-1, and W^(l,p,d) and b^(l,p) denote the convolution kernel of layer l and its bias;
after this convolution, a pooling operation is performed by a max pooling layer with a stride of 2; after the max pooling layer, 8 convolution layers with 3x3 kernels further extract features, their channel numbers being 64, 128, 256 and 256 and their strides 1, 2, 1, 2 and 1 respectively, where the convolution layers with stride 2 replace pooling layers to compress the size of the feature map; to prevent problems such as vanishing gradients during deep network training, and to reuse the features of earlier layers, a skip connection is added across every two convolution layers, whose starting position represents the output x of the previous layer and whose ending position represents the input y of the next layer, computed as:
y=F(x)+Wx
where F(x) denotes the output of the two convolution layers spanned by the connection, and W denotes a convolution with a 1x1 kernel used to adjust the channel dimension, width and height of the matrix x so that its size matches F(x); finally, F(x) and Wx are added element-wise to obtain the input y of the next layer; after the 8 convolution layers, the features are reduced by a global pooling layer into a one-dimensional vector of dimension 1x512; finally, a 1x10 vector is obtained through a fully connected layer and represents the feature information of the matrix signal;
the method for acquiring the characteristic information of the sensor by using the BP neural network comprises the following steps of;
each sensor corresponds to its own BP neural network, and each BP neural network consists of three parts: an input layer, a hidden layer and an output layer, where the hidden layer consists of several nodes; the BP neural network is executed as follows:
(1) Network initialization: setting the initial values of the connection weight vectors and threshold vectors, the number of hidden nodes, the step size and the momentum factor;
(2) Providing input sample data;
(3) Calculating the output values of the hidden layer units, using the sigmoid function;
(4) Calculating the output values of the output units, using the sigmoid function;
the sigmoid function is given by:
f(x) = 1 / (1 + e^(-x))
a nonlinearity is thereby introduced into the BP neural network, and each signal is converted into a 1x2 feature vector representing its feature information;
step three, establishing a decision fusion model based on voting evidence theory:
establishing a decision fusion model based on voting evidence theory: the fusion is based on a two-way voting evidence theory; if more than two evidence sources are present, the two-way fusion is applied repeatedly; the two-way voting evidence theory fusion process is as follows: m(A) is the credibility of evidence A, p(A) is the voting credibility of evidence A, m(B) is the credibility of evidence B, and p(B) is the voting credibility of evidence B, and the fusion result of A and B is:
where K denotes the normalization factor and reflects the degree of conflict between the evidence; C denotes the evidence obtained by fusing A and B, and m(C) denotes the credibility of the fused evidence C; the feature vector of the matrix signal and the feature vectors of the variable signals obtained in step two are input into the decision fusion model of the voting evidence theory to obtain the final output, namely the operation parameters of the sterilization robot;
step four, training to obtain the sterilization robot operation parameter model:
the visual signal obtained through the camera of the sterilization robot is input as the matrix signal, the various sensor signals are input as variable signals, the operation parameters of the sterilization robot are labeled manually, and end-to-end training is carried out to obtain the final model.
CN202010410470.0A 2020-05-15 2020-05-15 Multi-sensor information fusion method for sterilization robot Active CN111582399B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010410470.0A CN111582399B (en) 2020-05-15 2020-05-15 Multi-sensor information fusion method for sterilization robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010410470.0A CN111582399B (en) 2020-05-15 2020-05-15 Multi-sensor information fusion method for sterilization robot

Publications (2)

Publication Number Publication Date
CN111582399A CN111582399A (en) 2020-08-25
CN111582399B true CN111582399B (en) 2023-07-18

Family

ID=72110922

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010410470.0A Active CN111582399B (en) 2020-05-15 2020-05-15 Multi-sensor information fusion method for sterilization robot

Country Status (1)

Country Link
CN (1) CN111582399B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113082268A (en) * 2021-03-12 2021-07-09 浙江创力电子股份有限公司 Handheld sterilizer of networking based on 4G
CN113283516B (en) * 2021-06-01 2023-02-28 西北工业大学 Multi-sensor data fusion method based on reinforcement learning and D-S evidence theory
CN117574314B (en) * 2023-11-28 2024-06-18 东风柳州汽车有限公司 Information fusion method, device and equipment of sensor and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107135281A (en) * 2017-03-13 2017-09-05 国家计算机网络与信息安全管理中心 A kind of IP regions category feature extracting method merged based on multi-data source
CN110045672A (en) * 2018-01-15 2019-07-23 中冶长天国际工程有限责任公司 A kind of method for diagnosing faults and device of belt feeder

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107145878A (en) * 2017-06-01 2017-09-08 重庆邮电大学 Old man's anomaly detection method based on deep learning
CN110008843B (en) * 2019-03-11 2021-01-05 武汉环宇智行科技有限公司 Vehicle target joint cognition method and system based on point cloud and image data
CN110569779B (en) * 2019-08-28 2022-10-04 西北工业大学 Pedestrian attribute identification method based on pedestrian local and overall attribute joint learning
CN111000684B (en) * 2019-09-27 2022-04-01 张兴利 Paper diaper external monitoring system and monitoring method based on multi-sensor fusion
CN110866887A (en) * 2019-11-04 2020-03-06 深圳市唯特视科技有限公司 Target situation fusion sensing method and system based on multiple sensors
CN111209434B (en) * 2020-01-09 2024-02-13 国网江苏省电力有限公司徐州供电分公司 Substation equipment inspection system and method based on multi-source heterogeneous data fusion
CN111311945B (en) * 2020-02-20 2021-07-09 南京航空航天大学 Driving decision system and method fusing vision and sensor information
CN112947147A (en) * 2021-01-27 2021-06-11 上海大学 Fire-fighting robot based on multi-sensor and cloud platform algorithm

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107135281A (en) * 2017-03-13 2017-09-05 国家计算机网络与信息安全管理中心 A kind of IP regions category feature extracting method merged based on multi-data source
CN110045672A (en) * 2018-01-15 2019-07-23 中冶长天国际工程有限责任公司 A kind of method for diagnosing faults and device of belt feeder

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Application of D-S evidence theory in highway vehicle recognition; Li Bin; Ma Guosheng; Li Jian; China Instrumentation (Issue 04); full text *
Non-contact detection method for the live status of transmission lines based on evidence theory; Liu Baosheng; Chen Lifei; Liu Yan; Liu Ting; Chen Jian; Engineering Journal of Wuhan University (Issue 01); full text *

Also Published As

Publication number Publication date
CN111582399A (en) 2020-08-25

Similar Documents

Publication Publication Date Title
CN111582399B (en) Multi-sensor information fusion method for sterilization robot
CN107492121B (en) Two-dimensional human body bone point positioning method of monocular depth video
Mezghan et al. Memory-augmented reinforcement learning for image-goal navigation
CN112733656B (en) Skeleton action recognition method based on multiflow space attention diagram convolution SRU network
CN114613013A (en) End-to-end human behavior recognition method and model based on skeleton nodes
Marinho et al. A novel mobile robot localization approach based on classification with rejection option using computer vision
CN109840518A (en) A kind of visual pursuit method of combining classification and domain adaptation
Chen et al. Object goal navigation with recursive implicit maps
Namasivayam et al. Learning neuro-symbolic programs for language guided robot manipulation
Lin et al. Posting techniques in indoor environments based on deep learning for intelligent building lighting system
Dawood et al. Incremental episodic segmentation and imitative learning of humanoid robot through self-exploration
Zhang 2D Computer Vision
Owoyemi et al. Learning human motion intention with 3D convolutional neural network
CN113255514B (en) Behavior identification method based on local scene perception graph convolutional network
Ferreira et al. Learning visual dynamics models of rigid objects using relational inductive biases
Hu et al. Research on pest and disease recognition algorithms based on convolutional neural network
Guo et al. Dynamic free-space roadmap for safe quadrotor motion planning
Heikkonen et al. Self-organizing maps for visually guided collision-free navigation
CN113111721B (en) Human behavior intelligent identification method based on multi-unmanned aerial vehicle visual angle image data driving
Yazdansepas et al. Room Categorization utilizing Convolutional Neural Network on 2D map obtained by LiDAR
Garov et al. Application and some fundamental study of GNN in forecasting
Kalithasan et al. Learning neuro-symbolic programs for language guided robot manipulation
Prasad et al. Comparative study and utilization of best deep learning algorithms for the image processing
Carpenter et al. Unifying multiple knowledge domains using the ARTMAP information fusion system
Guo Multi-channel Vision Platform of Intelligent Robot Based on Deep Learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant