CN115225322A - Unmanned intelligent equipment safety constraint method based on environment side channel information verification - Google Patents

Unmanned intelligent equipment safety constraint method based on environment side channel information verification Download PDF

Info

Publication number
CN115225322A
Authority
CN
China
Prior art keywords
unmanned intelligent
verification
channel information
side channel
classification result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210669540.3A
Other languages
Chinese (zh)
Other versions
CN115225322B (en)
Inventor
张志为
沈玉龙
刘蓉
陈泽瀚
李朝阳
刘成梁
时小丫
王建东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN202210669540.3A priority Critical patent/CN115225322B/en
Publication of CN115225322A publication Critical patent/CN115225322A/en
Application granted granted Critical
Publication of CN115225322B publication Critical patent/CN115225322B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/08Network architectures or network communication protocols for network security for authentication of entities
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/02Network architectures or network communication protocols for network security for separating internal from external traffic, e.g. firewalls
    • H04L63/0227Filtering policies
    • H04L63/0245Filtering by information in the payload
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/20Network architectures or network communication protocols for network security for managing network security; network security policies in general
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention relates to a safety constraint method for unmanned intelligent equipment based on verification of environment side channel information, which comprises the following steps: a plurality of unmanned intelligent devices in a collaborative working scene each collect side channel information of the environment in which they are located; each unmanned intelligent device filters the side channel information and classifies the filtered side channel information to obtain a classification result; the classification result of the current unmanned intelligent device is verified against the classification results of adjacent unmanned intelligent devices to obtain a verification result; and whether the current unmanned intelligent device is in a normal working state is judged according to the verification result. The method imposes a safety constraint on unmanned intelligent equipment by using environment side channel information implicitly and passively, requires no active operation by the user, achieves operational transparency, and meets the device authentication requirements of a scene in which multiple unmanned intelligent devices in a self-organizing network work cooperatively.

Description

Unmanned intelligent equipment safety constraint method based on environment side channel information verification
Technical Field
The invention belongs to the technical field of identity authentication, and particularly relates to an unmanned intelligent equipment safety constraint method based on environment side channel information verification.
Background
Identity authentication technology has long been a focus of the security field. With the development of the Internet of Things and the industrial Internet, intelligent objects can communicate with and detect one another, and every object can interact both with other objects and with the external environment. As scenes become more complex, traditional password-based authentication mechanisms no longer meet the device authentication requirements of a scene in which multiple unmanned intelligent devices in a self-organizing network work cooperatively. Continuous implicit authentication techniques are therefore considered a better choice for imposing continuous, imperceptible security constraints on unmanned intelligent devices.
Existing continuous implicit authentication techniques focus mainly on biometric authentication. Xin et al. propose an edge-based gait biometric recognition scheme that uses a deep learning model to authenticate users; Wu et al. propose a two-step authentication method based on a self-made fingertip sensor device that captures motion data (such as acceleration and angular velocity) and physiological data (such as photoplethysmography, PPG, signals) for authentication; and Liang et al. propose an authentication method based on a multi-layer perceptron algorithm that authenticates users with data from the sensors and touch screen of a smartphone.
These continuous implicit authentication techniques achieve good results in human-computer interaction settings, but the accuracy of the authentication result is affected by factors such as personal emotion and touch-screen angle and depends on interaction between a subject and an object, so they are not suitable for continuous, imperceptible security constraints on unmanned intelligent equipment. In addition, biometric authentication requires extra hardware and substantial financial support, is costly, and is not suitable for large-scale device authentication scenarios.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides an unmanned intelligent device safety constraint method based on environment side channel information verification. The technical problem to be solved by the invention is addressed by the following technical scheme:
the embodiment of the invention provides an unmanned intelligent equipment safety constraint method based on environment side channel information verification, which comprises the following steps:
the method comprises the following steps that a plurality of unmanned intelligent devices in a collaborative working scene respectively collect side channel information of the environment where the unmanned intelligent devices are located;
each unmanned intelligent device filters the side channel information and classifies the filtered side channel information to obtain a classification result;
verifying the classification result of the current unmanned intelligent device by using the classification result of the adjacent unmanned intelligent device to obtain a verification result;
and judging whether the current unmanned intelligent equipment is in a normal working state or not according to the verification result.
In one embodiment of the invention, the side channel information comprises one or more of sound information, temperature information, light information, behavior information, time information, location information.
In an embodiment of the present invention, each of the unmanned intelligent devices filters the side channel information, and classifies the filtered side channel information to obtain a classification result, including:
each unmanned intelligent device filters the side channel information by using a wavelet filtering method to obtain filtered side channel information;
and classifying the filtered side channel information by using a multi-classification method based on machine learning to obtain a classification result.
In an embodiment of the present invention, verifying the classification result of the current unmanned intelligent device by using the classification result of the neighboring unmanned intelligent device to obtain a verification result, includes:
and verifying the real-time environment side channel information classification result of the current unmanned intelligent equipment by using the initial environment side channel information classification result of the adjacent unmanned intelligent equipment to obtain a verification result.
In an embodiment of the present invention, verifying the classification result of the current unmanned intelligent device by using the classification result of the neighboring unmanned intelligent device to obtain a verification result, includes:
the current unmanned intelligent equipment respectively sends the classification result of the current unmanned intelligent equipment to a plurality of adjacent unmanned intelligent equipment;
each adjacent unmanned intelligent device compares the classification result with the received classification result of the current unmanned intelligent device for verification; when the classification result of the adjacent unmanned intelligent equipment is consistent with the classification result of the current unmanned intelligent equipment, obtaining a first verification result; when the classification result of the adjacent unmanned intelligent equipment is inconsistent with the classification result of the current unmanned intelligent equipment, obtaining a second verification result;
and each adjacent unmanned intelligent device sends the first verification result or the second verification result to the current unmanned intelligent device.
In an embodiment of the present invention, verifying the classification result of the current unmanned intelligent device by using the classification result of the neighboring unmanned intelligent device to obtain a verification result, includes:
the adjacent unmanned intelligent devices respectively send the classification results of the adjacent unmanned intelligent devices to the current unmanned intelligent device;
the current unmanned intelligent equipment respectively compares and verifies the classification result of the current unmanned intelligent equipment with the received classification result of each adjacent unmanned intelligent equipment; when the classification result of the current unmanned intelligent device is consistent with the classification result of the adjacent unmanned intelligent device, obtaining a first verification result; and when the classification result of the current unmanned intelligent device is inconsistent with the classification result of the adjacent unmanned intelligent device, obtaining a second verification result.
In an embodiment of the present invention, determining whether the current unmanned intelligent device is in a normal working state according to the verification result includes:
and the current unmanned intelligent equipment counts the number of the first verification results and the number of the second verification results, and judges whether the current unmanned intelligent equipment is in a normal working state or not according to the number of the first verification results and the number of the second verification results.
In an embodiment of the present invention, determining whether the current unmanned intelligent device is in a normal working state according to the number of the first verification results and the number of the second verification results includes:
when the number of the first verification results is larger than that of the second verification results, the current unmanned intelligent equipment is in a normal working state;
and when the number of the first verification results is less than or equal to the number of the second verification results, the current unmanned intelligent equipment is in an abnormal working state.
In an embodiment of the present invention, verifying the classification result of the current unmanned intelligent device by using the classification result of the neighboring unmanned intelligent device to obtain a verification result, includes:
and verifying the classification result of the current unmanned intelligent equipment by using the classification result of any one adjacent unmanned intelligent equipment in an unmanned intelligent equipment cluster consisting of a plurality of credible adjacent unmanned intelligent equipment to obtain a first verification result or a second verification result.
In an embodiment of the present invention, determining whether the current unmanned intelligent device is in a normal working state according to the verification result includes:
when the verification result is the first verification result, the current unmanned intelligent equipment is in a normal working state; and when the verification result is the second verification result, the current unmanned intelligent equipment is in an abnormal working state.
Compared with the prior art, the invention has the beneficial effects that:
1. The safety constraint method uses environment side channel information for verification between adjacent unmanned intelligent devices and the current unmanned intelligent device, which addresses the problem of non-transparent user operation in existing authentication schemes; it imposes the safety constraint on unmanned intelligent equipment implicitly and passively using environment side channel information, requires no active operation by the user, achieves operational transparency, and meets the device authentication requirements of a scene in which multiple unmanned intelligent devices in a self-organizing network work cooperatively.
2. The safety constraint method continuously monitors the working condition of the unmanned intelligent equipment by acquiring environment side channel information in real time and through mutual communication between the unmanned intelligent devices, thereby realizing continuous detection.
3. The safety constraint method can be realized by utilizing a plurality of unmanned intelligent devices under a cooperative working scene, does not need additional device support and a large amount of capital support, has low cost and is suitable for large-scale device authentication scenes.
Drawings
Fig. 1 is a schematic flowchart of a method for security constraint of an unmanned intelligent device based on environment side channel information verification according to an embodiment of the present invention;
fig. 2 is a schematic diagram illustrating authentication by adjacent unmanned intelligent devices according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but the embodiments of the present invention are not limited thereto.
Example one
Referring to fig. 1, fig. 1 is a schematic flowchart of a method for security constraint of an unmanned intelligent device based on environment side channel information verification according to an embodiment of the present invention.
The unmanned intelligent equipment safety constraint method based on environment side channel information verification comprises the following steps:
s1, a plurality of unmanned intelligent devices in a collaborative working scene respectively collect side channel information of the environment where the unmanned intelligent devices are located.
Specifically, because different tasks are executed under different environmental conditions, environment information is collected in real time to authenticate whether the task execution of the unmanned intelligent equipment is normal. When a task is heavy, multiple unmanned intelligent devices work cooperatively to execute it together, and during execution the devices assigned to the same task share the same environment. Conversely, while executing different tasks, the environments of the unmanned intelligent devices differ in attributes such as temperature, sound, and light. Raw side channel information of the environment can be acquired through the sensors on each unmanned intelligent device, such as a temperature sensor, a sound sensor, and a light sensor: the sound sensor captures environments with large differences in the sound signal, the temperature sensor captures task scenes with strict temperature requirements, and the light sensor detects environments that are sensitive to light. Accordingly, the side channel information includes, but is not limited to, one or more of sound information, temperature information, light information, behavior information, time information, and location information; any single piece of information, or combination of information, in the environment that can distinguish the operation of a mobile entity such as a device can serve as side channel information.
It should be noted that, in this embodiment, side channel information is collected according to changes in the actual environment, and the collected side channel information may be of one type or of multiple types. For example, a change from a quiet environment to a factory environment can be captured with sound information; a change from an open road to a tunnel can be captured with one or both of light information and position information; a change from an ordinary environment to a low-temperature environment such as a cold store can be captured with one or both of temperature information and behavior information; and so on.
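As an illustration only, the sketch below shows one way the raw multi-sensor side channel samples of step S1 might be structured and collected; the SideChannelSample fields, the sensor-reading helpers (read_sound, read_temperature, read_light, read_position), and the sampling period are hypothetical and are not specified by the patent.

```python
from dataclasses import dataclass
import time

@dataclass
class SideChannelSample:
    """One multi-sensor snapshot of the device's environment."""
    timestamp: float
    sound: float        # e.g. sound pressure level
    temperature: float  # degrees Celsius
    light: float        # illuminance
    position: tuple     # (x, y) or GPS coordinates

def collect_samples(sensors, period_s=0.1, n=100):
    """Poll the sensors n times at a fixed period and return the raw trace.

    `sensors` is assumed to expose read_sound(), read_temperature(),
    read_light() and read_position(); these helpers are hypothetical.
    """
    trace = []
    for _ in range(n):
        trace.append(SideChannelSample(
            timestamp=time.time(),
            sound=sensors.read_sound(),
            temperature=sensors.read_temperature(),
            light=sensors.read_light(),
            position=sensors.read_position(),
        ))
        time.sleep(period_s)
    return trace
```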
And S2, each unmanned intelligent device filters the side channel information and classifies the filtered side channel information to obtain a classification result. The method specifically comprises the following steps:
and S21, each unmanned intelligent device filters the side channel information by using a wavelet filtering method to obtain filtered side channel information.
Specifically, when the environment side channel signal is acquired, it is interfered with by signals at other frequencies and by random noise, so the interference must be filtered out to improve the accuracy of subsequent authentication. To prevent data loss or interference from degrading the accuracy of the authentication result, this embodiment filters the acquired side channel information with a wavelet filtering method, which removes irrelevant noise from the original information and extracts the abrupt-change information, i.e., the filtered side channel information, thereby improving the accuracy of subsequent authentication.
Wavelet filtering is a time-frequency localization analysis method in which the window size (window area) is fixed but the window shape, and therefore the time and frequency windows, can vary. It provides higher frequency resolution and lower time resolution in the low-frequency part of a signal, and higher time resolution and lower frequency resolution in the high-frequency part, which makes it well suited to detecting abrupt-change components within a normal signal: it captures fine low-frequency information over long time intervals and high-frequency information over short time intervals. In this embodiment, the task execution condition of the unmanned intelligent equipment is authenticated through abrupt changes such as a quiet environment changing to a factory environment, road light changing to tunnel light, or an ordinary environment changing to a low-temperature environment such as a cold store. The traditional Fourier transform cannot meet this requirement, whereas wavelet filtering handles and extracts such non-stationary signals well.
After the wavelet transform, the energy of the useful signal is concentrated in a few wavelet coefficients, whereas the wavelet coefficients of noise are uncorrelated and spread across the time axis at all scales. Noise can therefore be suppressed by keeping the modulus-maximum points at each scale of the wavelet transform, setting the other points to zero or shrinking them as far as possible, and then applying the inverse wavelet transform to the processed coefficients. In threshold denoising, each transform-domain coefficient is compared against a threshold, and the processed coefficients are then inverse-transformed to reconstruct the denoised image.
The specific process of wavelet denoising is as follows (a code sketch of these steps is given after the list):
(1) Wavelet decomposition: choose a wavelet function and a decomposition level N, and perform an N-level wavelet decomposition of the image.
(2) Thresholding: select a threshold for each level of coefficients obtained from the decomposition, and apply threshold judgment to the detail coefficients.
(3) Image reconstruction: reconstruct the image by applying the inverse wavelet transform to the thresholded coefficients.
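The sketch below illustrates these three steps for a one-dimensional side channel signal, assuming the PyWavelets (pywt) library; the wavelet ('db4'), the decomposition level, and the universal soft threshold are assumptions rather than parameters specified by the patent.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(signal, wavelet="db4", level=3):
    """Wavelet threshold denoising of a 1-D side channel signal.

    Mirrors the steps above: (1) N-level wavelet decomposition,
    (2) soft-threshold the detail coefficients, (3) inverse transform.
    """
    # (1) decompose: coeffs = [approx_N, detail_N, ..., detail_1]
    coeffs = pywt.wavedec(signal, wavelet, level=level)

    # (2) threshold the detail coefficients; the universal threshold
    # sigma * sqrt(2 * ln(n)) is one common (assumed) choice
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]

    # (3) reconstruct the denoised signal
    return pywt.waverec(coeffs, wavelet)[: len(signal)]
```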
And S22, classifying the filtered side channel information by using a multi-classification method based on machine learning to obtain a classification result.
Specifically, the machine learning-based multi-classification method includes, but is not limited to: a multi-classification method based on a Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) network model, a multi-classification method based on a random forest algorithm, a multi-classification method based on a Support Vector Machine (SVM), a multi-classification method based on a Generative Adversarial Network (GAN), and a multi-classification method based on a Graph Neural Network (GNN).
This embodiment takes a multi-classification method based on the CNN-LSTM network model as an example for explanation.
First, the filtered side channel information is imaged.
Specifically, since deep learning models handle one-dimensional data less well, when the acquired environment side channel information is one-dimensional it is first imaged, i.e., converted into two-dimensional grayscale image data, before being input into the network model.
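As an illustration only, the following sketch shows one way to convert a 1-D side channel trace into 2-D grayscale frames; the fixed 32x32 frame size and the min-max scaling to [0, 255] are assumptions, not values given in the patent.

```python
import numpy as np

def to_grayscale_frames(signal, height=32, width=32):
    """Convert a 1-D side channel trace into 2-D grayscale frames.

    The trace is min-max scaled to [0, 255], truncated to a whole
    number of height*width windows, and reshaped frame by frame.
    """
    x = np.asarray(signal, dtype=np.float64)
    x = (x - x.min()) / (x.max() - x.min() + 1e-12) * 255.0
    n_frames = len(x) // (height * width)
    x = x[: n_frames * height * width]
    return x.reshape(n_frames, height, width).astype(np.uint8)
```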
And then, classifying the two-dimensional gray image data by using the CNN-LSTM network model so as to confirm the current environment condition of the unmanned intelligent equipment.
Specifically, the convolutional neural network (CNN) is a widely used deep learning model with strong feature extraction capability: it can automatically and efficiently extract features from input data and handles two-dimensional data well. However, a CNN alone is not effective for classification tasks whose input is time-dependent, such as the continuous information authentication in this embodiment. For such tasks the state at a previous moment can affect the subsequent authentication result, so authentication requires not only the current input but also a memory of previous inputs. The present embodiment therefore combines a long short-term memory unit, i.e., an LSTM network, with the CNN to classify the side channel information.
An LSTM can remove information from, or add information to, the cell state through structures called "gates". A gate selectively lets information through and consists of a sigmoid neural network layer and a pointwise multiplication operation. An LSTM unit has three gates, a forget gate, an input gate, and an output gate, which protect and control the cell state. Although this embodiment converts the one-dimensional data into two-dimensional grayscale images, the converted images are in essence another representation of the one-dimensional data, and a temporal connection still exists between adjacent images; an LSTM plus a softmax layer is therefore used as the classifier of the authentication network instead of the fully connected layer of a conventional CNN. That is, the two-dimensional grayscale image data passes sequentially through the CNN, the LSTM, and the softmax layer to obtain the classification result.
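The following is a minimal sketch of a CNN-LSTM-softmax classifier of the kind described above, assuming PyTorch; the layer sizes, the 32x32 frame size, and the way frames are batched into sequences are illustrative assumptions rather than the architecture specified by the patent.

```python
import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    """CNN feature extractor + LSTM + softmax over sequences of
    grayscale side channel frames (batch x seq_len x 1 x H x W)."""

    def __init__(self, num_classes, hidden_size=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),      # 32 x 4 x 4 features per frame
        )
        self.lstm = nn.LSTM(input_size=32 * 4 * 4, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1))       # (b*t, 32, 4, 4)
        feats = feats.flatten(1).view(b, t, -1) # (b, t, 512)
        out, _ = self.lstm(feats)               # temporal memory across frames
        return self.head(out[:, -1]).softmax(dim=-1)  # class probabilities

# Hypothetical usage: 2 sequences of 8 frames of 32x32 grayscale data
# model = CNNLSTMClassifier(num_classes=5)
# probs = model(torch.randn(2, 8, 1, 32, 32))
```

In this sketch the LSTM output at the last time step feeds the softmax head, so the prediction for the current frame can depend on earlier frames, which is the property the embodiment relies on.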
And S3, verifying the classification result of the current unmanned intelligent device by using the classification result of the adjacent unmanned intelligent device to obtain a verification result.
Referring to fig. 2, fig. 2 is a schematic diagram illustrating authentication by adjacent unmanned intelligent devices according to an embodiment of the present invention.
Specifically, in a self-organizing network, multiple unmanned intelligent devices cooperate to execute a task together, and while executing the task the devices assigned to the same task share the same environment. Combining this with the cooperative nature of the devices in the self-organizing network, the side channel information of adjacent unmanned intelligent devices is used to authenticate the task execution condition of the current unmanned intelligent device. As shown in fig. 2, multiple unmanned intelligent devices in the ad hoc network work cooperatively, and if one of them is attacked, the whole task may fail to proceed normally. Therefore, exploiting the fact that devices executing the same task share the same environment, different devices communicate continuously with their adjacent devices, and the environment side channel information classification results of the adjacent devices are used to authenticate the task execution condition of the current device.
Step S3 specifically includes the steps of:
and S31, the current unmanned intelligent equipment respectively sends the classification results of the current unmanned intelligent equipment to the adjacent unmanned intelligent equipment.
For example, suppose unmanned intelligent device A is to be authenticated. During task execution, the sensors on each unmanned intelligent device continuously acquire side channel information in real time, and the information is processed by the method of step S2 to obtain the classification result of the current side channel information. Then, at intervals of time t, unmanned intelligent device A transmits the classification result M for the interval t to a plurality of neighboring unmanned intelligent devices B, C, D.
It should be noted that unmanned intelligent device A may also transmit the classification result to the adjacent unmanned intelligent devices B, C, D in real time.
S32, each adjacent unmanned intelligent device compares the classification result of the adjacent unmanned intelligent device with the received classification result of the current unmanned intelligent device for verification; when the classification result of the adjacent unmanned intelligent equipment is consistent with the classification result of the current unmanned intelligent equipment, obtaining a first verification result; and when the classification result of the adjacent unmanned intelligent equipment is inconsistent with the classification result of the current unmanned intelligent equipment, obtaining a second verification result.
Specifically, when the neighboring unmanned intelligent devices B, C, D receive the classification result M, each of them verifies its own classification result M_i (i = b, c, d) against the received classification result M. When M = M_i, the verification result is 1 and a first verification result is obtained; otherwise, the verification result is 0 and a second verification result is obtained.
It should be noted that when unmanned intelligent device A transmits the classification result M for the interval t to B, C, D, each of B, C, D compares its own classification result for that interval with the received one; when device A transmits the real-time classification result M, each of B, C, D compares its own real-time classification result with the received one.
And S33, each adjacent unmanned intelligent device sends the first verification result or the second verification result to the current unmanned intelligent device.
Specifically, after the verification is completed, each of B, C, D returns the obtained first or second verification result to unmanned intelligent device A.
And S4, judging whether the current unmanned intelligent equipment is in a normal working state or not according to the verification result.
In this embodiment, whether the current unmanned intelligent device is in a normal working state is determined according to the characteristic of consistency of environments where the plurality of unmanned intelligent devices are located in a collaborative working scene.
In a specific embodiment, the current unmanned intelligent device counts the number of the first verification results and the number of the second verification results, and determines whether the current unmanned intelligent device is in a normal working state according to the number of the first verification results and the number of the second verification results. Further, when the number of the first verification results is larger than that of the second verification results, the current unmanned intelligent device is in a normal working state; and when the number of the first verification results is smaller than or equal to the number of the second verification results, the current unmanned intelligent equipment is in an abnormal working state.
Specifically, unmanned intelligent device A receives the verification results returned by the neighboring unmanned intelligent devices B, C, D in step S3 and, relying on the consistency of the environments of the cooperating devices, judges whether it is working normally from the proportion of 1s and 0s returned. When more than half of the returned results are 1, i.e., the number of 1s exceeds the number of 0s, the task execution of the current unmanned intelligent device is considered normal; otherwise the current device is considered to be under attack, emergency measures are taken, and all work is stopped immediately or an alarm is raised.
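Putting steps S31-S33 and S4 together, the sketch below expresses the neighbor comparison and the majority vote in Python; the string class labels, the in-memory list of neighbor replies, and the return values are illustrative assumptions, since in the patent these messages travel over the ad hoc network.

```python
def neighbor_verify(own_result, received_result):
    """Neighbor-side check (step S32): 1 if the classifications agree, else 0."""
    return 1 if own_result == received_result else 0

def authenticate_device(current_result, neighbor_results):
    """Current-device decision (step S4): majority vote over neighbor replies.

    `neighbor_results` holds each neighbor's own classification for the
    same time window; message passing over the network is abstracted away.
    """
    votes = [neighbor_verify(r, current_result) for r in neighbor_results]
    ones, zeros = votes.count(1), votes.count(0)
    return "normal" if ones > zeros else "abnormal"

# Hypothetical usage: device A classified its environment as "factory";
# neighbors B, C, D report "factory", "factory", "tunnel".
state = authenticate_device("factory", ["factory", "factory", "tunnel"])
# state == "normal" because two of the three neighbors agree
```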
The safety constraint method of this embodiment uses environment side channel information for verification between adjacent unmanned intelligent devices and the current unmanned intelligent device. Addressing the lack of transparency of user operation in existing authentication schemes, it imposes the safety constraint implicitly and passively, without the unmanned intelligent device perceiving it and without any active operation by the user, thereby achieving operational transparency and meeting the device authentication requirements of a scene in which multiple unmanned intelligent devices in a self-organizing network work cooperatively.
The safety constraint method of this embodiment continuously monitors the working condition of the unmanned intelligent equipment through real-time acquisition of environment side channel information and mutual communication between the unmanned intelligent devices, thereby achieving continuous, uninterrupted authentication.
The safety constraint method can be realized by utilizing a plurality of unmanned intelligent devices in a collaborative working scene, does not need additional device support and a large amount of capital support, has low cost and is suitable for a large-scale device authentication scene.
Example two
On the basis of the first embodiment, the present embodiment provides another unmanned intelligent device security constraint method based on environment side channel information verification, including the steps of:
s1, a plurality of unmanned intelligent devices in a collaborative working scene respectively collect side channel information of the environment where the unmanned intelligent devices are located.
And S2, each unmanned intelligent device filters the side channel information and classifies the filtered side channel information to obtain a classification result.
And S3, verifying the classification result of the current unmanned intelligent device by using the classification result of the adjacent unmanned intelligent device to obtain a verification result. The method specifically comprises the following steps:
and S31, the adjacent unmanned intelligent devices respectively send the classification results of the adjacent unmanned intelligent devices to the current unmanned intelligent device.
For example, suppose unmanned intelligent device A is to be authenticated. During task execution, the sensors on each unmanned intelligent device continuously acquire side channel information in real time, and the information is processed by the method of step S2 to obtain the classification result of the current side channel information. Then, the adjacent unmanned intelligent devices B, C, D each send their own classification result M_i (i = b, c, d) to unmanned intelligent device A.
It should be noted that B, C, D may send their classification results M_i (i = b, c, d) for the interval t to unmanned intelligent device A at intervals of time t, or may send the classification results M_i (i = b, c, d) to device A in real time.
S32, the current unmanned intelligent device respectively compares and verifies the classification result of the current unmanned intelligent device with the received classification result of each adjacent unmanned intelligent device; when the classification result of the current unmanned intelligent device is consistent with the classification result of the adjacent unmanned intelligent device, obtaining a first verification result; and when the classification result of the current unmanned intelligent device is inconsistent with the classification result of the adjacent unmanned intelligent device, obtaining a second verification result.
Specifically, after unmanned intelligent device A receives the classification results M_i (i = b, c, d), it compares its own classification result M against each received classification result M_i for verification. When M = M_i, the verification result is 1 and a first verification result is obtained; otherwise, the verification result is 0 and a second verification result is obtained. After the verification is completed, unmanned intelligent device A holds a first or second verification result for each neighbor.
And S4, judging whether the current unmanned intelligent equipment is in a normal working state or not according to the verification result.
Please refer to embodiment one for the specific execution steps of steps S1, S2, and S4, and please refer to embodiment one for the technical effect achieved in this embodiment, which is not described in detail herein.
EXAMPLE III
On the basis of the first embodiment and the second embodiment, the embodiment provides another unmanned intelligent device security constraint method based on environment side channel information verification, and the method comprises the following steps:
s1, a plurality of unmanned intelligent devices in a collaborative working scene respectively collect side channel information of the environment where the unmanned intelligent devices are located.
And S2, each unmanned intelligent device filters the side channel information and classifies the filtered side channel information to obtain a classification result.
And S3, verifying the classification result of the current unmanned intelligent device by using the classification result of the adjacent unmanned intelligent device to obtain a verification result.
Specifically, the classification result of the current unmanned intelligent device is verified by using the classification result of any one of the adjacent unmanned intelligent devices in an unmanned intelligent device cluster formed by a plurality of credible adjacent unmanned intelligent devices, so that a first verification result or a second verification result is obtained.
It will be appreciated that several neighboring unmanned intelligent devices are set as mutually trusted, so that they form a cluster of unmanned intelligent devices. Further, when the current unmanned intelligent device is authenticated, since the adjacent devices in the cluster trust each other, the classification result of the current unmanned intelligent device can be verified using only the classification result of any one adjacent device in the cluster.
Further, when the classification result of any neighboring unmanned intelligent device in the cluster is used to verify the classification result of the current unmanned intelligent device, the method for performing comparison verification in the neighboring unmanned intelligent device described in embodiment one may be used, or the method for performing comparison verification in the current unmanned intelligent device described in embodiment two may be used, which is not described in detail in this embodiment.
Further, when the classification result of any one adjacent unmanned intelligent device in the cluster is consistent with the classification result of the current unmanned intelligent device, the verification result is 1, and a first verification result is obtained; and when the classification result of any adjacent unmanned intelligent equipment in the cluster is inconsistent with the classification result of the current unmanned intelligent equipment, the verification result is 0, and a second verification result is obtained.
And S4, judging whether the current unmanned intelligent equipment is in a normal working state or not according to the verification result.
Specifically, when the verification result is the first verification result (1), the current unmanned intelligent device is in a normal working state; when the verification result is the second verification result (0), the current unmanned intelligent device is in an abnormal working state.
Please refer to embodiment one for the specific execution steps of steps S1 and S2, and please refer to embodiment one for the technical effect achieved in this embodiment, which is not described in detail in this embodiment.
Example four
On the basis of the first embodiment, the second embodiment and the third embodiment, the embodiment provides another unmanned intelligent device security constraint method based on environment side channel information verification, and the method comprises the following steps:
s1, a plurality of unmanned intelligent devices in a collaborative working scene respectively collect side channel information of the environment where the unmanned intelligent devices are located.
And S2, each unmanned intelligent device filters the side channel information and classifies the filtered side channel information to obtain a classification result.
And S3, verifying the classification result of the current unmanned intelligent device by using the classification result of the adjacent unmanned intelligent device to obtain a verification result.
Specifically, the real-time environment side channel information classification result of the current unmanned intelligent device is verified by using the initial environment side channel information classification result of the adjacent unmanned intelligent device, so that a verification result is obtained.
For example, suppose unmanned intelligent device A is to be authenticated. After device A transmits its classification result M to the adjacent unmanned intelligent devices B, C, D, each of them compares the environment side channel classification result that matched its task assignment at the initial moment with the received classification result M for verification. In other words, in this embodiment the initial-time classification results of the adjacent devices B, C, D are always compared with the real-time classification result of unmanned intelligent device A.
And S4, judging whether the current unmanned intelligent equipment is in a normal working state or not according to the verification result.
Please refer to embodiment one for the specific execution steps of steps S1, S2, and S4, and please refer to embodiment one for the technical effect achieved in this embodiment, which is not described in detail herein.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.

Claims (10)

1. An unmanned intelligent device safety constraint method based on environment side channel information verification is characterized by comprising the following steps:
the method comprises the following steps that a plurality of unmanned intelligent devices in a collaborative working scene respectively collect side channel information of environments where the unmanned intelligent devices are located;
each unmanned intelligent device filters the side channel information and classifies the filtered side channel information to obtain a classification result;
verifying the classification result of the current unmanned intelligent device by using the classification result of the adjacent unmanned intelligent device to obtain a verification result;
and judging whether the current unmanned intelligent equipment is in a normal working state or not according to the verification result.
2. The unmanned intelligent device security constraint method based on environment side channel information verification, according to claim 1, wherein the side channel information comprises one or more of sound information, temperature information, light information, behavior information, time information, and location information.
3. The method for security constraint of unmanned intelligent devices based on environment side channel information verification according to claim 1, wherein each unmanned intelligent device filters the side channel information and classifies the filtered side channel information to obtain a classification result, comprising:
each unmanned intelligent device filters the side channel information by using a wavelet filtering method to obtain filtered side channel information;
and classifying the filtered side channel information by using a multi-classification method based on machine learning to obtain a classification result.
4. The method for security constraint of unmanned intelligent device based on environment side channel information verification according to claim 1, wherein the step of verifying the classification result of the current unmanned intelligent device by using the classification result of the neighboring unmanned intelligent device to obtain the verification result comprises:
and verifying the real-time environment side channel information classification result of the current unmanned intelligent equipment by using the initial environment side channel information classification result of the adjacent unmanned intelligent equipment to obtain a verification result.
5. The method for security constraint of unmanned intelligent devices based on verification of environment side channel information as claimed in claim 1, wherein verifying the classification result of current unmanned intelligent device by using the classification result of neighboring unmanned intelligent device to obtain verification result comprises:
the current unmanned intelligent equipment respectively sends the classification result of the current unmanned intelligent equipment to a plurality of adjacent unmanned intelligent equipment;
each adjacent unmanned intelligent device compares the classification result with the received classification result of the current unmanned intelligent device for verification; when the classification result of the adjacent unmanned intelligent equipment is consistent with the classification result of the current unmanned intelligent equipment, obtaining a first verification result; when the classification result of the adjacent unmanned intelligent equipment is inconsistent with the classification result of the current unmanned intelligent equipment, obtaining a second verification result;
and each adjacent unmanned intelligent device sends the first verification result or the second verification result to the current unmanned intelligent device.
6. The method for security constraint of unmanned intelligent devices based on verification of environment side channel information as claimed in claim 1, wherein verifying the classification result of current unmanned intelligent device by using the classification result of neighboring unmanned intelligent device to obtain verification result comprises:
the adjacent unmanned intelligent devices respectively send classification results of the adjacent unmanned intelligent devices to the current unmanned intelligent device;
the current unmanned intelligent equipment respectively compares and verifies the classification result of the current unmanned intelligent equipment with the received classification result of each adjacent unmanned intelligent equipment; when the classification result of the current unmanned intelligent device is consistent with the classification result of the adjacent unmanned intelligent device, obtaining a first verification result; and when the classification result of the current unmanned intelligent device is inconsistent with the classification result of the adjacent unmanned intelligent device, obtaining a second verification result.
7. The method for security constraint of unmanned intelligent device based on environment side channel information verification according to claim 5 or 6, wherein the step of determining whether the current unmanned intelligent device is in a normal working state according to the verification result comprises:
and the current unmanned intelligent equipment counts the number of the first verification results and the number of the second verification results, and judges whether the current unmanned intelligent equipment is in a normal working state or not according to the number of the first verification results and the number of the second verification results.
8. The method for security constraint of unmanned intelligent device based on environment side channel information verification according to claim 7, wherein determining whether the current unmanned intelligent device is in a normal working state according to the number of the first verification results and the number of the second verification results comprises:
when the number of the first verification results is larger than that of the second verification results, the current unmanned intelligent equipment is in a normal working state;
and when the number of the first verification results is less than or equal to the number of the second verification results, the current unmanned intelligent equipment is in an abnormal working state.
9. The method for security constraint of unmanned intelligent devices based on verification of environment side channel information as claimed in claim 1, wherein verifying the classification result of current unmanned intelligent device by using the classification result of neighboring unmanned intelligent device to obtain verification result comprises:
and verifying the classification result of the current unmanned intelligent equipment by using the classification result of any one adjacent unmanned intelligent equipment in an unmanned intelligent equipment cluster consisting of a plurality of credible adjacent unmanned intelligent equipment to obtain a first verification result or a second verification result.
10. The method for security constraint of the unmanned intelligent device based on environmental side channel information verification of claim 9, wherein determining whether the current unmanned intelligent device is in a normal working state according to the verification result comprises:
when the verification result is the first verification result, the current unmanned intelligent equipment is in a normal working state; and when the verification result is the second verification result, the current unmanned intelligent equipment is in an abnormal working state.
CN202210669540.3A 2022-06-14 2022-06-14 Unmanned intelligent device safety constraint method based on environment side channel information verification Active CN115225322B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210669540.3A CN115225322B (en) 2022-06-14 2022-06-14 Unmanned intelligent device safety constraint method based on environment side channel information verification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210669540.3A CN115225322B (en) 2022-06-14 2022-06-14 Unmanned intelligent device safety constraint method based on environment side channel information verification

Publications (2)

Publication Number Publication Date
CN115225322A true CN115225322A (en) 2022-10-21
CN115225322B CN115225322B (en) 2024-02-02

Family

ID=83607116

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210669540.3A Active CN115225322B (en) 2022-06-14 2022-06-14 Unmanned intelligent device safety constraint method based on environment side channel information verification

Country Status (1)

Country Link
CN (1) CN115225322B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017088706A1 (en) * 2015-11-26 2017-06-01 中国银联股份有限公司 Geographical location-based mobile device collaborative authentication method and system
EP3385875A1 (en) * 2017-04-06 2018-10-10 Bundesdruckerei GmbH Method and system for authentication
CN108227746A (en) * 2018-01-23 2018-06-29 深圳市科卫泰实业发展有限公司 A kind of unmanned plane cluster control system and method
US20200169400A1 (en) * 2018-11-27 2020-05-28 Microsoft Technology Licensing Llc Trusted execution based on environmental factors
CN109917767A (en) * 2019-04-01 2019-06-21 中国电子科技集团公司信息科学研究院 A kind of distribution unmanned plane cluster autonomous management system and control method
US20200358794A1 (en) * 2019-05-06 2020-11-12 Cisco Technology, Inc. Continuous validation of active labeling for device type classification
CN111884817A (en) * 2020-08-18 2020-11-03 重庆交通大学 Method for realizing distributed unmanned aerial vehicle cluster network secure communication
CN112180985A (en) * 2020-10-26 2021-01-05 中国人民解放军国防科技大学 Small airborne cooperative control system supporting cluster control of multiple unmanned aerial vehicles
WO2022095616A1 (en) * 2020-11-03 2022-05-12 国网智能科技股份有限公司 On-line intelligent inspection system and method for transformer substation
KR102392576B1 (en) * 2020-11-26 2022-04-29 숭실대학교 산학협력단 Method for verifying integrity of aritificial intelligence model, computing device and system for executing the method
CN112433856A (en) * 2020-12-04 2021-03-02 中国科学技术大学 Decentralization autonomous decision-making method for unmanned plane swarm network
CN113156524A (en) * 2021-05-20 2021-07-23 一飞(海南)科技有限公司 Method, device, medium and terminal for detecting geomagnetic interference in flying field of cluster unmanned aerial vehicle
CN113467517A (en) * 2021-07-30 2021-10-01 河北科技大学 Flight control method and system of unmanned aerial vehicle cluster under fault condition
CN114155396A (en) * 2021-11-24 2022-03-08 信通院车联网创新中心(成都)有限公司 Multi-source data selective fusion method oriented to unmanned vehicle multi-environment positioning
CN114043990A (en) * 2021-12-15 2022-02-15 吉林大学 Multi-scene traffic vehicle driving state analysis system and method considering auditory information

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
G. MANOGARAN, C. -H. HSU, P. M. SHAKEEL AND M. ALAZAB: "Non-Recurrent Classification Learning Model for Drone Assisted Vehicular Ad-Hoc Network Communication in Smart Cities", IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, pages 2792 - 2800 *
XIAOPAN ZHU; CHUNJIANG BIAN; YU CHEN; SHI CHEN: "A Low Latency Clustering Method for Large-Scale Drone Swarms", IEEE, vol. 7, pages 260 - 267 *
井田;王涛;王维平;李小波;周鑫;: "An adaptive scale regulation method for persistent-reconnaissance UAV swarms (一种持续侦察无人机集群规模自适应调控方法)", Journal of Computer Research and Development (计算机研究与发展), no. 06, pages 140 - 148 *
周万锴;龙敏;: "A blockchain-based secure transmission scheme for environmental monitoring data (基于区块链的环境监测数据安全传输方案)", Computer Science (计算机科学), no. 01, pages 321 - 326 *
杨闯;刘建业;熊智;赖际舟;熊骏;: "Brain-inspired navigation technology integrating perception and action decision: research status and future development (由感知到动作决策一体化的类脑导航技术研究现状与未来发展)", Acta Aeronautica et Astronautica Sinica (航空学报), no. 01, pages 35 - 49 *
霍梦真;魏晨;于月平;赵建霞;: "A clustering optimization algorithm for large-scale UAV swarms based on pigeon swarm intelligence behavior (基于鸽群智能行为的大规模无人机集群聚类优化算法)", Scientia Sinica Technologica (中国科学:技术科学), no. 04, pages 111 - 118 *
马助兴;付炜平;李焱;谷浩;康哲;: "Design and implementation of an IoT-based intelligent security management and control system for substations (基于物联网技术的变电站智能安全管控系统的设计及实现)", Electronic Measurement Technology (电子测量技术), no. 23, pages 12 - 20 *

Also Published As

Publication number Publication date
CN115225322B (en) 2024-02-02

Similar Documents

Publication Publication Date Title
CN110163611B (en) Identity recognition method, device and related equipment
Shi et al. Smart user authentication through actuation of daily activities leveraging WiFi-enabled IoT
US20200210561A1 (en) Biometric authentication, identification and detection method and device for mobile terminal and equipment
KR101433472B1 (en) Apparatus, method and computer readable recording medium for detecting, recognizing and tracking an object based on a situation recognition
CN111505632B (en) Ultra-wideband radar action attitude identification method based on power spectrum and Doppler characteristics
Zerrouki et al. Accelerometer and camera-based strategy for improved human fall detection
Chen et al. Combining fractional‐order edge detection and chaos synchronisation classifier for fingerprint identification
Shi A Novel Ensemble Learning Algorithm Based on DS Evidence Theory for IoT Security.
CN113609976A (en) Direction-sensitive multi-gesture recognition system and method based on WiFi (Wireless Fidelity) equipment
Wang et al. A survey of user authentication based on channel state information
Li et al. Adaptive deep feature fusion for continuous authentication with data augmentation
Al-Nima et al. Human identification using local binary patterns for finger outer knuckle
CN116343261A (en) Gesture recognition method and system based on multi-modal feature fusion and small sample learning
CN113742669B (en) User authentication method based on twin network
Li et al. ClickLeak: Keystroke leaks through multimodal sensors in cyber-physical social networks
El_Tokhy Robust multimodal biometric authentication algorithms using fingerprint, iris and voice features fusion
CN110688969A (en) Video frame human behavior identification method
CN115225322B (en) Unmanned intelligent device safety constraint method based on environment side channel information verification
WO2018113206A1 (en) Image processing method and terminal
Pandey et al. Csi-based joint location and activity monitoring for covid-19 quarantine environments
CN106714163A (en) Gesture behavior authentication model constructing method and system based on posture change
Yuan et al. Fingerprint liveness detection adapted to different fingerprint sensors based on multiscale wavelet transform and rotation-invarient local binary pattern
Li et al. iwalk: Let your smartphone remember you
Chaitanya et al. Verification of pattern unlock and gait behavioural authentication through a machine learning approach
US20210232801A1 (en) Model-based iterative reconstruction for fingerprint scanner

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant