WO2022075678A2 - Apparatus and method for detecting abnormal symptoms of a vehicle based on self-supervised learning using pseudo-normal data - Google Patents


Info

Publication number
WO2022075678A2
WO2022075678A2 (PCT/KR2021/013572)
Authority
WO
WIPO (PCT)
Prior art keywords
normal data
data
vehicle
neural network
network model
Prior art date
Application number
PCT/KR2021/013572
Other languages
English (en)
Korean (ko)
Other versions
WO2022075678A3 (fr)
Inventor
김휘강
송현민
Original Assignee
고려대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020210006212A external-priority patent/KR102506805B1/ko
Application filed by 고려대학교 산학협력단 filed Critical 고려대학교 산학협력단
Publication of WO2022075678A2 publication Critical patent/WO2022075678A2/fr
Publication of WO2022075678A3 publication Critical patent/WO2022075678A3/fr

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00: Network architectures or network communication protocols for network security

Definitions

  • the present disclosure relates to a vehicle anomaly detection method, and more particularly, to a vehicle anomaly detection method based on self-supervised learning using pseudo-normal data.
  • IVN: In-Vehicle Network
  • CAN: Controller Area Network
  • LIN: Local Interconnect Network
  • FlexRay
  • CAN is well known as the de facto standard for IVNs and is the most widely deployed.
  • although CAN provides an efficient and economical communication channel between ECUs, it lacks security functions and is vulnerable to cyber threats. For example, because CAN does not require a separate authentication procedure when a device connects, an external device other than the user's device can also be easily connected.
  • the present disclosure has been devised in response to the above-described background technology, and an object of the present disclosure is to provide a method for detecting anomalies in a vehicle network based on self-supervised learning using pseudo-normal data.
  • a method for detecting anomalies in a vehicle network based on self-supervised learning using pseudo-normal data is disclosed.
  • a vehicle abnormal-symptom detection method for solving the above-described problems includes: obtaining normal data generated in the vehicle; pre-processing the obtained normal data; generating pseudo-normal data by inputting the pre-processed normal data into a pre-trained first neural network model; training a second neural network model based on the generated pseudo-normal data; and detecting abnormal signs of the vehicle by inputting data generated in the vehicle into the trained second neural network model.
  • the acquiring of the normal data may include acquiring controller area network (CAN) traffic data generated in a vehicle in a normal state.
  • the pre-processing of the normal data may include extracting a CAN ID from the CAN messages included in the normal data and generating a CAN ID sequence based on the extracted CAN IDs.
  • the CAN ID sequence may be expressed in hexadecimal or binary data.
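As a concrete illustration of this pre-processing step, the sketch below extracts the CAN ID field from candump-style log lines and builds an ID sequence. The log format and function name are assumptions for illustration only; the disclosure does not fix a particular capture format.

```python
def extract_can_id_sequence(log_lines):
    """Return the CAN IDs (hex strings) of a trace in arrival order."""
    sequence = []
    for line in log_lines:
        # e.g. "(0.000123) can0 0A0#11223344" -> the ID field is "0A0"
        frame = line.split()[-1]       # "0A0#11223344"
        can_id = frame.split("#")[0]   # "0A0"
        sequence.append(can_id)
    return sequence

trace = [
    "(0.000) can0 0A0#1122",
    "(0.001) can0 0B0#33",
    "(0.002) can0 0A0#44",
]
print(extract_can_id_sequence(trace))  # ['0A0', '0B0', '0A0']
```

The resulting hex-string sequence can then be converted to the integer or binary representation each model expects.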
  • the generating of the pseudo-normal data may include inputting the pre-processed normal data into the pre-trained first neural network model and generating pseudo-normal data by predicting, through the pre-trained first neural network model, the CAN ID that appears after each CAN ID included in the normal data.
  • in the inputting of the pre-processed normal data into the pre-trained first neural network model, normal data including an arbitrary CAN ID or CAN ID sequence can be input to the first neural network model.
  • the generating of the pseudo-normal data may include predicting and selecting the next CAN ID according to the probability distribution of the CAN ID that appears after each CAN ID included in the normal data.
  • the generating of the pseudo-normal data may include adding noise by selecting an arbitrary CAN ID according to a uniform distribution when selecting the next CAN ID.
  • when adding the noise, an arbitrary CAN ID may be selected with a uniform distribution based on a preset noise ratio.
  • the pseudo-normal data may include a CAN ID sequence having an arbitrary length.
  • the pseudo normal data may include a CAN ID sequence having the same length as that of normal data input to the first neural network model.
  • the pseudo-normal data includes a CAN ID sequence, and some CAN IDs among all CAN IDs of the CAN ID sequence may be selected with a uniform distribution according to a preset noise ratio.
  • when a CAN ID or CAN ID sequence extracted from the normal data is input, the first neural network model can be pre-trained to predict the probability distribution of the CAN ID that appears after the input CAN ID or CAN ID sequence.
  • the pre-training of the first neural network model may include receiving the CAN ID extracted from the normal data and converting it into a vector of a certain size, extracting the context of a given sequence based on the converted vectors, and learning to predict the probability distribution of the CAN ID that appears after the input CAN ID based on the extracted sequence context.
  • the first neural network model may include an embedding layer that receives the CAN ID extracted from the normal data and converts it into a vector of a certain size, a Long Short-Term Memory (LSTM) layer that extracts the context of a given sequence based on the converted vectors, and a dense layer that predicts the probability distribution of the CAN ID that appears after the input CAN ID based on the extracted sequence context.
  • the training of the second neural network model may include inputting the pre-processed normal data and the pseudo-normal data into the second neural network model and training it to classify the pseudo-normal data as abnormal data.
  • the training of the second neural network model may include inputting the pre-processed normal data and additionally acquired attack-type hint data into the second neural network model and training it to classify the attack-type hint data as abnormal data.
  • the training of the second neural network model may include, when training based on at least one of the pseudo-normal data, the attack-type hint data, and the abnormal data, limiting the size of the backpropagated gradient to below a threshold value.
  • the detecting of the vehicle anomaly may include acquiring data generated in the vehicle, pre-processing the acquired data, and detecting abnormal signs of the vehicle by inputting the pre-processed data into the pre-trained second neural network model and classifying it as normal data or abnormal data.
  • according to an embodiment of the present disclosure, a computer program that, when executed on one or more processors, performs the following operations for detecting abnormal signs of a vehicle is disclosed. The operations may include: acquiring normal data generated in the vehicle; pre-processing the acquired normal data; generating pseudo-normal data by inputting the pre-processed normal data into a pre-trained first neural network model; training a second neural network model based on the generated pseudo-normal data; and detecting abnormal signs of the vehicle by inputting data generated from the vehicle into the trained second neural network model.
  • a computing device for providing a vehicle anomaly detection method includes a processor with one or more cores and a memory. The processor may acquire normal data generated in the vehicle, pre-process the acquired normal data, generate pseudo-normal data by inputting the pre-processed normal data into a pre-trained first neural network model, train a second neural network model based on the generated pseudo-normal data, and detect an abnormal symptom of the vehicle by inputting data generated from the vehicle into the trained second neural network model.
  • FIG. 1 is a diagram illustrating a block diagram of a computing device that performs an operation for providing a method for detecting anomalies in a vehicle according to an embodiment of the present disclosure.
  • FIG. 2 is a diagram illustrating a block configuration diagram of a processor for explaining a method for learning and detecting a vehicle anomaly detection model according to an embodiment of the present disclosure.
  • FIG. 3 is a diagram exemplarily illustrating a neural network model of a pseudo-normal data generator, according to an embodiment of the present disclosure.
  • FIG. 4 is a diagram illustrating an example of a decision boundary of a supervised learning model according to learning data, according to an embodiment of the present disclosure.
  • FIG. 5 is a diagram illustrating a flowchart of a method for detecting anomalies in a vehicle according to an embodiment of the present disclosure.
  • FIG. 6 depicts a general schematic diagram of an exemplary computing environment in which embodiments of the present disclosure may be implemented.
  • a controller area network (CAN) anomaly detection system may include a first model of a long short term memory (LSTM) based pseudo normal data generator and a second model of an anomaly detection unit.
  • the first model of the LSTM-based pseudo-normal data generator may generate pseudo-normal data imitating normal CAN traffic collected from a vehicle in a general situation in which there are no abnormal signs of the vehicle.
  • the second model of the abnormality detection unit may detect an abnormality in CAN traffic.
  • FIG. 1 is a diagram illustrating a block diagram of a computing device that performs an operation for providing a method for detecting anomalies in a vehicle according to an embodiment of the present disclosure.
  • the configuration of the computing device 100 shown in FIG. 1 is only a simplified example.
  • the computing device 100 may include other components for performing the computing environment of the computing device 100 , and only some of the disclosed components may configure the computing device 100 .
  • the computing device 100 may include a processor 110 , a memory 130 , and a network unit 150 .
  • the processor 110 can detect vehicle anomalies based on self-supervised learning using pseudo-normal data, and can effectively train a vehicle anomaly detection model in a limited data environment.
  • the processor 110 acquires normal data generated in a vehicle, pre-processes the acquired normal data, generates pseudo-normal data by inputting the pre-processed normal data into a pre-trained first neural network model, trains a second neural network model based on the generated pseudo-normal data, and can detect abnormal signs of the vehicle by inputting data generated from the vehicle into the trained second neural network model.
  • the processor 110 may acquire controller area network (CAN) traffic data generated in a vehicle in a normal state.
  • when pre-processing the normal data, the processor 110 extracts a CAN ID from the CAN messages included in the normal data and can generate a CAN ID sequence based on the extracted CAN IDs.
  • the CAN ID sequence may be expressed as hexadecimal or binary data.
  • when generating pseudo-normal data, the processor 110 inputs the pre-processed normal data into the pre-trained first neural network model and can generate pseudo-normal data by predicting, through the pre-trained first neural network model, the CAN ID that appears after each CAN ID included in the normal data.
  • when the processor 110 inputs the pre-processed normal data into the pre-trained first neural network model, normal data including an arbitrary CAN ID or CAN ID sequence can be input to the first neural network model.
  • when generating pseudo-normal data, the processor 110 can predict and select the next CAN ID according to the probability distribution of the CAN ID that appears after each CAN ID included in the normal data.
  • the processor 110 may add noise by selecting an arbitrary CAN ID according to a uniform distribution.
  • the processor 110 may select an arbitrary CAN ID with a uniform distribution based on a preset noise ratio.
  • the pseudo-normal data of the present disclosure may include a CAN ID sequence having an arbitrary length.
  • the pseudo normal data may include a CAN ID sequence having the same length as that of normal data input to the first neural network model.
  • the pseudo-normal data may include a CAN ID sequence, and some CAN IDs may be selected with a uniform distribution according to a preset noise ratio among all CAN IDs of the CAN ID sequence. The foregoing is merely an example, and the present disclosure is not limited thereto.
  • when a CAN ID or CAN ID sequence extracted from normal data is input, the first neural network model of the present disclosure is pre-trained to predict the probability distribution of the CAN ID that appears after the input CAN ID or CAN ID sequence.
  • pre-training of the first neural network model may receive the CAN ID extracted from normal data, transform it into a vector of a certain size, extract the context of the given sequence based on the transformed vectors, and learn to predict the probability distribution of the CAN ID that appears after the input CAN ID based on the extracted sequence context.
  • the first neural network model may include an embedding layer that receives a CAN ID extracted from normal data and transforms it into a vector of a certain size, a Long Short-Term Memory (LSTM) layer that extracts the context of a given sequence based on the transformed vectors, and a dense layer that predicts the probability distribution of the CAN ID that appears after the input CAN ID based on the extracted sequence context.
  • when training the second neural network model, the processor 110 can input the pre-processed normal data and the pseudo-normal data into the second neural network model and train it to classify the pseudo-normal data as abnormal data.
  • when training the second neural network model, the processor 110 can input the pre-processed normal data and additionally acquired attack-type hint data into the second neural network model and train it to classify the attack-type hint data as abnormal data.
  • when training the second neural network model based on at least one of the pseudo-normal data, the attack-type hint data, and the abnormal data, the processor 110 can limit the size of the backpropagated gradient to below a threshold.
  • when detecting an abnormal symptom of the vehicle, the processor 110 obtains data generated in the vehicle, pre-processes the obtained data, and can detect abnormal signs of the vehicle by inputting the pre-processed data into the pre-trained second neural network model and classifying it as normal data or abnormal data.
  • the processor 110 may acquire controller area network (CAN) traffic data generated in an abnormal or normal vehicle.
  • the processor 110 may extract a CAN ID from CAN messages included in the data, and generate a CAN ID sequence based on the extracted CAN ID.
  • the processor 110 may include one or more cores, and may include processors for deep learning, such as the central processing unit (CPU), general-purpose graphics processing unit (GPGPU), and tensor processing unit (TPU) of the computing device 100.
  • the processor 110 may read a computer program stored in the memory 130 to detect anomalies of the vehicle according to an embodiment of the present disclosure. According to an embodiment of the present disclosure, the processor 110 may perform an operation for detecting abnormal signs of a vehicle.
  • the processor 110 may perform learning of the neural network, such as processing input data for deep learning, extracting features from the input data, calculating an error, and updating the weights of the neural network using backpropagation.
  • at least one of the CPU, GPGPU, and TPU of the processor 110 may process learning of a network function.
  • the CPU and GPGPU can process learning of a network function and detection of anomalies in a vehicle using the network function.
  • learning of a network function and detection of anomalies of an unmanned moving object using the network function may be processed by using the processors of a plurality of computing devices together.
  • the computer program executed in the computing device according to an embodiment of the present disclosure may be a CPU, GPGPU or TPU executable program.
  • the memory 130 may store any type of information generated or determined by the processor 110 and any type of information received by the network unit 150 .
  • the memory 130 may include at least one type of storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card-type memory (e.g., SD or XD memory), Random Access Memory (RAM), Static Random Access Memory (SRAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Programmable Read-Only Memory (PROM), a magnetic memory, a magnetic disk, and an optical disk.
  • the computing device 100 may operate in relation to a web storage that performs a storage function of the memory 130 on the Internet.
  • the description of the above-described memory is only an example, and the present disclosure is not limited thereto.
  • the network unit 150 may transmit/receive data for detecting abnormal signs of a vehicle to/from other computing devices, servers, and the like.
  • the network unit 150 may transmit/receive data to and from other computing devices, servers, and the like in order to detect anomalies of the vehicle.
  • the network unit 150 may enable communication between a plurality of computing devices so that learning of a network function is performed in a distributed manner in each of the plurality of computing devices.
  • the network unit 150 may enable communication between a plurality of computing devices to distribute analysis data generation using a network function.
  • the network unit 150 may be configured regardless of its communication mode, wired or wireless, and may be composed of various communication networks such as a personal area network (PAN) and a wide area network (WAN).
  • the network unit 150 may be the known World Wide Web (WWW), and may use wireless transmission technologies used for short-range communication, such as Infrared Data Association (IrDA) or Bluetooth.
  • the present disclosure may generate pseudo-normal data to detect a new type of anomaly that the model has not learned while maintaining the performance advantage of self-supervised learning and utilize it for model training. Accordingly, according to the present disclosure, the generated model can detect not only the learned type of anomaly but also the new type of attack data.
  • the present disclosure can detect not only the types of anomalies used for model training but also new types of anomalies. Because an existing self-supervised learning-based model learns the boundary between normal and abnormal data points in the data space, it depends on the abnormal data used for training and has the limitation of being unable to determine new types of anomalies. In the present disclosure, both learned and new types of anomalies can be detected by having the model learn the boundary of the spatial region in which normal data points are distributed.
  • FIG. 2 is a diagram illustrating a block configuration diagram of a processor for explaining a method for learning and detecting a vehicle anomaly detection model according to an embodiment of the present disclosure.
  • the processor of the present disclosure may include a preprocessor 210, a pseudo-normal data generator 220 including a first model, and an abnormality detector 230 including a second model.
  • the preprocessor 210 extracts a CAN ID from the CAN messages included in the normal data and can create a CAN ID sequence based on the extracted CAN IDs.
  • the CAN ID sequence may be expressed as hexadecimal or binary data.
  • the preprocessor 210 may perform a data preprocessing process, and may extract only information necessary for a model from the CAN traffic data. Specifically, the preprocessor 210 may extract CAN ID information from CAN messages included in CAN traffic and convert it into CAN ID sequence data. In this case, the CAN ID sequence data may be expressed as hexadecimal or binary data. However, the present invention is not limited thereto.
  • CAN IDs may be extracted from the CAN message and divided to form a CAN ID sequence.
  • the CAN ID may be displayed in different forms in the first model of the pseudo-normal data generator 220 and the second model of the abnormality detector 230 .
  • each CAN ID expressed as a hexadecimal string may be mapped to an integer representation as an index from 0 to the number of CAN IDs of CAN traffic.
  • each CAN ID may be converted into an 11-bit representation.
  • the CAN ID sequence converted according to each model may be divided into small batches of fixed-length subsequences and supplied to the model.
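The two representations and the fixed-length batching just described can be sketched as follows. The helper names, the non-overlapping split, and the sequence length are illustrative assumptions; the disclosure only specifies an integer index mapping for the first model, an 11-bit representation for the second, and fixed-length subsequences.

```python
def index_map(can_ids):
    """Map each distinct hex CAN ID to an integer index (0 .. number of IDs - 1),
    the integer representation used by the first model."""
    return {cid: i for i, cid in enumerate(sorted(set(can_ids)))}

def to_11bit(can_id_hex):
    """Render one standard (11-bit) CAN ID as an 11-character bit string,
    the binary representation used by the second model."""
    return format(int(can_id_hex, 16), "011b")

def to_subsequences(sequence, length):
    """Split a long ID sequence into non-overlapping fixed-length subsequences
    that are fed to the model in small batches."""
    return [sequence[i:i + length]
            for i in range(0, len(sequence) - length + 1, length)]

ids = ["0A0", "0B0", "0A0", "0C0", "0B0", "0A0"]
print(index_map(ids))            # {'0A0': 0, '0B0': 1, '0C0': 2}
print(to_11bit("0A0"))           # '00010100000'
print(to_subsequences(ids, 2))   # [['0A0', '0B0'], ['0A0', '0C0'], ['0B0', '0A0']]
```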
  • the present invention is not limited thereto.
  • the pseudo-normal data generator 220 inputs the pre-processed normal data into the pre-trained first neural network model and can generate pseudo-normal data by predicting, through the pre-trained first neural network model, the CAN ID that appears after each CAN ID included in the normal data.
  • the normal data input to the pre-trained first neural network model may be an arbitrary CAN ID or normal data including a CAN ID sequence.
  • the pseudo-normal data generator 220 may predict and select the next CAN ID according to the probability distribution of the CAN ID that appears after each CAN ID included in the normal data. Also, when selecting the next CAN ID, the pseudo-normal data generator 220 may select an arbitrary CAN ID according to a uniform distribution to add noise. When adding noise, the pseudo-normal data generator 220 may select an arbitrary CAN ID with a uniform distribution based on a preset noise ratio.
  • the pseudo-normal data may include a CAN ID sequence having any length.
  • the pseudo-normal data may include a CAN ID sequence having the same length as that of normal data input to the first neural network model of the pseudo-normal data generator 220 .
  • the pseudo-normal data may include a CAN ID sequence, and some CAN IDs may be selected with a uniform distribution according to a preset noise ratio among all CAN IDs of the CAN ID sequence.
  • when a CAN ID or CAN ID sequence extracted from normal data is input, the first neural network model of the pseudo-normal data generator 220 can be pre-trained to predict the probability distribution of the CAN ID that appears after the input CAN ID or CAN ID sequence. Pre-training of the first neural network model may receive the CAN ID extracted from normal data, transform it into a vector of a certain size, extract the context of the given sequence based on the transformed vectors, and learn to predict the probability distribution of the CAN ID that appears after the input CAN ID based on the extracted sequence context.
  • the first neural network model may include an embedding layer that receives a CAN ID extracted from normal data and transforms it into a vector of a certain size, a Long Short-Term Memory (LSTM) layer that extracts the context of a given sequence based on the transformed vectors, and a dense layer that predicts the probability distribution of the CAN ID that appears after the input CAN ID based on the extracted sequence context.
  • the first neural network model of the pseudo-normal data generator 220 may generate pseudo-normal data based on Long Short Term Memory (LSTM), which is a representative Recurrent Neural Network (RNN) type.
  • the LSTM network may be suitable for processing time series data such as voice and video using a feedback connection.
  • the present invention is not limited thereto.
  • An input to the first neural network model of the pseudo-normal data generator 220 may be a CAN ID or a series of CAN IDs.
  • the first neural network model of the pseudo-normal data generator 220 may be trained to predict which CAN ID is most likely to be the next CAN ID at each time step based on a given CAN ID or series of CAN IDs. .
  • the present invention is not limited thereto.
  • the second neural network model of the abnormality detection unit 230 may be trained based on the pseudo-normal data generated by the pseudo-normal data generator 220.
  • the learned second neural network model may receive data generated from the vehicle and detect an abnormal symptom of the vehicle.
  • the second neural network model of the abnormality detection unit 230 may receive the pre-processed normal data and the pseudo-normal data and learn to classify the pseudo-normal data as abnormal data.
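The labeling scheme just described can be sketched as follows: normal sequences are labeled 0 and pseudo-normal sequences are deliberately labeled 1 (abnormal) before being fed to the second model. The array layout and the choice of downstream classifier are assumptions; the disclosure does not fix a particular second-model architecture.

```python
import numpy as np

def build_training_set(normal_seqs, pseudo_normal_seqs):
    """Assemble inputs and labels for the second model: pseudo-normal
    sequences are treated as abnormal (label 1), normal traffic as label 0."""
    X = np.array(normal_seqs + pseudo_normal_seqs)
    y = np.array([0] * len(normal_seqs) + [1] * len(pseudo_normal_seqs))
    return X, y

X, y = build_training_set([[1, 2, 3], [2, 3, 1]], [[3, 3, 3]])
print(X.shape, y.tolist())  # (3, 3) [0, 0, 1]
```

Any supervised classifier can then be fit on `(X, y)`; because the pseudo-normal data surrounds the normal region, the learned decision boundary encloses where normal data points are distributed.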
  • the second neural network model of the abnormality detection unit 230 may be trained to classify the attack-type hint data as abnormal data by receiving the pre-processed normal data and additionally acquired attack-type hint data.
  • when learning based on at least one of the pseudo-normal data, the attack-type hint data, and the abnormal data, the second neural network model of the abnormality detection unit 230 can limit the size of the backpropagated gradient to below a threshold. The reason is that an exploding-gradient problem may occur when the model is trained at a high learning rate; therefore, if a gradient-clipping technique that limits the size of the backpropagated gradient is applied, training can be performed at a high learning rate and model performance can be further improved.
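Norm-based gradient clipping can be sketched in a framework-agnostic way; in practice one would use the equivalent utility of the chosen training framework, and the threshold value here is an illustrative assumption.

```python
import numpy as np

def clip_gradient(grad, threshold):
    """Rescale a gradient vector so that its L2 norm never exceeds
    `threshold` (norm-based gradient clipping); gradients already
    within the threshold pass through unchanged."""
    norm = np.linalg.norm(grad)
    if norm > threshold:
        grad = grad * (threshold / norm)
    return grad

g = clip_gradient(np.array([3.0, 4.0]), 1.0)  # original norm is 5.0
print(g)  # [0.6 0.8]
```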
  • a neural network may be composed of a set of interconnected computational units, which may generally be referred to as nodes. These nodes may also be referred to as neurons.
  • a neural network is configured to include at least one or more nodes. Nodes (or neurons) constituting the neural networks may be interconnected by one or more links.
  • one or more nodes connected through a link may relatively form a relationship between an input node and an output node.
  • the concepts of an input node and an output node are relative, and any node in an output node relationship with respect to one node may be in an input node relationship in a relationship with another node, and vice versa.
  • an input node-to-output node relationship may be created around a link.
  • One or more output nodes may be connected to one input node through a link, and vice versa.
  • the value of the output node may be determined based on data input to the input node.
  • a link interconnecting the input node and the output node may have a parameter.
  • the parameters may be variable, and may be changed by a user or an algorithm in order for the neural network to perform a desired function. For example, when one or more input nodes are interconnected to one output node by respective links, the output node value may be determined based on the values input to the input nodes connected to the output node and the parameters set on the links corresponding to the respective input nodes.
  • one or more nodes are interconnected through one or more links to form an input node and an output node relationship in the neural network.
  • the characteristics of the neural network may be determined according to the number of nodes and links in the neural network, correlations between nodes and links, and parameter values assigned to each of the links. For example, when two neural networks having the same number of nodes and links and having different parameter values between the links exist, the two neural networks may be recognized as different from each other.
  • a neural network may include one or more nodes. Some of the nodes constituting the neural network may configure one layer based on distances from the initial input node. For example, a set of nodes having a distance of n from the initial input node may constitute n layers. The distance from the initial input node may be defined by the minimum number of links that must be passed to reach the corresponding node from the initial input node. However, the definition of such a layer is arbitrary for description, and the order of the layer in the neural network may be defined in a different way from the above. For example, a layer of nodes may be defined by a distance from the final output node.
  • the initial input node may mean one or more nodes to which data is directly input without going through a link in a relationship with other nodes among nodes in the neural network.
  • it may mean nodes that do not have other input nodes connected by a link.
  • the final output node may refer to one or more nodes that do not have an output node in relation to other nodes among nodes in the neural network.
  • the hidden node may mean nodes constituting the neural network other than the first input node and the last output node.
  • the neural network according to an embodiment of the present disclosure may be a neural network in which the number of nodes in the input layer is the same as the number of nodes in the output layer, and the number of nodes decreases and then increases again as one progresses from the input layer to the hidden layer.
  • the neural network according to another embodiment of the present disclosure may be a neural network in which the number of nodes in the input layer is less than the number of nodes in the output layer, and the number of nodes decreases as one progresses from the input layer to the hidden layer.
  • the neural network according to another embodiment of the present disclosure may be a neural network in which the number of nodes in the input layer is greater than the number of nodes in the output layer, and the number of nodes increases as one progresses from the input layer to the hidden layer.
  • the neural network according to another embodiment of the present disclosure may be a neural network in a combined form of the aforementioned neural networks.
  • a deep neural network may refer to a neural network including a plurality of hidden layers in addition to an input layer and an output layer.
  • Deep neural networks can be used to identify the latent structures of data. In other words, they can identify the latent structure of photos, text, video, voice, and music (e.g., what objects are in a photo, what the content and emotion of a text are, and so on).
  • Deep neural networks include convolutional neural networks (CNNs), recurrent neural networks (RNNs), auto encoders, generative adversarial networks (GANs), restricted Boltzmann machines (RBMs), deep belief networks (DBNs), Q networks, U networks, Siamese networks, and the like.
  • FIG. 3 is a diagram exemplarily illustrating a neural network model of a pseudo-normal data generator, according to an embodiment of the present disclosure.
  • the first model of the pseudo-normal data generator may include at least one embedding layer 222, at least one LSTM layer 224, and at least one dense layer 226.
  • the embedding layer 222 may serve to convert the input CAN ID into a vector of a predetermined size.
  • the LSTM layer 224 may receive the vector and extract information about the context of the sequence.
  • the dense layer 226 may finally predict a probability distribution for the next CAN ID.
  • the first model of the pseudo-normal data generator may be trained to receive CAN ID sequence data extracted from normal CAN traffic and predict a probability distribution for a CAN ID that will appear after the input sequence.
  • the present invention is not limited thereto.
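The embedding → recurrent → dense pipeline described above can be sketched as follows. This is a minimal illustrative forward pass only: the layer sizes are invented for the example, the weights are random rather than trained, and a plain RNN cell stands in for the LSTM layer, since the disclosure does not give concrete dimensions.

```python
import math
import random

random.seed(0)

NUM_IDS = 8      # number of distinct CAN IDs (assumption: a real bus has more)
EMB_DIM = 4      # embedding vector size (illustrative)
HID_DIM = 5      # recurrent state size (illustrative)

# Embedding layer: maps each CAN ID to a vector of a predetermined size.
embedding = [[random.uniform(-0.1, 0.1) for _ in range(EMB_DIM)] for _ in range(NUM_IDS)]
# Recurrent layer weights (a plain RNN cell stands in for the LSTM layer here).
W_xh = [[random.uniform(-0.1, 0.1) for _ in range(HID_DIM)] for _ in range(EMB_DIM)]
W_hh = [[random.uniform(-0.1, 0.1) for _ in range(HID_DIM)] for _ in range(HID_DIM)]
# Dense layer: projects the hidden state onto one logit per CAN ID.
W_hy = [[random.uniform(-0.1, 0.1) for _ in range(NUM_IDS)] for _ in range(HID_DIM)]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def predict_next_id_distribution(can_id_sequence):
    """Forward pass: embed each ID, run the recurrent cell, return P(next ID)."""
    h = [0.0] * HID_DIM
    for can_id in can_id_sequence:
        x = embedding[can_id]
        h = [math.tanh(sum(x[i] * W_xh[i][j] for i in range(EMB_DIM))
                       + sum(h[k] * W_hh[k][j] for k in range(HID_DIM)))
             for j in range(HID_DIM)]
    logits = [sum(h[k] * W_hy[k][c] for k in range(HID_DIM)) for c in range(NUM_IDS)]
    return softmax(logits)

probs = predict_next_id_distribution([1, 3, 2])
print([round(p, 3) for p in probs])
```

The output is a valid probability distribution over the CAN ID vocabulary, which is what the generation step samples from at each time step.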
  • the first model of the pseudo-normal data generating unit, once training has been completed, may generate pseudo-normal data.
  • the pseudo-normal data is a CAN ID sequence, and each CAN ID constituting the sequence may be selected probabilistically, at each time step, according to the probability distribution predicted by the first model of the pseudo-normal data generator.
  • when selecting the next CAN ID, the trained first model of the pseudo-normal data generator may, with a certain probability, add noise by selecting an arbitrary CAN ID from a uniform distribution rather than from the predicted probability distribution.
  • the present invention is not limited thereto.
  • the pseudo-normal data generated from the first model of the pseudo-normal data generating unit may be used together with the normal data for supervised learning of the second model of the abnormal detecting unit.
  • the second model of the abnormality detection unit may be trained to classify the pseudo-normal data and the normal data. Accordingly, the computing device of the present disclosure may improve the performance of the model by using the generated pseudo-normal data together with separately collected abnormal-symptom data as abnormal data.
  • the present invention is not limited thereto.
  • the problem in which the first model of the pseudo-normal data generator predicts the next CAN ID may be regarded as a general multi-class classification problem.
  • the first model of the LSTM-based pseudo-normal data generator may predict the class of the next CAN ID based on the given previous state of the LSTM layer and the input CAN ID.
  • a categorical cross entropy loss function may be used.
  • categorical cross entropy can be implemented by adding softmax activation before calculating cross entropy.
  • Softmax activation normalizes the C-dimensional logit vector s into a C-dimensional vector σ(s) whose components lie in the range (0, 1) and sum to 1, and can be calculated as in Equation 1 below.
  • C may represent the number of CAN IDs.
  • the vector s may represent an output logit of the last dense layer.
  • the cross entropy loss can be calculated as in Equation 2 below.
  • t_i may indicate the next CAN ID of the given sequence.
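Equations 1 and 2 themselves did not survive extraction; reconstructed from the surrounding description (softmax over the C logits s, followed by cross entropy against the target t), they would read:

```latex
% Equation 1: softmax activation over the C-dimensional logit vector s
\sigma(\mathbf{s})_i = \frac{e^{s_i}}{\sum_{j=1}^{C} e^{s_j}}, \quad i = 1, \dots, C

% Equation 2: cross entropy loss against the (one-hot) target t
CE = -\sum_{i=1}^{C} t_i \log \sigma(\mathbf{s})_i
```

Here C is the number of CAN IDs (or, for the binary classifier described later, C = 2), matching how the text reuses Equation 2 for the binary cross entropy loss.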
  • the computing device of the present disclosure may set the number of CAN IDs to be generated and supply a starting CAN ID to the first model of the pseudo-normal data generating unit.
  • the pseudo normal data generating unit may generate the CAN ID sequence.
  • the first model of the pseudo-normal data generator may predict the distribution of the next CAN ID based on the given start CAN ID.
  • the first model of the pseudo-normal data generator may obtain the index of the next CAN ID by sampling from the predicted probability distribution.
  • the predicted CAN ID may be used as the next input of the first model.
  • when selecting the next item, to increase the diversity of the generated pseudo-normal data, the first model of the pseudo-normal data generator may sample from a uniform distribution, with a given probability called the noise ratio, instead of from the probability distribution predicted by the dense layer.
  • for example, given a uniform sampling probability of 0.2, about 20% of the CAN IDs of the generated sequence may be selected by sampling from a uniform distribution.
  • the present invention is not limited thereto.
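The autoregressive generation loop with the noise ratio can be sketched as below. The model's predicted distribution is mocked by a fixed stand-in function (`predict_distribution` is hypothetical; a trained model would supply it), but the sampling logic — model-predicted sampling with probability 1 − noise ratio, uniform sampling otherwise — follows the description above.

```python
import random

random.seed(42)

NUM_IDS = 8          # size of the CAN ID vocabulary (illustrative)
NOISE_RATIO = 0.2    # probability of uniform (noisy) sampling, as in the text

def predict_distribution(sequence):
    """Stand-in for the trained first model (assumption: returns P(next ID))."""
    # A fixed skewed distribution is used here purely for illustration.
    weights = [(i % NUM_IDS) + 1 for i in range(NUM_IDS)]
    total = sum(weights)
    return [w / total for w in weights]

def sample_from(dist):
    """Draw an index according to the given probability distribution."""
    r, acc = random.random(), 0.0
    for i, p in enumerate(dist):
        acc += p
        if r < acc:
            return i
    return len(dist) - 1

def generate_pseudo_normal(start_id, length):
    """Autoregressively sample a CAN ID sequence, mixing in uniform noise."""
    seq = [start_id]
    while len(seq) < length:
        if random.random() < NOISE_RATIO:
            # Noise step: ignore the model and pick any CAN ID uniformly.
            next_id = random.randrange(NUM_IDS)
        else:
            next_id = sample_from(predict_distribution(seq))
        seq.append(next_id)  # the selected ID becomes the next model input
    return seq

pseudo = generate_pseudo_normal(start_id=0, length=20)
print(pseudo)
```

With a noise ratio of 0.2, on average one in five IDs in the generated sequence comes from the uniform distribution rather than the model.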
  • the second model of the anomaly detection unit may be learned through supervised learning using noise pseudo-normal data generated by the first model of the pseudo-normal data generation unit and actual CAN traffic data. Therefore, the training of the second model of the anomaly detection unit may be regarded as a binary classification problem.
  • samples of actual CAN data and pseudo-normal data may be represented by 0 and 1, respectively.
  • the present invention is not limited thereto.
  • in addition to the pseudo-normal data, the second model of the anomaly detection unit may use additional abnormal data as hint data for an attack, i.e., samples of a known type of attack.
  • the second model of the anomaly detection unit may acquire a specific type of attack data and use it for training together with the noisy pseudo-normal data.
  • the hint about the attack may help the second model of the anomaly detection unit to learn the attack pattern and various general data.
  • the hint data may be labeled in the same way as the noisy pseudo-normal data (i.e., as abnormal).
  • the present invention is not limited thereto.
  • the binary cross entropy loss may be used to train the second model of the anomaly detection unit to classify the input CAN ID sequence into two classes, normal and abnormal.
  • the binary cross entropy loss may be calculated by Equation 2 above.
  • C may be set to 2 according to the number of output classes.
  • the present invention is not limited thereto.
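The labeling scheme and the C = 2 specialization of Equation 2 can be illustrated as follows. The sample sequences are invented for the example; only the labeling convention (real CAN data → 0, pseudo-normal and hint data → 1) and the loss formula come from the text above.

```python
import math

# Real CAN sequences are labeled 0 (normal); generated pseudo-normal
# sequences and any attack "hint" sequences are labeled 1 (abnormal).
normal_batches = [[0, 1, 2, 3], [2, 3, 0, 1]]
pseudo_normal_batches = [[5, 0, 7, 1]]
hint_batches = [[6, 6, 6, 6]]   # hypothetical flooding-style attack hint

dataset = ([(seq, 0) for seq in normal_batches]
           + [(seq, 1) for seq in pseudo_normal_batches + hint_batches])

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Equation 2 specialized to two output classes."""
    p = min(max(y_pred, eps), 1.0 - eps)  # clamp for numerical stability
    return -(y_true * math.log(p) + (1 - y_true) * math.log(1 - p))

# A confident, correct prediction incurs near-zero loss ...
low = binary_cross_entropy(1, 0.99)
# ... while a confident, wrong prediction is penalized heavily.
high = binary_cross_entropy(1, 0.01)
print(round(low, 4), round(high, 4))
```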
  • gradient clipping may be applied to prevent a gradient exploding problem that may occur during training of the first model of the pseudo-normal data generator and the second model of the abnormality detector.
  • the gradient exploding problem is that large error gradients can accumulate, causing excessively large updates to model weights during training.
  • here, c is a clipping hyperparameter and g is the gradient.
  • gradient clipping can make the model training process more stable by rescaling the gradient g so that its norm is at most c.
  • the present invention is not limited thereto.
  • FIG. 4 is a diagram illustrating an example of a decision boundary of a supervised learning model according to learning data, according to an embodiment of the present disclosure.
  • the data generated from the vehicle may include: normal data and abnormal data (attack data) (a); only normal data (b); or noisy pseudo-normal data and normal data (c). If a sufficient amount of labeled normal and abnormal samples is available, an anomaly detection model can be trained to classify normal data from abnormal data. However, it is difficult to learn such a boundary when only normal data is available. Accordingly, the present disclosure can improve model performance by generating noisy pseudo-normal data from normal data and training the anomaly detection model on the generated noisy pseudo-normal data together with the normal data.
  • the present disclosure may generate pseudo-normal data to detect a new type of anomaly that the model has not learned while maintaining the performance advantage of self-supervised learning and utilize it for model training. Accordingly, according to the present disclosure, the generated model can detect not only the learned type of anomaly but also the new type of attack data.
  • the present disclosure can detect not only the types of anomalies used for model training but also new types of anomalies. Because existing self-supervised-learning-based models learn the boundary between normal and abnormal data points in the data space, they are dependent on the abnormal data used for training and are limited in their ability to detect unseen anomalies. In the present disclosure, by contrast, the model learns the boundary of the spatial region where normal data points are distributed, so both learned types of anomalies and new types of anomalies can be detected.
  • FIG. 5 is a diagram illustrating a flowchart of a method for detecting anomalies in a vehicle according to an embodiment of the present disclosure.
  • the computing device of the present disclosure may acquire normal data generated in the vehicle ( S10 ).
  • the computing device may acquire controller area network (CAN) traffic data generated in a vehicle in a normal state.
  • the computing device of the present disclosure may pre-process the acquired normal data (S20).
  • the computing device may extract a CAN ID from CAN messages included in normal data and generate a CAN ID sequence based on the extracted CAN ID.
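The pre-processing step — extracting the ID field from each CAN message and forming fixed-length ID sequences — can be sketched as below. The message tuples and window size are hypothetical; the disclosure does not specify the sequence length.

```python
# Hypothetical raw CAN messages: (timestamp, CAN ID, payload). Only the ID
# field is used, as in the pre-processing step described above.
raw_messages = [
    (0.001, 0x130, b"\x00\x01"),
    (0.002, 0x2B0, b"\x10"),
    (0.003, 0x130, b"\x00\x02"),
    (0.004, 0x4F1, b"\xFF"),
    (0.005, 0x2B0, b"\x11"),
]

def extract_can_ids(messages):
    """Keep only the arbitration ID of each message, in arrival order."""
    return [msg[1] for msg in messages]

def build_sequences(can_ids, window=3):
    """Slide a fixed-size window over the ID stream to form training sequences."""
    return [can_ids[i:i + window] for i in range(len(can_ids) - window + 1)]

ids = extract_can_ids(raw_messages)
sequences = build_sequences(ids, window=3)
print(sequences)
```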
  • the computing device of the present disclosure may generate pseudo normal data by inputting the pre-processed normal data to the pre-trained first neural network model ( S30 ).
  • the computing device may input the pre-processed normal data to the pre-trained first neural network model and, through the pre-trained first neural network model, predict the CAN ID that appears after each CAN ID included in the normal data, thereby generating pseudo-normal data.
  • the computing device may input normal data including an arbitrary CAN ID or CAN ID sequence to the pre-trained first neural network model.
  • the computing device may predict and select the next CAN ID according to a probability distribution of a CAN ID that appears next to each CAN ID included in the normal data. Also, when selecting the next CAN ID, the computing device may add noise by selecting an arbitrary CAN ID according to a uniform distribution. For example, when adding noise, the computing device may select an arbitrary CAN ID with a uniform distribution based on a preset noise ratio.
  • the first neural network model may be pre-trained to predict a probability distribution for a CAN ID that appears next to the input CAN ID or CAN ID sequence.
  • in pre-training, the first neural network model may receive the CAN IDs extracted from normal data, transform each into a vector of a certain size, extract the context of the given sequence based on the transformed vectors, and learn to predict a probability distribution for the CAN ID that appears after the input CAN IDs based on the extracted context.
  • the computing device of the present disclosure may train the second neural network model based on the generated pseudo-normal data ( S40 ).
  • the computing device may input the pre-processed normal data and the pseudo-normal data into the second neural network model to learn to classify the pseudo-normal data as abnormal data.
  • the computing device may also input the pre-processed normal data and additionally acquired hint data of an attack type into the second neural network model, and may learn to classify the hint data of the attack type as abnormal data.
  • when training the second neural network model based on at least one of the pseudo-normal data, the attack-type hint data, and the abnormal data, the computing device may limit the size of the back-propagated gradient to less than or equal to a threshold value.
  • the computing device of the present disclosure may input data generated from the vehicle into the learned second neural network model to detect an abnormal symptom of the vehicle ( S50 ).
  • when detecting an abnormal symptom of the vehicle, the computing device may acquire data generated in the vehicle, pre-process the acquired data, and input the pre-processed data to the pre-trained second neural network model to classify it as normal data or abnormal data, thereby detecting abnormal signs of the vehicle.
  • the computing device may acquire controller area network (CAN) traffic data generated in the vehicle in a normal or abnormal state.
  • the computing device may extract a CAN ID from CAN messages included in the data, and may generate a CAN ID sequence based on the extracted CAN ID.
  • FIG. 6 depicts a general schematic diagram of an example computing environment in which embodiments of the present disclosure may be implemented.
  • modules herein include routines, procedures, programs, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • it will be appreciated that the modules herein can be implemented with other computer system configurations, including single-processor or multiprocessor computer systems, minicomputers, and mainframe computers, as well as personal computers, handheld computing devices, microprocessor-based or programmable consumer electronics, and the like (each of which may operate in connection with one or more associated devices).
  • the described embodiments of the present disclosure may also be practiced in distributed computing environments where certain tasks are performed by remote processing devices that are linked through a communications network.
  • program modules may be located in both local and remote memory storage devices.
  • Computers typically include a variety of computer-readable media.
  • Media accessible by a computer includes volatile and nonvolatile media, transitory and non-transitory media, removable and non-removable media.
  • computer-readable media may include computer-readable storage media and computer-readable transmission media.
  • Computer readable storage media includes volatile and nonvolatile, temporary and non-transitory, removable and non-removable media implemented in any method or technology for the storage of information such as computer readable instructions, data structures, program modules, or other data.
  • a computer-readable storage medium may be RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital video disk (DVD) or other optical disk storage device, magnetic cassette, magnetic tape, magnetic disk storage device or other magnetic storage device, or any other medium that can be accessed by a computer and used to store the desired information.
  • a computer readable transmission medium typically embodies computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and includes all information delivery media.
  • modulated data signal means a signal in which one or more of the characteristics of the signal is set or changed so as to encode information in the signal.
  • computer-readable transmission media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media. Combinations of any of the above are also intended to be included within the scope of computer-readable transmission media.
  • An example environment 1100 implementing various aspects of the disclosure is shown including a computer 1102, the computer 1102 including a processing unit 1104, a system memory 1106, and a system bus 1108.
  • a system bus 1108 couples system components, including but not limited to system memory 1106 , to the processing device 1104 .
  • the processing device 1104 may be any of a variety of commercially available processors. Dual processor and other multiprocessor architectures may also be used as processing unit 1104 .
  • the system bus 1108 may be any of several types of bus structures that may further interconnect a memory bus, a peripheral bus, and a local bus using any of a variety of commercial bus architectures.
  • System memory 1106 includes read only memory (ROM) 1110 and random access memory (RAM) 1112 .
  • a basic input/output system (BIOS) is stored in non-volatile memory 1110, such as ROM, EPROM, or EEPROM; the BIOS contains the basic routines that help to transfer information between components within the computer 1102, such as during startup.
  • RAM 1112 may also include high-speed RAM, such as static RAM, for caching data.
  • the computer 1102 may also include an internal hard disk drive (HDD) 1114 (e.g., EIDE, SATA), a magnetic floppy disk drive (FDD) 1116, and an optical disk drive 1120 (e.g., for reading a CD-ROM disc); the internal hard disk drive 1114 may also be configured for external use within a suitable chassis (not shown).
  • the hard disk drive 1114 , the magnetic disk drive 1116 , and the optical disk drive 1120 are connected to the system bus 1108 by the hard disk drive interface 1124 , the magnetic disk drive interface 1126 , and the optical drive interface 1128 , respectively.
  • the interface 1124 for external drive implementation includes, for example, at least one or both of Universal Serial Bus (USB) and IEEE 1394 interface technologies.
  • drives and their associated computer-readable media provide non-volatile storage of data, data structures, computer-executable instructions, and the like.
  • drives and media correspond to storing any data in a suitable digital format.
  • although the description of computer readable storage media above refers to HDDs, removable magnetic disks, and removable optical media such as CDs or DVDs, those skilled in the art will appreciate that other tangible computer-readable storage media, such as zip drives, magnetic cassettes, flash memory cards, and cartridges, may also be used in the exemplary operating environment, and that any such media may include computer-executable instructions for performing the methods of the present disclosure.
  • a number of program modules may be stored in the drive and RAM 1112 , including an operating system 1130 , one or more application programs 1132 , other program modules 1134 , and program data 1136 . All or portions of the operating system, applications, modules, and/or data may also be cached in RAM 1112 . It will be appreciated that the present disclosure may be implemented in various commercially available operating systems or combinations of operating systems.
  • a user may enter commands and information into the computer 1102 via one or more wired/wireless input devices, for example, a keyboard 1138 and a pointing device such as a mouse 1140.
  • Other input devices may include a microphone, IR remote control, joystick, game pad, stylus pen, touch screen, and the like.
  • these and other input devices are often connected to the processing unit through an input device interface 1142 that is coupled to the system bus 1108, but may be connected by other interfaces, such as a parallel port, an IEEE 1394 serial port, a game port, a USB port, an IR interface, and the like.
  • a monitor 1144 or other type of display device is also coupled to the system bus 1108 via an interface, such as a video adapter 1146 .
  • the computer typically includes other peripheral output devices (not shown), such as speakers, printers, and the like.
  • Computer 1102 may operate in a networked environment using logical connections to one or more remote computers, such as remote computer(s) 1148 via wired and/or wireless communications.
  • Remote computer(s) 1148 may be workstations, server computers, routers, personal computers, portable computers, microprocessor-based entertainment devices, peer devices, or other common network nodes, and generally include many or all of the components described with respect to the computer 1102, although only memory storage device 1150 is shown for simplicity.
  • the logical connections shown include wired/wireless connections to a local area network (LAN) 1152 and/or a larger network, eg, a wide area network (WAN) 1154 .
  • LAN and WAN networking environments are common in offices and companies, and facilitate enterprise-wide computer networks, such as intranets, all of which can be connected to a worldwide computer network, for example, the Internet.
  • when used in a LAN networking environment, the computer 1102 is coupled to the local network 1152 through a wired and/or wireless communication network interface or adapter 1156.
  • Adapter 1156 may facilitate wired or wireless communication to LAN 1152 , which LAN 1152 also includes a wireless access point installed therein for communicating with wireless adapter 1156 .
  • when used in a WAN networking environment, the computer 1102 may include a modem 1158, may be connected to a communication server on the WAN 1154, or may have other means for establishing communications over the WAN 1154, such as over the Internet.
  • a modem 1158 which may be internal or external and a wired or wireless device, is coupled to the system bus 1108 via a serial port interface 1142 .
  • program modules described for computer 1102 may be stored in remote memory/storage device 1150 . It will be appreciated that the network connections shown are exemplary and other means of establishing a communication link between the computers may be used.
  • the computer 1102 operates to communicate with any wireless device or object that is deployed and operates in wireless communication, for example, a printer, a scanner, a desktop and/or portable computer, a portable data assistant (PDA), a communication satellite, any device or place associated with a wirelessly detectable tag, and a telephone. This includes at least Wi-Fi and Bluetooth wireless technologies. Accordingly, the communication may have a predefined structure as in a conventional network, or may simply be ad hoc communication between at least two devices.
  • Wi-Fi (Wireless Fidelity) is a wireless technology, like that of cell phones, that allows devices such as computers to transmit and receive data indoors and outdoors, i.e., anywhere within range of a base station.
  • Wi-Fi networks use a radio technology called IEEE 802.11 (a, b, g, etc.) to provide secure, reliable, and high-speed wireless connections.
  • Wi-Fi can be used to connect computers to each other, to the Internet, and to wired networks (using IEEE 802.3 or Ethernet).
  • Wi-Fi networks may operate in the unlicensed 2.4 and 5 GHz radio bands, for example at 11 Mbps (802.11b) or 54 Mbps (802.11a) data rates, or in products that include both bands (dual band).
  • the various embodiments presented herein may be implemented as methods, apparatus, or articles of manufacture using standard programming and/or engineering techniques.
  • article of manufacture includes a computer program or media accessible from any computer-readable device.
  • computer-readable storage media include magnetic storage devices (e.g., hard disks, floppy disks, magnetic strips, etc.), optical disks (e.g., CDs, DVDs, etc.), smart cards, and flash memory devices (e.g., EEPROMs, cards, sticks, key drives, etc.).
  • machine-readable medium includes, but is not limited to, wireless channels and various other media that can store, hold, and/or convey instruction(s) and/or data.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Testing And Monitoring For Control Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

A method for detecting abnormal symptoms of a vehicle based on self-supervised learning using pseudo-normal data may comprise the steps of: obtaining normal data generated in a vehicle; pre-processing the obtained normal data; generating pseudo-normal data by inputting the pre-processed normal data into a pre-trained first neural network model; training a second neural network model based on the generated pseudo-normal data; and detecting abnormal symptoms of the vehicle by inputting data generated in the vehicle into the trained second neural network model.
PCT/KR2021/013572 2020-10-07 2021-10-05 Appareil et procédé de détection de symptômes anormaux d'un véhicule basés sur un apprentissage auto-supervisé en utilisant des données pseudo-normales WO2022075678A2 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR20200129476 2020-10-07
KR10-2020-0129476 2020-10-07
KR10-2021-0006212 2021-01-15
KR1020210006212A KR102506805B1 (ko) 2020-10-07 2021-01-15 의사 정상 데이터를 이용한 자가 감독 학습 기반의 차량 이상징후 탐지 장치 및 방법

Publications (2)

Publication Number Publication Date
WO2022075678A2 true WO2022075678A2 (fr) 2022-04-14
WO2022075678A3 WO2022075678A3 (fr) 2022-10-27

Family

ID=81125873

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/013572 WO2022075678A2 (fr) 2020-10-07 2021-10-05 Appareil et procédé de détection de symptômes anormaux d'un véhicule basés sur un apprentissage auto-supervisé en utilisant des données pseudo-normales

Country Status (1)

Country Link
WO (1) WO2022075678A2 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115035913A (zh) * 2022-08-11 2022-09-09 合肥中科类脑智能技术有限公司 一种声音异常检测方法
US20220398146A1 (en) * 2021-06-09 2022-12-15 Kabushiki Kaisha Toshiba Information processing apparatus, information processing method, and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102030837B1 (ko) * 2013-09-30 2019-10-10 한국전력공사 침입 탐지 장치 및 방법
KR101714520B1 (ko) * 2015-10-30 2017-03-09 현대자동차주식회사 차량 내 네트워크 공격 탐지 방법 및 장치
KR101843930B1 (ko) * 2016-08-03 2018-04-02 고려대학교 산학협력단 시퀀스 마이닝 기반의 차량 이상 징후 탐지 장치
KR102088428B1 (ko) * 2018-03-22 2020-04-24 슈어소프트테크주식회사 운전 상태 추정을 위한 이동체, 서버, 운전 상태 추정 방법 및 시스템
JP7215131B2 (ja) * 2018-12-12 2023-01-31 株式会社オートネットワーク技術研究所 判定装置、判定プログラム、判定方法及びニューラルネットワークモデルの生成方法

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220398146A1 (en) * 2021-06-09 2022-12-15 Kabushiki Kaisha Toshiba Information processing apparatus, information processing method, and storage medium
US11860716B2 (en) * 2021-06-09 2024-01-02 Kabushiki Kaisha Toshiba Information processing apparatus, information processing method, and storage medium
CN115035913A (zh) * 2022-08-11 2022-09-09 合肥中科类脑智能技术有限公司 一种声音异常检测方法

Also Published As

Publication number Publication date
WO2022075678A3 (fr) 2022-10-27

Similar Documents

Publication Publication Date Title
WO2022075678A2 (fr) Appareil et procédé de détection de symptômes anormaux d'un véhicule basés sur un apprentissage auto-supervisé en utilisant des données pseudo-normales
WO2019074195A1 (fr) Dispositif et procédé de comparaison d'images basée sur un apprentissage profond, et programme d'ordinateur stocké sur un support d'enregistrement lisible par ordinateur
WO2022055100A1 (fr) Procédé de détection d'anomalies et dispositif associé
WO2020005049A1 (fr) Procédé d'apprentissage pour réseau neuronal artificiel
WO2019027208A1 (fr) Procédé d'apprentissage pour un réseau neuronal artificiel
WO2019039757A1 (fr) Dispositif et procédé de génération de données d'apprentissage et programme informatique stocké dans un support d'enregistrement lisible par ordinateur
EP3857469A1 (fr) Apprentissage continu basé sur des tâches multiples
WO2021261825A1 (fr) Dispositif et procédé de génération de données météorologiques reposant sur l'apprentissage automatique
KR20220046408A (ko) 의사 정상 데이터를 이용한 자가 감독 학습 기반의 차량 이상징후 탐지 장치 및 방법
WO2021040354A1 (fr) Procédé de traitement de données utilisant un réseau de neurones artificiels
WO2020004815A1 (fr) Procédé de détection d'une anomalie dans des données
WO2022203127A1 (fr) Procédé d'apprentissage en continu de détection d'anomalie d'objet et modèle de classification d'états, et appareil associé
KR102241859B1 (ko) 악성 멀티미디어 파일을 분류하는 인공지능 기반 장치, 방법 및 그 방법을 수행하는 프로그램을 기록한 컴퓨터 판독 가능 기록매체
WO2021194105A1 (fr) Procédé d'apprentissage de modèle de simulation d'expert, et dispositif d'apprentissage
WO2024117708A1 (fr) Procédé de conversion d'image faciale à l'aide d'un modèle de diffusion
WO2024080791A1 (fr) Procédé de génération d'ensemble de données
WO2024058465A1 (fr) Procédé d'apprentissage de modèle de réseau neuronal local pour apprentissage fédéré
CN113992419A (zh) 一种用户异常行为检测和处理系统及其方法
WO2016208817A1 (fr) Appareil et procédé d'interfaçage d'entrée de touches
US20230035291A1 (en) Generating Authentication Template Filters Using One or More Machine-Learned Models
WO2023027278A1 (fr) Procédé d'apprentissage actif fondé sur un programme d'apprentissage
WO2021251691A1 (fr) Procédé de détection d'objet à base de rpn sans ancrage
KR20200141682A (ko) 파일 내 악성 위협을 처리하는 인공지능 기반 장치, 그 방법 및 그 기록매체
WO2022092335A1 (fr) Procédé d'authentification personnel
KR20220071843A (ko) 무인 이동체 메시지 id 시퀀스 생성을 위한 생성적 적대 신경망 모델과 그 학습 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21877930

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21877930

Country of ref document: EP

Kind code of ref document: A2