WO2022114025A1 - Abnormality detection method, abnormality detection device, and program - Google Patents
- Publication number
- WO2022114025A1 (application PCT/JP2021/043063)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- message
- attack
- messages
- abnormality detection
- certain period
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/55—Detecting local intrusion or implementing counter-measures
- G06F21/552—Detecting local intrusion or implementing counter-measures involving long-term monitoring or reporting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/55—Detecting local intrusion or implementing counter-measures
- G06F21/554—Detecting local intrusion or implementing counter-measures involving event detection and direct action
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/606—Protecting data by securing the transmission between two devices or processes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
- G06N3/0442—Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/40—Bus networks
- H04L12/407—Bus networks with decentralised control
- H04L12/417—Bus networks with decentralised control with deterministic access, e.g. token passing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/12—Applying verification of the received information
- H04L63/123—Applying verification of the received information received data contents, e.g. message integrity
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1408—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
- H04L63/1416—Event detection, e.g. attack signature detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1408—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
- H04L63/1425—Traffic logging, e.g. anomaly detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/20—Network architectures or network communication protocols for network security for managing network security; network security policies in general
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/12—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/40—Bus networks
- H04L2012/40208—Bus networks characterized by the use of a particular bus standard
- H04L2012/40215—Controller Area Network CAN
Definitions
- the present disclosure relates to an abnormality detection method, an abnormality detection device, and a program for detecting an abnormality in a message transmitted in an in-vehicle network.
- ECUs (electronic control units)
- The network connecting these ECUs is called an in-vehicle network.
- CAN (Controller Area Network) bus
- The CAN bus is designed to reduce the amount of physical wiring between the ECUs of an automobile.
- The payload of a CAN packet, or message, contains data from one or more sensors that detect vehicle behavior, such as vehicle speed sensors, acceleration sensors, and yaw rate sensors.
- each ECU broadcasts a message using an ID assigned in advance.
- an injection attack may be performed.
- The injection attack is one of the most common cyber attacks.
- Non-Patent Document 1 discloses a technique for accurately detecting an attack on a CAN bus from a CAN packet using a deep neural network (DNN).
- However, with the technique of Non-Patent Document 1, there is a possibility that an attack on the CAN bus cannot be detected as cyber attack methods become more sophisticated.
- the present disclosure has been made in view of the above circumstances, and an object of the present disclosure is to provide an abnormality detection method, an abnormality detection device, and a program capable of accurately detecting an attack on an in-vehicle network.
- The abnormality detection method according to one aspect of the present disclosure is a method for detecting an abnormality in the network in an in-vehicle network system including a plurality of electronic control units that exchange messages via a network in the vehicle. In this method, the transition of the reception intervals or the sensor values of a plurality of messages included in a fixed-period message series of the message series received from the network is converted into image data, and a trained CNN (Convolutional Neural Network) is used to classify, from the image data, whether or not an attack message is inserted in the fixed period; if an attack message is inserted in the fixed period, a detection result indicating that there is an additional-type attack in which the attack message is inserted in the fixed period is output.
- These general or specific aspects may be realized by a system, a method, an integrated circuit, a computer program, or a computer-readable recording medium such as a CD-ROM, or by any combination of a system, a method, an integrated circuit, a computer program, and a recording medium.
- FIG. 1 is a diagram showing an outline of an abnormality detection device according to the first embodiment.
- FIG. 2A is a diagram showing the structure of the CAN bus data frame according to the first embodiment.
- FIG. 2B is an explanatory diagram of an attack method of an injection attack according to the first embodiment.
- FIG. 3 is a diagram showing an example of a hardware configuration of a computer that realizes the function of the abnormality detection device according to the embodiment by software.
- FIG. 4 is a block diagram showing an example of the configuration of the abnormality detection device according to the first embodiment.
- FIG. 5 is a block diagram showing an example of the detailed configuration of the input processing unit shown in FIG. 4.
- FIG. 6A is an explanatory diagram of a window generation method of the window generation processing unit according to the first embodiment.
- FIG. 6B is a diagram showing an example of the size of the subwindow generated by the window generation processing unit according to the first embodiment.
- FIG. 7 is a diagram showing an example of a case where an attack message is included in the current subwindow generated by the window generation processing unit according to the first embodiment.
- FIG. 8A is a diagram conceptually showing the reception interval of a normal message sequence when no additional attack is performed.
- FIG. 8B is a diagram conceptually showing the reception interval of an abnormal message sequence when an additional attack is performed.
- FIG. 9A is a diagram conceptually showing the sensor values of a normal message sequence when no additional attack is performed.
- FIG. 9B is a diagram conceptually showing the sensor values of the abnormal message sequence in the case of an additional attack.
- FIG. 10A is a diagram showing an example of an image of a reception interval of a message series when the additional attack according to the first embodiment is not performed.
- FIG. 10B is a diagram showing an example of an image of a reception interval of a message series when an additional attack according to the first embodiment is performed.
- FIG. 11A is a diagram showing an example of an image of the sensor value of the message series when the additional attack according to the first embodiment is not performed.
- FIG. 11B is a diagram showing an example of an image of the sensor value of the message series when the additional type attack according to the first embodiment is performed.
- FIG. 12 is a conceptual diagram of the processing of the learned CNN according to the first embodiment.
- FIG. 13 is a diagram for conceptually explaining the flow of processing of the learned CNN shown in FIG. 12.
- FIG. 14 is a diagram showing an example of the structure of the learned CNN according to the first embodiment.
- FIG. 15 is a flowchart showing an operation outline of the abnormality detection device according to the first embodiment.
- FIG. 16 is a block diagram showing an example of the configuration of the abnormality detection device according to the second embodiment.
- FIG. 17 is a block diagram showing an example of a detailed configuration of the message classification processing unit according to the second embodiment.
- FIG. 18 is a diagram for conceptually explaining the granularity of abnormality detection according to the second embodiment.
- FIG. 19 is a diagram for conceptually explaining the flow of processing of the CNN Message Classifier according to the second embodiment.
- FIG. 20A is a diagram showing a specific example of a plurality of messages input to the message classifier according to the second embodiment and a feature amount.
- FIG. 20B is a diagram showing a specific example of the determination result by the message classifier according to the second embodiment.
- FIG. 21 is a diagram for conceptually explaining the feature extraction process performed by the message classifier according to the second embodiment.
- FIG. 22 is a diagram for conceptually explaining the flow of processing of the LSTM Message Classifier according to the second embodiment.
- FIG. 23A is a diagram showing a specific example of a plurality of messages input to the message classifier according to the second embodiment and a feature amount.
- FIG. 23B is a diagram showing a specific example of the determination result by the message classifier according to the second embodiment.
- FIG. 24 is a diagram for explaining the problems of message-level classification against equivalence attacks and countermeasures against them.
- FIG. 25 is a diagram for conceptually explaining that whether or not an attack is an equivalence attack can be determined by calculating the differences between the sensor values of the messages in the message series.
- FIG. 26 is a flowchart showing an operation outline of the abnormality detection device according to the second embodiment.
- FIG. 27 is a flowchart showing an example of the detailed processing of step S243 shown in FIG. 26.
- FIG. 28 is a flowchart showing another example of the detailed processing of step S243 shown in FIG. 26.
- FIG. 29 is a diagram showing an example of an abnormal message series in a case where all messages included in a certain period according to a modification of the second embodiment can be classified based on a reception interval.
- FIG. 30 is a diagram showing an example of a classification result of an abnormal message series when all the messages included in a certain period according to the modified example of the second embodiment can be classified based on the reception interval.
- FIG. 31 is a diagram showing an example of a classification result of an abnormal message series when all the messages included in a certain period according to the modified example of the second embodiment cannot be classified based on the reception interval.
- FIG. 32 is a flowchart showing an example of the detailed processing of step S2434A shown in FIG. 28.
- FIG. 33A is a diagram conceptually showing an example of the case of Yes in step S24345 shown in FIG. 32.
- FIG. 33B is a diagram conceptually showing an example of the case of Yes in step S24347 shown in FIG. 32.
- FIG. 33C is a diagram conceptually showing an example of the case of No in step S24347 shown in FIG. 32.
- FIG. 34 is a flowchart showing an example of the detailed processing of step S2435A shown in FIG. 28.
- FIG. 35 is a diagram showing a determination rule used in the detailed processing of step S2435A shown in FIG. 28.
- FIG. 36A is a diagram for conceptually explaining whether the message that could not be classified is the end of the attack.
- FIG. 36B is a diagram for conceptually explaining the reception interval rule in the same group shown in FIG. 35.
- FIG. 36C is a diagram conceptually showing an example of the case of Yes in step S24354 shown in FIG. 34.
- FIG. 36D is a diagram conceptually showing an example of the case of Yes in step S24354 shown in FIG. 34.
- FIG. 36E is a diagram showing an example of the determination result of step S24359 shown in FIG. 34.
- The abnormality detection method according to the present disclosure is a method for detecting an abnormality in the network in an in-vehicle network system including a plurality of electronic control units that exchange messages via a network in the vehicle.
- In this method, the transition of the reception intervals or the sensor values of a plurality of messages included in a fixed-period message series of the message series received from the network is converted into image data, and a trained CNN (Convolutional Neural Network) is used to classify, from the image data, whether or not an attack message is inserted in the fixed period. If an attack message is inserted in the fixed period, a detection result indicating that there is an additional-type attack in which the attack message is inserted in the fixed period is output.
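The convert-then-classify flow described above can be sketched in Python. This is a minimal illustration under stated assumptions, not the patent's implementation: `intervals_to_image`, `detect_window`, and the classifier stub are hypothetical names, and a real system would substitute the trained CNN for the stub.

```python
def intervals_to_image(intervals, width=8, levels=256):
    # Quantize a sequence of reception intervals (seconds) into rows of
    # grayscale pixel values, forming a small 2D image.
    lo, hi = min(intervals), max(intervals)
    span = (hi - lo) or 1.0
    pixels = [int((v - lo) / span * (levels - 1)) for v in intervals]
    while len(pixels) % width:          # pad the last row to full width
        pixels.append(0)
    return [pixels[i:i + width] for i in range(0, len(pixels), width)]

def detect_window(intervals, classifier):
    # `classifier` stands in for the trained CNN: it takes the image and
    # returns True when an attack message is judged to be inserted.
    image = intervals_to_image(intervals)
    if classifier(image):
        return "additional-type attack detected in this period"
    return "normal"
```

In practice the classifier argument would be a trained CNN's inference call; passing a plain function here keeps the sketch self-contained.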
- For example, in the conversion into image data, a plurality of messages included in the fixed period are acquired, and image data representing the reception intervals of the plurality of messages, or an image showing the transition of the sensor values of the plurality of messages, is generated from the acquired messages. In this way, the transition of the reception intervals or the sensor values of the plurality of messages is converted into image data.
- Further, the sensor values included in each of the plurality of messages included in the message series in the fixed period may additionally be evaluated based on a predetermined rule to determine whether or not the plurality of messages are attack messages.
- Further, when an attack message is inserted in the fixed period, the difference values of the sensor values may be calculated for all combinations of two messages adjacent in reception time among the plurality of messages included in the message series in the fixed period. The calculated difference values are grouped, and it is determined whether all the difference values included in each group are 0; the detection result indicating that there is an additional-type attack in which the attack message is inserted in the fixed period is then output, and when not all the difference values are 0, the detection result may be output with the messages of the groups whose difference values are not 0 treated as not being attack messages.
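The pairwise-difference rule can be sketched as follows; this is an illustrative reading of the claim (an inserted attack message that merely repeats the previous sensor value yields a zero difference), and the function names are hypothetical.

```python
def consecutive_differences(sensor_values):
    # Differences between sensor values of messages adjacent in
    # reception time (the "two messages before and after" pairs).
    return [b - a for a, b in zip(sensor_values, sensor_values[1:])]

def zero_difference_pairs(sensor_values):
    # A zero difference is suspicious under this rule, since genuine
    # vehicle dynamics usually change between messages; return the
    # indices of the suspicious adjacent pairs.
    diffs = consecutive_differences(sensor_values)
    return [i for i, d in enumerate(diffs) if d == 0]
```

Pairs with non-zero differences would then be treated as not being attack messages, per the claim.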
- Further, using a trained CNN different from the above trained CNN, the message that is an attack message among the plurality of messages included in the message series in the fixed period, and the sensor that is attacked, may be detected.
- trained LSTM (Long Short-Term Memory)
- Further, the sensor values included in each of the plurality of messages included in the message series in the fixed period may be evaluated based on a predetermined rule, and a determination result of whether or not the plurality of messages are attack messages is thereby acquired; using a trained CNN different from the above trained CNN, a first detection result that detects the attack message and the attacked sensor among the plurality of messages included in the message series in the fixed period is acquired; using a trained LSTM, a second detection result that detects whether or not the plurality of messages included in the message series in the fixed period are attack messages is acquired; and the acquired determination result, first detection result, and second detection result are ensemble-processed. In the ensemble processing, the determination result, the first detection result, and the second detection result may be selected from, or may be integrated by weighted averaging.
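A weighted-averaging ensemble of the three results could look like the sketch below ("load averaging" in the machine translation is presumably weighted averaging). The weights and threshold are illustrative assumptions, not values from the patent.

```python
def ensemble(rule_score, cnn_score, lstm_score,
             weights=(0.2, 0.5, 0.3), threshold=0.5):
    # Fuse the rule-based determination result, the CNN first detection
    # result, and the LSTM second detection result by weighted average.
    scores = (rule_score, cnn_score, lstm_score)
    fused = sum(w * s for w, s in zip(weights, scores)) / sum(weights)
    return fused, fused >= threshold
```

Selecting one result instead of averaging, as the claim also allows, would amount to giving that detector a weight of 1 and the others 0.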
- The abnormality detection device according to one aspect of the present disclosure is a device for detecting an abnormality in the network in an in-vehicle network system including a plurality of electronic control units that exchange messages via a network in the vehicle. The device includes a processor and a memory; the transition of the reception intervals or the sensor values of a plurality of messages included in a fixed-period message series of the message series received from the network is converted into image data, a trained CNN is used to classify, from the image data, whether or not an attack message is inserted in the fixed period, and if an attack message is inserted in the fixed period, a detection result indicating that there is an additional-type attack in which the attack message is inserted in the fixed period is output.
- These general or specific aspects may be realized by a system, a method, an integrated circuit, a computer program, or a computer-readable recording medium such as a CD-ROM, or by any combination of a system, a method, an integrated circuit, a computer program, and a recording medium.
- FIG. 1 is a diagram showing an outline of the abnormality detection device 10 according to the first embodiment.
- the abnormality detection device 10 is a device for detecting an abnormality in an in-vehicle network in an in-vehicle network system including a plurality of electronic control units (ECUs) that exchange messages via an in-vehicle network that is a network in the vehicle. As shown in FIG. 1, the abnormality detection device 10 receives the CAN data stream flowing through the vehicle-mounted network and outputs the detection result of detecting the abnormality of the vehicle-mounted network at the event level.
- the CAN data stream is a message sequence received from the in-vehicle network and includes a plurality of messages.
- FIG. 2A is a diagram showing the structure of the CAN bus data frame according to the first embodiment.
- The CAN bus data frame shown in FIG. 2A is also referred to as a CAN packet or a message, and will hereinafter be referred to as a message.
- As shown in FIG. 2A, the message structure includes SOF (Start Of Frame), ID field, RTR (Remote Transmission Request), IDE (Identifier Extension), reserved bit "r", DLC (Data Length Code), data field, CRC (Cyclic Redundancy Check) sequence, CRC delimiter "DEL", ACK (Acknowledgement) slot, ACK delimiter "DEL", and EOF (End Of Frame) fields.
- the message has a simple structure that includes three important parts: an ID field, a DLC, and a data field that includes the main content.
- the ID is used to identify the message.
- the ID field is composed of, for example, 11 bits, and stores a value indicating the type of data.
- The value of the ID is also used for message priority; for example, a message with a smaller ID value takes priority over a message with a larger ID value.
- the DLC is composed of 4 bits and is a value indicating the length of the data field.
- the data field is composed of a maximum of 64 bits and stores the value indicating the content of the message.
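The field constraints just described (11-bit ID, 4-bit DLC, data field of at most 64 bits) can be captured in a small validation sketch. The function names are hypothetical and the checks cover only the fields the text highlights.

```python
def parse_can_fields(can_id, dlc, data):
    # Validate the three parts the text calls important: an 11-bit ID,
    # a DLC encoding 0-8 data bytes, and a data field of at most 64 bits.
    if not 0 <= can_id < 2 ** 11:
        raise ValueError("standard CAN ID must fit in 11 bits")
    if not 0 <= dlc <= 8:
        raise ValueError("DLC must encode 0-8 data bytes")
    if len(data) != dlc:
        raise ValueError("data field length must match DLC")
    return {"id": can_id, "dlc": dlc, "data": bytes(data)}

def arbitration_winner(id_a, id_b):
    # A smaller ID value wins bus arbitration, as noted above.
    return min(id_a, id_b)
```

For example, a frame with ID 0x100 outranks one with ID 0x200 when both contend for the bus.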
- FIG. 2B is an explanatory diagram of an attack method of an injection attack according to the first embodiment.
- an injection attack is a cyber attack that inserts an attack message that is an illegal or abnormal message into a message sequence.
- The injection attack is one of the most common cyber attacks, and it is the cyber attack assumed in the present embodiment.
- the attack method of the injection attack can be divided into an additional type attack and a replacement type attack.
- The additional-type attack is an attack method that inserts an attack message into a normal message sequence. That is, as shown in FIG. 2B, the additional-type attack is an injection attack in which the attack message indicated by the hatched solid-line circle is inserted between the normal messages indicated by the dotted-line circles.
- The replacement-type attack is an attack method that overwrites a normal message with an attack message. That is, as shown in FIG. 2B, the replacement-type attack is an injection attack in which the normal message indicated by the dotted-line circle is overwritten with the attack message indicated by the hatched solid-line circle, so that the attack message is inserted into the message sequence.
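The two injection methods differ in how they change the message sequence; a minimal sketch makes the contrast concrete (the function names are illustrative, not from the patent).

```python
def additional_attack(messages, attack_msg, index):
    # Additive type: the attack message is inserted between normal
    # messages, so the sequence grows by one.
    return messages[:index] + [attack_msg] + messages[index:]

def replacement_attack(messages, attack_msg, index):
    # Replacement type: a normal message is overwritten in place, so
    # the sequence length is unchanged.
    out = list(messages)
    out[index] = attack_msg
    return out
```

This length difference is why additive attacks disturb reception intervals (an extra message appears) while replacement attacks do not, which motivates also examining sensor values.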
- FIG. 3 is a diagram showing an example of a hardware configuration of a computer 1000 that realizes the function of the abnormality detection device 10 according to the embodiment by software.
- the computer 1000 is a computer including an input device 1001, an output device 1002, a CPU 1003, a built-in storage 1004, a RAM 1005, a reading device 1007, a transmission / reception device 1008, and a bus 1009.
- the input device 1001, the output device 1002, the CPU 1003, the built-in storage 1004, the RAM 1005, the reading device 1007, and the transmission / reception device 1008 are connected by the bus 1009.
- the input device 1001 is a device that serves as a user interface such as an input button, a touch pad, and a touch panel display, and accepts user operations.
- the input device 1001 may be configured to accept a user's contact operation, a voice operation, a remote control, or the like.
- The output device 1002, which may also serve as the input device 1001, is configured by a touch pad, a touch panel display, or the like, and presents information to be notified to the user.
- the built-in storage 1004 is a flash memory or the like. Further, the built-in storage 1004 may store in advance a program or the like for realizing the function of the abnormality detection device 10.
- The RAM 1005 is a random access memory (Random Access Memory) and is used to store data and the like when executing a program or application.
- the reading device 1007 reads information from a recording medium such as a USB (Universal Serial Bus) memory.
- the reading device 1007 reads the program or application from the recording medium in which the program or application as described above is recorded, and stores the program or application in the built-in storage 1004.
- the transmission / reception device 1008 is a communication circuit for wirelessly or wired communication.
- the transmission / reception device 1008 may communicate with, for example, a server device or a cloud connected to a network, download a program or application as described above from the server device or the cloud, and store the program or application in the built-in storage 1004.
- The CPU 1003 is a central processing unit (Central Processing Unit); it copies the programs and applications stored in the built-in storage 1004 to the RAM 1005 and sequentially reads and executes the instructions included in them from the RAM 1005, or it may execute them directly from the built-in storage 1004.
- FIG. 4 is a block diagram showing an example of the configuration of the abnormality detection device 10 according to the first embodiment.
- the abnormality detection device 10 includes an input processing unit 11, an event classification processing unit 12, and an output processing unit 13.
- the output processing unit 13 is not an indispensable configuration, and it is sufficient that the classification result of the event classification processing unit 12 can be acquired.
- The input processing unit 11 acquires a plurality of messages when the CAN data stream flowing through the vehicle-mounted network is input. The input processing unit 11 then converts the reception intervals of the acquired messages, or the transition of their sensor values, into image data. In the present embodiment, the input processing unit 11 converts the reception intervals, or the transition of the sensor values, of a plurality of messages included in a fixed period of the message series received from the vehicle-mounted network into image data.
- FIG. 5 is a block diagram showing an example of the detailed configuration of the input processing unit 11 shown in FIG.
- the input processing unit 11 includes a message receiving unit 111, a window generation processing unit 112, and an imaging processing unit 113.
- the message receiving unit 111 receives a message sequence from the vehicle-mounted network by inputting a CAN data stream flowing through the vehicle-mounted network.
- the window generation processing unit 112 acquires a plurality of messages included in a certain period by dividing the message sequence received from the vehicle-mounted network by the message receiving unit 111 using a sliding window.
- FIG. 6A is an explanatory diagram of a window generation method of the window generation processing unit 112 according to the first embodiment.
- FIG. 6B is a diagram showing an example of the size of the subwindow generated by the window generation processing unit 112 according to the first embodiment.
- the window means a buffer area for storing received messages.
- FIG. 6A (a) shows an example of a message series received by the message receiving unit 111. Each of the multiple messages contained in the message sequence is represented by a circle.
- FIG. 6A (b) shows an example of a sliding window generated by the window generation processing unit 112.
- the window generation processing unit 112 generates a sliding window divided into three parts of past, present, and future subwindows.
- the size of the past subwindow is set to 200 ms
- the size of the current subwindow is set to 100 ms
- the size of the future subwindow is set to 100 ms.
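- The past (200 ms) / present (100 ms) / future (100 ms) split described above can be sketched as follows; the function name, the (timestamp, payload) message representation, and the placement of the boundaries relative to the current time are illustrative assumptions, not taken from the embodiment.

```python
def split_into_subwindows(messages, t_now, past_ms=200, present_ms=100, future_ms=100):
    """Split (timestamp_ms, payload) messages around t_now into the
    past / present / future subwindows used for window-level detection.
    The subwindow sizes (200/100/100 ms) follow the embodiment; the
    function shape itself is an illustrative assumption."""
    past, present, future = [], [], []
    for ts, payload in messages:
        if t_now - past_ms - present_ms <= ts < t_now - present_ms:
            past.append((ts, payload))       # oldest 200 ms
        elif t_now - present_ms <= ts < t_now:
            present.append((ts, payload))    # the 100 ms window under test
        elif t_now <= ts < t_now + future_ms:
            future.append((ts, payload))     # the following 100 ms
    return past, present, future
```

- The window to be classified for an inserted attack message is the `present` list; the `past` and `future` lists give the classifier the surrounding context.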
- FIG. 7 is a diagram showing an example of a case where an attack message is included in the current subwindow generated by the window generation processing unit 112 according to the first embodiment.
- past, present, and future subwindows are generated in order to detect whether the current subwindow contains an attack event, that is, whether an attack message is inserted.
- the imaging processing unit 113 generates, from the plurality of messages acquired by the window generation processing unit 112, image data representing the reception intervals of the plurality of messages or an image showing the transition of the sensor values of the plurality of messages. In this way, the imaging processing unit 113 converts the reception intervals of the plurality of messages, or the transition of the sensor values, into image data.
- FIG. 8A is a diagram conceptually showing the reception interval of a normal message sequence when no additional attack is performed.
- FIG. 8B is a diagram conceptually showing the reception interval of an abnormal message sequence when an additional attack is performed.
- FIG. 9A is a diagram conceptually showing the sensor value of a normal message sequence when no additional attack is performed.
- FIG. 9B is a diagram conceptually showing the sensor values of the abnormal message sequence in the case of an additional attack.
- FIG. 9A in a normal message sequence, the sensor values included in the message are not disturbed.
- FIG. 9B in the abnormal message sequence, the sensor value included in the message is disturbed.
- FIGS. 9A and 9B when an additional attack is made, the sensor value of the message is disturbed.
- in this way, the imaging processing unit 113 converts the reception intervals of the plurality of messages acquired by the window generation processing unit 112, or the transition of their sensor values, into image data.
- FIG. 10A is a diagram showing an example of an image of the reception interval of the message series when the additional attack according to the first embodiment is not performed.
- FIG. 10B is a diagram showing an example of an image of a reception interval of a message series when an additional attack according to the first embodiment is performed.
- FIG. 11A is a diagram showing an example of an image of the sensor value of the message series when the additional attack according to the first embodiment is not performed.
- FIG. 11B is a diagram showing an example of an image of the sensor value of the message series when the additional type attack according to the first embodiment is performed.
- FIGS. 11A and 11B show a plurality of images as an example. Each image shown in FIGS. 10A to 11B is generated as, for example, an image of 96 pixels × 96 pixels.
- the imaging processing unit 113 can obtain an image such as that shown in FIG. 10A or FIG. 10B by converting the reception intervals of the plurality of messages acquired by the window generation processing unit 112 into image data. Similarly, the imaging processing unit 113 can obtain images such as those shown in FIG. 11A or FIG. 11B by converting the sensor values of the plurality of messages acquired by the window generation processing unit 112 into image data. Comparing FIGS. 10A and 11A with FIGS. 10B and 11B, it can be seen that FIGS. 10B and 11B, which are images in the case of an additional attack, include characteristic changes in the sensor values and the message reception frequency.
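- As a rough illustration of the imaging step, reception intervals can be rasterized into a square grayscale grid like the 96 × 96 images mentioned above. The pixel mapping below (column = message index, row = interval length) is an assumed sketch, not the patent's actual encoding.

```python
def intervals_to_image(recv_times_ms, size=96, max_interval_ms=50.0):
    """Rasterize message reception intervals into a size x size grayscale
    image (list of rows, pixel values 0-255).  Each column is one
    consecutive-message interval; the pixel row encodes the interval
    length (shorter interval -> higher row index).  This mapping is an
    illustrative assumption."""
    img = [[0] * size for _ in range(size)]
    intervals = [b - a for a, b in zip(recv_times_ms, recv_times_ms[1:])]
    for col, iv in enumerate(intervals[:size]):
        frac = min(max(iv / max_interval_ms, 0.0), 1.0)  # clamp to [0, 1]
        row = int((size - 1) * (1.0 - frac))             # long interval -> low row
        img[row][col] = 255
    return img
```

- Inserted attack messages shorten some intervals, which moves the corresponding pixels and produces the characteristic visual change the classifier learns.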
- the event classification processing unit 12 detects whether or not there is an additional attack in a certain period of the message sequence received from the in-vehicle network. More specifically, the event classification processing unit 12 classifies, from the image data, whether or not an attack message is inserted into the plurality of messages included in the certain period by using a learned CNN (Convolutional Neural Network) 121. The plurality of messages included in the certain period are the plurality of messages included in the current subwindow described above.
- the learned CNN 121 is an example of an event classifier that can classify whether or not an attack message is inserted into a plurality of messages included in a certain period from image data.
- This event classifier is trained using, for example, an image as shown in FIGS. 10A to 11B, that is, an image showing a sensor value or a reception interval of a normal message and an attack message as teacher data.
- the event classifier is not limited to the case of CNN121, and may be an LSTM (Long short-term memory) or a BLSTM (Bi-directional Long short-term memory).
- the event classification processing unit 12 may determine, based on a predetermined rule, whether or not an attack message has been inserted in the certain period. In this case, the event classification processing unit 12 may determine whether or not an attack message is inserted in the certain period from statistics of the reception frequency or reception intervals of the plurality of messages acquired by the window generation processing unit 112. Examples of the reception interval statistics here include the differences between the reception times of the plurality of messages or the average of the reception intervals of the plurality of messages.
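- A minimal sketch of such a rule-based check, using the average reception interval named above: inserted messages raise the reception frequency, so a mean interval far below the expected periodic interval flags the window. The threshold logic and parameter values are illustrative assumptions.

```python
def rule_based_attack_check(recv_times_ms, expected_interval_ms=20.0, tolerance=0.5):
    """Flag a window as containing an additional attack when the mean
    reception interval is much shorter than the expected periodic
    interval.  Threshold and tolerance are illustrative assumptions
    based on the statistics the text names (frequency / interval mean)."""
    if len(recv_times_ms) < 2:
        return False
    intervals = [b - a for a, b in zip(recv_times_ms, recv_times_ms[1:])]
    mean_interval = sum(intervals) / len(intervals)
    return mean_interval < expected_interval_ms * tolerance
```

- A real system would calibrate `expected_interval_ms` per CAN ID from the normal periodic transmission cycle.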
- FIG. 12 is a conceptual diagram of the processing of the learned CNN 121 according to the first embodiment.
- FIG. 13 is a diagram for conceptually explaining the processing flow of the learned CNN 121 shown in FIG.
- in FIG. 13(a), the message sequence received from the in-vehicle network is divided using a sliding window, whereby a plurality of messages included in the current subwindow are obtained.
- in FIG. 13(b), the reception intervals of the plurality of messages included in the current subwindow, or the transition of the sensor values, are converted into image data and input to the event classifier.
- the learned CNN 121 serving as the event classifier classifies, from the input image data, whether or not an additional attack is performed on the plurality of messages included in the certain period.
- the learned CNN 121 outputs, as a detection result, whether or not there is an additional attack in the plurality of messages included in the current subwindow.
- FIG. 14 is a diagram showing an example of the structure of the learned CNN 121 according to the first embodiment.
- the CNN 121 includes, for example, a plurality of convolution layers, a plurality of pooling layers, a fully connected layer, and a custom layer, image data is input, and a classification result is output.
- the custom layer is used as a layer for performing image segmentation.
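- The patent names the layer types (convolution, pooling, fully connected, custom) but not their counts or kernel sizes. The sketch below, under assumed 3×3 'same'-padded convolutions and 2×2 pooling, merely traces how a 96×96 input shrinks on its way to the fully connected layer; all numeric choices are illustrative assumptions.

```python
def cnn_output_shapes(input_hw=(96, 96), blocks=2, kernel=3, pool=2):
    """Trace the feature-map size through `blocks` repetitions of a
    'same'-padded convolution followed by pooling, ending with the
    flattened fully connected input.  Layer counts / kernel sizes are
    illustrative assumptions; the patent only names the layer types."""
    h, w = input_hw
    shapes = [("input", h, w)]
    for i in range(blocks):
        shapes.append((f"conv{i+1}", h, w))  # 'same' padding keeps H x W
        h, w = h // pool, w // pool          # pooling halves each side
        shapes.append((f"pool{i+1}", h, w))
    shapes.append(("flatten", 1, h * w))     # per-channel flattened size
    return shapes
```

- With two conv/pool blocks, the 96×96 input reaches the fully connected layer as a 24×24 map per channel.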
- when an attack message is inserted into the message sequence received from the in-vehicle network, the output processing unit 13 outputs a detection result indicating that there is an additional attack in which the attack message is inserted in the certain period. That is, when the event classification processing unit 12 detects that there is an additional attack in a certain period of the message sequence received from the in-vehicle network, the output processing unit 13 outputs a detection result indicating that fact.
- otherwise, the output processing unit 13 outputs a detection result indicating that the plurality of messages included in the certain period are normal.
- when the event classification processing unit 12 detects that there is no additional attack in the certain period, the output processing unit 13 may further determine whether a replacement type attack has been performed on the plurality of messages included in the certain period. In this case, the output processing unit 13 can determine whether or not a replacement type attack is performed on the plurality of messages included in the certain period by using a neural network model such as a CNN. Then, if the plurality of messages included in the certain period include an abnormal message, the output processing unit 13 may determine that a replacement type attack has been performed and output a detection result indicating that fact. On the other hand, if the plurality of messages included in the certain period do not include an abnormal message, the output processing unit 13 may output a detection result indicating that the plurality of messages included in the certain period are normal.
- FIG. 15 is a flowchart showing an operation outline of the abnormality detection device 10 according to the first embodiment.
- the abnormality detection device 10 images the transition of the reception intervals or sensor values of the messages included in the subwindow (certain period) (S11). More specifically, the abnormality detection device 10 converts the reception intervals of a plurality of messages included in a certain period of the message sequence received from the vehicle-mounted network, or the transition of the sensor values, into image data. As a result, the abnormality detection device 10 can obtain an image showing the transition of the reception intervals or sensor values of the plurality of messages.
- the abnormality detection device 10 classifies the input image using the CNN (S12). More specifically, using the learned CNN, the abnormality detection device 10 classifies, from the image showing the transition of the reception intervals or sensor values of the plurality of messages obtained in step S11, whether or not an attack message is inserted in the certain period.
- the abnormality detection device 10 determines whether the classification result in step S12 indicates that there is an additional attack (S13).
- in step S13, when the classification result indicates that there is an additional attack (Yes in S13), the abnormality detection device 10 outputs a detection result indicating that there is an additional attack in the window (S14). That is, when the attack message is inserted in the certain period, the abnormality detection device 10 outputs a detection result indicating that there is an additional attack in which the attack message is inserted in the certain period.
- in step S13, when the classification result does not indicate that there is an additional attack (No in S13), the abnormality detection device 10 outputs a detection result other than the above, including that there is no additional attack in the window (S15).
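- The window-level flow of steps S11 to S15 can be sketched as below. The `to_image` and `classify_additional_attack` callables stand in for the imaging unit and the trained CNN 121; both, and the returned strings, are illustrative assumptions.

```python
def detect_window(messages, to_image, classify_additional_attack):
    """Window-level flow of steps S11-S15: image the window, classify
    with the (stand-in) classifier, and emit the detection result."""
    image = to_image(messages)                     # S11: imaging
    is_attack = classify_additional_attack(image)  # S12: CNN classification
    if is_attack:                                  # S13 / S14
        return "additional attack in window"
    return "no additional attack in window"        # S15
```

- In the second embodiment described later, the "attack" branch additionally triggers message-level classification instead of returning immediately.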
- according to the anomaly detection device 10 and the anomaly detection method of the present embodiment, when an additional attack is performed, the message reception intervals or sensor values are disturbed; therefore, by using an image of the transition of the reception intervals or sensor values of a plurality of messages, it is possible to accurately detect that an additional attack occurred in the message sequence during a certain period.
- the fact that the message reception intervals or sensor values are disturbed when an additional attack is made is a phenomenon that cannot be disguised even if cyber attack methods become more sophisticated. Therefore, even as cyber attack methods become more sophisticated, it remains unlikely that an attack on the CAN bus will go undetected.
- the abnormality detection device 10 and the abnormality detection method according to the present embodiment it is possible to accurately detect an attack on the in-vehicle network.
- in the first embodiment, the attack event is detected on a window-by-window basis, that is, it is detected whether or not an additional attack is performed on the message sequence in a certain period of the message sequence received from the in-vehicle network; however, the present disclosure is not limited to this.
- after an attack event is detected on a window-by-window basis, it may be further detected on a message-by-message basis whether each message is normal or abnormal. That is, when it is detected that an additional attack has been performed on a message sequence in a certain period, it may be further determined whether each of the plurality of messages included in the message sequence is a normal message or an attack message. This case will be described below as the second embodiment.
- FIG. 16 is a block diagram showing an example of the configuration of the abnormality detection device 10A according to the second embodiment.
- the abnormality detection device 10A includes an input processing unit 11A, an event classification processing unit 12, an output processing unit 13A, and a message classification processing unit 14A.
- the same elements as in FIG. 4 are designated by the same reference numerals, and detailed description thereof will be omitted.
- the input processing unit 11A acquires a plurality of messages when the CAN data stream flowing through the vehicle-mounted network is input.
- the input processing unit 11A converts the reception intervals of the acquired plurality of messages, or the transition of the sensor values, into image data and outputs the image data to the event classification processing unit 12. Further, the input processing unit 11A outputs the acquired plurality of messages to the message classification processing unit 14A.
- the input processing unit 11A receives a message sequence from the vehicle-mounted network by inputting a CAN data stream flowing through the vehicle-mounted network.
- the input processing unit 11A acquires a plurality of messages included in a certain period by dividing the message sequence received from the vehicle-mounted network by using a sliding window.
- the input processing unit 11A generates, from the acquired plurality of messages included in the certain period, image data representing the reception intervals of the plurality of messages or an image showing the transition of the sensor values of the plurality of messages, and outputs the image data to the event classification processing unit 12.
- the input processing unit 11A outputs a plurality of acquired messages included in the fixed period to the message classification processing unit 14A. Note that the input processing unit 11A may output a plurality of acquired messages included in a certain period to the message classification processing unit 14A when an attack event is detected in the event classification processing unit 12.
- when the event classification processing unit 12 detects an attack event, the output processing unit 13A sends an instruction to the message classification processing unit 14A to detect on a message-by-message basis whether each message is normal or abnormal, and acquires the detection result from the message classification processing unit 14A.
- the output processing unit 13A may output a detection result indicating that there is an additional attack in which an attack message is inserted in the certain period and indicating whether each message is normal or abnormal.
- the operation of the output processing unit 13A when the event classification processing unit 12 does not detect an attack event, that is, when it detects that there is no additional attack in the certain period, is as described in the first embodiment, and the description thereof is omitted.
- the message classification processing unit 14A detects on a message-by-message basis whether each message is normal or abnormal. More specifically, when the event classification processing unit 12 detects an attack event in window units, the message classification processing unit 14A receives the message sequence for the certain period from the input processing unit 11A, and detects whether each of the plurality of messages included in the input message sequence for the certain period is a normal message or an attack message.
- FIG. 17 is a block diagram showing an example of the detailed configuration of the message classification processing unit 14A according to the second embodiment.
- the message classification processing unit 14A includes a CNN Message Classifier 141, an LSTM Message Classifier 142, a Human Message Classifier 143, and an ensemble processing unit 144.
- the CNN Message Classifier 141 uses a learned CNN to detect, from the plurality of messages included in the input message sequence for the certain period, the attack messages and the attacked sensors.
- this learned CNN is a CNN model different from the CNN 121 according to the first embodiment, and is an example of a message classifier that classifies, from the message sequence, whether each of the plurality of messages included in the message sequence is an attack message or a normal message.
- the message classifier includes, for example, a plurality of convolution layers, a plurality of pooling layers, and a fully connected layer, and is trained using, for example, a message sequence as shown in FIG. 7 as teacher data.
- FIG. 18 is a diagram for conceptually explaining the granularity of abnormality detection according to the second embodiment.
- FIG. 18 shows an example of a message series received from the in-vehicle network, and the squares represent each message.
- the abnormality detection device 10A first detects whether or not an attack event has occurred in a certain section (certain period) of the message sequence at the window-level granularity shown in 1). Next, the abnormality detection device 10A detects whether each message included in the certain section (window unit) of the message sequence is an attack message or a normal message at the message-level granularity shown in 2). Further, as shown in 3), the abnormality detection device 10A detects whether each message is an attack message or a normal message at the granularity of each sensor, that is, using the sensor values included in each message. More specifically, when a message contains a plurality of sensor values, the abnormality detection device 10A determines whether each sensor value indicates an attack or a normal state. On the other hand, when a message contains a single sensor value, the abnormality detection device 10A can determine whether the message is an attack message or a normal message depending on whether that sensor value indicates an attack or a normal state.
- the CNN Message Classifier 141 detects whether each message is normal (a normal message) or abnormal (an attack message) at the message-level granularity shown in 2) of FIG. 18.
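- The relation between the sensor-level and message-level granularities in FIG. 18 can be expressed in one line: a message is an attack message if any of its sensor values indicates an attack. The dict representation of per-sensor flags is an illustrative assumption.

```python
def message_label(sensor_flags):
    """Derive the message-level label from sensor-level attack flags
    (True = that sensor value indicates an attack), mirroring the
    granularities 2) and 3) in FIG. 18."""
    return "attack" if any(sensor_flags.values()) else "normal"
```

- For a single-sensor message the dict has one entry, so the message-level and sensor-level decisions coincide, as the text notes.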
- FIG. 19 is a diagram for conceptually explaining the processing flow of the CNN Message Classifier 141 according to the second embodiment.
- FIG. 19(a) uses the same notation as the message sequence shown in FIG. 7, and the message sequence shown in FIG. 19(a) corresponds to the message sequence received from the in-vehicle network shown in FIG. 13(a).
- the message sequence received from the in-vehicle network is divided using the sliding window, whereby a plurality of messages included in the current subwindow are acquired and input to the learned CNN serving as the message classifier.
- FIG. 20A is a diagram showing a specific example of a plurality of messages input to the message classifier according to the second embodiment and a feature amount.
- each message contains a plurality of features.
- the feature amounts include one or more sensor values and a value indicating the message reception interval.
- examples of the sensor values, as shown in FIG. 20A, include a sensor value indicating the vehicle speed obtained by the vehicle speed sensor, a sensor value indicating the steering angle obtained by the steering angle sensor, and a sensor value indicating the acceleration obtained by the acceleration sensor. The set of one or more sensor values and the value indicating the message reception interval may be referred to as the feature amount of a message.
- the learned CNN as a message classifier classifies whether each of the plurality of messages included in the input message sequence is a normal message or an abnormal message. Since the abnormal message includes the sensor value obtained by the sensor under the attack and the sensor value indicating the abnormal value, it is referred to as an attack message in the present embodiment.
- the learned CNN as a message classifier outputs as a detection result whether each of the plurality of messages included in the current subwindow is a normal message or an attack message.
- FIG. 20B is a diagram showing a specific example of the determination result by the message classifier according to the second embodiment.
- the speed value of 80 km / h of message No. 3 indicates an abnormality due to an attack, and the other sensor values are normal.
- as shown in FIG. 20B, the learned CNN serving as the message classifier classifies message No. 3 as abnormal, and further outputs, as the determination result, the ID (index), for example 2, of the speed sensor that output the speed value of message No. 3.
- note that, in some cases, the learned CNN serving as the message classifier cannot determine whether each sensor is abnormal or normal on a sensor-by-sensor basis. Whether or not one message includes sensor values obtained from a plurality of sensors differs depending on the vehicle type.
- FIG. 21 is a diagram for conceptually explaining the feature extraction process performed by the message classifier according to the second embodiment.
- FIG. 21A shows a specific example of a plurality of messages input to the message classifier and the feature amount, which is the same as in FIG. 20A.
- FIG. 21B shows a determination result obtained as a result of convolution of a plurality of messages contained in the current subwindow using a predetermined small filter.
- the learned CNN serving as the message classifier performs convolution, for example, over messages No. 1 to 3 for message No. 1, over messages No. 2 to 4 for message No. 2, and over messages No. 3 to 5 for message No. 3.
- as a result of such feature extraction processing, the learned CNN serving as the message classifier outputs a determination result (classification result) indicating that messages No. 3 and No. 5 are abnormal and that the speed sensor indicated by, for example, sensor ID 2 is under attack.
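- The sliding-filter access pattern described above (message No. i judged from messages No. i to i+2) can be enumerated as follows. Tail padding is omitted as a simplifying assumption; a real convolution would also pad the end of the window.

```python
def conv_windows(messages, filter_len=3):
    """Enumerate the message spans a small convolution filter of length
    `filter_len` slides over: message No. i is judged from messages
    No. i .. i+filter_len-1, as in the text (No. 1 from 1-3, No. 2
    from 2-4, ...)."""
    return [messages[i:i + filter_len]
            for i in range(len(messages) - filter_len + 1)]
```

- Each span then passes through the learned filter weights; only the indexing scheme, not the weights, is shown here.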
- when an attack event is detected in window units by the event classification processing unit 12, the LSTM Message Classifier 142 receives the message sequence for the certain period from the input processing unit 11A.
- the LSTM Message Classifier 142 uses a learned LSTM to detect, from the plurality of messages included in the input message sequence for the certain period, whether or not each of the plurality of messages is an attack message.
- This learned LSTM may be a BLSTM.
- This learned LSTM is an example of a message classifier for classifying from a message sequence whether each of a plurality of messages included in the message sequence is an attack message or a normal message.
- This message classifier is trained using, for example, a message sequence as shown in FIG. 7 as teacher data.
- FIG. 22 is a diagram for conceptually explaining the processing flow of the LSTM Message Classifier 142 according to the second embodiment.
- FIG. 22(a) is the same diagram as FIG. 19(a), and the message sequence shown in FIG. 19(a) is the message sequence received from the in-vehicle network shown in FIG. 13(a).
- FIG. 22B shows an example in which the LSTM as a message classifier is a BLSTM.
- the trained BLSTM as a message classifier is configured by superimposing two BLSTM layers and two Dense layers.
- the message sequence received from the in-vehicle network is divided using the sliding window, whereby a plurality of messages included in the current subwindow are acquired and input to the BLSTM serving as the message classifier.
- FIG. 23A is a diagram showing a specific example of a plurality of messages input to the message classifier according to the second embodiment and a feature amount. Since FIG. 23A is the same diagram as FIG. 20A, the description thereof will be omitted. Further, although not shown in FIG. 23A, a missing state of each sensor value may be further included as a feature amount. In this case, if there is a defect in the sensor value, it may be expressed as 1, and if there is no defect in the sensor value, it may be expressed as 0.
- the learned BLSTM as a message classifier classifies whether each of the plurality of messages included in the input message sequence is a normal message or an abnormal message.
- the trained BLSTM as a message classifier outputs as a detection result whether each of the plurality of messages included in the current subwindow is a normal message or an attack message.
- FIG. 23B is a diagram showing a specific example of the determination result by the message classifier according to the second embodiment.
- the speed value of 80 km / h of message No. 3 indicates an abnormality due to an attack, and the other sensor values are normal.
- the learned BLSTM serving as the message classifier outputs a determination result in the form of, for example, a matrix of the number of sensors by the number of messages to be detected, in which an attacked sensor is classified with a value of 1 and a normal sensor with a value of 0.
- the Human Message Classifier 143 determines, based on a predetermined rule, whether or not each of the plurality of messages is an attack message from the sensor values included in each of the plurality of messages included in the input message sequence for the certain period.
- the predetermined rules are rules based on human intuition that are effective for message-level classification.
- the attack message is inserted between the normal message sequences, so that the reception interval of the abnormal message sequence including the attack message is disturbed.
- the sensor value included in an attack message deviates from the sensor values included in the normal message sequence, for example, in an attack on a speed value such as the vehicle speed. That is, when the attack messages and the normal messages included in the message sequence are clustered, they form mutually disjoint groups.
- using such rules, the Human Message Classifier 143 determines, from the sensor values included in each of the plurality of messages included in the message sequence for the certain period, whether or not each of the plurality of messages is an attack message.
- an attack with an attack message that has the same sensor value as a normal message is called an equivalence attack.
- An attack with an attack message that has a sensor value different from that of a normal message is called a shift attack.
- FIG. 24 is a diagram for explaining the problems of message level classification for equivalence attacks and their countermeasures.
- the vertical direction indicates the sensor value
- the horizontal direction indicates the reception time.
- FIG. 25 is a diagram for conceptually explaining that it is possible to determine whether or not the attack is equivalent by calculating the difference between the sensor values between the messages in the message series.
- FIG. 25 shows a message sequence that was attacked by the same value. It is assumed that the attack message is indicated by circles B, D, and F, and the normal message is indicated by circles A, C, and E.
- in this case, each of the messages indicated by A to F may be determined to be a normal message.
- therefore, the difference between the sensor value of each message and that of the immediately preceding message is calculated; if the difference remains 0, the attack is determined to be an equivalence attack, and the attack message determined to be an equivalence attack may simply be treated as a normal message. On the other hand, if the difference is not 0, the attack is not an equivalence attack, so the message may be targeted for abnormality detection, and the result of another Message Classifier may be used by the ensemble processing unit 144 described later.
- the Human Message Classifier 143 calculates the difference values of all the sensor values for each pair of messages adjacent in reception time among the plurality of messages included in the input message sequence for the certain period. Next, the Human Message Classifier 143 groups the calculated difference values and determines whether all the difference values included in each group are 0.
- when all the difference values in a group are 0, the Human Message Classifier 143 outputs a detection result indicating that there is an additional attack in which an attack message is inserted in the certain period. On the other hand, the Human Message Classifier 143 outputs a detection result indicating that the messages of the groups whose difference values are not 0 in the input certain period are not attack messages.
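- The consecutive-difference rule for equivalence attacks can be sketched as follows; the function returns both the differences and the all-zero check, while the exact grouping and thresholding used by the Human Message Classifier 143 is an illustrative assumption.

```python
def equivalence_attack_differences(sensor_values):
    """Consecutive-message sensor-value differences used by the
    rule-based classifier: a run of zero differences suggests an
    equivalence attack (inserted messages carrying the same sensor
    value as the normal messages)."""
    diffs = [b - a for a, b in zip(sensor_values, sensor_values[1:])]
    all_zero = all(d == 0 for d in diffs)
    return diffs, all_zero
```

- When `all_zero` is true, the inserted messages are indistinguishable by value, so they can safely be treated as normal; otherwise the messages remain candidates for the other classifiers.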
- the ensemble processing is a technique used in machine learning that can output a prediction result combining the strengths of each model while suppressing variation among the prediction results of the individual models.
- the ensemble processing unit 144 stacks and outputs the detection result of the CNN Message Classifier 141, the detection result of the LSTM Message Classifier 142, and the determination result of the Human Message Classifier 143.
- the ensemble processing unit 144 may instead select and output one of the detection result of the CNN Message Classifier 141, the detection result of the LSTM Message Classifier 142, and the determination result of the Human Message Classifier 143.
- the ensemble processing unit 144 acquires the detection result of the CNN Message Classifier 141.
- in other words, the ensemble processing unit 144 acquires a first detection result obtained by detecting, using a learned CNN different from the learned CNN 121, from the plurality of messages included in the message sequence for the certain period, which of the plurality of messages are attack messages and which sensors were attacked.
- the ensemble processing unit 144 acquires the detection result of the LSTM Message Classifier 142.
- in other words, the ensemble processing unit 144 acquires a second detection result obtained by detecting, using the learned LSTM, whether or not each of the plurality of messages included in the message sequence for the certain period is an attack message.
- the ensemble processing unit 144 acquires the determination result of Human Message Classifier 143. In other words, the ensemble processing unit 144 determines whether or not the plurality of messages are attack messages based on a predetermined rule from the sensor values included in each of the plurality of messages included in the message series for a certain period. The judgment result obtained by judging is acquired.
- the ensemble processing unit 144 performs ensemble processing on the acquired determination result, first detection result, and second detection result, and outputs the result.
- for example, the ensemble processing unit 144 selects one of the determination result, the first detection result, and the second detection result, or integrates the acquired determination result, first detection result, and second detection result by weighted averaging.
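- One of the named integration options, weighted averaging of the three per-message results, can be sketched as follows. The 0.0-1.0 score representation and the weight values are illustrative assumptions; a real system would tune the weights.

```python
def ensemble_scores(cnn_scores, lstm_scores, rule_scores, weights=(0.4, 0.4, 0.2)):
    """Combine per-message attack scores (0.0-1.0) from the CNN, LSTM,
    and rule-based classifiers by weighted averaging, one option the
    text names for the ensemble processing unit 144."""
    w_cnn, w_lstm, w_rule = weights
    return [w_cnn * c + w_lstm * l + w_rule * r
            for c, l, r in zip(cnn_scores, lstm_scores, rule_scores)]
```

- A final threshold (e.g. 0.5) on each combined score would then yield the per-message attack/normal decision.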
- FIG. 26 is a flowchart showing an operation outline of the abnormality detection device 10A according to the second embodiment.
- The abnormality detection device 10A performs input processing in window units (S21). More specifically, the abnormality detection device 10A acquires a plurality of messages included in the current subwindow (fixed period) by dividing the message sequence received from the in-vehicle network using a sliding window. The abnormality detection device 10A then generates, from the acquired messages included in the fixed period, image data representing the reception intervals of the messages or an image representing the transition of their sensor values.
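The window division and interval extraction described above can be sketched as follows; this is a minimal illustration assuming each message is reduced to a (timestamp, sensor_value) pair, with hypothetical function names. The actual image-data generation performed by the device is not reproduced here.

```python
def sliding_windows(messages, window_size, stride):
    """Split a time-ordered message sequence into fixed-size subwindows.
    Each message is modeled as a (timestamp, sensor_value) tuple."""
    return [messages[i:i + window_size]
            for i in range(0, len(messages) - window_size + 1, stride)]

def reception_intervals(window):
    """Reception intervals between messages adjacent in reception time."""
    return [t2 - t1 for (t1, _), (t2, _) in zip(window, window[1:])]
```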
- The abnormality detection device 10A performs event classification processing (S22). More specifically, the abnormality detection device 10A uses the learned CNN 121 to classify, from the image generated in the input processing of step S21 showing the transition of the reception intervals or sensor values of the plurality of messages, whether or not there was an attack event in which an attack message is inserted during the certain period.
- The abnormality detection device 10A determines whether the classification result of the event classification processing performed in step S22 indicates that there is an additional-type attack (S23).
- In step S23, when the classification result indicates that there is an additional-type attack (Yes in S23), the abnormality detection device 10A further performs message classification processing (S24). More specifically, the abnormality detection device 10A performs processing of detecting whether each of the plurality of messages included in the input message sequence for the certain period is a normal message or an attack message.
- The detailed processing of step S24 will now be described.
- In step S24, the abnormality detection device 10A performs CNN Message Classifier processing (S241), LSTM Message Classifier processing (S242), and Human Message Classifier processing (S243). Since the CNN Message Classifier processing is performed by the CNN Message Classifier 141 described above and the LSTM Message Classifier processing is performed by the LSTM Message Classifier 142 described above, detailed description thereof is omitted. Similarly, the Human Message Classifier processing is performed by the Human Message Classifier 143 described above; this detailed processing will be described later.
- The abnormality detection device 10A performs ensemble processing on the first detection result obtained in step S241, the second detection result obtained in step S242, and the determination result obtained in step S243, and outputs the result.
- When the classification result in step S23 indicates that there is no additional-type attack (No in S23), the abnormality detection device 10A outputs a detection result indicating that there is no additional-type attack in the input message sequence for the certain period, that is, a detection result indicating normality (S25).
- FIG. 27 is a flowchart showing an example of the detailed processing of step S243 shown in FIG. 26.
- The abnormality detection device 10A calculates the difference values of the sensor values of the received messages (S2431). More specifically, the abnormality detection device 10A calculates the difference values of the sensor values for all combinations of two messages adjacent in reception time among the plurality of messages included in the current subwindow (fixed period) acquired in the input processing of step S21.
- The abnormality detection device 10A performs grouping (classification) on the sensor-value differences calculated in step S2431 (S2432). More specifically, the abnormality detection device 10A generates groups by grouping the difference values of the sensor values of all combinations of two messages adjacent in reception time calculated in step S2431.
- the abnormality detection device 10A acquires the difference value included in each group generated in step S2432 (S2433).
- the abnormality detection device 10A determines whether the difference value included in each group acquired in step S2433 is 0 (S2434).
- In step S2434, when the difference values included in a group are 0 (Yes in S2434), the abnormality detection device 10A determines that the group does not correspond to an attack, and determines that the received messages included in the group are normal (S2435). More specifically, when the difference values included in the group are 0, the abnormality detection device 10A determines that all of the plurality of messages included in the group are normal messages.
- In step S2434, when the difference values included in a group are not 0 (No in S2434), the abnormality detection device 10A determines that the received messages included in the group are detection targets (S2436). More specifically, if the difference values included in the group are not 0, the abnormality detection device 10A determines that the plurality of messages included in the group are detection targets, because the group may include an attack message.
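The difference-value calculation and grouping of steps S2431 to S2436 can be sketched as follows. How the difference values are grouped is not specified in detail, so this sketch groups runs of equal difference values as an assumption; the function names are illustrative.

```python
from itertools import groupby

def pairwise_diffs(sensor_values):
    """Difference values of sensor values for all pairs of messages
    adjacent in reception time within one subwindow."""
    return [b - a for a, b in zip(sensor_values, sensor_values[1:])]

def judge_groups(diffs):
    """Group runs of equal difference values; a group whose difference
    values are all 0 is judged normal (S2435), any other group becomes
    a detection target (S2436)."""
    return [("normal" if value == 0 else "detection_target", len(list(run)))
            for value, run in groupby(diffs)]
```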
- As described above, an attack on the in-vehicle network is detected in two steps: detecting whether or not there is an attack event in the message sequence for a certain period, and, when an attack event is present, detecting the attack message. This makes it possible to accurately detect an attack on the in-vehicle network.
- In addition, detection using a learned CNN, detection using a learned LSTM, and detection based on a predetermined rule are performed in parallel, and the results are ensemble-processed, so that detection by neural network models and detection by a rule base can be combined.
- As a result, the false positive rate for attack messages can be brought close to zero, so that an attack on the in-vehicle network can be detected accurately.
- In this modification, as another example of the predetermined rule described above, the Human Message Classifier 143 determines whether each of the plurality of messages included in the message sequence for a certain period is an attack message, based on a rule using clustering and regression.
- In other words, the abnormality detection device 10A is not limited to performing the detailed processing shown in FIG. 27 as the detailed processing of step S243 shown in FIG. 26, and may perform the detailed processing shown in FIG. 28 described below.
- FIG. 28 is a flowchart showing another example of the detailed processing of step S243 shown in FIG. 26.
- The abnormality detection device 10A calculates the reception intervals and the differences of the sensor values (S2431A). More specifically, the abnormality detection device 10A calculates the reception intervals of the plurality of messages included in the current subwindow (fixed period) acquired in the input processing of step S21, and the difference values of the sensor values for all combinations of two messages adjacent in reception time.
- The abnormality detection device 10A classifies the messages based on the reception intervals calculated in step S2431A (S2432A). More specifically, the abnormality detection device 10A classifies each of the plurality of messages included in the fixed period as a normal message or an abnormal message based on the reception intervals calculated in step S2431A.
- The abnormality detection device 10A determines whether all the messages included in the fixed period could be classified in step S2432A (S2433A).
- If they could be classified, the abnormality detection device 10A performs determination for each sensor based on the message classification result of step S2432A (S2434A).
- FIG. 29 is a diagram showing an example of an abnormal message series in a case where all messages included in a certain period according to a modified example of the second embodiment can be classified based on a reception interval.
- FIG. 29A shows a case where the attack message is inserted immediately after the normal message.
- FIG. 29B shows a case where the attack message is inserted immediately before the normal message.
- FIG. 30 is a diagram showing an example of the classification result of an abnormal message series when all the messages included in a certain period according to the modified example of the second embodiment can be classified based on the reception interval.
- The reception interval between an attack message and a normal message is an abnormal interval, and the reception interval between normal messages is known. Therefore, all messages can be classified into attack messages and normal messages based on the reception interval, for example, as shown in FIG. 30.
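A minimal sketch of this interval-based classification follows. The known normal cycle, the tolerance parameter, and the assumption that the first message is normal are all illustrative choices, not taken from the specification.

```python
def classify_by_interval(timestamps, normal_cycle, tolerance):
    """Label each message normal/attack from reception intervals, assuming
    the first message is normal and normal messages arrive every
    `normal_cycle` time units (within `tolerance`)."""
    labels = ["normal"]
    expected = timestamps[0] + normal_cycle
    for t in timestamps[1:]:
        if abs(t - expected) <= tolerance:
            labels.append("normal")
            expected = t + normal_cycle  # re-anchor on each normal message
        else:
            labels.append("attack")  # off-cycle message treated as inserted
    return labels
```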
- On the other hand, when all the messages could not be classified, the abnormality detection device 10A determines the messages and sensors included in the fixed period by a plurality of rules (S2435A).
- FIG. 31 is a diagram showing an example of the classification result of an abnormal message series when all the messages included in a certain period according to the modified example of the second embodiment cannot be classified based on the reception interval.
- When an attack message is inserted in the middle of the reception interval of normal messages, the normal messages and the attack messages cannot be correctly classified based on the reception interval, as shown in FIG. 31. That is, if only the reception intervals of the abnormal message sequence are used, the attack messages and the normal messages may not be distinguishable.
- Next, step S2434A, that is, the determination processing for each sensor when all the messages could be classified, will be described.
- FIG. 32 is a flowchart showing an example of the detailed processing of step S2434A shown in FIG. 28.
- FIG. 33A is a diagram conceptually showing an example of the case of Yes in step S24345 shown in FIG. 32.
- FIG. 33B is a diagram conceptually showing an example of the case of Yes in step S24347 shown in FIG. 32.
- FIG. 33C is a diagram conceptually showing an example of the case of No in step S24347 shown in FIG. 32.
- In step S2434A, first, the abnormality detection device 10A determines whether or not the messages contain information from a plurality of sensors (S24341). More specifically, the abnormality detection device 10A determines whether or not sensor values obtained from a plurality of sensors are included in each of the messages included in the certain period.
- In step S24341, when the messages contain information from a plurality of sensors (Yes in S24341), the abnormality detection device 10A adopts the classification result of all the messages performed in step S2432A shown in FIG. 28 (S24342). More specifically, when sensor values obtained from a plurality of sensors are included in each of the messages included in the fixed period, the abnormality detection device 10A outputs the classification result of all the messages performed in step S2432A as the result of this detailed processing.
- In step S24341, when the messages do not contain information from a plurality of sensors (No in S24341), the abnormality detection device 10A performs the processing of steps S24343 to S24348 for each sensor. More specifically, in this case, the abnormality detection device 10A performs processing of dividing the messages into two groups based on the message reception intervals (S24343).
- The abnormality detection device 10A uses the difference values calculated in step S2431A shown in FIG. 28 to determine whether the difference value of the sensor values is constant in at least one of the two groups divided in step S24343 (S24344).
- In step S24344, if the difference value of the sensor values is not constant in at least one group (No in S24344), the abnormality detection device 10A proceeds to step S24342.
- In step S24344, when the difference value of the sensor values is constant in at least one group (Yes in S24344), the abnormality detection device 10A determines whether the sensor values are the same across the two groups (S24345).
- In step S24345, if the sensor values are the same across the two groups (Yes in S24345), the abnormality detection device 10A proceeds to step S24348. More specifically, when the sensor values are the same across the two groups, for example, as shown in FIG. 33A, the sensor values of the attack messages and the normal messages are flat, and it is not possible to distinguish between the attack messages and the normal messages. Therefore, the abnormality detection device 10A proceeds to step S24348 and determines that all the messages are normal messages.
- In step S24345, when the sensor values are not the same across the two groups (No in S24345), the abnormality detection device 10A divides the sensor values into two groups using the K-means method or the like as a clustering algorithm (S24346).
- the abnormality detection device 10A determines whether or not there is a group having the same sensor value as a result of the process in step S24346 (S24347).
- If there is no group having the same sensor value in step S24347 (Yes in S24347), the process proceeds to step S24348. More specifically, when there is no group having the same sensor value as a result of the processing of step S24346, it is considered that, for example, as shown in FIG. 33B, copies of the normal sensor value are lined up and the normal sensor value (fixed value) is fluctuating. Therefore, the abnormality detection device 10A proceeds to step S24348 and determines that all the messages are normal messages.
- If there is a group having the same sensor value in step S24347 (No in S24347), the process proceeds to step S24342. More specifically, when there is a group having the same sensor value as a result of the processing of step S24346, it is considered that, for example, as shown in FIG. 33C, the values are divided into two groups: a group of normal sensor values and a group of sensor values fixed to a constant value. Therefore, the abnormality detection device 10A proceeds to step S24342 and adopts the classification result of all the messages performed in step S2432A.
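The two-group clustering of sensor values in step S24346 and the same-value-group check in step S24347 can be sketched as follows. The specification names the K-means method as one option; this self-contained 1-D 2-means with min/max seeding is an implementation assumption, and the function names are illustrative.

```python
def two_means_1d(values, iters=20):
    """Minimal 1-D 2-means: split sensor values into two clusters,
    seeding the centroids at the minimum and maximum values."""
    centroids = [float(min(values)), float(max(values))]
    groups = ([], [])
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            idx = 0 if abs(v - centroids[0]) <= abs(v - centroids[1]) else 1
            groups[idx].append(v)
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return groups

def has_constant_group(groups):
    """True if some non-empty cluster holds a single repeated sensor value."""
    return any(len(set(g)) == 1 for g in groups if g)
```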
- In this way, by clustering the reception intervals and the sensor values, the abnormality detection device 10A can group the messages classified in step S2433A shown in FIG. 28 as normal or abnormal, including whether an abnormality is due to an equivalence attack or a shift attack.
- Next, step S2435A shown in FIG. 28, that is, the determination processing for each sensor when all the messages could not be classified, will be described.
- FIG. 34 is a flowchart showing an example of the detailed processing of step S2435A shown in FIG. 28.
- FIG. 35 is a diagram showing a determination rule used in the detailed processing of step S2435A shown in FIG. 28.
- FIG. 36A is a diagram for conceptually explaining whether the message that could not be classified is the end of the attack.
- FIG. 36B is a diagram for conceptually explaining the reception interval rule in the same group shown in FIG. 35.
- FIGS. 36C and 36D are diagrams conceptually showing an example of the case of Yes in step S24354 shown in FIG. 34.
- FIG. 36E is a diagram showing an example of the determination result of step S24359 shown in FIG. 34.
- In step S2435A, first, the abnormality detection device 10A determines whether or not it is the end of an attack (S24351). More specifically, the abnormality detection device 10A determines whether the message that could not be classified in step S2433A is a normal message immediately following the end of an attack message sequence.
- FIG. 36A illustrates, in the message sequence, a message indicated as A that could not be classified in the current subwindow and a message indicated as B that is the last attack message. That is, as shown in FIG. 36A, when the message indicated as B is the end of the attack messages, only the first message of the current subwindow, indicated as A, has a shorter reception interval, but the reception intervals of the subsequent messages are normal.
- The abnormality detection device 10A confirms whether it is the end of an attack by determining whether the following three conditions are satisfied: only the first message in the current subwindow cannot be classified; the other messages show a normal reception interval; and there was an additional-type attack in a past subwindow.
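The three-condition check can be sketched as follows; the per-message boolean inputs (which messages are unclassified, which show a normal reception interval, whether a past subwindow contained an attack) are an assumed encoding for illustration.

```python
def is_attack_end(unclassified_flags, interval_normal_flags,
                  past_window_had_attack):
    """Check the three conditions for judging the end of an attack:
    only the first message is unclassified, the remaining messages show
    a normal reception interval, and a past subwindow contained an
    additional-type attack."""
    only_first_unclassified = (
        unclassified_flags[0] and not any(unclassified_flags[1:])
    )
    return (only_first_unclassified
            and all(interval_normal_flags[1:])
            and past_window_had_attack)
```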
- If it is confirmed in step S24351 that it is the end of an attack (Yes in S24351), the abnormality detection device 10A proceeds to step S24352 and determines that all the sensors are normal (S24352).
- When it is confirmed in step S24351 that it is not the end of an attack (No in S24351), the abnormality detection device 10A performs the processing of steps S24353 to S24359 for each sensor. More specifically, in this case, the abnormality detection device 10A performs clustering based on the message reception intervals, using the K-means method or the like as a clustering algorithm (S24353).
- The abnormality detection device 10A determines whether or not the plurality of messages included in a group generated by the clustering have a clearly abnormal reception interval and are not a copy pattern (equal values) (S24354).
- In step S24354, if the reception interval is clearly abnormal and it is not a copy pattern (equal values) (Yes in S24354), the abnormality detection device 10A makes a determination based on the reception interval rule within the same group (S24355).
- The case where the reception interval is clearly abnormal corresponds to the case where the reception intervals of the attack messages and the normal messages are clearly disturbed and the reception intervals of the messages can be clearly classified into a normal group or an attack group.
- The reception interval rule within the same group prescribes that messages with the same reception interval have the same determination result. More specifically, as the reception interval rule within the same group, for example, the rule illustrated at the top of FIG. 35 is predetermined.
- The reception interval rule within the same group shown in FIG. 35 prescribes that a message is determined to be normal if it is classified into a normal group, and abnormal (attacked) if it is classified into an attack group. For example, as shown in FIG. 36B, messages having the same reception interval are classified into the same group. More specifically, when a message is included in the frame A, for example, it is classified into a normal group, so its reception interval can be determined to be normal. Similarly, when a message is included in the frame B, for example, it is classified into an attack group, so its reception interval is abnormal and it can be determined that an attack has been performed.
- In step S24354, if there is no clearly abnormal reception interval or it is a copy pattern (equal values) (No in S24354), the process proceeds to step S24356, and the equivalence attack determination rule is used for determination (S24356).
- The equivalence attack corresponds to the case where, for example, an attack message is inserted in the middle of the normal message cycle and the reception intervals of the attack messages and the normal messages are almost equal.
- the case where there is no abnormal reception interval corresponds to the case where an attack message is inserted from the middle of the window.
- the case of the copy pattern corresponds to the case where an attack message having the same sensor value as the sensor value (normal value) of the normal message is inserted, for example, as shown in FIG. 36D.
- As the equivalence attack determination rule, for example, the rule illustrated in the middle of FIG. 35 is predetermined. That is, the equivalence attack determination rule shown in FIG. 35 prescribes that the difference between the sensor value of each message and that of the immediately preceding message is calculated and grouped, and that a group in which a difference value of 0 continues is determined to be normal, while a group in which difference values other than 0 continue is determined to be abnormal.
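The equivalence attack determination rule (difference from the immediately preceding message, grouping, 0-run normal / non-0-run abnormal) can be sketched as follows; the function name and the run-based grouping are illustrative assumptions.

```python
from itertools import groupby

def equivalence_attack_rule(sensor_values):
    """Take the difference of each sensor value from the immediately
    preceding message, group runs of differences, and judge runs of 0
    as normal and runs of non-zero differences as abnormal."""
    diffs = [b - a for a, b in zip(sensor_values, sensor_values[1:])]
    return [("normal" if is_zero else "abnormal", len(list(run)))
            for is_zero, run in groupby(diffs, key=lambda d: d == 0)]
```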
- the abnormality detection device 10A determines whether or not there is an unclassified message (S24357).
- In step S24357, if there is no unclassified message (No in S24357), all the messages have been classified, so the abnormality detection device 10A ends the processing.
- In step S24357, if there is an unclassified message (Yes in S24357), the abnormality detection device 10A determines whether the past subwindow is normal (S24358).
- the abnormality detection device 10A can determine whether or not the past sub-window is normal by determining whether or not all the sensor values of the plurality of messages included in the past sub-window are normal.
- In step S24358, if the past subwindow is not normal (No in S24358), the process proceeds to step S24355.
- In step S24358, when the past subwindow is normal (Yes in S24358), the abnormality detection device 10A makes a determination using the regression determination rule (S24359).
- As the regression determination rule, for example, the rule illustrated at the bottom of FIG. 35 is predetermined.
- The regression determination rule shown in FIG. 35 prescribes that, since the sensor value of a normal message changes continuously, a message is determined to be normal if the regression error is small, and abnormal if the regression error is large.
- As shown in FIG. 36E, when the past subwindow surrounded by the frame is normal, it is assumed that in the current subwindow the sensor values of normal messages change continuously within a range that follows the same tendency as the last normal message of the past subwindow. Therefore, the regression line of the past subwindow surrounded by the frame is calculated, and the error between the sensor value indicated by each message in the current subwindow and the regression line is calculated. As a result, a message in the current subwindow can be determined to be normal if its regression error is small, and abnormal (attacked) if its regression error is large.
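The regression determination rule can be sketched as follows: fit a least-squares line to the past subwindow's sensor values, extrapolate into the current subwindow, and threshold the error. The threshold parameter and the use of message indices as the regression axis are assumptions for illustration.

```python
def fit_line(xs, ys):
    """Ordinary least-squares line fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def regression_rule(past_values, current_values, threshold):
    """Fit a regression line to the past subwindow's sensor values and
    judge each current-subwindow value by its error from the
    extrapolated line."""
    slope, intercept = fit_line(range(len(past_values)), past_values)
    labels = []
    for i, v in enumerate(current_values, start=len(past_values)):
        error = abs(v - (slope * i + intercept))
        labels.append("normal" if error <= threshold else "abnormal")
    return labels
```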
- In this way, the abnormality detection device 10A classifies the messages that could not be classified by the reception interval in step S2433A shown in FIG. 28, making a further determination for each sensor, so that it can determine whether the messages that could not be classified by the reception interval are normal or abnormal.
- The subject and device that perform each process are not particularly limited. The processes may be performed by a processor built into a specific device located locally, or by a cloud server or the like located at a place different from the local device.
- this disclosure also includes the following cases.
- the above-mentioned device is a computer system composed of a microprocessor, ROM, RAM, a hard disk unit, a display unit, a keyboard, a mouse, and the like.
- a computer program is stored in the RAM or the hard disk unit.
- When the microprocessor operates according to the computer program, each device achieves its function.
- a computer program is configured by combining a plurality of instruction codes indicating commands to a computer in order to achieve a predetermined function.
- a part or all of the components constituting the above device may be composed of one system LSI (Large Scale Integration).
- A system LSI is a super-multifunctional LSI manufactured by integrating a plurality of components on one chip, and is specifically a computer system including a microprocessor, a ROM, a RAM, and the like.
- a computer program is stored in the RAM. When the microprocessor operates according to the computer program, the system LSI achieves its function.
- Some or all of the components constituting the above device may be composed of an IC card or a single module that can be attached to and detached from each device.
- the IC card or the module is a computer system composed of a microprocessor, ROM, RAM and the like.
- the IC card or the module may include the above-mentioned super multifunctional LSI.
- When the microprocessor operates according to a computer program, the IC card or the module achieves its function. The IC card or the module may have tamper resistance.
- The present disclosure may be the methods described above. It may also be a computer program that realizes these methods by a computer, or a digital signal composed of the computer program.
- The present disclosure may also be the computer program or the digital signal recorded on a computer-readable recording medium, for example, a flexible disk, a hard disk, a CD-ROM, an MO, a DVD, a DVD-ROM, a DVD-RAM, a BD (Blu-ray (registered trademark) Disc), a semiconductor memory, or the like. It may also be the digital signal recorded on these recording media.
- the computer program or the digital signal may be transmitted via a telecommunication line, a wireless or wired communication line, a network typified by the Internet, data broadcasting, or the like.
- The present disclosure may also be a computer system including a microprocessor and a memory, in which the memory stores the computer program and the microprocessor operates according to the computer program.
- The present disclosure can be used for anomaly detection methods, anomaly detection devices, and programs for detecting anomalies in messages transmitted in an in-vehicle network, and in particular for anomaly detection methods, anomaly detection devices, and programs that are mounted on a vehicle together with the in-vehicle network and detect anomalies in messages transmitted in the in-vehicle network.
Description
Non-Patent Literature 1 discloses a technique for accurately detecting an attack on the CAN bus from CAN packets using a deep neural network (DNN).
Hereinafter, the information processing method and the like of the abnormality detection device 10 according to Embodiment 1 will be described with reference to the drawings.
FIG. 1 is a diagram showing an overview of the abnormality detection device 10 according to Embodiment 1.
FIG. 3 is a diagram showing an example of a hardware configuration of a computer 1000 that realizes the functions of the abnormality detection device 10 according to the embodiment by software.
FIG. 4 is a block diagram showing an example of the configuration of the abnormality detection device 10 according to Embodiment 1.
When a CAN data stream flowing through the in-vehicle network is input, the input processing unit 11 acquires a plurality of messages. The input processing unit 11 also converts the reception intervals or sensor-value transitions of the acquired messages into image data. In the present embodiment, the input processing unit 11 converts into image data the reception intervals or sensor-value transitions of a plurality of messages included in a message sequence for a certain period out of the message sequence received from the in-vehicle network.
The message receiving unit 111 receives a message sequence from the in-vehicle network when a CAN data stream flowing through the in-vehicle network is input.
The window generation processing unit 112 acquires a plurality of messages included in a certain period by dividing the message sequence received from the in-vehicle network by the message receiving unit 111, using a sliding window.
The imaging processing unit 113 generates, from the plurality of messages acquired by the window generation processing unit 112, image data representing the reception intervals of the messages or an image representing the transition of their sensor values. In this way, the imaging processing unit 113 converts the reception intervals or sensor-value transitions of the acquired messages into image data.
The event classification processing unit 12 detects whether there is an additional-type attack on a certain period of the message sequence received from the in-vehicle network. More specifically, the event classification processing unit 12 uses a learned CNN (Convolutional Neural Network) 121 to classify, from the image data, whether or not an attack message is inserted into the plurality of messages included in the certain period. The plurality of messages included in the certain period are the plurality of messages included in the current subwindow described above.
When an attack message is inserted in a certain period of the message sequence received from the in-vehicle network, the output processing unit 13 outputs a detection result indicating that there is an additional-type attack in which an attack message is inserted in the certain period. That is, when the event classification processing unit 12 detects that there is an additional-type attack on a certain period of the message sequence received from the in-vehicle network, the output processing unit 13 outputs a detection result to that effect.
Next, the operation of the abnormality detection device 10 configured as described above will be described.
As described above, according to the abnormality detection device 10 and the abnormality detection method of the present embodiment, the reception intervals or sensor values of messages are disturbed when an additional-type attack is performed, so an additional-type attack on a message sequence for a certain period can be accurately detected by using an image of the transition of the reception intervals or sensor values of the plurality of messages. Moreover, the disturbance of message reception intervals or sensor values caused by an additional-type attack is a phenomenon that cannot be disguised even if cyber-attack techniques become smarter. Therefore, even if cyber-attack techniques become smarter, it is unlikely that attacks on the CAN bus will become undetectable.
In Embodiment 1, detecting an attack event in window units, that is, detecting whether an additional-type attack has been performed on a message sequence for a certain period out of the message sequence received from the in-vehicle network, has been described; however, the present disclosure is not limited to this. When an attack event is detected in window units, whether each message is normal or abnormal may further be detected in message units. In other words, when it is detected that an additional-type attack has been performed on a message sequence for a certain period, whether each of the plurality of messages included in the message sequence is a normal message or an attack message may further be determined or detected. This case will be described below as Embodiment 2.
FIG. 16 is a block diagram showing an example of the configuration of the abnormality detection device 10A according to Embodiment 2.
When a CAN data stream flowing through the in-vehicle network is input, the input processing unit 11A acquires a plurality of messages. The input processing unit 11A converts the reception intervals or sensor-value transitions of the acquired messages into image data and outputs the image data to the event classification processing unit 12. The input processing unit 11A also outputs the acquired messages to the message classification processing unit 14A.
When an attack event is detected by the event classification processing unit 12, the output processing unit 13 transmits to the message classification processing unit 14A an instruction to detect whether each message is normal or abnormal in message units, and acquires the detection result from the message classification processing unit 14A. In this case, the output processing unit 13 may output a detection result indicating that there is an additional-type attack in which an attack message is inserted in the certain period and indicating whether each message is normal or abnormal.
When an attack event is detected in window units by the event classification processing unit 12, the message classification processing unit 14A detects whether each message is normal or abnormal in message units. More specifically, when an attack event is detected in window units by the event classification processing unit 12, the message sequence for the certain period from the input processing unit 11A is input to the message classification processing unit 14A. The message classification processing unit 14A detects whether each of the plurality of messages included in the input message sequence for the certain period is a normal message or an attack message.
When an attack event is detected in window units by the event classification processing unit 12, the message sequence for the certain period from the input processing unit 11A is input to the CNN Message Classifier 141.
When an attack event is detected in window units by the event classification processing unit 12, the message sequence for the certain period from the input processing unit 11A is input to the LSTM Message Classifier 142.
When an attack event is detected in window units by the event classification processing unit 12, the message sequence for the certain period from the input processing unit 11A is input to the Human Message Classifier 143.
Ensemble processing is a technique used in machine learning; it is processing that can output a prediction result combining the strengths of individual models while suppressing variation in the prediction results of the individual models.
Next, the operation of the abnormality detection device 10A configured as described above will be described.
As described above, according to the abnormality detection device 10A and the abnormality detection method of the present embodiment, an attack on the in-vehicle network is detected in two steps: detecting whether or not there is an attack event in the message sequence for a certain period, and, when there is an attack event, detecting the attack message. This makes it possible to accurately detect an attack on the in-vehicle network.
In this modification, a case will be described in which, as another example of the predetermined rule described above, the Human Message Classifier 143 determines whether each of the plurality of messages included in the message sequence for a certain period is an attack message, based on a rule using clustering and regression. In other words, the abnormality detection device 10A is not limited to performing the detailed processing shown in FIG. 27 as the detailed processing of step S243 shown in FIG. 26, and may perform the detailed processing shown in FIG. 28 described below.
Although the abnormality detection method and abnormality detection device of the present disclosure have been described above based on the embodiments, the subject and device that perform each process are not particularly limited. The processes may be performed by a processor built into a specific device located locally, or by a cloud server or the like located at a place different from the local device.
11, 11A Input processing unit
12 Event classification processing unit
13, 13A Output processing unit
14A Message classification processing unit
111 Message receiving unit
112 Window generation processing unit
113 Imaging processing unit
121 CNN
141 CNN Message Classifier
142 LSTM Message Classifier
143 Human Message Classifier
144 Ensemble processing unit
1000 Computer
1001 Input device
1002 Output device
1004 Built-in storage
1007 Reading device
1008 Transmitting/receiving device
1009 Bus
Claims (9)
- An anomaly detection method for detecting an anomaly in a network of an in-vehicle network system that includes a plurality of electronic control units exchanging messages via the network inside a vehicle, the method comprising:
converting, into image data, the reception intervals, or the transition of the sensor values, of a plurality of messages included in a message series of a certain period out of a message series received from the network;
classifying, using a trained CNN (Convolutional Neural Network), from the image data, whether or not an attack message has been inserted in the certain period; and
when an attack message has been inserted in the certain period, outputting a detection result indicating that there is an insertion-type attack in which an attack message is inserted in the certain period.
- The anomaly detection method according to claim 1, wherein:
the plurality of messages included in the certain period are obtained by splitting the message series received from the network using a sliding window; and
the reception intervals or the transition of the sensor values of the plurality of messages are converted into image data by generating, from the obtained plurality of messages, image data representing the reception intervals of the plurality of messages or an image representing the transition of the sensor values of the plurality of messages.
- The anomaly detection method according to claim 1 or 2, further comprising, when an attack message has been inserted in the certain period:
determining, based on a predetermined rule, from the sensor value included in each of the plurality of messages included in the message series of the certain period, whether or not each of the plurality of messages is an attack message.
- The anomaly detection method according to any one of claims 1 to 3, further comprising, when an attack message has been inserted in the certain period:
calculating difference values of the sensor values for all combinations of two messages adjacent in reception time among the plurality of messages included in the message series of the certain period;
grouping the calculated difference values;
determining whether all of the difference values included in each group are 0;
when not all of the difference values are 0, outputting a detection result indicating that there is an insertion-type attack in which an attack message is inserted in the certain period; and
when not all of the difference values are 0, outputting the detection result with the messages of a group whose difference values are not 0 regarded as not being attack messages.
- The anomaly detection method according to claim 1 or 2, further comprising, when an attack message has been inserted in the certain period:
detecting, using a trained CNN different from the trained CNN, from the plurality of messages included in the message series of the certain period, the message that is an attack message among the plurality of messages and the sensor that has been attacked.
- The anomaly detection method according to claim 1 or 2, further comprising, when an attack message has been inserted in the certain period:
detecting, using a trained LSTM (Long Short-Term Memory), from the plurality of messages included in the message series of the certain period, whether or not each of the plurality of messages is an attack message.
- The anomaly detection method according to claim 1 or 2, further comprising, when an attack message has been inserted in the certain period:
obtaining a determination result obtained by determining, based on a predetermined rule, from the sensor value included in each of the plurality of messages included in the message series of the certain period, whether or not each of the plurality of messages is an attack message;
obtaining a first detection result obtained by detecting, using a trained CNN different from the trained CNN, from the plurality of messages included in the message series of the certain period, the message that is an attack message among the plurality of messages and the sensor that has been attacked;
obtaining a second detection result obtained by detecting, using a trained LSTM, from the plurality of messages included in the message series of the certain period, whether or not each of the plurality of messages is an attack message; and
performing ensemble processing on the obtained determination result, first detection result, and second detection result, and outputting the result,
wherein the ensemble processing either selects one of the determination result, the first detection result, and the second detection result, or integrates the obtained determination result, first detection result, and second detection result by weighted averaging.
- An anomaly detection device for detecting an anomaly in a network of an in-vehicle network system that includes a plurality of electronic control units exchanging messages via the network inside a vehicle, the device comprising a processor and a memory, the device:
converting, into image data, the reception intervals, or the transition of the sensor values, of a plurality of messages included in a message series of a certain period out of a message series received from the network;
classifying, using a trained CNN, from the image data, whether or not an attack message has been inserted in the certain period; and
when an attack message has been inserted in the certain period, outputting a detection result indicating that there is an insertion-type attack in which an attack message is inserted in the certain period.
- A program causing a computer to execute an anomaly detection method for detecting an anomaly in a network of an in-vehicle network system that includes a plurality of electronic control units exchanging messages via the network inside a vehicle, the program causing the computer to execute:
converting, into image data, the reception intervals, or the transition of the sensor values, of a plurality of messages included in a message series of a certain period out of a message series received from the network;
classifying, using a trained CNN, from the image data, whether or not an attack message has been inserted in the certain period; and
when an attack message has been inserted in the certain period, outputting a detection result indicating that there is an insertion-type attack in which an attack message is inserted in the certain period.
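As an illustration of the windowing and image-conversion steps in claims 1 and 2, here is a minimal sketch, not the publication's implementation: the window length, stride, the 8×8 pixel layout, and the peak normalization are all assumptions made for the example.

```python
def sliding_windows(timestamps, window, stride):
    """Split a message series (sorted reception timestamps, in seconds)
    into windows of `window` seconds, advanced by `stride` seconds."""
    if not timestamps:
        return []
    out = []
    start, t_end = timestamps[0], timestamps[-1]
    while start <= t_end:
        out.append([t for t in timestamps if start <= t < start + window])
        start += stride
    return out

def interval_image(window_msgs, width=8):
    """Render the reception intervals of one window as a tiny grayscale
    'image' (a width x width list of pixel rows) usable as CNN input."""
    intervals = [b - a for a, b in zip(window_msgs, window_msgs[1:])]
    n = width * width
    intervals = (intervals + [0.0] * n)[:n]   # pad/truncate to n pixels
    peak = max(intervals) or 1.0              # avoid division by zero
    pixels = [int(255 * v / peak) for v in intervals]
    return [pixels[i:i + width] for i in range(0, n, width)]
```

A periodic sender produces a flat image; an inserted attack message shortens one interval, which shows up as a darker pixel the CNN can learn to spot.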
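Claim 4's difference-and-grouping check does not specify the grouping criterion; the sketch below uses round-robin grouping purely as a placeholder, and the `(reception_time, sensor_value)` tuple representation is likewise an assumption.

```python
def detect_insertion_by_differences(messages, num_groups):
    """messages: list of (reception_time, sensor_value) tuples.
    Computes the sensor-value difference of every pair of messages
    adjacent in reception time, distributes the differences into
    `num_groups` groups (round-robin here, as a placeholder), and
    flags the window as attacked when any group holds a non-zero
    difference. Returns (attack_detected, indices_of_nonzero_groups)."""
    msgs = sorted(messages)  # order by reception time
    diffs = [b[1] - a[1] for a, b in zip(msgs, msgs[1:])]
    groups = [diffs[i::num_groups] for i in range(num_groups)]
    nonzero = [i for i, g in enumerate(groups) if any(d != 0 for d in g)]
    return bool(nonzero), nonzero
```

The intuition: a legitimate periodic sender repeating the same value yields all-zero differences, while an inserted message with a spoofed value breaks that pattern.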
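The ensemble step of claim 7 (selecting one of the three results, or integrating them by weighted average) might look like the following sketch; the [0, 1] score representation, the dictionary keys, and the default equal weights are assumptions, not details from the publication.

```python
def ensemble(rule_score, cnn_score, lstm_score, weights=None, select=None):
    """Combine per-window attack scores (each assumed to be in [0, 1])
    from the rule-based check, the second CNN, and the LSTM.
    If `select` names one detector, its score is passed through;
    otherwise the three scores are integrated by a weighted average."""
    scores = {"rule": rule_score, "cnn": cnn_score, "lstm": lstm_score}
    if select is not None:
        return scores[select]
    weights = weights or {"rule": 1.0, "cnn": 1.0, "lstm": 1.0}
    total = sum(weights.values())
    return sum(weights[k] * scores[k] for k in scores) / total
```

Selection is useful when one detector is known to dominate for a given message ID; the weighted average trades that off against robustness when no single detector is reliable on its own.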
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202180076727.6A CN116457783A (zh) | 2020-11-24 | 2021-11-24 | Anomaly detection method, anomaly detection device, and program |
EP21897997.9A EP4254236A4 (en) | 2020-11-24 | 2021-11-24 | ANOMALY DETECTION METHOD, ANOMALY DETECTION DEVICE, AND PROGRAM |
JP2022565387A JPWO2022114025A1 (ja) | 2020-11-24 | 2021-11-24 | |
US18/197,460 US20230283622A1 (en) | 2020-11-24 | 2023-05-15 | Anomaly detection method, anomaly detection device, and recording medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202063117690P | 2020-11-24 | 2020-11-24 | |
US63/117,690 | 2020-11-24 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/197,460 Continuation US20230283622A1 (en) | 2020-11-24 | 2023-05-15 | Anomaly detection method, anomaly detection device, and recording medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022114025A1 true WO2022114025A1 (ja) | 2022-06-02 |
Family
ID=81755555
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/043063 WO2022114025A1 (ja) | 2020-11-24 | 2021-11-24 | Anomaly detection method, anomaly detection device, and program |
Country Status (5)
Country | Link |
---|---|
US (1) | US20230283622A1 (ja) |
EP (1) | EP4254236A4 (ja) |
JP (1) | JPWO2022114025A1 (ja) |
CN (1) | CN116457783A (ja) |
WO (1) | WO2022114025A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115150198A (zh) * | 2022-09-01 | 2022-10-04 | Guoqi Intelligent Control (Beijing) Technology Co., Ltd. | Vehicle-mounted intrusion detection system and method, electronic device, and storage medium |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7517223B2 (ja) * | 2021-03-29 | 2024-07-17 | Denso Corporation | Attack analysis device, attack analysis method, and attack analysis program |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018168291A1 (ja) * | 2017-03-13 | 2018-09-20 | Panasonic Intellectual Property Corporation of America | Information processing method, information processing system, and program |
WO2019240054A1 (ja) * | 2018-06-11 | 2019-12-19 | The University of Tokyo | Communication device, packet processing method, and program |
CN111031071A (zh) * | 2019-12-30 | 2020-04-17 | Hangzhou DPtech Technologies Co., Ltd. | Malicious traffic identification method and apparatus, computer device, and storage medium |
CN111245848A (zh) * | 2020-01-15 | 2020-06-05 | Taiyuan University of Technology | Industrial control intrusion detection method based on hierarchical dependency modeling |
- 2021
- 2021-11-24 EP EP21897997.9A patent/EP4254236A4/en active Pending
- 2021-11-24 JP JP2022565387A patent/JPWO2022114025A1/ja active Pending
- 2021-11-24 CN CN202180076727.6A patent/CN116457783A/zh active Pending
- 2021-11-24 WO PCT/JP2021/043063 patent/WO2022114025A1/ja active Application Filing
- 2023
- 2023-05-15 US US18/197,460 patent/US20230283622A1/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018168291A1 (ja) * | 2017-03-13 | 2018-09-20 | Panasonic Intellectual Property Corporation of America | Information processing method, information processing system, and program |
WO2019240054A1 (ja) * | 2018-06-11 | 2019-12-19 | The University of Tokyo | Communication device, packet processing method, and program |
CN111031071A (zh) * | 2019-12-30 | 2020-04-17 | Hangzhou DPtech Technologies Co., Ltd. | Malicious traffic identification method and apparatus, computer device, and storage medium |
CN111245848A (zh) * | 2020-01-15 | 2020-06-05 | Taiyuan University of Technology | Industrial control intrusion detection method based on hierarchical dependency modeling |
Non-Patent Citations (1)
Title |
---|
See also references of EP4254236A4 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115150198A (zh) * | 2022-09-01 | 2022-10-04 | Guoqi Intelligent Control (Beijing) Technology Co., Ltd. | Vehicle-mounted intrusion detection system and method, electronic device, and storage medium |
CN115150198B (zh) * | 2022-09-01 | 2022-11-08 | Guoqi Intelligent Control (Beijing) Technology Co., Ltd. | Vehicle-mounted intrusion detection system and method, electronic device, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
EP4254236A4 (en) | 2024-03-27 |
US20230283622A1 (en) | 2023-09-07 |
CN116457783A (zh) | 2023-07-18 |
EP4254236A1 (en) | 2023-10-04 |
JPWO2022114025A1 (ja) | 2022-06-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108401491B (zh) | Information processing method, information processing system, and program | |
WO2022114025A1 (ja) | Anomaly detection method, anomaly detection device, and program | |
US11411681B2 (en) | In-vehicle information processing for unauthorized data | |
US11438350B2 (en) | Unauthorized communication detection method, unauthorized communication detection system, and non-transitory computer-readable recording medium storing a program | |
US11757903B2 (en) | Unauthorized communication detection reference deciding method, unauthorized communication detection reference deciding system, and non-transitory computer-readable recording medium storing a program | |
US12063233B2 (en) | Unauthorized communication detection reference deciding method, unauthorized communication detection reference deciding system, and non- transitory computer-readable recording medium storing a program | |
US11765186B2 (en) | Unauthorized communication detection method, unauthorized communication detection system, and non-transitory computer-readable recording medium storing a program | |
JPWO2022114025A5 (ja) | ||
WO2018168291A1 (ja) | Information processing method, information processing system, and program | |
CN109478241B (zh) | Computer-implemented method, storage medium, and computing device for performing inference | |
US11803732B2 (en) | Device and method for classifying data in particular for a controller area network or an automotive ethernet network | |
US20210178995A1 (en) | Abnormal communication detection apparatus, abnormal communication detection method and program | |
CN114731301B (zh) | Determination method, determination system, and program recording medium | |
JP2022007238A (ja) | Information processing device, information processing method, and program | |
Francia et al. | Applied machine learning to vehicle security | |
Chougule et al. | HybridSecNet: In-Vehicle Security on Controller Area Networks Through a Hybrid Two-Step LSTM-CNN Model | |
JP7312965B2 (ja) | Information processing device, information processing method, and program | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21897997 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2022565387 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202180076727.6 Country of ref document: CN |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2021897997 Country of ref document: EP Effective date: 20230626 |