WO2024038417A1 - Dynamic system calls-level security defensive system for containerized applications - Google Patents
- Publication number
- WO2024038417A1 (PCT/IB2023/058297)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- system call
- sequence
- system calls
- host node
- calls
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/55—Detecting local intrusion or implementing counter-measures
- G06F21/554—Detecting local intrusion or implementing counter-measures involving event detection and direct action
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/55—Detecting local intrusion or implementing counter-measures
- G06F21/56—Computer malware detection or handling, e.g. anti-virus arrangements
- G06F21/566—Dynamic detection, i.e. detection performed at run-time, e.g. emulation, suspicious activities
Definitions
- the present disclosure relates to computer system security, and in particular, to a security defensive system for containerized applications.
- Containerization may refer to a type of virtualization in which components of an application are bundled into a single container.
- Containerization provides strong isolation to a process or a set of processes.
- the use of containers in computing environments such as cloud environments has increased rapidly as containers offer many advantages over traditionally used virtual machines (VMs). Some advantages include faster deployment, increased portability, and less resource overhead. However, containers are quite different from virtual machines and bring new security concerns.
- Containers may run directly on top of an operating system. Further, containers may have access to system calls (also known as “syscalls”) at the kernel level to execute one or more functionalities. However, containers can be exposed to attacks and be exploited to do something different from what they were intended to do (e.g., crypto mining). For example, containers may be exploited to compromise the security of containerized applications. An attacking entity or software (e.g., malware) can perform malicious activities by leveraging any system calls available to the exploited container. This can lead to security breaches ranging from data exfiltration to privilege escalation that can be used to gain access to a hosting node (i.e., a node hosting the container and/or containerized application). In addition, the underlying kernel (i.e., the kernel of the operating system where the containers are running) may be directly exposed to attacks from the containers.
- an attack surface may be minimized, e.g., by reducing interactions between containers and the kernel to a minimum such that only required interactions are allowed.
- System calls may be employed by the operating system to allow user-space programs to use kernel-space features (e.g., open a network connection, load a file into memory, etc.).
- protection rings may be used as a protection mechanism that limits process operations to their own address space.
- Seccomp helps reduce the attack surface of a given process by allowing only the usage of a subset of system calls, i.e., blocking (e.g., filtering out) the rest of the possible system calls.
- Such filtering may be performed by providing Seccomp with a profile composed of Berkeley Packet Filters (BPF), which may be transparent to the filtered process (i.e., the blocked process does not know which system calls it can/cannot use until it actually tries them).
- Seccomp is not limited to only allowing/blocking operations but can also log actions, kill the process, send signals, and/or return a specific error number.
- a Seccomp profile is defined (in advance and statically) and used to run the container that is to be protected, e.g., so that access is restricted to only the system calls that the container needs for its execution.
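The static, whitelist-style profile described above can be illustrated with a short Python sketch that builds a Docker-style Seccomp profile as a dictionary. The helper function and the syscall names are illustrative, not a real container's minimal set.

```python
# Sketch: building a Docker-style Seccomp allowlist profile as a Python dict.
# The syscall list is illustrative, not a real container's minimal set.

import json

def make_allowlist_profile(allowed_syscalls):
    """Return a Docker-format Seccomp profile that allows only the given
    syscalls and returns an error for everything else."""
    return {
        "defaultAction": "SCMP_ACT_ERRNO",   # block anything not listed
        "architectures": ["SCMP_ARCH_X86_64"],
        "syscalls": [
            {
                "names": sorted(allowed_syscalls),
                "action": "SCMP_ACT_ALLOW",
            }
        ],
    }

profile = make_allowlist_profile({"read", "write", "exit_group", "futex"})
print(json.dumps(profile, indent=2))
```

As the disclosure notes, a profile like this is fixed when the container starts; it cannot react to the order in which the allowed syscalls are invoked.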
- a list of system calls (i.e., a whitelist of syscalls) may be specified; however, such a list does not address some requirements: some syscalls are required by the container to run its normal functionality and thus cannot be blindly blocked.
- the generated profile is applied based on the regular Seccomp mechanism, which is only used at the start of the container and cannot be changed at runtime. More specifically, Seccomp profiles may be automatically generated by performing a dynamic analysis of a container and/or automatically generated for containers by tracing all system calls used during a unit and integration testing phase, provided that integration testing exists. Otherwise, the application may be fuzzed, e.g., to attempt to cover all its potential functionalities and the system calls it uses.
- Seccomp profiles may also be automatically generated by performing static binary analysis of the container application, e.g., by analyzing the compiled application code in assembly form and its calls into libc (i.e., the C programming language library). A minimal Seccomp profile required by the container may then be generated according to the static analysis.
- each split phase may be configured to use a different list of system calls to be whitelisted.
- containerized applications use a wide set of system calls during their startup phase, but then only require a reduced set of system calls afterward.
- the container application runtime may be split in two phases, and different sets of system calls during each phase may be restricted to reduce the attack surface.
- the container may be started with its startup Seccomp profile, and the Seccomp profile applied to the container processes may be updated to use the second set of syscalls whenever the container has left the startup phase.
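The two-phase split described above can be sketched as follows in Python; the syscall sets, the class, and the phase-change signal are hypothetical stand-ins for the actual Seccomp re-profiling:

```python
# Sketch: two-phase syscall allowlisting. The container starts with a broad
# startup set; once it signals the end of startup, only the smaller runtime
# set applies. Both sets are illustrative.

STARTUP_SYSCALLS = {"execve", "mmap", "openat", "read", "write", "clone"}
RUNTIME_SYSCALLS = {"read", "write", "futex", "epoll_wait"}

class PhasedFilter:
    def __init__(self):
        self.allowed = set(STARTUP_SYSCALLS)

    def end_startup(self):
        # Emulates re-applying a tighter profile after the startup phase.
        self.allowed = set(RUNTIME_SYSCALLS)

    def check(self, syscall):
        return "allow" if syscall in self.allowed else "block"

f = PhasedFilter()
assert f.check("execve") == "allow"   # needed during startup
f.end_startup()
assert f.check("execve") == "block"   # no longer needed at runtime
assert f.check("read") == "allow"
```

Even with two phases, each phase is still a static whitelist; the order of syscalls within a phase is not considered.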
- a whitelist-based approach, statically generated in advance, is also used in this case and is not able to block syscalls based on a known malicious sequence of syscalls.
- a security observability and runtime enforcement tool based on extended BPF (eBPF) may be used, e.g., to monitor system call invocations and follow process execution in containers. Further, system call filters may be enforced at runtime in containers.
- this tool lacks visualization aspects, makes it difficult for users to write rules to filter sequences of system calls, and relies solely on user knowledge to build security policies.
- Seccomp profiles are generated beforehand and do not address runtime threats that solely rely on system calls required by the container.
- In one or more embodiments, a system (e.g., a defensive system, also referred to as Phoenix or the Phoenix system) is provided. The system may be configured to provide dynamic security for system calls associated with containerized applications and/or containers, e.g., acting as an efficient monitoring and defensive system.
- the attacks may include zero-day attacks (i.e., attacks with no known patches yet).
- a root cause is learned (i.e., determined) by performing a security analysis (e.g., a security analysis received from another host node).
- the system may be configured to monitor the execution of each system call in a sequence of system calls (e.g., operating system calls).
- the monitoring may be performed (e.g., separately performed), such as by dynamically updating a kernel process (e.g., an in-kernel mechanism such as a Seccomp profile) at runtime based on the previously monitored/observed system call in the sequence.
- the system (i.e., Phoenix) enables stopping an attack (i.e., interrupting the execution of the attack by preventing it from succeeding).
- the attack is stopped by blocking a subsequent system call in the sequence based on the security criticality of at least one system call in the sequence. For example, some syscalls can be blocked if they are part of a known malicious sequence. If not part of a known malicious sequence, these syscalls may be allowed. This is not supported by existing technology.
- the sequence input to Phoenix can be learned using an anomaly detection system and/or an analysis of a provenance graph that captures an execution dependency between system calls of different processes in a containerized application, e.g., after incident occurrence.
- One or more embodiments block sequences of at least one system call that represents a threat, i.e., that could be employed by an attacker. The blocking is performed without preventing the container from working normally.
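As a rough illustration of sequence-based blocking (blocking a syscall only when it completes a known malicious sequence, while allowing the same syscall in normal use), consider the following Python sketch. The sequence, class name, and matching rule are illustrative; the simplified reset logic does not handle overlapping partial matches.

```python
# Sketch: blocking a syscall only when it completes a known malicious
# sequence, rather than blocking it unconditionally. The sequence below is
# illustrative.

MALICIOUS_SEQUENCE = ["openat", "ptrace", "execve"]

class SequenceDetector:
    def __init__(self, sequence):
        self.sequence = sequence
        self.position = 0  # index of the next syscall to watch for

    def observe(self, syscall):
        """Return 'block' if this call would complete the sequence,
        otherwise 'allow' (advancing or resetting the partial match)."""
        if syscall == self.sequence[self.position]:
            if self.position == len(self.sequence) - 1:
                self.position = 0
                return "block"
            self.position += 1
        else:
            self.position = 0   # simplification: no overlap handling
        return "allow"

d = SequenceDetector(MALICIOUS_SEQUENCE)
# execve on its own is allowed...
assert d.observe("execve") == "allow"
# ...but execve arriving as the final step of the sequence is blocked.
assert d.observe("openat") == "allow"
assert d.observe("ptrace") == "allow"
assert d.observe("execve") == "block"
```

This is the behavior the disclosure contrasts with static whitelists: the decision depends on the history of calls, not on the syscall name alone.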
- One or more embodiments provide one or more of the following advantages:
- a host node configured to protect a software application from a security attack.
- the host node is configured to execute a kernel process associated with the software application and generate a state machine based on a first sequence of system calls identified as corresponding to the security attack and a plurality of parameters associated with each system call of the first sequence of system calls.
- the host node is further configured to perform, using the state machine, a first inspection of one or more system calls of the first sequence of system calls and one or more parameters of the plurality of parameters corresponding to the one or more system calls.
- the kernel process is updated to monitor a next system call in a second sequence of system calls associated with the software application based at least in part on the performed first inspection.
- a second inspection of the monitored next system call is performed based at least on the updated kernel process.
- An execution of the next system call in the second sequence of system calls is blocked or allowed based at least in part on the performed second inspection.
- the host node is further configured to one or more of: obtain system call information; generate a provenance graph based in part on the system call information; perform forensic analysis using the provenance graph; identify the first sequence of system calls as corresponding to the security attack based on the forensic analysis; and determine the plurality of parameters associated with each system call of the first sequence of system calls.
- the plurality of parameters include a system call name, arguments, and a process name of a process invoking the corresponding system call.
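The forensic flow above (system call information, then a provenance graph, then an identified attack sequence with its parameters) might be sketched as follows in Python. The event log, the dependency rule, and the `backtrace` helper are illustrative stand-ins, not the actual provenance-graph analysis.

```python
# Sketch: recovering an attack sequence (with parameters) from recorded
# system-call events by walking process dependencies backwards from a
# flagged event. Event data and the dependency rule are illustrative.

# Each event: (pid, process_name, syscall_name, argument)
events = [
    (1, "web", "openat",  "/etc/passwd"),
    (1, "web", "clone",   2),              # pid 1 spawns pid 2
    (2, "sh",  "execve",  "/bin/sh"),
    (2, "sh",  "connect", "10.0.0.9:4444"),
]

def backtrace(events, flagged_index):
    """Walk backwards from the flagged event, keeping events from the same
    process and pulling in a parent process when a clone edge is crossed."""
    pids = {events[flagged_index][0]}
    chain = []
    for i in range(flagged_index, -1, -1):
        pid, name, syscall, arg = events[i]
        if syscall == "clone" and arg in pids:
            pids.add(pid)                  # parent joins the provenance walk
            chain.append((name, syscall, arg))
        elif pid in pids:
            chain.append((name, syscall, arg))
    chain.reverse()
    return chain

chain = backtrace(events, 3)               # flag the suspicious connect()
sequence = [syscall for _, syscall, _ in chain]
assert sequence == ["openat", "clone", "execve", "connect"]
```

The recovered chain carries, for each syscall, the process name and argument, i.e., the parameters the disclosure says are determined for each system call of the first sequence.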
- the state machine includes a sequence of states and a plurality of transition labels.
- Each state of the sequence of states indicates one system call name and an action to be performed by the host node if the corresponding state is reached.
- a transition label of the plurality of transition labels links at least a first state to a second state consecutive to the first state and indicates at least one parameter associated with the second state.
- the action includes one or more of: block a corresponding system call of the second sequence of system calls; warn about the corresponding system call; and stop the corresponding system call for release.
- the host node is further configured to, if information associated with the next system call matches the transition label, determine whether to block or allow the execution of the next system call in the second sequence of system calls based on the action indicated by the second state and trigger a transition to a subsequent state of the sequence of states.
- the host node is further configured to, if information associated with the next system call does not match the transition label, allow the execution of the next system call in the second sequence of system calls.
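The state machine described above can be sketched in Python as follows. The state/label representation, action names, and matching rule are illustrative assumptions based on the description: a call whose name, argument, and invoking process match the next transition label performs that state's action and advances the machine, while a non-matching call is allowed.

```python
# Sketch of the described state machine: each state names a syscall and an
# action; each transition label carries the parameters (argument, invoking
# process) that must match for the transition to fire.

class State:
    def __init__(self, syscall, action):
        self.syscall = syscall   # system call name this state represents
        self.action = action     # e.g., "allow", "warn", or "block"

class SyscallStateMachine:
    """States are traversed in order; a call advances the machine only when
    its name, argument, and process match the next state's transition label."""

    def __init__(self, states, labels):
        self.states = states     # sequence of State objects
        self.labels = labels     # labels[i] guards entry into states[i]
        self.current = 0         # index of the next state to enter

    def on_syscall(self, name, arg, process):
        if self.current >= len(self.states):
            return "allow"                      # sequence already consumed
        state = self.states[self.current]
        label = self.labels[self.current]
        if name == state.syscall and (arg, process) == label:
            self.current += 1
            return state.action                 # act and transition
        return "allow"                          # no match: let it through

machine = SyscallStateMachine(
    states=[State("openat", "warn"), State("execve", "block")],
    labels=[("/etc/shadow", "web"), ("/bin/sh", "web")],
)

assert machine.on_syscall("openat", "/tmp/log", "web") == "allow"   # label mismatch
assert machine.on_syscall("openat", "/etc/shadow", "web") == "warn"
assert machine.on_syscall("execve", "/bin/sh", "web") == "block"
```

Note how the same `openat` call is treated differently depending on its argument, which is the role the transition labels play in the claims.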
- the host node is further configured to obtain the information from a notification provided by a kernel program.
- blocking or allowing the execution of the next system call includes causing the kernel program to block or allow the execution of the next system call.
- updating the kernel process to monitor the next system call includes one or both of (A) updating a monitoring profile of the kernel process using a filter to match the next system call of the second sequence of system calls to one or more system calls of the first sequence of system calls; and (B) triggering the blocking or allowing of the execution of the next system call when the next system call of the second sequence of system calls matches the one or more system calls of the first sequence of system calls.
- performing the second inspection is based on a trigger signal associated with the monitoring profile.
- one or both of blocking or allowing the execution of the next system call is based on a system call security criticality, and at least the blocking of the execution of the next system call is associated with protecting the software application from the security attack.
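The stepwise update of the monitoring profile (watching only the next expected syscall and re-installing the filter after each match) might look like the Python sketch below; the sequence and the `watched` set standing in for the kernel-side filter are illustrative.

```python
# Sketch: rebuilding the kernel-side watch filter after each matched call so
# that, at any moment, only the next syscall in the known sequence is traced.

SEQUENCE = ["socket", "connect", "sendto"]

class DynamicMonitor:
    def __init__(self, sequence):
        self.sequence = sequence
        self.step = 0
        self.watched = {sequence[0]}    # stands in for the installed filter

    def notify(self, syscall):
        """Called when the (simulated) kernel filter traps a watched call.
        Returns the decision and updates the filter for the next step."""
        if syscall not in self.watched:
            return "allow"              # filter would not even trap this call
        self.step += 1
        if self.step == len(self.sequence):
            self.step = 0
            self.watched = {self.sequence[0]}
            return "block"              # sequence completed: block last call
        self.watched = {self.sequence[self.step]}
        return "allow"

m = DynamicMonitor(SEQUENCE)
assert m.watched == {"socket"}
assert m.notify("socket") == "allow"
assert m.watched == {"connect"}       # filter updated to watch the next call
assert m.notify("connect") == "allow"
assert m.notify("sendto") == "block"
```

Keeping only one syscall under watch at a time mirrors the claimed design goal of minimizing monitoring overhead while still catching the full sequence.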
- one or both of: the software application is a containerized application comprising a plurality of containers executable by the host node; and the plurality of containers is configured to cause at least the next system call in the second sequence of system calls to be executed.
- a method in a host node configured to protect a software application from a security attack.
- the host node is configured to execute a kernel process associated with the software application.
- the method includes generating a state machine based on a first sequence of system calls identified as corresponding to the security attack and a plurality of parameters associated with each system call of the first sequence of system calls; performing, using the state machine, a first inspection of one or more system calls of the first sequence of system calls and one or more parameters of the plurality of parameters corresponding to the one or more system calls; and updating the kernel process to monitor a next system call in a second sequence of system calls associated with the software application based at least in part on the performed first inspection.
- the method further includes performing a second inspection of the monitored next system call based at least on the updated kernel process and blocking or allowing an execution of the next system call in the second sequence of system calls based at least in part on the performed second inspection.
- the method further includes one or more of: obtaining system call information; generating a provenance graph based in part on the system call information; performing forensic analysis using the provenance graph; identifying the first sequence of system calls as corresponding to the security attack based on the forensic analysis; and determining the plurality of parameters associated with each system call of the first sequence of system calls.
- the plurality of parameters include a system call name, arguments, and a process name of a process invoking the corresponding system call.
- the state machine includes a sequence of states and a plurality of transition labels.
- Each state of the sequence of states indicates one system call name and an action to be performed by the host node if the corresponding state is reached.
- a transition label of the plurality of transition labels links at least a first state to a second state consecutive to the first state and indicates at least one parameter associated with the second state.
- the action includes one or more of: block a corresponding system call of the second sequence of system calls; warn about the corresponding system call; and stop the corresponding system call for release.
- the method further includes, if information associated with the next system call matches the transition label, determining whether to block or allow the execution of the next system call in the second sequence of system calls based on the action indicated by the second state and triggering a transition to a subsequent state of the sequence of states.
- the method further includes if information associated with the next system call does not match the transition label, allowing the execution of the next system call in the second sequence of system calls.
- the method further includes obtaining the information from a notification provided by a kernel program.
- blocking or allowing the execution of the next system call includes causing the kernel program to block or allow the execution of the next system call.
- updating the kernel process to monitor the next system call includes one or both of (A) updating a monitoring profile of the kernel process using a filter to match the next system call of the second sequence of system calls to one or more system calls of the first sequence of system calls; and (B) triggering the blocking or allowing of the execution of the next system call when the next system call of the second sequence of system calls matches the one or more system calls of the first sequence of system calls.
- performing the second inspection is based on a trigger signal associated with the monitoring profile.
- one or both of blocking or allowing the execution of the next system call is based on a system call security criticality, and at least the blocking of the execution of the next system call is associated with protecting the software application from the security attack.
- one or both of: the software application is a containerized application comprising a plurality of containers executable by the host node; and the plurality of containers is configured to cause at least the next system call in the second sequence of system calls to be executed.
- FIG. 1 is a schematic diagram of an example network architecture illustrating a communication system connected via an intermediate network to a host computer according to the principles in the present disclosure;
- FIG. 2 is a block diagram of a host computer communicating via a network node with a wireless device over an at least partially wireless connection according to some embodiments of the present disclosure;
- FIG. 3 is a flowchart of an example process in a host node according to some embodiments of the present disclosure;
- FIG. 4 is a flowchart of another example process in a host node according to some embodiments of the present disclosure;
- FIG. 5 shows an example system overview according to some embodiments of the present disclosure;
- FIG. 6 shows an example overview of an example container runtime protection component according to some embodiments of the present disclosure;
- FIG. 7 shows an example state machine (e.g., state transition system) according to some embodiments of the present disclosure;
- FIG. 8 shows an example execution flow according to some embodiments of the present disclosure;
- FIG. 9 shows another example execution flow according to some embodiments of the present disclosure; and
- FIG. 10 shows an example execution flow according to some embodiments of the present disclosure.
DETAILED DESCRIPTION
- relational terms such as “first” and “second,” “top” and “bottom,” and the like, may be used solely to distinguish one entity or element from another entity or element without necessarily requiring or implying any physical or logical relationship or order between such entities or elements.
- the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the concepts described herein.
- the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise.
- the joining term “in communication with” and the like may be used to indicate electrical or data communication, which may be accomplished by physical contact, induction, electromagnetic radiation, radio signaling, infrared signaling or optical signaling, for example.
- Coupled may be used herein to indicate a connection, although not necessarily directly, and may include wired and/or wireless connections.
- the term “state machine” is used and may refer to a model such as a behavior model.
- a state machine may include one or more states and transitions.
- a state may refer to a state of a system.
- the state machine may include an initial state (i.e., where the execution of the state machine begins).
- a transition may determine or define the input for which a state is changed.
- a host node can be any kind of node, such as a standalone node and/or a node comprised in a network, which may further comprise any of: a network node, a virtual machine node, a node comprising (and/or configurable to run) one or more containers and/or one or more containerized applications, or a node comprising one or more operating systems such as Linux.
- a host node may be a network node, a wireless device, or any other device, such as devices configurable to support communication based on standards promulgated by the Third Generation Partnership Project (3GPP).
- 3GPP has developed and is developing standards for Fourth Generation (4G) (also referred to as Long Term Evolution (LTE)), Fifth Generation (5G) (also referred to as New Radio (NR)), and Sixth Generation (6G) wireless communication systems.
- Such systems provide, among other features, broadband communication between network nodes, such as base stations, and mobile wireless devices (WD) such as user equipment (UE), as well as communication between network nodes and between WDs.
- Some network functions may be deployed as containerized applications which may leverage cloud native technology and containers.
- the host node may comprise a base station (BS), radio base station, base transceiver station (BTS), base station controller (BSC), radio network controller (RNC), gNode B (gNB), evolved Node B (eNB or eNodeB), Node B, multi-standard radio (MSR) radio node such as MSR BS, multi-cell/multicast coordination entity (MCE), integrated access and backhaul (IAB) node, relay node, donor node controlling relay, radio access point (AP), transmission points, transmission nodes, Remote Radio Unit (RRU), Remote Radio Head (RRH), a core network node (e.g., mobility management entity (MME), self-organizing network (SON) node, a coordinating node, positioning node, MDT node, etc.), an external node (e.g., 3rd party node, a node external to the current network), nodes in a distributed antenna system (DAS), a spectrum access system (SAS) node, an element management system (EMS),
- the host node may also comprise test equipment and/or may also be used to denote a wireless device (WD) or a radio network node.
- the WD herein can be any type of wireless device capable of communicating with a network node or another WD over radio signals.
- the WD may also be a radio communication device, target device, device-to-device (D2D) WD, machine-type WD or WD capable of machine-to-machine communication (M2M), low-cost and/or low-complexity WD, a sensor equipped with WD, tablet, mobile terminal, smart phone, laptop embedded equipment (LEE), laptop mounted equipment (LME), USB dongle, Customer Premises Equipment (CPE), an Internet of Things (IoT) device, or a Narrowband IoT (NB-IoT) device, etc.
- a radio network node can be any kind of radio network node, which may comprise any of: base station, radio base station, base transceiver station, base station controller, network controller, RNC, evolved Node B (eNB), Node B, gNB, Multi-cell/multicast Coordination Entity (MCE), IAB node, relay node, access point, radio access point, Remote Radio Unit (RRU), or Remote Radio Head (RRH).
- although terminology from wireless systems such as, for example, 3GPP LTE and/or New Radio (NR) may be used in this disclosure, this should not be seen as limiting the scope of the disclosure to only the aforementioned systems.
- Other wireless systems including without limitation virtual systems, container-based systems, Kubernetes systems, Wide Band Code Division Multiple Access (WCDMA), Worldwide Interoperability for Microwave Access (WiMax), Ultra Mobile Broadband (UMB) and Global System for Mobile Communications (GSM), may also benefit from exploiting the ideas covered within this disclosure.
- functions described herein as being performed by a host node may be distributed over a plurality of host nodes (e.g., nodes in a network and/or wireless devices and/or network nodes).
- the functions of the host node described herein are not limited to performance by a single physical device and, in fact, can be distributed among several physical devices (and/or virtual devices).
- FIG. 1 shows a schematic diagram of a system 10, according to an embodiment, which comprises one or more networks 12 (e.g., networks 12a, 12b).
- the network 12 (e.g., network 12a) may comprise a plurality of host nodes 14a, 14b, 14c (referred to collectively as host nodes 14), such as nodes configurable to support one or more containerized applications.
- another network 12 such as network 12b may comprise one or more host nodes 14.
- Each host node 14a, 14b, 14c is connectable with any other host node 14 and/or network 12 (e.g., networks 12a, 12b) over a wired or wireless connection 16 (e.g., connections 16a, 16b, 16c) and/or connection 17 (e.g., to/from network 12b).
- Network 12 may refer to a network associated with a container-based environment and/or an access network and/or a core network and/or a cloud network and/or any other type of network.
- a host node 14 is configured to include a security unit 18 (e.g., sequence detection unit) which is configured to perform one or more host node 14 functions described herein such as any step and/or task and/or process and/or method and/or feature described in the present disclosure, e.g., monitor at least one system call of a sequence of system calls; update a kernel process of the host node to monitor a next system call in the sequence of system calls; and perform at least one action based at least on the updated kernel process.
- the hardware 20 may include a communication interface 22 for setting up and maintaining a wired or wireless connection with an interface of a different device of the system 10, such as another host node 14.
- the communication interface 22 may be formed as or may include, for example, one or more RF transmitters, one or more RF receivers, and/or one or more RF transceivers.
- the hardware 20 may also include a radio interface 24 for setting up and maintaining a wireless connection with a wireless interface of a different device of the system 10, such as wireless device.
- the radio interface 24 may be formed as or may include, for example, one or more RF transmitters, one or more RF receivers, and/or one or more RF transceivers.
- the hardware 20 of the host node 14 further includes processing circuitry 26.
- the processing circuitry 26 may include a processor 28 and a memory 30.
- the processing circuitry 26 may comprise integrated circuitry for processing and/or control, e.g., one or more processors and/or processor cores and/or FPGAs (Field Programmable Gate Array) and/or ASICs (Application Specific Integrated Circuitry) adapted to execute instructions.
- the processor 28 may be configured to access (e.g., write to and/or read from) the memory 30, which may comprise any kind of volatile and/or nonvolatile memory, e.g., cache and/or buffer memory and/or RAM (Random Access Memory) and/or ROM (Read-Only Memory) and/or optical memory and/or EPROM (Erasable Programmable Read-Only Memory).
- the host node 14 further has software 32 stored internally in, for example, memory 30, or stored in external memory (e.g., database, storage array, network storage device, etc.) accessible by the host node 14 via an external connection.
- Software 32 may include at least an operating system 34 and/or software application 36 and/or containers 38 (such as one or more containers associated with a containerized application).
- containers 38 may be comprised by (and/or be) software application 36, e.g., a containerized application.
- the operating system may include a kernel 40 (i.e., an operating system kernel, kernel program, kernel process, etc.).
- the operating system is a Linux operating system
- kernel 40 is a Linux kernel.
- the software 32 (and/or any of its components) may be executable by the processing circuitry 26.
- the processing circuitry 26 may be configured to control any of the methods and/or processes described herein and/or to cause such methods and/or processes to be performed, e.g., by host node 14.
- Processor 28 corresponds to one or more processors 28 for performing host node 14 functions described herein.
- the memory 30 is configured to store software 32 (and/or any of its components), data, programmatic software code and/or other information described herein.
- the software 32 may include instructions that, when executed by the processor 28 and/or processing circuitry 26, cause the processor 28 and/or processing circuitry 26 to perform the processes described herein with respect to host node 14.
- processing circuitry 26 of the host node 14 may include security unit 18 configured to perform one or more host node 14 functions described herein such as any step and/or task and/or process and/or method and/or feature described in the present disclosure, e.g., monitor at least one system call of a sequence of system calls; update a kernel process of the host node to monitor a next system call in the sequence of system calls; and perform at least one action based at least on the updated kernel process.
- FIG. 3 is a flowchart of an example process (i.e., method) in a host node 14 according to some embodiments of the present disclosure.
- One or more blocks described herein may be performed by one or more elements of host node 14 such as by one or more of processing circuitry 26 (including the security unit 18), processor 28, radio interface 24 and/or communication interface 22.
- Host node 14 is configured to monitor (Block S100) at least one system call of a sequence of system calls, the sequence of system calls being associated with a containerized application and having been identified as a system attack, as described herein.
- the host node 14 is configured to update (Block S102) a kernel process of the host node 14 to monitor a next system call in the sequence of system calls based at least in part on the monitored at least one system call, as described herein.
- the host node 14 is configured to perform (Block S104) at least one action based at least on the updated kernel process, where the at least one action is performed for protecting at least the containerized application from the security attack, as described herein.
- the updating of the kernel process includes updating a monitor profile to include at least one filter to match a predetermined system call to be monitored and trigger the performing of the at least one action when the predetermined system call is matched.
- the performing of the at least one action includes one of blocking and allowing an execution of the next system call of the sequence of system calls based at least in part on a system call security criticality.
- the blocking of the execution of the next system call is associated with blocking the security attack.
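As a non-limiting illustration, the monitor-profile update described above may be sketched with an OCI/Docker-style seccomp profile (a JSON document). The function names and the single-filter layout are assumptions for this sketch, not part of the disclosure; `SCMP_ACT_TRACE` is the seccomp action that notifies an attached tracer such as ptrace.

```python
import json

def make_monitor_profile(syscall_name):
    """Build a minimal OCI-style seccomp profile that allows everything by
    default but flags one predetermined system call for tracing, so that a
    tracer (e.g., ptrace) is notified when that call occurs."""
    return {
        "defaultAction": "SCMP_ACT_ALLOW",
        "syscalls": [
            {"names": [syscall_name], "action": "SCMP_ACT_TRACE"},
        ],
    }

def update_monitor_profile(profile, next_syscall):
    """Replace the monitored filter with the next system call in the
    sequence -- the dynamic 'update the kernel process' step."""
    profile["syscalls"] = [{"names": [next_syscall],
                            "action": "SCMP_ACT_TRACE"}]
    return profile

profile = make_monitor_profile("open")
profile = update_monitor_profile(profile, "connect")
print(json.dumps(profile, indent=2))
```

In an actual deployment the updated profile would be applied to the container's processes by the container runtime rather than merely printed.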
- FIG. 4 is a flowchart of an example process (i.e., method) in a host node 14 according to some embodiments of the present disclosure.
- One or more blocks described herein may be performed by one or more elements of host node 14 such as by one or more of processing circuitry 26 (including the security unit 18), processor 28, radio interface 24 and/or communication interface 22.
- the host node 14 is configured to protect a software application 36 from a security attack and execute a kernel process associated with the software application 36.
- the host node 14 is further configured to generate (Block S106) a state machine 50 based on a first sequence of system calls identified as corresponding to the security attack and a plurality of parameters associated with each system call of the first sequence of system calls, perform (Block S108), using the state machine 50, a first inspection of one or more system calls of the first sequence of system calls and one or more parameters of the plurality of parameters corresponding to the one or more system calls, and update (Block S110) the kernel process to monitor a next system call in a second sequence of system calls associated with the software application based at least in part on the performed first inspection.
- the host node 14 is also configured to perform a second inspection of the monitored next system call based at least on the updated kernel process, and block or allow (Block S114) an execution of the next system call in the second sequence of system calls based at least in part on the performed second inspection.
- the method further includes one or more of obtaining system call information; generating a provenance graph based in part on the system call information; performing forensic analysis using the provenance graph; identifying the first sequence of system calls as corresponding to the security attack based on the forensic analysis; and determining the plurality of parameters associated with each system call of the first sequence of system calls.
- the plurality of parameters include a system call name, arguments, and a process name of a process invoking the corresponding system call.
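The provenance-based identification of the first sequence can be illustrated with a toy graph walk. The encoding (an event table plus a child-to-parent edge map) and the function name are assumptions for this sketch; a real provenance graph captures far richer dependencies between processes and system calls.

```python
def extract_attack_sequence(events, edges, sink):
    """Walk a provenance graph backwards from a flagged sink event to
    recover the ordered system-call sequence leading to it.

    events: event id -> (syscall name, process name, arguments)
    edges:  child event id -> parent event id (causal dependency)
    sink:   event id flagged as malicious by forensic analysis
    """
    seq = []
    node = sink
    while node is not None:
        seq.append(events[node])
        node = edges.get(node)      # follow the causal chain upwards
    return list(reversed(seq))      # oldest event first

# Toy events loosely modeled on the Log4j-style example in FIG. 7.
events = {
    1: ("open", "java", ["log4j.jar"]),
    2: ("connect", "java", ["172.16.240.11", "1389"]),
    3: ("execve", "java", ["curl"]),
}
edges = {3: 2, 2: 1}                # each event caused by its predecessor
print(extract_attack_sequence(events, edges, 3))
```

Each recovered tuple already carries the parameters (name, process, arguments) needed to build the enriched system call sequence.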
- the state machine 50 includes a sequence of states 52 and a plurality of transition labels 56.
- Each state of the sequence of states 52 indicates one system call name and an action to be performed by the host node 14 if the corresponding state is reached.
- a transition label 56 of the plurality of transition labels 56 links at least a first state 52a to a second state 52b consecutive to the first state 52a and indicates at least one parameter associated with the second state 52b.
- the action includes one or more of blocking a corresponding system call of the second sequence of system calls, warning about the corresponding system call, and stepping the corresponding system call for release.
- the method further includes if information associated with the next system call matches the transition label, determining whether to block or allow the execution of the next system call in the second sequence of system calls based on the action indicated by the second state 52b and triggering a transition to a subsequent state 52 of the sequence of states 52.
- the method further includes if information associated with the next system call does not match the transition label, allowing the execution of the next system call in the second sequence of system calls.
- the method further includes obtaining the information from a notification provided by a kernel program.
- blocking or allowing the execution of the next system call includes causing the kernel program to block or allow the execution the next system call.
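The match/no-match behavior described above may be sketched as a small decision function. The field names and the (decision, transition) return convention are illustrative assumptions; in the disclosed system the decision would be signaled back to the kernel program (e.g., ptrace) rather than returned.

```python
def decide(observed, label, action):
    """Compare an observed system call (name, process, args) against the
    transition label of the expected next state.  On a match, apply that
    state's annotated action (BLOCK stops the attack, STEP releases the
    call) and trigger the transition; on a mismatch, the call is simply
    allowed and no transition occurs."""
    matched = (observed["syscall"] == label["syscall"]
               and observed["process"] == label["process"]
               and observed["args"] == label["args"])
    if not matched:
        return "allow", False                       # no state transition
    return ("block" if action == "BLOCK" else "allow"), True

label = {"syscall": "execve", "process": "java", "args": ["curl"]}
hit = {"syscall": "execve", "process": "java", "args": ["curl"]}
miss = {"syscall": "execve", "process": "bash", "args": ["curl"]}
print(decide(hit, label, "STEP"))    # matched: release and advance
print(decide(miss, label, "STEP"))   # not matched: allow, stay in state
```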
- updating the kernel process to monitor the next system call includes one or both of (A) updating a monitoring profile of the kernel process using a filter to match the next system call of the second sequence of system calls to one or more system calls of the first sequence of system calls; and (B) triggering the blocking or allowing of the execution of the next system call when the next system call of the second sequence of system calls matches the one or more system calls of the first sequence of system calls.
- performing the second inspection is based on a trigger signal associated with the monitoring profile.
- one or both of blocking or allowing the execution of the next system call is based on a system call security criticality and at least the blocking of the execution of the next system call is associated with protecting the software application from the security attack.
- the software application is a containerized application comprising a plurality of containers executable by the host node 14, the plurality of containers being configured to cause at least the next system call in the second sequence of system calls to be executed.
- One or more host node 14 functions described below may be performed by one or more of processing circuitry 26, processor 28, security unit 18, hardware 20, software 32, etc.
- a state transition system is used and may refer to a state machine 50.
- a system for protecting software applications 36 (e.g., containerized applications) from security attacks.
- Some security attacks may include zero-day attacks.
- One or more embodiments enable efficient runtime monitoring and blocking of an ordered execution of one or more malicious sequences of system calls.
- the system calls may be performed by the set of processes inside (and/or associated with) one or more containers.
- a sequence of system calls may include one or more system calls, where the one or more system calls are associated with a containerized application.
- the sequence of system calls may define an attack (e.g., a security attack may be determined based on the sequence of system calls and/or information associated with the sequence).
- a kernel process (e.g., an in-kernel mechanism such as a seccomp profile) is dynamically updated, e.g., so that each specific system call in a given sequence is monitored at a time based on the observation of the precedent system call in the sequence and/or by selectively blocking a given critical system call in the sequence (e.g., to stop (or abort) the security attack).
- the kernel process is updated with a next system call in the sequence if the preceding system call was seen (i.e., determined, detected, determined to be associated with an attack, etc.).
- the updating of the kernel process (e.g., kernel) is used to enable monitoring of the execution of each system call in the sequence.
- an ongoing security attack and/or the progress of the ongoing security attack at runtime is stopped when there is a match with a subset of the sequence of system calls.
- FIG. 5 shows an example system overview according to some embodiments of the present disclosure.
- System 10 comprises one or more components such as software components and/or hardware components.
- system 10 includes a network 12a (e.g., a container-based environment) and a kernel 40 (e.g., Linux kernel) configured to receive one or more system calls such as from one or more containers and/or processes associated with the containers.
- System 10 may include one or more other components such as comprised (and/or performed, executed) by security unit 18.
- system 10 may further include network 12b (e.g., another container-based environment) which is configured to communicate with network 12a and may include one or more containers and/or containerized applications, etc.
- system 10 includes a combination of network 12a and network 12b.
- system 10 may be a runtime defending system for protecting host node 14 and/or software applications 36 (e.g., containerized applications) and/or containers 38 from zero-day security attacks originating in containers 38.
- System 10 may receive a set of malicious sequences of enriched system calls.
- Enriched system calls may include system calls and a set of parameters related to each system call in a sequence of system calls (e.g., a plurality of system calls which may be ordered in a predetermined manner such as in a sequence).
- the sequence(s) may be learned from a provenance-based forensics analysis performed on a provenance graph, which may capture the dependency between system calls used by several processes within the software application 36 (e.g., containerized application that system 10 aims to protect at runtime).
- system 10 may comprise two main components: a Container Restarting component and a Container Runtime Protection component.
- the Container Restarting component may identify at least one compromised container to be protected and/or restart the container and/or prepare the container to be hardened.
- the Container Runtime Protection component may apply protective countermeasures (e.g., any task performed by any of the components of system 10) to a container 38 (e.g., compromised container) so that the container 38 is protected such as during its execution (e.g., at runtime).
- enriched system call sequence may refer to a sequence of system calls where each system call may be enriched with a name of the system call, argument(s) of the system call, and/or a process name of the process invoking the system call.
- one or more enriched system calls (e.g., enriched system call sequence) may include one or more parameters (e.g., may be annotated) as shown in Table 1 below.
- each column is a tuple of the enriched system call sequence, including <order in sequence, syscall name, calling process, arguments>, annotated with an action for each system call.
- each system call in the enriched system call sequence may be annotated with an action to determine what to do with the corresponding system call (i.e., block or release) and/or what the next steps to be performed by system 10 (e.g., Phoenix) and/or host node 14 are.
- any other actions may be specified (e.g., triggers a given service to consume this system call).
- the annotation of actions on each system call may be performed by a user (e.g., a security expert) such as after analyzing system logs. Further, the annotation of actions may be handled by automated tools external to system 10 (e.g., Phoenix) such as a component configured to perform Provenance Graph analysis.
- FIG. 6 shows an example overview of an example Container Runtime Protection component according to some embodiments of the present disclosure.
- the Container Runtime Protection (CRP) component of system 10 may be configured to interact with the network 12 (e.g., container-based environment), particularly, with the Linux user-space and kernel-space where the containers are deployed.
- the CRP component may be used to monitor and protect the containers.
- the interactions may include updates to the Seccomp monitor profile to dynamically monitor the execution of a specific system call and interactions with other components of system 10 (e.g., the "ptrace" program) to monitor the enriched system call execution, to notify any component of system 10, and to trigger the actions requested by system 10 (e.g., by host node 14) on the currently observed system calls.
- Seccomp refers to a Linux Kernel feature that may be applied to processes and/or restrict and/or modify their usage of system calls, e.g., by implementing filters.
- “ptrace” refers to a kernel program (e.g., combination of software and hardware) that may attach to another process.
- a Seccomp filter can specify (e.g., request) to send a signal to ptrace upon interception of a specified system call. Ptrace may then inspect the process memory (and thus its syscall, arguments, etc.). Ptrace may also enforce actions on a process by writing into its memory (and thus block its syscall, modify its arguments, etc.).
- the CRP component may include a Sequence preprocessing and annotation (SPA) module, Dynamic Seccomp Monitor (DSM) module, Sequence State Monitoring and Matching (SSMM) module, i.e., modules/components (e.g., a combination of software and/or hardware) such as performed by (and/or comprised by) security unit 18 of host node 14.
- the SPA module may operate on a received sequence of enriched system calls represented as a sequence of tuples. Each tuple may include one or more of the following elements: the order of the system call in the sequence, its name, a calling process identifier, and an argument of the system call.
- SPA may first preprocess the sequence to identify each syscall and its related parameters, then add an annotation to each tuple denoting the action (e.g., step, warn, block) to be performed by system 10 (e.g., Phoenix), e.g., by host node 14, when each of these system calls is detected/observed/determined.
- An added action may be either automatically generated by system 10 (e.g., Phoenix) and/or any of its components (e.g., such that all syscalls before the last one are labeled "Step" and the last one in the sequence is labeled "Block"), or based on some pre-defined association between system calls and actions (e.g., based on the criticality of the syscall from a security point of view). For example, executing a shell may be more critical than opening a file.
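The default automatic labeling (every syscall before the last one "Step", the final one "Block") can be sketched in a few lines; the tuple layout mirrors Table 1 and the function name is an illustrative assumption.

```python
def annotate(sequence):
    """Append a default action to each enriched-syscall tuple: 'Step' for
    every call before the last one, 'Block' for the final call in the
    malicious sequence."""
    return [(*t, "Step" if i < len(sequence) - 1 else "Block")
            for i, t in enumerate(sequence)]

seq = [(1, "open", "java", ["log4j.jar"]),
       (2, "connect", "java", ["172.16.240.11", "1389"]),
       (3, "execve", "java", ["curl"])]
print(annotate(seq))
```

A pre-defined criticality table (e.g., marking `execve` as "Block" wherever it occurs) could replace the positional rule without changing the rest of the pipeline.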
- SPA may generate a state transition system (i.e., state machine 50) from the tuples where a state 52 denotes the currently observed system call name and the action to be performed (by system 10 (e.g., Phoenix) and/or any of its components) if the state 52 is reached.
- Transitions 54 may be used to link two (or more) consecutive states 52. Labels (i.e., transition labels 56) may also be used and may include an expected next system call with the corresponding calling process and arguments. If the label of the transition 54 is matched, the next state 52 (destination of the transition) can be reached.
- FIG. 7 shows an example state machine 50 (e.g., state transition system) which may include states 52a, 52b, 52c, 52d, 52e, 52f (collectively referred to as states 52) which are linked by transitions 54.
- States 52 may include information about the state (e.g., initial state) and/or system call name (e.g., OPEN, CONNECT, EXECVE, etc.) and/or actions (e.g., step, warn, block, etc.).
- a transition may occur based on information included in transition labels 56 which may include parameters associated with system calls.
- state machine 50 may be referred to as a linked list of states.
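A minimal sketch of generating such a linked list of states from annotated tuples follows; the class layout and field names are assumptions for illustration, not the disclosed implementation.

```python
class State:
    def __init__(self, syscall, action):
        self.syscall = syscall   # system call name observed in this state
        self.action = action     # action to perform if this state is reached
        self.label = None        # transition label toward the next state
        self.next = None         # next state (linked-list style)

def build_state_machine(annotated):
    """Turn an annotated enriched-syscall sequence into a linked list of
    states.  Each transition label carries the expected next call's name,
    calling process, and arguments, as described for labels 56."""
    head = State("INITIAL", None)            # initial state of FIG. 7
    cur = head
    for order, name, proc, args, action in annotated:
        nxt = State(name, action)
        cur.label = {"syscall": name, "process": proc, "args": args}
        cur.next = nxt
        cur = nxt
    return head

sm = build_state_machine([
    (1, "open", "java", ["log4j.jar"], "Step"),
    (2, "connect", "java", ["172.16.240.11", "1389"], "Step"),
    (3, "execve", "java", ["curl"], "Block"),
])
print(sm.next.syscall, sm.next.next.next.action)  # open Block
```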
- a method may be associated with state machine 50 corresponding to a sequence of system calls (e.g., as shown in Table 1) generated by the SPA module.
- a transition is determined/performed if the syscall equals OPEN, the process equals java, and the Arguments equal 'log4j.jar'.
- a transition is determined/performed if the syscall equals CONNECT, the process equals java, and the Arguments equal '172.16.240.11', '1389'.
- a transition is determined/performed if the syscall equals EXECVE, the process equals java, and the Arguments equal 'curl'.
- a transition is determined/performed if the syscall equals CONNECT, the process equals curl, and the Arguments equal '172.16.240.11', '80'.
- a transition is determined/performed if the syscall equals OPEN, the process equals curl, and the Arguments equal '/tmp/'.
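The five transitions above can be exercised with a small position-advancing function. The tuple encoding is an assumption for illustration; note that OPEN appears twice in the sequence, matched only with its own process and arguments.

```python
# Transition labels of the example in FIG. 7, in order.
TRANSITIONS = [
    ("OPEN", "java", ("log4j.jar",)),
    ("CONNECT", "java", ("172.16.240.11", "1389")),
    ("EXECVE", "java", ("curl",)),
    ("CONNECT", "curl", ("172.16.240.11", "80")),
    ("OPEN", "curl", ("/tmp/",)),
]

def advance(position, observed):
    """Step forward in the sequence only when the observed call matches
    the expected transition label exactly; otherwise stay in place."""
    if position < len(TRANSITIONS) and observed == TRANSITIONS[position]:
        return position + 1
    return position

pos = 0
for obs in [("OPEN", "java", ("log4j.jar",)),
            ("OPEN", "java", ("other.txt",)),   # ignored: wrong argument
            ("CONNECT", "java", ("172.16.240.11", "1389"))]:
    pos = advance(pos, obs)
print(pos)  # 2: two of the five transitions matched
```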
- FIG. 8 shows an example execution flow (e.g., S210-S216 performed by SPA, host node 14, security unit 18, etc.).
- a malicious sequence of system calls is preprocessed.
- an action for each system call is associated (e.g., using an annotation).
- a state transition system is generated (e.g., one or more transitions are generated).
- the state transition system is sent to SSMM.
- the DSM module may be configured to interact with Seccomp and dynamically update the Seccomp monitor profile using a specific filter to match a specific system call to be monitored at runtime by system 10 (e.g., Phoenix) and/or any of its components such as security unit 18 of host node 14.
- DSM receives the name of system call(s) to be monitored from the sequence state monitoring and matching module (SSMM).
- a filter may be defined.
- FIG. 9 shows a flow of execution performed by DSM (e.g., performed by security unit 18).
- once the filter is updated, when the system call occurs, Seccomp may send a signal to ptrace so that ptrace later inspects it in more detail (arguments and calling process).
- the monitored system call may be held till a decision on the action to be performed on this system call is made (e.g., by system 10, host node 14, security unit 18), which may be handled/processed by the SSMM module as follows.
- the SSMM module receives a state transition system (such as shown in FIG. 6) from the SPA module.
- the state transition system corresponds to a specific malicious sequence of enriched system calls (e.g., that may follow the ordered execution of the malicious sequence by the container that is to be protected).
- SSMM uses the state transition system to perform one or more of the following: (1) save internally the current state of execution, i.e., the last seen system call from the sequence, and take the action corresponding to when it occurred; (2) determine the next system call to be monitored in the sequence and send it to DSM; and (3) identify the arguments and calling process of the expected next system call in the sequence, which may be used to be matched with those of the system call actually being executed by the container to be protected (i.e., matching the currently executing system call and parameters with the enriched system call captured in the malicious sequence).
- Arguments and calling process may be determined by ptrace, which may notify any component of system 10 (e.g., Phoenix) about the occurrence of the system call monitored by Seccomp. Once the matching is determined to happen, SSMM may send a signal to ptrace with the action to be performed on the currently observed system call.
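The three SSMM responsibilities may be sketched as a small class. The method names and the 'block'/'release' return values are illustrative assumptions standing in for the real signaling toward ptrace and DSM.

```python
class SSMM:
    """Sketch of the Sequence State Monitoring and Matching module: it keeps
    the current position in the malicious sequence, exposes the next system
    call name for DSM to monitor, and answers ptrace notifications with a
    block/release decision."""

    def __init__(self, sequence):
        # sequence: list of (syscall, process, args, action) tuples
        self.sequence = sequence
        self.pos = 0                      # calls sequence[:pos] were seen

    def next_syscall_for_dsm(self):
        """Name of the next call DSM should ask Seccomp to monitor."""
        if self.pos < len(self.sequence):
            return self.sequence[self.pos][0]
        return None                       # sequence fully matched

    def on_ptrace_notification(self, syscall, process, args):
        """Match the observed call against the expected transition label
        and return the action to signal back to ptrace."""
        if self.pos < len(self.sequence):
            exp_call, exp_proc, exp_args, action = self.sequence[self.pos]
            if (syscall, process, args) == (exp_call, exp_proc, exp_args):
                self.pos += 1             # transition to the next state
                return "block" if action == "Block" else "release"
        return "release"                  # unrelated call: let it run

ssmm = SSMM([("execve", "java", ("curl",), "Step"),
             ("connect", "curl", ("172.16.240.11", "80"), "Block")])
print(ssmm.next_syscall_for_dsm())                               # execve
print(ssmm.on_ptrace_notification("execve", "java", ("curl",)))  # release
print(ssmm.on_ptrace_notification("connect", "curl",
                                  ("172.16.240.11", "80")))      # block
```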
- FIG. 10 shows an execution flow performed by the SSMM module and interactions with ptrace and DSM module.
- a state transition system is received.
- an instance of the state transition system is initialized.
- a next system call is determined from the next transition.
- the next system call is sent to DSM.
- a notification from ptrace is received and/or processed.
- the current system call execution information is matched with the label of the next transition in the state transition system.
- at step S234, whether there is a match is determined. If there is a match, at step S236, the transition to the next state 52 in the state transition system is triggered to update the last seen system call.
- if there is no match, at step S238, a decision is sent to ptrace, e.g., to release the system call.
- SSMM determines whether there is an action to be performed. If there is an action, such as STEP, at step S242, a decision is sent to ptrace, e.g., to release the system call. If there is an action such as BLOCK, at step S244, a decision is sent to ptrace, e.g., to block the system call.
- one or more of the following may be performed by host node 14:
- the state machine allows or blocks system calls. This offers the possibility of having a system call X being allowed or blocked for the same process based on the state machine 50 at different moments based on the life cycle of the process.
- the state machine 50 can be modified.
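The life-cycle-dependent behavior noted above — the same system call released early in the process life cycle and blocked later — can be demonstrated with a short replay. The encoding (name-only matching against an annotated sequence) is a simplifying assumption for illustration.

```python
def run(sequence, observed_calls):
    """Replay observed calls against an annotated sequence and record the
    decision for each, showing that the same system call name can be
    released at one point in the sequence and blocked at another."""
    pos, decisions = 0, []
    for call in observed_calls:
        if pos < len(sequence) and call == sequence[pos][0]:
            decisions.append("block" if sequence[pos][1] == "Block"
                             else "release")
            pos += 1                      # advance in the sequence
        else:
            decisions.append("release")   # unmatched calls run freely
    return decisions

# 'open' is stepped through early but blocked once it completes the attack.
seq = [("open", "Step"), ("execve", "Step"), ("open", "Block")]
print(run(seq, ["open", "execve", "open"]))  # ['release', 'release', 'block']
```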
- Monitor at least one system call of a sequence of system calls, the sequence of system calls being associated with a containerized application (e.g., software application 36 comprising containers 38) and having been identified as a system attack.
- Updating the kernel process may include updating a monitor profile to include at least one filter to match a predetermined system call to be monitored and trigger the performing of at least one action when the predetermined system call is matched.
- Performing the further inspection may include inspecting one or more parameters of the containerized application at the time of interception and matching them against a parameter in the sequence of system calls identified as a system attack.
- Performing at least one action may include blocking or allowing an execution of the next system call in the sequence of system calls based at least in part on a system call security criticality.
- the "further inspection" operation may be made less "resource costly" as a result of the "updating a kernel process" operation (bullet #3) above.
- Embodiment A1. A host node being configured to, and/or comprising processing circuitry configured to: monitor at least one system call of a sequence of system calls, the sequence of system calls being associated with a containerized application and having been identified as a system attack; update a kernel process of the host node to monitor a next system call in the sequence of system calls based at least in part on the monitored at least one system call; and perform at least one action based at least on the updated kernel process, the at least one action being performed for protecting at least the containerized application from the security attack.
- Embodiment A2. The host node of Embodiment A1, wherein the updating of the kernel process includes: updating a monitor profile to include at least one filter to match a predetermined system call to be monitored and trigger the performing of the at least one action when the predetermined system call is matched.
- Embodiment A3. The host node of any one of Embodiments A1 and A2, wherein the performing of the at least one action includes: one of blocking and allowing an execution of the next system call in the sequence of system calls based at least in part on a system call security criticality.
- Embodiment A4. The host node of Embodiment A3, wherein the blocking of the execution of the next system call is associated with blocking the security attack.
- Embodiment B1. A method in a host node, the method comprising: monitoring at least one system call of a sequence of system calls, the sequence of system calls being associated with a containerized application and having been identified as a system attack; updating a kernel process of the host node to monitor a next system call in the sequence of system calls based at least in part on the monitored at least one system call; and performing at least one action based at least on the updated kernel process, the at least one action being performed for protecting at least the containerized application from the security attack.
- Embodiment B2. The method of Embodiment B1, wherein the updating of the kernel process includes: updating a monitor profile to include at least one filter to match a predetermined system call to be monitored and trigger the performing of the at least one action when the predetermined system call is matched.
- Embodiment B3. The method of any one of Embodiments B1 and B2, wherein the performing of the at least one action includes: one of blocking and allowing an execution of the next system call of the sequence of system calls based at least in part on a system call security criticality.
- Embodiment B4. The method of Embodiment B3, wherein the blocking of the execution of the next system call is associated with blocking the security attack.
- the concepts described herein may be embodied as a method, data processing system, computer program product and/or computer storage media storing an executable computer program. Accordingly, the concepts described herein may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects all generally referred to herein as a “circuit” or “module.” Any process, step, action and/or functionality described herein may be performed by, and/or associated to, a corresponding module, which may be implemented in software and/or firmware and/or hardware. Furthermore, the disclosure may take the form of a computer program product on a tangible computer usable storage medium having computer program code embodied in the medium that can be executed by a computer. Any suitable tangible computer readable medium may be utilized including hard disks, CD-ROMs, electronic storage devices, optical storage devices, or magnetic storage devices.
- These computer program instructions may also be stored in a computer readable memory or storage medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
- the computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- Computer program code for carrying out operations of the concepts described herein may be written in an object oriented programming language such as Python, Java® or C++.
- the computer program code for carrying out operations of the disclosure may also be written in conventional procedural programming languages, such as the "C" programming language.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer.
- the remote computer may be connected to the user's computer through a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Abstract
A host node configured to generate a state machine based on a first sequence of system calls identified as corresponding to a security attack and a plurality of parameters and perform, using the state machine, a first inspection of one or more system calls of the first sequence of system calls and one or more parameters corresponding to the one or more system calls. The kernel process is updated to monitor a next system call in a second sequence of system calls associated with the software application based at least in part on the performed first inspection. A second inspection of the monitored next system call is performed based at least on the updated kernel process. An execution of the next system call in the second sequence of system calls is blocked or allowed based at least in part on the performed second inspection.
Description
DYNAMIC SYSTEM CALLS-LEVEL SECURITY DEFENSIVE SYSTEM FOR CONTAINERIZED APPLICATIONS
TECHNICAL FIELD
The present disclosure relates to computer system security, and in particular, to a security defensive system for containerized applications.
BACKGROUND
Some computing systems use containerization, which may refer to a type of virtualization in which components of an application are bundled into a single container. Containerization provides strong isolation to a process or a set of processes. Further, the use of containers in computing environments such as cloud environments has increased rapidly as containers offer many advantages over traditionally used virtual machines (VMs). Some advantages include faster deployment, increased portability, and lower resource overhead. However, containers are quite different from virtual machines and bring new security concerns.
Containers may run directly on top of an operating system. Further, containers may have access to system calls (also known as “syscalls”) at the kernel level to execute one or more functionalities. However, containers can be exposed to attacks and be exploited to do something different from what they were intended to do (e.g., crypto mining). For example, containers may be exploited to compromise the security of containerized applications. An attacking entity or software (e.g., malware) can perform malicious activities by leveraging any system calls available to the exploited container. This can lead to security breaches ranging from data exfiltration to privilege escalation that can be used to gain access to a hosting node (i.e., a node hosting the container and/or containerized application). In addition, the underlying kernel (i.e., the kernel of the operating system where the containers are running), may be directly exposed to attacks from the containers.
In response to these concerns, an attack surface may be minimized, e.g., by reducing interactions between containers and the kernel to a minimum such that only required interactions are allowed. System calls may be employed by the operating system to allow user-space programs to use kernel-space features (e.g., open a network connection, load a file into memory, etc.). Further, in computer system architecture, protection rings may be used as a protection mechanism that limits process operations to their own address space.
One mechanism that may be used to restrict syscalls available to a container is Seccomp. Seccomp helps reduce the attack surface of a given process by allowing only the usage of a subset of system calls, i.e., blocking (e.g., filtering out) the rest of possible system calls. Such filtering may be performed by providing Seccomp with a profile composed of Berkeley Packet Filters (BPF) which may be transparent to the filtered process (i.e., the blocked process does not know which “system calls” it can/cannot use until it actually tries them). Further, Seccomp is not limited to only allowing/blocking operations but can also log actions, kill the process, send signals, and/or return a specific error number. Typically, a Seccomp profile is defined (in advance and statically) and used to run the container that is to be protected, e.g., so that access is restricted to only the system calls that the container needs for its execution.
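The profile-based filtering described above can be illustrated with a short sketch. The profile shape below follows the OCI runtime seccomp format (`defaultAction`, `SCMP_ACT_ERRNO`, `SCMP_ACT_ALLOW` are real OCI/seccomp names); the helper function and the syscall list are illustrative, not a complete or recommended profile.

```python
# Minimal OCI-style seccomp profile sketch: everything is denied by
# default (SCMP_ACT_ERRNO) and only an explicit whitelist is allowed.
# The syscall list here is illustrative, not a complete profile.
def make_seccomp_profile(allowed_syscalls):
    return {
        "defaultAction": "SCMP_ACT_ERRNO",   # blocked syscalls fail with an errno
        "architectures": ["SCMP_ARCH_X86_64"],
        "syscalls": [
            {
                "names": sorted(allowed_syscalls),
                "action": "SCMP_ACT_ALLOW",  # whitelisted syscalls run normally
            }
        ],
    }

profile = make_seccomp_profile({"read", "write", "exit_group", "openat"})
```

Transparency to the filtered process follows from this shape: a blocked syscall simply returns an error number, so the process only learns the restriction when it tries the call.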
A list of system calls (i.e., a whitelist of syscalls) that are needed by containers may be generated based on inspecting the binary code or monitoring the execution of the container. However, such a list does not address certain requirements: some syscalls are required by the container for its normal functionality and thus cannot be blindly blocked. Further, the generated profile is applied based on the regular Seccomp mechanism, which is applied only when the container starts and cannot be changed at runtime. More specifically, Seccomp profiles may be automatically generated by performing a dynamic analysis of a container and/or automatically generated for containers by tracing all system calls used during a unit and integration testing phase, provided that integration testing exists. Otherwise, the application may be fuzzed, e.g., to attempt to cover all its potential functionalities and the system calls it uses.
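As a rough illustration of deriving such a whitelist by monitoring execution, the sketch below extracts syscall names from tracer output; the line format mimics strace, and the trace lines themselves are invented for the example.

```python
# Hypothetical sketch: build a syscall whitelist from trace output
# collected during a testing phase. The assumed line format mimics
# strace, e.g. 'openat(AT_FDCWD, "/etc/passwd", O_RDONLY) = 3'.
import re

_SYSCALL_RE = re.compile(r"^([a-z_][a-z0-9_]*)\(")

def whitelist_from_trace(lines):
    seen = set()
    for line in lines:
        m = _SYSCALL_RE.match(line.strip())
        if m:
            seen.add(m.group(1))
    return seen

trace = [
    'openat(AT_FDCWD, "/etc/passwd", O_RDONLY) = 3',
    "read(3, ..., 4096) = 1024",
    "close(3) = 0",
    "read(3, ..., 4096) = 0",
]
wl = whitelist_from_trace(trace)
```

The resulting set could then feed a profile generator such as the one sketched earlier; coverage of the trace (integration tests, fuzzing) determines how complete the whitelist is.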
Further, Seccomp profiles may be automatically generated by performing static binary analysis of the container application. In particular, compiled application code (in assembly code) may be analyzed, and both system calls directly called by the application or called through Libc (i.e., C programming language library) or other libraries may be identified. Then, a minimal Seccomp profile required by the container may be generated according to the static analysis.
With respect to splitting container runtime into two phases, each split phase may be configured to use a different list of system calls to be whitelisted. In particular, containerized applications use a wide set of system calls during their startup phase, but then only require a reduced set of system calls afterward. The container application
runtime may be split into two phases, and different sets of system calls may be restricted during each phase to reduce the attack surface. The container may be started with its startup Seccomp profile, and the Seccomp profile applied to the container processes may be updated to use the second set of syscalls whenever the container has left the startup phase.
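The split-phase idea above can be sketched as selecting one of two syscall sets depending on the container's phase; the phase names and syscall sets here are illustrative assumptions, not the sets any real container uses.

```python
# Sketch of split-phase restriction: a broad syscall set during startup
# and a reduced set afterwards. Both sets are invented for illustration.
STARTUP_SYSCALLS = {"execve", "brk", "mmap", "openat", "read", "write", "close"}
RUNTIME_SYSCALLS = {"read", "write", "close"}

def profile_for_phase(phase):
    if phase == "startup":
        return STARTUP_SYSCALLS
    if phase == "runtime":
        return RUNTIME_SYSCALLS
    raise ValueError(f"unknown phase: {phase}")
```

Note that the runtime set is a subset of the startup set, reflecting the observation that containerized applications need fewer syscalls after initialization.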
However, the whitelist-based approach is also used in this case: the whitelist is statically generated in advance and is not able to block syscalls based on a known malicious sequence of syscalls.
In addition, a security observability and runtime enforcement tool based on extended BPF (eBPF) may be used, e.g., to monitor system call invocation and follow process execution in containers. Further, system call filters may be enforced at runtime in containers. However, this tool lacks visualization aspects, makes it difficult for users to write rules to filter sequences of system calls, and relies solely on user knowledge to build security policies.
In sum, existing technologies are limited to reducing the attack surface by restricting the container system calls to only what is perceived as necessary. However, an attacker can still use this minimal set of system calls to perform attacks on the container or on the kernel itself. Further, typical Seccomp profiles consist of a static list of system calls that allow (whitelist) some calls, while implicitly blocking the rest. In addition, one goal of existing technologies is to provide a container with the most restrictive Seccomp profile, i.e., the one allowing only the system calls needed for the application to function correctly. However, these approaches may cause several issues:
For instance, as the Linux kernel (v5.6) contains more than 300 system calls, building the most restrictive profile for a containerized application manually is troublesome, as profiles should be generated depending on the libraries used at runtime, the operating system, the build version, etc. Further, profiles should be updated at each update of a library, the application, or the operating system. “Default” profiles are offered but are not tailor-made and often end up being inadequate. In addition, although existing technologies have been proposed to remedy these issues (i.e., claiming to assist users in automatically generating restrictive Seccomp profiles), they do not address the problem that any system call required by the application remains available for use by an attacker.
- Even where split-phase dynamic Seccomp update is used, the Seccomp filter is updated only once (when the container enters the second phase). Afterwards, a regular static Seccomp filter is used until the end of the container lifecycle, thereby bringing back the aforementioned problems.
- Seccomp profiles are generated beforehand and do not address runtime threats that solely rely on system calls required by the container.
SUMMARY
Some embodiments advantageously provide methods, systems, and apparatuses for securing containers and/or containerized applications/systems. In some embodiments, a system (e.g., a defensive system) is described. The system may be configured to provide dynamic security to system calls associated with containerized applications and/or containers.
In some embodiments, a system (also referred to as Phoenix or Phoenix system), e.g., an efficient monitoring and defensive system, enables the protection of containerized applications from attacks originating in the container. The attacks may include zero-day attacks (i.e., attacks with no known patches yet). In some other embodiments, a root cause is learned (i.e., determined) by performing a security analysis (e.g., a security analysis that is received from another host node). In one embodiment, a sequence of system calls (e.g., operating system calls) is learned and/or determined to be related to an execution of a security attack and/or a vulnerability exploit by a set of (at least one) processes of a containerized application. The system (i.e., Phoenix) may be configured to monitor the execution of each system call in the sequence of system calls. The monitoring may be performed (e.g., separately performed) such as by dynamically updating a kernel process (e.g., an in-kernel mechanism such as a Seccomp profile) at runtime based on the previously monitored/observed system call in the sequence.
In one embodiment, the system (i.e., Phoenix) enables stopping an attack (i.e., interrupt an execution of the attack by preventing it from succeeding). In another embodiment, the attack is stopped by blocking a subsequent system call in the sequence based on the security criticality of at least one system call in the sequence. For example, some syscalls can be blocked if they are part of a known malicious sequence. If not part of a known malicious sequence, these syscalls may be allowed. This is not supported by existing technology. The sequence input to Phoenix can be learned using an anomaly detection system and/or an analysis of a provenance graph that captures an execution
dependency between system calls of different processes in a containerized application, e.g., after an incident occurs.
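As a simplified illustration of extracting a syscall sequence from such a provenance graph, the sketch below models the graph as an adjacency list whose edges are labeled with the syscall that created the dependency; the node names and the graph are invented, and the search assumes an acyclic graph.

```python
# Hypothetical provenance-graph sketch: nodes are processes/resources,
# and each edge carries the syscall that created the dependency. A
# depth-first search recovers the syscall sequence along a path from an
# entry process to a sensitive target (graph assumed acyclic).
def syscall_sequence(graph, start, target, path=None):
    """graph: {node: [(neighbor, syscall_name), ...]}"""
    path = path or []
    if start == target:
        return path
    for neighbor, syscall in graph.get(start, []):
        found = syscall_sequence(graph, neighbor, target, path + [syscall])
        if found is not None:
            return found
    return None

g = {
    "web_proc": [("shell_proc", "execve")],
    "shell_proc": [("/etc/shadow", "openat")],
}
seq = syscall_sequence(g, "web_proc", "/etc/shadow")
```

A sequence extracted this way (here, `execve` followed by `openat`) is the kind of input the defensive system could turn into a state machine.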
One or more embodiments block sequences of at least one system call that represents a threat, i.e., that could be employed by an attacker. The blocking is performed without preventing the container from working normally. One or more embodiments provide one or more of the following advantages:
- Reduce the attack surface further than existing solutions while allowing the container to function normally.
- Do not require offline analysis of the container images, as security enforcement can be performed during runtime, without interruption.
- If the system calls used in the sequence are derived from a provenance graph, e.g., of the same system, compatibility and versioning issues are avoided.
- Most operations performed by a container involve using system calls, and so do most attacks or vulnerability exploits. By enforcing security at the lowest possible level (the system call level), some embodiments can block any attack or exploit, no matter its nature or origin. Thus, it is proposed to restrict the container attack surface even further by blocking specific system calls if they are executed as part of sequences of system calls that are known (e.g., determined by a host node) to be malicious (i.e., employed to perform a malicious operation through the container). In some embodiments, blocking the specific system calls results in the attack being stopped.
According to one aspect, a host node configured to protect a software application from a security attack is described. The host node is configured to execute a kernel process associated with the software application and generate a state machine based on a first sequence of system calls identified as corresponding to the security attack and a plurality of parameters associated with each system call of the first sequence of system calls. The host node is further configured to perform, using the state machine, a first inspection of one or more system calls of the first sequence of system calls and one or more parameters of the plurality of parameters corresponding to the one or more system calls. The kernel process is updated to monitor a next system call in a second sequence of system calls associated with the software application based at least in part on the performed first inspection. A second inspection of the monitored next system call is performed based at least on the updated kernel process. An execution of the next system call in the second
sequence of system calls is blocked or allowed based at least in part on the performed second inspection.
In some embodiments, the host node is further configured to one or more of obtain system call information; generate a provenance graph based in part on the system call information; perform forensic analysis using the provenance graph; identify the first sequence of system calls as corresponding to the security attack based on the forensic analysis; and determine the plurality of parameters associated with each system call of the first sequence of system calls.
In some other embodiments, the plurality of parameters include a system call name, arguments, and a process name of a process invoking the corresponding system call.
In some embodiments, the state machine includes a sequence of states and a plurality of transition labels. Each state of the sequence of states indicates one system call name and an action to be performed by the host node if the corresponding state is reached. A transition label of the plurality of transition labels links at least a first state to a second state consecutive to the first state and indicates at least one parameter associated with the second state.
In some other embodiments, the action includes one or more of block a corresponding system call of the second sequence of system calls; warn about the corresponding system call; and stop the corresponding system call for release.
In some embodiments, the host node is further configured to, if information associated with the next system call matches the transition label, determine whether to block or allow the execution of the next system call in the second sequence of system calls based on the action indicated by the second state and trigger a transition to a subsequent state of the sequence of states.
In some other embodiments, the host node is further configured to, if information associated with the next system call does not match the transition label, allow the execution of the next system call in the second sequence of system calls.
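The state machine behavior described above (matching an observed syscall and its parameters against the current transition label, applying the state's action on a match, and allowing any non-matching syscall) can be sketched as follows; the class, field names, and the example sequence are illustrative, not the disclosed implementation.

```python
# Sketch of the sequence-matching state machine. Each state carries a
# syscall name and an action ("allow", "warn", "block"); each transition
# label carries the parameters (process name, argument substring) that
# must match for the machine to advance. Non-matching syscalls are
# allowed and leave the machine in place, so normal container activity
# is unaffected.
class SyscallStateMachine:
    def __init__(self, states, labels):
        self.states = states        # [(syscall_name, action), ...]
        self.labels = labels        # [{"process": ..., "arg": ...}, ...]
        self.position = 0

    def observe(self, syscall, process, arg):
        """Return the verdict for one observed syscall."""
        if self.position >= len(self.states):
            return "allow"          # sequence fully matched already
        name, action = self.states[self.position]
        label = self.labels[self.position]
        matches = (
            syscall == name
            and process == label["process"]
            and label["arg"] in arg
        )
        if not matches:
            return "allow"          # not part of the malicious sequence
        self.position += 1          # transition to the next state
        return action

sm = SyscallStateMachine(
    states=[("execve", "warn"), ("openat", "block")],
    labels=[{"process": "web_proc", "arg": "/bin/sh"},
            {"process": "sh", "arg": "/etc/shadow"}],
)
```

With this shape, an `openat` of an ordinary file is allowed, while the final `openat` of the known malicious sequence is blocked only once the earlier states have been traversed.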
In some embodiments, the host node is further configured to obtain the information from a notification provided by a kernel program.
In some other embodiments, blocking or allowing the execution of the next system call includes causing the kernel program to block or allow the execution of the next system call.
In some embodiments, updating the kernel process to monitor the next system call includes one or both of (A) updating a monitoring profile of the kernel process using a filter to match the next system call of the second sequence of system calls to one or more system calls of the first sequence of system calls; and (B) triggering the blocking or allowing of the execution of the next system call when the next system call of the second sequence of system calls matches the one or more system calls of the first sequence of system calls.
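Step (A) can be illustrated by regenerating a seccomp-style profile so that only the next expected syscall of the sequence is trapped for user-space inspection; `SCMP_ACT_NOTIFY` and `SCMP_ACT_ALLOW` are real seccomp action names, while the function itself is a hypothetical sketch of the update step.

```python
# Illustrative sketch of the filter update: trap only the next syscall
# of the malicious sequence for a user-space decision (SCMP_ACT_NOTIFY
# in seccomp terms) while everything else runs untouched. Profile shape
# follows the OCI seccomp format; the function is hypothetical.
def profile_trapping(next_syscall):
    return {
        "defaultAction": "SCMP_ACT_ALLOW",     # normal syscalls run untouched
        "syscalls": [
            {
                "names": [next_syscall],
                "action": "SCMP_ACT_NOTIFY",   # trap for user-space inspection
            }
        ],
    }

trap_profile = profile_trapping("openat")
```

Regenerating this profile after each observed step keeps the overhead low: at any moment the kernel only intercepts the single syscall the state machine is waiting for.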
In some other embodiments, performing the second inspection is based on a trigger signal associated with the monitoring profile.
In some embodiments, one or both of blocking or allowing the execution of the next system call is based on a system call security criticality and at least the blocking of the execution of the next system call is associated with protecting the software application from the security attack.
In some other embodiments, one or both of the software application is a containerized application comprising a plurality of containers executable by the host node, and the plurality of containers being configured to cause at least the next system call in the second sequence of system calls to be executed.
According to another aspect, a method in a host node configured to protect a software application from a security attack is described. The host node is configured to execute a kernel process associated with the software application. The method includes generating a state machine based on a first sequence of system calls identified as corresponding to the security attack and a plurality of parameters associated with each system call of the first sequence of system calls, performing, using the state machine, a first inspection of one or more system calls of the first sequence of system calls and one or more parameters of the plurality of parameters corresponding to the one or more system calls; and updating the kernel process to monitor a next system call in a second sequence of system calls associated with the software application based at least in part on the performed first inspection. The method further includes performing a second inspection of the monitored next system call based at least on the updated kernel process and blocking or allowing an execution of the next system call in the second sequence of system calls based at least in part on the performed second inspection.
In some embodiments, the method further includes one or more of obtaining system call information; generating a provenance graph based in part on the system call information; performing forensic analysis using the provenance graph; identifying the first sequence of system calls as corresponding to the security attack based on the forensic analysis; and determining the plurality of parameters associated with each system call of the first sequence of system calls.
In some other embodiments, the plurality of parameters include a system call name, arguments, and a process name of a process invoking the corresponding system call.
In some embodiments, the state machine includes a sequence of states and a plurality of transition labels. Each state of the sequence of states indicates one system call name and an action to be performed by the host node if the corresponding state is reached. A transition label of the plurality of transition labels links at least a first state to a second state consecutive to the first state and indicates at least one parameter associated with the second state.
In some other embodiments, the action includes one or more of block a corresponding system call of the second sequence of system calls; warn about the corresponding system call; and stop the corresponding system call for release.
In some embodiments, the method further includes if information associated with the next system call matches the transition label, determining whether to block or allow the execution of the next system call in the second sequence of system calls based on the action indicated by the second state and triggering a transition to a subsequent state of the sequence of states.
In some other embodiments, the method further includes if information associated with the next system call does not match the transition label, allowing the execution of the next system call in the second sequence of system calls.
In some embodiments, the method further includes obtaining the information from a notification provided by a kernel program.
In some other embodiments, blocking or allowing the execution of the next system call includes causing the kernel program to block or allow the execution of the next system call.
In some embodiments, updating the kernel process to monitor the next system call includes one or both of (A) updating a monitoring profile of the kernel process using a filter to match the next system call of the second sequence of system calls to one or more system calls of the first sequence of system calls; and (B) triggering the blocking or allowing of the execution of the next system call when the next system call of the second sequence of system calls matches the one or more system calls of the first sequence of system calls.
In some other embodiments, performing the second inspection is based on a trigger signal associated with the monitoring profile.
In some embodiments, one or both of blocking or allowing the execution of the next system call is based on a system call security criticality and at least the blocking of the
execution of the next system call is associated with protecting the software application from the security attack.
In some other embodiments, one or both of the software application is a containerized application comprising a plurality of containers executable by the host node, and the plurality of containers being configured to cause at least the next system call in the second sequence of system calls to be executed.
BRIEF DESCRIPTION OF THE DRAWINGS
A more complete understanding of the present embodiments, and the attendant advantages and features thereof, will be more readily understood by reference to the following detailed description when considered in conjunction with the accompanying drawings wherein:
FIG. 1 is a schematic diagram of an example network architecture illustrating a communication system connected via an intermediate network to a host computer according to the principles in the present disclosure;
FIG. 2 is a block diagram of a host computer communicating via a network node with a wireless device over an at least partially wireless connection according to some embodiments of the present disclosure;
FIG. 3 is a flowchart of an example process in a host node according to some embodiments of the present disclosure;
FIG. 4 is a flowchart of another example process in a host node according to some embodiments of the present disclosure;
FIG. 5 shows an example system overview according to some embodiments of the present disclosure;
FIG. 6 shows an example overview of an example container runtime protection component according to some embodiments of the present disclosure;
FIG. 7 shows an example state machine (e.g., state transition system) according to some embodiments of the present disclosure;
FIG. 8 shows an example execution flow according to some embodiments of the present disclosure;
FIG. 9 shows another example execution flow according to some embodiments of the present disclosure; and
FIG. 10 shows an example execution flow according to some embodiments of the present disclosure.
DETAILED DESCRIPTION
Before describing in detail example embodiments, it is noted that the embodiments reside primarily in combinations of apparatus components and processing steps related to securing containers and/or containerized applications/systems. Accordingly, components have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein. Like numbers refer to like elements throughout the description.
As used herein, relational terms, such as “first” and “second,” “top” and “bottom,” and the like, may be used solely to distinguish one entity or element from another entity or element without necessarily requiring or implying any physical or logical relationship or order between such entities or elements. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the concepts described herein. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes” and/or “including” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
In embodiments described herein, the joining term, “in communication with” and the like, may be used to indicate electrical or data communication, which may be accomplished by physical contact, induction, electromagnetic radiation, radio signaling, infrared signaling or optical signaling, for example. One having ordinary skill in the art will appreciate that multiple components may interoperate and that modifications and variations are possible for achieving the electrical and data communication.
In some embodiments described herein, the term “coupled,” “connected,” and the like, may be used herein to indicate a connection, although not necessarily directly, and may include wired and/or wireless connections.
In some embodiments, the term “state machine” is used and may refer to a model such as behavior model. A state machine may include one or more states and transitions. A state may refer to a state of a system. The state machine may include an initial state (i.e.,
where the execution of the state machine begins). In some embodiments, a transition may determine or define for which input a state is changed.
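A state machine in this sense can be reduced to a minimal example: a set of states, an initial state, and a transition function mapping (state, input) pairs to next states. The states and inputs below are purely illustrative.

```python
# Minimal state machine: transitions map (state, input) to the next
# state; inputs with no defined transition leave the state unchanged.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "pause"): "idle",
    ("running", "finish"): "done",
}

def run(initial, inputs):
    state = initial
    for symbol in inputs:
        state = TRANSITIONS.get((state, symbol), state)  # unknown input: stay
    return state

final_state = run("idle", ["start", "finish"])
```

The syscall-sequence machines described in the Summary follow this pattern, with syscall names and parameters as the inputs that determine whether a transition fires.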
The term “host node” used herein can be any kind of node such as a standalone node and/or a node comprised in a network, which may further comprise any of a network node, a virtual machine node, a node comprising (and/or configurable to run) one or more containers and/or one or more containerized applications, or a node comprising one or more operating systems such as Linux.
In some embodiments, a host node may be a network node, a wireless device, or any other device such as a device configurable to support communication based on standards promulgated by 3GPP (the Third Generation Partnership Project). 3GPP has developed and is developing standards for Fourth Generation (4G) (also referred to as Long Term Evolution (LTE)), Fifth Generation (5G) (also referred to as New Radio (NR)), and Sixth Generation (6G) wireless communication systems. Such systems provide, among other features, broadband communication between network nodes, such as base stations, and mobile wireless devices (WD) such as user equipment (UE), as well as communication between network nodes and between WDs. Some network functions may be deployed as containerized applications which may leverage cloud native technology and containers.
More specifically, the host node may comprise a base station (BS), radio base station, base transceiver station (BTS), base station controller (BSC), radio network controller (RNC), gNodeB (gNB), evolved Node B (eNB or eNodeB), Node B, multistandard radio (MSR) radio node such as MSR BS, multi-cell/multicast coordination entity (MCE), integrated access and backhaul (IAB) node, relay node, donor node controlling relay, radio access point (AP), transmission points, transmission nodes, Remote Radio Unit (RRU), Remote Radio Head (RRH), a core network node (e.g., mobility management entity (MME), self-organizing network (SON) node, a coordinating node, positioning node, MDT node, etc.), an external node (e.g., 3rd party node, a node external to the current network), nodes in distributed antenna system (DAS), a spectrum access system (SAS) node, an element management system (EMS), etc.
The host node may also comprise test equipment and/or may be used to denote a wireless device (WD) or a radio network node. In some embodiments, the non-limiting terms wireless device (WD) and user equipment (UE) are used interchangeably. The WD herein can be any type of wireless device capable of communicating with a network node or another WD over radio signals. The WD may also be a radio communication device, target device, device-to-device (D2D) WD, machine type WD or WD capable of machine to machine (M2M) communication, low-cost and/or low-complexity WD, a sensor equipped with WD, tablet, mobile terminal, smart phone, laptop embedded equipment (LEE), laptop mounted equipment (LME), USB dongle, Customer Premises Equipment (CPE), an Internet of Things (IoT) device, or a Narrowband IoT (NB-IoT) device, etc.
Also, in some embodiments the generic term “radio network node” is used. It can be any kind of a radio network node which may comprise any of base station, radio base station, base transceiver station, base station controller, network controller, RNC, evolved Node B (eNB), Node B, gNB, Multi-cell/multicast Coordination Entity (MCE), IAB node, relay node, access point, radio access point, Remote Radio Unit (RRU) Remote Radio Head (RRH).
Note that although terminology from one particular wireless system, such as, for example, 3GPP LTE and/or New Radio (NR), may be used in this disclosure, this should not be seen as limiting the scope of the disclosure to only the aforementioned system. Other wireless systems, including without limitation virtual systems, container-based systems, Kubernetes systems, Wide Band Code Division Multiple Access (WCDMA), Worldwide Interoperability for Microwave Access (WiMax), Ultra Mobile Broadband (UMB) and Global System for Mobile Communications (GSM), may also benefit from exploiting the ideas covered within this disclosure.
Note further, that functions described herein as being performed by a host node may be distributed over a plurality of host nodes (e.g., nodes in a network and/or wireless devices and/or network nodes). In other words, it is contemplated that the functions of the host described herein are not limited to performance by a single physical device and, in fact, can be distributed among several physical devices (and/or virtual devices).
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms used herein should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Referring now to the drawing figures, in which like elements are referred to by like reference numerals, there is shown in FIG. 1 a schematic diagram of a system 10, according to an embodiment, which comprises one or more networks 12 (e.g., such as
networks 12a, 12b). The network 12 (e.g., network 12a) may comprise a plurality of host nodes 14a, 14b, 14c (referred to collectively as host nodes 14), such as nodes configurable to support one or more containerized applications. Similarly, another network 12 such as network 12b may comprise one or more host nodes 14. Each host node 14a, 14b, 14c is connectable with any other host nodes 14 and/or network 12 (e.g., networks 12a, 12b) over a wired or wireless connection 16 (e.g., connections 16a, 16b, 16c) and/or connection 17 (e.g., to/from network 12b). Network 12 may refer to a network associated with a container-based environment and/or an access network and/or a core network and/or a cloud network and/or any other type of network.
A host node 14 is configured to include a security unit 18 (e.g., sequence detection unit) which is configured to perform one or more host node 14 functions described herein such as any step and/or task and/or process and/or method and/or feature described in the present disclosure, e.g., monitor at least one system call of a sequence of system calls; update a kernel process of the host node to monitor a next system call in the sequence of system calls; and perform at least one action based at least on the updated kernel process.
Example implementations, in accordance with an embodiment, of the host node 14 discussed in the preceding paragraphs will now be described with reference to FIG. 2. In system 10, host node 14 is provided with hardware 20 enabling it to perform one or more host node actions. The hardware 20 may include a communication interface 22 for setting up and maintaining a wired or wireless connection with an interface of a different device of the system 10, such as another host node 14. The communication interface 22 may be formed as or may include, for example, one or more RF transmitters, one or more RF receivers, and/or one or more RF transceivers. The hardware 20 may also include a radio interface 24 for setting up and maintaining a wireless connection with a wireless interface of a different device of the system 10, such as a wireless device. The radio interface 24 may be formed as or may include, for example, one or more RF transmitters, one or more RF receivers, and/or one or more RF transceivers.
In the embodiment shown, the hardware 20 of the host node 14 further includes processing circuitry 26. The processing circuitry 26 may include a processor 28 and a memory 30. In particular, in addition to or instead of a processor, such as a central processing unit, and memory, the processing circuitry 26 may comprise integrated circuitry for processing and/or control, e.g., one or more processors and/or processor cores and/or FPGAs (Field Programmable Gate Array) and/or ASICs (Application Specific
Integrated Circuitry) adapted to execute instructions. The processor 28 may be configured to access (e.g., write to and/or read from) the memory 30, which may comprise any kind of volatile and/or nonvolatile memory, e.g., cache and/or buffer memory and/or RAM (Random Access Memory) and/or ROM (Read-Only Memory) and/or optical memory and/or EPROM (Erasable Programmable Read-Only Memory).
Thus, the host node 14 further has software 32 stored internally in, for example, memory 30, or stored in external memory (e.g., database, storage array, network storage device, etc.) accessible by the host node 14 via an external connection. Software 32 may include at least an operating system 34 and/or software application 36 and/or containers 38 (such as one or more containers associated with a containerized application). In some embodiments, containers 38 may be comprised by (and/or be) software application 36, e.g., a containerized application. The operating system may include a kernel 40 (i.e., an operating system kernel, kernel program, kernel process, etc.). In a nonlimiting example, the operating system is a Linux operating system, and kernel 40 is a Linux kernel. The software 32 (and/or any of its components) may be executable by the processing circuitry 26. The processing circuitry 26 may be configured to control any of the methods and/or processes described herein and/or to cause such methods and/or processes to be performed, e.g., by host node 14. Processor 28 corresponds to one or more processors 28 for performing host node 14 functions described herein. The memory 30 is configured to store software 32 (and/or any of its components), data, programmatic software code and/or other information described herein. In some embodiments, the software 32 may include instructions that, when executed by the processor 28 and/or processing circuitry 26, cause the processor 28 and/or processing circuitry 26 to perform the processes described herein with respect to host node 14.
For example, processing circuitry 26 of the host node 14 may include security unit 18 configured to perform one or more host node 14 functions described herein such as any step and/or task and/or process and/or method and/or feature described in the present disclosure, e.g., monitor at least one system call of a sequence of system calls; update a kernel process of the host node to monitor a next system call in the sequence of system calls; and perform at least one action based at least on the updated kernel process.
FIG. 3 is a flowchart of an example process (i.e., method) in a host node 14 according to some embodiments of the present disclosure. One or more blocks described herein may be performed by one or more elements of host node 14 such as by one or more of processing circuitry 26 (including the security unit 18), processor 28, radio interface 24
and/or communication interface 22. Host node 14 is configured to monitor (Block S100) at least one system call of a sequence of system calls, the sequence of system calls being associated with a containerized application and having been identified as a system attack, as described herein. The host node 14 is configured to update (Block S102) a kernel process of the host node 14 to monitor a next system call in the sequence of system calls based at least in part on the monitored at least one system call, as described herein. The host node 14 is configured to perform (Block S104) at least one action based at least on the updated kernel process, where the at least one action is performed for protecting at least the containerized application from the security attack, as described herein.
In some embodiments, the updating of the kernel process includes updating a monitor profile to include at least one filter to match a predetermined system call to be monitored and trigger the performing of the at least one action when the predetermined system call is matched.
In some other embodiments, the performing of the at least one action includes one of blocking and allowing an execution of the next system call of the sequence of system calls based at least in part on a system call security criticality.
In an embodiment, the blocking of the execution of the next system call is associated with blocking the security attack.
FIG. 4 is a flowchart of an example process (i.e., method) in a host node 14 according to some embodiments of the present disclosure. One or more blocks described herein may be performed by one or more elements of host node 14 such as by one or more of processing circuitry 26 (including the security unit 18), processor 28, radio interface 24 and/or communication interface 22. Host node 14 is configured to monitor (Block S100) at least one system call of a sequence of system calls, the sequence of system calls being associated with a containerized application and having been identified as a system attack, as described herein. The host node 14 is configured to protect a software application 36 from a security attack and execute a kernel process associated with the software application 36. The host node 14 is further configured to generate (Block S106) a state machine 50 based on a first sequence of system calls identified as corresponding to the security attack and a plurality of parameters associated with each system call of the first sequence of system calls, perform (Block S108), using the state machine 50, a first inspection of one or more system calls of the first sequence of system calls and one or more parameters of the plurality of parameters corresponding to the one or more system calls, and update (Block S110) the kernel process to monitor a next system call in a second sequence of system calls associated with the software application based at least in part on the performed first inspection. The host node 14 is also configured to perform (Block S112) a second inspection of the monitored next system call based at least on the updated kernel process and block or allow (Block S114) an execution of the next system call in the second sequence of system calls based at least in part on the performed second inspection.
In some embodiments, the method further includes one or more of: obtaining system call information; generating a provenance graph based in part on the system call information; performing forensic analysis using the provenance graph; identifying the first sequence of system calls as corresponding to the security attack based on the forensic analysis; and determining the plurality of parameters associated with each system call of the first sequence of system calls.
In some other embodiments, the plurality of parameters include a system call name, arguments, and a process name of a process invoking the corresponding system call.
In some embodiments, the state machine 50 includes a sequence of states 52 and a plurality of transition labels 56. Each state of the sequence of states 52 indicates one system call name and an action to be performed by the host node 14 if the corresponding state is reached. A transition label 56 of the plurality of transition labels 56 links at least a first state 52a to a second state 52b consecutive to the first state 52a and indicates at least one parameter associated with the second state 52b.
In some other embodiments, the action includes one or more of blocking a corresponding system call of the second sequence of system calls, warning about the corresponding system call, and stepping the corresponding system call for release.
In some embodiments, the method further includes if information associated with the next system call matches the transition label, determining whether to block or allow the execution of the next system call in the second sequence of system calls based on the action indicated by the second state 52b and triggering a transition to a subsequent state 52 of the sequence of states 52.
In some other embodiments, the method further includes if information associated with the next system call does not match the transition label, allowing the execution of the next system call in the second sequence of system calls.
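The matching behavior described in the two embodiments above may be sketched as follows. This is a minimal illustrative sketch, assuming a transition label is represented as a (syscall name, process name, arguments) tuple and each state carries an action such as “STEP” or “BLOCK”; these names and structures are assumptions, not part of the disclosure.

```python
def decide(observed_call, transition_label, state_action):
    """Return ("ALLOW" or "BLOCK", advance?) for the observed call.

    observed_call / transition_label: (syscall, process, args) tuples.
    state_action: action of the state reached if the label matches.
    """
    if observed_call != transition_label:
        # No match with the malicious sequence: release the call and
        # stay in the current state.
        return "ALLOW", False
    # Match: the state's action decides, and the machine advances.
    return ("BLOCK" if state_action == "BLOCK" else "ALLOW"), True

# A call matching a BLOCK-annotated label is stopped:
print(decide(("OPEN", "curl", ("/tmp/",)),
             ("OPEN", "curl", ("/tmp/",)), "BLOCK"))   # ('BLOCK', True)
# A non-matching call is released without advancing:
print(decide(("READ", "java", ()),
             ("OPEN", "curl", ("/tmp/",)), "BLOCK"))   # ('ALLOW', False)
```

In this sketch, only a full match on all three label elements triggers a transition, mirroring the parameter matching described above.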
In some embodiments, the method further includes obtaining the information from a notification provided by a kernel program.
In some other embodiments, blocking or allowing the execution of the next system call includes causing the kernel program to block or allow the execution of the next system call.
In some embodiments, updating the kernel process to monitor the next system call includes one or both of (A) updating a monitoring profile of the kernel process using a filter to match the next system call of the second sequence of system calls to one or more system calls of the first sequence of system calls; and (B) triggering the blocking or allowing of the execution of the next system call when the next system call of the second sequence of system calls matches the one or more system calls of the first sequence of system calls.
In some other embodiments, performing the second inspection is based on a trigger signal associated with the monitoring profile.
In some embodiments, one or both of blocking or allowing the execution of the next system call is based on a system call security criticality, and at least the blocking of the execution of the next system call is associated with protecting the software application from the security attack.
In some other embodiments, the software application is a containerized application comprising a plurality of containers executable by the host node, the plurality of containers being configured to cause at least the next system call in the second sequence of system calls to be executed.
Having described the general process flow of arrangements of the disclosure and having provided examples of hardware and software arrangements for implementing the processes and functions of the disclosure, the sections below provide details and examples of arrangements for securing containers and/or containerized applications/systems such as by, for example, at least detecting one or more system calls in a sequence of system calls that had been previously identified as being associated with a malicious or security attack.
One or more host node 14 functions described below may be performed by one or more of processing circuitry 26, processor 28, security unit 18, hardware 20, software 32, etc. In some embodiments, the term “state transition system” is used and may refer to a state machine 50.
Some embodiments provide a system (e.g., a runtime defending system) for protecting software applications 36 (e.g., containerized applications) from security attacks. Some security attacks may include zero-day attacks. One or more embodiments enable efficient runtime monitoring and blocking of an ordered execution of one or more malicious sequences of system calls. The system calls may be performed by the set of
processes inside (and/or associated with) one or more containers. A sequence of system calls may include one or more system calls, where the one or more system calls are associated with a containerized application. In some embodiments, the sequence of system calls may define an attack (e.g., a security attack may be determined based on the sequence of system calls and/or information associated with the sequence). In some other embodiments, a kernel process (e.g., an in-kernel mechanism such as a seccomp profile) is dynamically updated, e.g., so that each specific system call in a given sequence is monitored at a time based on the observation of the preceding system call in the sequence and/or by selectively blocking a given critical system call in the sequence (e.g., to stop (or abort) the security attack). In one or more embodiments, the kernel process is updated with a next system call in the sequence if the preceding system call was seen (i.e., determined, detected, determined to be associated with an attack, etc.). In an embodiment, the updating of the kernel process (e.g., kernel) is used to enable monitoring of the execution of each system call in the sequence.
In some other embodiments, an ongoing security attack and/or the progress of the ongoing security attack at runtime is stopped when there is a match with a subset of the sequence of system calls.
FIG. 5 shows an example system overview according to some embodiments of the present disclosure. System 10 comprises one or more components such as software components and/or hardware components. In some embodiments, system 10 includes a network 12a (e.g., a container-based environment) and a kernel 40 (e.g., Linux kernel) configured to receive one or more system calls such as from one or more containers and/or processes associated with the containers. System 10 may include one or more other components such as comprised (and/or performed, executed) by security unit 18. In some other embodiments, system 10 may further include network 12b (e.g., another container-based environment) which is configured to communicate with network 12a and may include one or more containers and/or containerized applications, etc. In an embodiment, system 10 includes a combination of network 12a and network 12b.
Further, system 10 (e.g., Phoenix) may be a runtime defending system for protecting host node 14 and/or software applications 36 (e.g., containerized applications) and/or containers 38 from zero-day security attacks originating in containers 38. System 10 (e.g., Phoenix) may receive a set of malicious sequences of enriched system calls. Enriched system calls may include system calls and a set of parameters related to each system call in a sequence of system calls (e.g., a plurality of system calls which may be
ordered in a predetermined manner such as in a sequence). The sequence(s) may be learned from a provenance-based forensic analysis performed on a provenance graph, which may capture the dependency between system calls used by several processes within the software application 36 (e.g., the containerized application that system 10 aims to protect at runtime). Further, system 10 (e.g., Phoenix) may comprise two main components: a Container Restarting component and a Container Runtime Protection component. The Container Restarting component may identify at least one compromised container to be protected and/or restart the container and/or prepare the container to be hardened. The Container Runtime Protection component may apply protective countermeasures (e.g., any task performed by any of the components of system 10) to a container 38 (e.g., compromised container) so that the container 38 is protected such as during its execution (e.g., at runtime).
In some embodiments, an enriched system call sequence may refer to a sequence of system calls where each system call may be enriched with a name of the system call, argument(s) of the system call, and/or a process name of the process invoking the system call. In one nonlimiting example, one or more enriched system calls (e.g., an enriched system call sequence) may include one or more parameters (e.g., may be annotated) as shown in Table 1 below.
Table 1. - Example of tuples (each row is a tuple) of an enriched system calls sequence including <order in sequence, syscall name, calling process, arguments> annotated with an action for each system call (corresponding to the sequence described with reference to FIG. 7).

<1, OPEN, java, ‘log4j.jar’> - STEP
<2, CONNECT, java, ‘172.16.240.11’, ‘1389’> - STEP
<3, EXECVE, java, ‘curl’> - STEP
<4, CONNECT, curl, ‘172.16.240.11’, ‘80’> - STEP
<5, OPEN, curl, ‘/tmp/’> - BLOCK
In an embodiment, each system call in the enriched system call sequence may be annotated with an action to determine what to do with the corresponding system call (i.e., block or release) and/or what next steps are to be performed by system 10 (e.g., Phoenix) and/or host node 14. Although three possible actions such as STEP, WARN, and BLOCK are described, any other actions may be specified (e.g., an action that triggers a given service to consume the system call). The annotation of actions on each system call (e.g., syscalls) may be performed by a user (e.g., a security expert) such as after analyzing system logs. Further, the annotation of actions may be handled by automated tools external to system 10 (e.g., Phoenix) such as a component configured to perform Provenance Graph analysis.
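The default annotation rule described above (every system call before the last labeled “Step”, the last one labeled “Block”) may be sketched as follows; the tuple layout is an illustrative assumption.

```python
def annotate(sequence):
    """Annotate a sequence of enriched system call tuples.

    sequence: list of (order, syscall, process, args) tuples.
    Returns each tuple extended with an action: every call before the
    last is labeled "STEP", the last one "BLOCK".
    """
    annotated = []
    for i, tup in enumerate(sequence):
        action = "BLOCK" if i == len(sequence) - 1 else "STEP"
        annotated.append(tup + (action,))
    return annotated

seq = [(1, "OPEN", "java", ("log4j.jar",)),
       (2, "CONNECT", "java", ("172.16.240.11", "1389")),
       (3, "EXECVE", "java", ("curl",))]
print([a[-1] for a in annotate(seq)])  # ['STEP', 'STEP', 'BLOCK']
```

A criticality-based variant would replace the position-based rule with a per-syscall lookup, as discussed later in connection with the SPA module.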
FIG. 6 shows an example overview of an example Container Runtime Protection component according to some embodiments of the present disclosure.
More specifically, the Container Runtime Protection (CRP) component of system 10 may be configured to interact with the network 12 (e.g., container-based environment), particularly, with the Linux user-space and kernel-space where the containers are deployed. The CRP component may be used to monitor and protect the containers. The interactions may include updates to the Seccomp monitor profile to dynamically monitor the execution of a specific system call and interactions with other components of system 10 (e.g., the “ptrace” program) to monitor the enriched system call execution, to notify any component of system 10, and to trigger the actions requested by system 10 (e.g., by host node 14) on the currently observed system calls.
In some embodiments, Seccomp refers to a Linux kernel feature that may be applied to processes and/or restrict and/or modify their usage of system calls, e.g., by implementing filters. In some other embodiments, “ptrace” refers to a kernel program (e.g., a combination of software and hardware) that may attach to another process. Further, a Seccomp filter can specify (e.g., request) to send a signal to ptrace upon interception of a specified system call. Ptrace may then inspect the process memory (and thus its syscall, arguments, etc.). Ptrace may also enforce actions on a process by writing into its memory (and thus block its syscall, modify its arguments, etc.).
The CRP component may include a Sequence Preprocessing and Annotation (SPA) module, a Dynamic Seccomp Monitor (DSM) module, and a Sequence State Monitoring and Matching (SSMM) module, i.e., modules/components (e.g., a combination of software and/or hardware) performed by (and/or comprised by) security unit 18 of host node 14.
The SPA module may operate on a received sequence of enriched system calls represented as a sequence of tuples. Each tuple may include one or more of the following elements: the order of the system call in the sequence, its name, a calling process identifier, and an argument of the system call. SPA may first preprocess the sequence to identify each syscall and its related parameters, then add an annotation to each tuple denoting the action (e.g., step, warn, block) to be performed by system 10 (e.g., Phoenix), e.g., by host node 14, when each of these system calls is detected/observed/determined. An added action may be either automatically generated by system 10 (e.g., Phoenix) and/or any of its components, such that syscalls before the last one are labeled “Step” and the last one in the sequence is labeled “Block”, or based on some pre-defined association between system calls and actions (e.g., based on the criticality of the syscall from a security point of view). For example, executing a shell may be more critical than opening a file.
Further, SPA may generate a state transition system (i.e., state machine 50) from the tuples where a state 52 denotes the currently observed system call name and the action to be performed (by system 10 (e.g., Phoenix) and/or any of its components) if the state 52 is reached. Transitions 54 may be used to link two (or more) consecutive states 52. Labels (i.e., transition labels 56) may also be used and may include an expected next system call with the corresponding calling process and arguments. If the label of the transition 54 is matched, the next state 52 (destination of the transition) can be reached.
FIG. 7 shows an example state machine 50 (e.g., state transition system) which may include states 52a, 52b, 52c, 52d, 52e, 52f (collectively referred to as states 52) which are linked by transitions 54. States 52 may include information about the state (e.g., initial state) and/or system call name (e.g., OPEN, CONNECT, EXECVE, etc.) and/or actions (e.g., step, warn, block, etc.). A transition may occur based on information included in transition labels 56 which may include parameters associated with system calls. In some embodiments, state machine 50 may be referred to as a linked list of states. A method may be associated with state machine 50 corresponding to a sequence of system calls (e.g., as shown in Table 1) generated by the SPA module. At step S200, a transition is determined/performed if syscall equals OPEN, process equals java, and arguments equal ‘log4j.jar’. At step S202, a transition is determined/performed if syscall equals CONNECT, process equals java, and arguments equal ‘172.16.240.11’, ‘1389’. At step S204, a transition is determined/performed if syscall equals EXECVE, process equals java, and arguments equal ‘curl’. At step S206, a transition is determined/performed if syscall equals CONNECT, process equals curl, and arguments equal ‘172.16.240.11’, ‘80’. At step S208, a transition is determined/performed if syscall equals OPEN, process equals curl, and arguments equal ‘/tmp/’.
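The state transition system of FIG. 7 may be sketched as a linked list of transition labels that a monitor advances through only on exact matches; the data layout below is an illustrative assumption.

```python
# Transition labels of the FIG. 7 state machine: each label is a
# (syscall, calling process, arguments) tuple, in sequence order.
TRANSITIONS = [
    ("OPEN",    "java", ("log4j.jar",)),
    ("CONNECT", "java", ("172.16.240.11", "1389")),
    ("EXECVE",  "java", ("curl",)),
    ("CONNECT", "curl", ("172.16.240.11", "80")),
    ("OPEN",    "curl", ("/tmp/",)),
]

def run(events):
    """Advance one state per event matching the next transition label;
    return how many states past the initial state were reached."""
    state = 0
    for ev in events:
        if state < len(TRANSITIONS) and ev == TRANSITIONS[state]:
            state += 1
    return state

# Replaying the full malicious sequence reaches the final state:
print(run(TRANSITIONS))              # 5
# Benign calls do not advance the machine:
print(run([("READ", "java", ())]))   # 0
```

Reaching the final state here corresponds to observing the complete malicious sequence, at which point the BLOCK action of the last state would apply.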
FIG. 8 shows an example execution flow (e.g., S210-S216 performed by SPA, host node 14, security unit 18, etc.). At step S210, a malicious sequence of system calls is preprocessed. At step S212, an action for each system call is associated (e.g., using an annotation). At step S214, a state transition system is generated (e.g., one or more transitions are generated). At step S216, the state transition system is sent to SSMM.
The DSM module may be configured to interact with Seccomp and dynamically update the Seccomp monitor profile using a specific filter to match a specific system call to be monitored at runtime by system 10 (e.g., Phoenix) and/or any of its components such as security unit 18 of host node 14. DSM receives the name of system call(s) to be monitored from the sequence state monitoring and matching module (SSMM).
In some embodiments, a filter may be defined. The following is an example of Seccomp filter:
{
    "defaultAction": "SCMP_ACT_ALLOW",
    "syscalls": [
        {
            "name": "open",
            "action": "SCMP_ACT_TRACE"
        }
    ]
}
FIG. 9 shows a flow of execution performed by DSM (e.g., performed by security unit 18). At step S218, a Seccomp profile is created with a new rule to monitor the system call, and at step S220, the Seccomp profile of the container is replaced with the new one. After the filter is updated, when the system call occurs, Seccomp may send a signal to ptrace so ptrace later inspects it in more detail (arguments and calling process).
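The profile created at step S218 may be sketched as follows, assuming the JSON shape of the example Seccomp filter above; the helper name is an illustrative assumption.

```python
import json

def make_profile(syscall_name):
    """Build a Seccomp profile (as a JSON string) that allows all
    system calls by default but traces the one named syscall, so that
    ptrace is signaled when it is intercepted."""
    return json.dumps({
        "defaultAction": "SCMP_ACT_ALLOW",
        "syscalls": [
            {"name": syscall_name, "action": "SCMP_ACT_TRACE"},
        ],
    })

profile = make_profile("open")
print("SCMP_ACT_TRACE" in profile)  # True
```

In the described flow, the DSM module would regenerate such a profile each time SSMM reports the next system call to be monitored.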
The monitored system call may be held till a decision on the action to be performed on this system call is made (e.g., by system 10, host node 14, security unit 18), which may be handled/processed by the SSMM module as follows.
The SSMM module receives a state transition system (such as shown in FIG. 7) from the SPA module. The state transition system corresponds to a specific malicious sequence of enriched system calls (e.g., that may follow the ordered execution of the malicious sequence by the container that is to be protected).
SSMM uses the state transition system to perform one or more of the following: (1) save internally the current state of execution, i.e., the last seen system call from the sequence, and the action taken when it occurred; (2) determine the next system call to be monitored in the sequence and send it to DSM; and (3) identify the arguments and calling process of the expected next system call in the sequence, which may be matched with those of the system call actually being executed by the container to be protected (i.e., matching the currently executing system call and parameters with the enriched system call captured in the malicious sequence). Arguments and calling process may be determined by ptrace, which may notify any component of system 10 (e.g., Phoenix) about the occurrence of the system call monitored by Seccomp. Once a match is determined to happen, SSMM may send a signal to ptrace with the action to be performed on the currently observed system call.
FIG. 10 shows an execution flow performed by the SSMM module and interactions with ptrace and the DSM module. At step S222, a state transition system is received. At step S224, an instance of the state transition system is initialized. At step S226, a next system call is determined from the next transition. At step S228, the next system call is sent to DSM. At step S230, a notification from ptrace is received and/or processed. At step S232, the current system call execution information is matched with the label of the next transition in the state transition system. At step S234, whether there is a match is determined. If there is a match, at step S236, the transition to the next state 52 in the state transition system is triggered to update the last seen system call. If there is no match, at step S238, a decision is sent to ptrace, e.g., to release the system call. At step S240, SSMM determines whether there is an action to be performed. If there is an action, such as STEP, at step S242, a decision is sent to ptrace, e.g., to release the system call. If there is an action such as BLOCK, at step S244, a decision is sent to ptrace, e.g., to block the system call.
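The SSMM loop (steps S222 to S244) may be sketched as follows, assuming each transition label is a (syscall, process, arguments) tuple and each state carries an action; “release” and “block” stand for the decisions sent to ptrace. Names and structures are illustrative assumptions.

```python
def ssmm(transitions, actions, notifications):
    """Process ptrace notifications against a state transition system.

    transitions[i]: expected (syscall, process, args) label for the
    i-th transition; actions[i]: action of the state it reaches.
    Returns the list of decisions sent to ptrace, in order.
    """
    state, decisions = 0, []
    for call in notifications:
        if state < len(transitions) and call == transitions[state]:
            # Match (step S236): advance, then apply the state's action
            # (steps S240-S244).
            action = actions[state]
            state += 1
            decisions.append("block" if action == "BLOCK" else "release")
        else:
            # No match (step S238): release without advancing.
            decisions.append("release")
    return decisions

labels = [("EXECVE", "java", ("curl",)), ("OPEN", "curl", ("/tmp/",))]
acts = ["STEP", "BLOCK"]
print(ssmm(labels, acts, labels))  # ['release', 'block']
```

A real implementation would, after each transition, also push the next expected system call name to DSM (step S228) so that only that call is trapped.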
In some embodiments, one or more of the following may be performed by host node 14:
1. Build a state machine 50 (e.g., state transition system) corresponding to an identified system attack. Depending on current and previous states 52 of the process, the state machine allows or blocks system calls. This offers the possibility of having a system call X being allowed or blocked for the same process, based on the state machine 50, at different moments in the life cycle of the process. In addition, the state machine 50 can be modified.
2. Monitor at least one system call of a sequence of system calls, the sequence of system calls being associated with a containerized application (e.g., software application 36 comprising containers 38) and having been identified as a system attack.
3. Update a kernel process (associated with kernel 40) of the host node 14 to monitor a next system call in the sequence of system calls based at least in part on the monitored at least one system call.
• Updating the kernel process may include updating a monitor profile to include at least one filter to match a predetermined system call to be monitored and trigger the performing of at least one action when the predetermined system call is matched.
4. Perform a further inspection of the monitored system call based on the trigger signal of the monitoring profile.
• Performing the further inspection may include inspecting one or more parameters of the containerized application at the time of interception and matching them against a parameter in the sequence of system calls identified as a system attack.
5. Perform at least one action based at least on the updated kernel process and the further inspection, where the at least one action is performed for protecting at least the containerized application from the security attack.
• Performing at least one action may include blocking or allowing an execution of the next system call in the sequence of system calls based at least in part on a system call security criticality.
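A criticality-based assignment of actions (e.g., executing a shell being more critical than opening a file, as noted earlier) may be sketched as follows; the criticality scores and threshold are purely illustrative assumptions.

```python
# Hypothetical criticality scores per system call; higher means more
# security-critical. These values are illustrative only.
CRITICALITY = {"open": 1, "connect": 2, "execve": 3}

def action_for(syscall, block_threshold=3):
    """Block calls at or above the threshold; step through the rest."""
    score = CRITICALITY.get(syscall, 0)
    return "BLOCK" if score >= block_threshold else "STEP"

print(action_for("execve"))  # BLOCK
print(action_for("open"))    # STEP
```

Such a mapping could replace the default position-based labeling when a pre-defined association between system calls and actions is available.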
In some embodiments, the “further inspection” operation may be less “resource costly” as a result of the “updating a kernel process” operation (item 3 above).
The following is a nonlimiting list of example embodiments.
Embodiment A1. A host node being configured to, and/or comprising processing circuitry configured to: monitor at least one system call of a sequence of system calls, the sequence of system calls being associated with a containerized application and having been identified as a system attack; update a kernel process of the host node to monitor a next system call in the sequence of system calls based at least in part on the monitored at least one system call; and
perform at least one action based at least on the updated kernel process, the at least one action being performed for protecting at least the containerized application from the security attack.
Embodiment A2. The host node of Embodiment A1, wherein the updating of the kernel process includes: updating a monitor profile to include at least one filter to match a predetermined system call to be monitored and trigger the performing of the at least one action when the predetermined system call is matched.
Embodiment A3. The host node of any one of Embodiments A1 and A2, wherein the performing of the at least one action includes: one of blocking and allowing an execution of the next system call in the sequence of system calls based at least in part on a system call security criticality.
Embodiment A4. The host node of Embodiment A3, wherein the blocking of the execution of the next system call is associated with blocking the security attack.
Embodiment B1. A method in a host node, the method comprising: monitoring at least one system call of a sequence of system calls, the sequence of system calls being associated with a containerized application and having been identified as a system attack; updating a kernel process of the host node to monitor a next system call in the sequence of system calls based at least in part on the monitored at least one system call; and performing at least one action based at least on the updated kernel process, the at least one action being performed for protecting at least the containerized application from the security attack.
Embodiment B2. The method of Embodiment B1, wherein the updating of the kernel process includes: updating a monitor profile to include at least one filter to match a predetermined system call to be monitored and trigger the performing of the at least one action when the predetermined system call is matched.
Embodiment B3. The method of any one of Embodiments B1 and B2, wherein the performing of the at least one action includes: one of blocking and allowing an execution of the next system call of the sequence of system calls based at least in part on a system call security criticality.
Embodiment B4. The method of Embodiment B3, wherein the blocking of the execution of the next system call is associated with blocking the security attack.
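The iterative "monitor, then update which call to watch next" loop of method B1 can be sketched as follows. The attack sequence used here is a hypothetical example, not one taken from the disclosure:

```python
# Illustrative sketch of method B1: track progress through a previously
# identified attack sequence and, after each observed call, update which
# system call to monitor next.
ATTACK_SEQUENCE = ["openat", "mmap", "mprotect", "execve"]  # hypothetical

class SequenceMonitor:
    def __init__(self, sequence):
        self.sequence = sequence
        self.position = 0  # index of the next call to watch

    def observe(self, syscall: str) -> str:
        """Advance on a match; block when the final call of the attack
        sequence is about to execute, otherwise keep monitoring."""
        if syscall != self.sequence[self.position]:
            self.position = 0          # not the attack pattern; reset
            return "allow"
        if self.position == len(self.sequence) - 1:
            return "block"             # full attack sequence observed
        self.position += 1             # update: monitor the next call
        return "allow"

mon = SequenceMonitor(ATTACK_SEQUENCE)
actions = [mon.observe(s) for s in ["openat", "mmap", "mprotect", "execve"]]
print(actions)  # -> ['allow', 'allow', 'allow', 'block']
```

Only the last call of the sequence is denied, so benign processes that happen to issue a prefix of the sequence are never disturbed; the simple reset-on-mismatch is a sketch-level simplification.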
As will be appreciated by one of skill in the art, the concepts described herein may be embodied as a method, data processing system, computer program product and/or computer storage media storing an executable computer program. Accordingly, the concepts described herein may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects all generally referred to herein as a “circuit” or “module.” Any process, step, action and/or functionality described herein may be performed by, and/or associated to, a corresponding module, which may be implemented in software and/or firmware and/or hardware. Furthermore, the disclosure may take the form of a computer program product on a tangible computer usable storage medium having computer program code embodied in the medium that can be executed by a computer. Any suitable tangible computer readable medium may be utilized including hard disks, CD-ROMs, electronic storage devices, optical storage devices, or magnetic storage devices.
Some embodiments are described herein with reference to flowchart illustrations and/or block diagrams of methods, systems and computer program products. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer (to thereby create a special purpose computer), special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable memory or storage medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be
performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
It is to be understood that the functions/acts noted in the blocks may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Although some of the diagrams include arrows on communication paths to show a primary direction of communication, it is to be understood that communication may occur in the opposite direction to the depicted arrows.
Computer program code for carrying out operations of the concepts described herein may be written in an object oriented programming language such as Python, Java® or C++. However, the computer program code for carrying out operations of the disclosure may also be written in conventional procedural programming languages, such as the "C" programming language. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer. In the latter scenario, the remote computer may be connected to the user's computer through a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Many different embodiments have been disclosed herein, in connection with the above description and the drawings. It will be understood that it would be unduly repetitious and obfuscating to literally describe and illustrate every combination and subcombination of these embodiments. Accordingly, all embodiments can be combined in any way and/or combination, and the present specification, including the drawings, shall be construed to constitute a complete written description of all combinations and subcombinations of the embodiments described herein, and of the manner and process of making and using them, and shall support claims to any such combination or subcombination.
It will be appreciated by persons skilled in the art that the embodiments described herein are not limited to what has been particularly shown and described herein above. In addition, unless mention was made above to the contrary, it should be noted that all of the
accompanying drawings are not to scale. A variety of modifications and variations are possible in light of the above teachings without departing from the scope of the following claims.
Claims
1. A host node (14) configured to protect a software application (36) from a security attack, the host node (14) being configured to execute a kernel process associated with the software application (36), the host node (14) being configured to: generate a state machine (50) based on a first sequence of system calls identified as corresponding to the security attack and a plurality of parameters associated with each system call of the first sequence of system calls; perform, using the state machine (50), a first inspection of one or more system calls of the first sequence of system calls and one or more parameters of the plurality of parameters corresponding to the one or more system calls; update the kernel process to monitor a next system call in a second sequence of system calls associated with the software application (36) based at least in part on the performed first inspection; perform a second inspection of the monitored next system call based at least on the updated kernel process; and block or allow an execution of the next system call in the second sequence of system calls based at least in part on the performed second inspection.
2. The host node (14) of Claim 1, wherein the host node (14) is further configured to one or more of: obtain system call information; generate a provenance graph based in part on the system call information; perform forensic analysis using the provenance graph; identify the first sequence of system calls as corresponding to the security attack based on the forensic analysis; and determine the plurality of parameters associated with each system call of the first sequence of system calls.
3. The host node (14) of any one of Claims 1 and 2, wherein the plurality of parameters include: a system call name; arguments; and a process name of a process invoking the corresponding system call.
4. The host node (14) of any one of Claims 1-3, wherein the state machine (50) includes a sequence of states (52) and a plurality of transition labels (56), each state (52) of the sequence of states (52) indicating one system call name and an action to be performed by the host node (14) if the corresponding state (52) is reached, a transition label (56) of the plurality of transition labels (56) linking at least a first state (52) to a second state (52) consecutive to the first state (52) and indicating at least one parameter associated with the second state (52).
5. The host node (14) of Claim 4, wherein the action includes one or more of: block a corresponding system call of the second sequence of system calls; warn about the corresponding system call; and stop the corresponding system call for release.
6. The host node (14) of any one of Claims 4 and 5, wherein the host node (14) is further configured to: if information associated with the next system call matches the transition label (56): determine whether to block or allow the execution of the next system call in the second sequence of system calls based on the action indicated by the second state (52); and trigger a transition to a subsequent state (52) of the sequence of states (52).
7. The host node (14) of any one of Claims 4-6, wherein the host node (14) is further configured to: if information associated with the next system call does not match the transition label (56): allow the execution of the next system call in the second sequence of system calls.
8. The host node (14) of any one of Claims 6 and 7, wherein the host node (14) is further configured to: obtain the information from a notification provided by a kernel program.
9. The host node (14) of Claim 8, wherein blocking or allowing the execution of the next system call includes: causing the kernel program to block or allow the execution of the next system call.
10. The host node (14) of any one of Claims 1-9, wherein updating the kernel process to monitor the next system call includes one or both of: updating a monitoring profile of the kernel process using a filter to match the next system call of the second sequence of system calls to one or more system calls of the first sequence of system calls; and triggering the blocking or allowing of the execution of the next system call when the next system call of the second sequence of system calls matches the one or more system calls of the first sequence of system calls.
11. The host node (14) of Claim 10, wherein performing the second inspection is based on a trigger signal associated with the monitoring profile.
12. The host node (14) of any one of Claims 1-11, wherein one or both of: blocking or allowing the execution of the next system call is based on a system call security criticality; and at least the blocking of the execution of the next system call is associated with protecting the software application (36) from the security attack.
13. The host node (14) of any one of Claims 1-12, wherein one or both of: the software application (36) is a containerized application comprising a plurality of containers (38) executable by the host node (14); and the plurality of containers (38) being configured to cause at least the next system call in the second sequence of system calls to be executed.
14. A method in a host node (14) configured to protect a software application (36) from a security attack, the host node (14) being configured to execute a kernel process associated with the software application (36), the method comprising: generating (S106) a state machine (50) based on a first sequence of system calls identified as corresponding to the security attack and a plurality of parameters associated with each system call of the first sequence of system calls;
performing (S108), using the state machine (50), a first inspection of one or more system calls of the first sequence of system calls and one or more parameters of the plurality of parameters corresponding to the one or more system calls; updating (S110) the kernel process to monitor a next system call in a second sequence of system calls associated with the software application (36) based at least in part on the performed first inspection; performing (S112) a second inspection of the monitored next system call based at least on the updated kernel process; and blocking or allowing (S114) an execution of the next system call in the second sequence of system calls based at least in part on the performed second inspection.
15. The method of Claim 14, wherein the method further comprises one or more of: obtaining system call information; generating a provenance graph based in part on the system call information; performing forensic analysis using the provenance graph; identifying the first sequence of system calls as corresponding to the security attack based on the forensic analysis; and determining the plurality of parameters associated with each system call of the first sequence of system calls.
16. The method of any one of Claims 14 and 15, wherein the plurality of parameters include: a system call name; arguments; and a process name of a process invoking the corresponding system call.
17. The method of any one of Claims 14-16, wherein the state machine (50) includes a sequence of states (52) and a plurality of transition labels (56), each state (52) of the sequence of states (52) indicating one system call name and an action to be performed by the host node (14) if the corresponding state (52) is reached, a transition label (56) of the plurality of transition labels (56) linking at least a first state (52) to a second state (52) consecutive to the first state (52) and indicating at least one parameter associated with the second state (52).
18. The method of Claim 17, wherein the action includes one or more of: block a corresponding system call of the second sequence of system calls; warn about the corresponding system call; and stop the corresponding system call for release.
19. The method of any one of Claims 17 and 18, wherein the method further comprises: if information associated with the next system call matches the transition label (56): determining whether to block or allow the execution of the next system call in the second sequence of system calls based on the action indicated by the second state (52); and triggering a transition to a subsequent state (52) of the sequence of states (52).
20. The method of any one of Claims 17-19, wherein the method further comprises: if information associated with the next system call does not match the transition label (56): allowing the execution of the next system call in the second sequence of system calls.
21. The method of any one of Claims 19 and 20, wherein the method further comprises: obtaining the information from a notification provided by a kernel program.
22. The method of Claim 21, wherein blocking or allowing the execution of the next system call includes: causing the kernel program to block or allow the execution of the next system call.
23. The method of any one of Claims 14-22, wherein updating the kernel process to monitor the next system call includes one or both of:
updating a monitoring profile of the kernel process using a filter to match the next system call of the second sequence of system calls to one or more system calls of the first sequence of system calls; and triggering the blocking or allowing of the execution of the next system call when the next system call of the second sequence of system calls matches the one or more system calls of the first sequence of system calls.
24. The method of Claim 23, wherein performing the second inspection is based on a trigger signal associated with the monitoring profile.
25. The method of any one of Claims 14-24, wherein one or both of: blocking or allowing the execution of the next system call is based on a system call security criticality; and at least the blocking of the execution of the next system call is associated with protecting the software application (36) from the security attack.
26. The method of any one of Claims 14-25, wherein one or both of: the software application (36) is a containerized application comprising a plurality of containers (38) executable by the host node (14); and the plurality of containers (38) being configured to cause at least the next system call in the second sequence of system calls to be executed.
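The claimed state machine (Claims 3-7) can be sketched concretely: each state names one system call and an action, and a transition label carries the parameters (system call name, arguments, invoking process name) that must match before the machine advances. The class names, sample states, paths, and process name below are illustrative assumptions, not part of the disclosure:

```python
# Illustrative sketch of the state machine (50) of Claims 3-7: states (52)
# pair a system call name with an action; transition labels (56) guard the
# advance from one state to the next.
from dataclasses import dataclass

@dataclass
class State:
    syscall: str  # system call name for this state (Claim 4)
    action: str   # 'block' | 'warn' | 'stop' (Claim 5)

@dataclass
class TransitionLabel:  # the parameters of Claim 3
    syscall: str
    args: tuple
    process: str

class AttackStateMachine:
    def __init__(self, states, labels):
        self.states = states   # sequence of states (52)
        self.labels = labels   # labels[i] guards entry into states[i]
        self.current = 0

    def on_notification(self, syscall, args, process):
        """Second inspection: decide block/allow for a call reported by a
        kernel program notification (Claims 1, 8-9)."""
        if self.current >= len(self.labels):
            return "allow"
        label = self.labels[self.current]
        if (syscall, tuple(args), process) != (label.syscall, label.args, label.process):
            return "allow"     # no match: execution proceeds (Claim 7)
        state = self.states[self.current]
        self.current += 1      # transition to the subsequent state (Claim 6)
        return "block" if state.action == "block" else "allow"

sm = AttackStateMachine(
    states=[State("openat", "warn"), State("execve", "block")],
    labels=[TransitionLabel("openat", ("/etc/shadow",), "nginx"),
            TransitionLabel("execve", ("/bin/sh",), "nginx")],
)
print(sm.on_notification("openat", ["/etc/shadow"], "nginx"))  # -> allow (warn only)
print(sm.on_notification("execve", ["/bin/sh"], "nginx"))      # -> block
```

Note how only a full match on all three parameters advances the machine, so a benign process issuing the same system call with different arguments is left untouched; only the final, security-critical call of the identified attack sequence is blocked.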
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202263399386P | 2022-08-19 | 2022-08-19 | |
US63/399,386 | 2022-08-19 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024038417A1 true WO2024038417A1 (en) | 2024-02-22 |
Family
ID=87889757
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2023/058297 WO2024038417A1 (en) | 2022-08-19 | 2023-08-18 | Dynamic system calls-level security defensive system for containerized applications |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2024038417A1 (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11301563B2 (en) * | 2019-03-13 | 2022-04-12 | International Business Machines Corporation | Recurrent neural network based anomaly detection |
Non-Patent Citations (3)
Title |
---|
"Flow-graph analysis of system calls for exploit detection ED - Darl Kuhn", IP.COM, IP.COM INC., WEST HENRIETTA, NY, US, 27 June 2018 (2018-06-27), pages 1 - 7, XP013179205, ISSN: 1533-0001 * |
CLAUDIO CANELLA ET AL: "SFIP: Coarse-Grained Syscall-Flow-Integrity Protection in Modern Systems", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 28 February 2022 (2022-02-28), pages 1 - 15, XP091166092 * |
FORREST S ET AL: "A sense of self for Unix processes", SECURITY AND PRIVACY, 1996. PROCEEDINGS., 1996 IEEE SYMPOSIUM ON OAKLAND, CA, USA 6-8 MAY 1996, LOS ALAMITOS, CA, USA,IEEE COMPUT. SOC, US, 6 May 1996 (1996-05-06), pages 120 - 128, XP010164931, ISBN: 978-0-8186-7417-4, DOI: 10.1109/SECPRI.1996.502675 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12050697B2 (en) | Profiling of spawned processes in container images and enforcing security policies respective thereof | |
US9690606B1 (en) | Selective system call monitoring | |
US10445502B1 (en) | Susceptible environment detection system | |
US20240291868A1 (en) | Identifying serverless functions with over-permissive roles | |
US20220239690A1 (en) | Ai/ml approach for ddos prevention on 5g cbrs networks | |
US20240250977A1 (en) | Protecting serverless applications | |
US10586042B2 (en) | Profiling of container images and enforcing security policies respective thereof | |
US10148697B2 (en) | Unified host based security exchange between heterogeneous end point security agents | |
US10491627B1 (en) | Advanced malware detection using similarity analysis | |
US9438613B1 (en) | Dynamic content activation for automated analysis of embedded objects | |
US11689562B2 (en) | Detection of ransomware | |
US8997231B2 (en) | Preventive intrusion device and method for mobile devices | |
US10581879B1 (en) | Enhanced malware detection for generated objects | |
US9594912B1 (en) | Return-oriented programming detection | |
US20130347111A1 (en) | System and method for detection and prevention of host intrusions and malicious payloads | |
US10313370B2 (en) | Generating malware signatures based on developer fingerprints in debug information | |
US20170053120A1 (en) | Thread level access control to socket descriptors and end-to-end thread level policies for thread protection | |
US20150128206A1 (en) | Early Filtering of Events Using a Kernel-Based Filter | |
US10631168B2 (en) | Advanced persistent threat (APT) detection in a mobile device | |
US11550916B2 (en) | Analyzing multiple CPU architecture malware samples | |
WO2024038417A1 (en) | Dynamic system calls-level security defensive system for containerized applications | |
CN114726579B (en) | Method, device, equipment, storage medium and program product for defending network attack | |
CN115174192A (en) | Application security protection method and device, electronic equipment and storage medium | |
US11657143B2 (en) | Request control device, request control method, and request control program | |
US10599845B2 (en) | Malicious code deactivating apparatus and method of operating the same |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application. Ref document number: 23764388; Country of ref document: EP; Kind code of ref document: A1 |