WO2016079602A1 - Malicious code protection for computer systems based on process modification - Google Patents

Malicious code protection for computer systems based on process modification Download PDF

Info

Publication number
WO2016079602A1
WO2016079602A1 (PCT/IB2015/053394)
Authority
WO
WIPO (PCT)
Prior art keywords
address
library module
memory
computing process
code
Prior art date
Application number
PCT/IB2015/053394
Other languages
French (fr)
Inventor
Michael Gorelik
Mordechai GURI
David Mimran
Gabriel Kedma
Ronen YEHOSHUA
Original Assignee
Morphisec Information Security Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Morphisec Information Security Ltd. filed Critical Morphisec Information Security Ltd.
Priority to EP15723305.7A priority Critical patent/EP3123311B8/en
Priority to US15/324,656 priority patent/US10528735B2/en
Publication of WO2016079602A1 publication Critical patent/WO2016079602A1/en
Priority to IL249962A priority patent/IL249962B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/55Detecting local intrusion or implementing counter-measures
    • G06F21/56Computer malware detection or handling, e.g. anti-virus arrangements
    • G06F21/566Dynamic detection, i.e. detection performed at run-time, e.g. emulation, suspicious activities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/52Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/52Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow
    • G06F21/54Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow by adding security routines or objects to programs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2221/00Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/03Indexing scheme relating to G06F21/50, monitoring users, programs or devices to maintain the integrity of platforms
    • G06F2221/034Test or assess a computer or a system

Definitions

  • Embodiments described herein generally relate to detecting and/or neutralizing malicious code or other security threats on computer systems.
  • Modern cyber attackers employ a variety of attack patterns, ultimately aimed at running the attacker's code on the target machine without being noticed.
  • The traditional attack pattern requires an executable file that arrives at the target machine through email, through a download from a website, from a neighboring local host, or from some sort of removable media.
  • When the malicious file gets executed, it spawns a full-fledged process of its own. Subsequently, the malicious process may inject some form of code into the memory-space of another running process.
  • Newer attack patterns are based on vulnerabilities found in various useful programs, such as ADOBE® ACROBAT®.
  • In the case of ADOBE® ACROBAT®, the malicious code (or "payload") is embedded within a portable data file (PDF) document.
  • The PDF document also contains a chunk of malformed data, designed to exploit the given vulnerability. This chunk is crafted to cause some kind of overflow or similar exception when the file is being read by the vulnerable program.
  • When the program or the operating system seeks to recover, it returns, instead, to a tiny piece of machine code (or primary shellcode) supplied by the malformed data chunk.
  • This primary shellcode takes control of the running program (i.e., the process), completing the so-called "exploit” of the given vulnerability. Subsequently, the primary shellcode loads whatever payload (special-purpose malicious code) is available, into the context of the running process.
  • In a so-called 'remote' attack, the vulnerable program is associated with some network port, either as a server or as a client.
  • The exploit happens when the vulnerable program tries to process a chunk of malformed input, essentially in the same manner as described above.
  • In this case, when the primary shellcode takes control of the running process, it may choose to download secondary shellcode or payload from the network.
  • In both the local and the remote vulnerability-based attacks, the malicious code running within the originally breached process may proceed by injecting code into the running processes of other programs.
  • FIG. 1 depicts components of a computer system in accordance with an embodiment.
  • FIG. 2 depicts a layout of a binary image in accordance with an embodiment.
  • FIG. 3 depicts a flowchart of an example method for neutralizing runtime in- memory exploits of a process in accordance with an example embodiment.
  • FIG. 4A depicts a block diagram of a main memory including a process in accordance with an embodiment.
  • FIG. 4B depicts a block diagram of a main memory including a process in accordance with another embodiment.
  • FIG. 5 depicts a flowchart of an example method for handling procedure calls for procedures in a library module after a computing process has been modified by injected code in accordance with an embodiment.
  • FIG. 6 depicts a block diagram of a main memory including a process in accordance with another embodiment.
  • FIG. 7 depicts a flowchart of an example method for recovering execution of a computing process in response to a malicious code attack in accordance with an example embodiment.
  • FIG. 8 depicts a block diagram of a main memory in accordance with another embodiment.
  • FIG. 9 depicts a block diagram of a computer system that may be configured to perform techniques disclosed herein.
  • references in the specification to "one embodiment,” “an embodiment,” “an example embodiment,” or the like, indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Furthermore, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to implement such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
  • Malicious code (e.g., malware), including injected shellcode, relies on some basic assumptions regarding the runtime context of the target in order to initialize itself and to execute its payload properly.
  • In general, shellcode injected into a running process has to perform some initial steps before it can proceed. It should do at least part of the initiation steps that the system's default loader would normally do when creating a running process from an executable file (e.g., a binary image).
  • In particular, it is crucial for the injected code to obtain the addresses of certain shared libraries (e.g., dynamic-link libraries (DLLs)) as they are mapped into the address space of the running process, and to further obtain the addresses of the procedures (or functions) that it intends to use.
  • In the case where the vulnerability resides inside a shared library, the injected code only needs to find the specific functionality within that library and does not need to locate the library itself.
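  • For illustration only (not part of the patented approach), the minimal C++ sketch below shows the kind of address resolution described above: a process resolving a mapped library and one of its exported procedures by name on Windows. Injected shellcode must recover the same information without the loader's help, which is why these addresses are its key dependency.

```cpp
// Minimal illustrative sketch (not from the disclosure): resolving a mapped
// library and one of its exported procedures by name, the address information
// injected shellcode must otherwise recover on its own.
#include <windows.h>
#include <cstdio>

int main() {
    // kernel32.dll is mapped into virtually every Windows process.
    HMODULE k32 = GetModuleHandleW(L"kernel32.dll");
    if (!k32) return 1;

    // GetProcAddress consults the module's export table to find the procedure.
    FARPROC proc = GetProcAddress(k32, "CreateFileW");

    std::printf("kernel32.dll base: %p\n", static_cast<void*>(k32));
    std::printf("CreateFileW:       %p\n", reinterpret_cast<void*>(proc));
    return 0;
}
```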
  • Various approaches are described herein for, among other things, neutralizing and/or detecting attacks by such malicious code. This may be achieved, for example, by modifying one or more instances of a protected process upon loading by injecting a runtime protector that (a) creates a copy of each of the process' imported libraries and maps the copy into a random address inside the process' address space (to form a randomized "shadow” library), (b) replaces the procedure addresses within the original libraries, to point at a stub (thereby forming a "stub” library), and (c) intercepts procedure calls for late library loading and creates a shadow library and a stub library for such libraries.
  • In accordance with one embodiment, morphing is performed dynamically during initialization of the process, where librar(ies) loaded during process initialization are morphed. In accordance with another embodiment, morphing is performed dynamically during runtime, where librar(ies) loaded during runtime (i.e., after process initialization is complete) are morphed.
  • Various embodiments described herein offer at least the following additional advantages: (a) when the presence of malicious code is detected, the malicious code can be sandboxed or otherwise diverted to a secure environment, to deceive the attacker and/or to learn the malware's behavior and intentions; (b) a user or administrator can define, for a given process, a set of procedure calls that are prohibited under any circumstances (also referred to herein as an "API Firewall"); (c) the overall impact on the system's performance may be relatively low, particularly compared to runtime behavioral monitoring, which tries to detect malicious code rather than preempt it; and (d) no prior knowledge of the current malware is assumed, therefore prevention of new, unknown, or zero-day attacks is possible.
  • Protective techniques such as ASLR and DEP can be applied in concert with the techniques described herein to gain optimal protection.
  • Embodiments described herein refer to morphing techniques associated with library(ies) for the sake of brevity. However, as should be clear to any person skilled in the art, this is just one possible embodiment. Similar embodiments may protect practically all kinds of codebase elements, including, but not limited to, DLL extensions, Component Object Models (COMs), etc.
  • A method is described herein.
  • The method includes determining that a process loader of an operating system has initiated the creation of a computing process.
  • Code is injected in the computing process that is configured to modify the computing process by determining that at least one library module of the computing process is to be loaded into memory, storing the at least one library module at a first address in the memory, copying the at least one library module stored at the first address to a second address in the memory that is different than the first address, and modifying the at least one library module stored at the first address into a stub library module.
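  • A hedged C++ sketch of the memory operations implied by this method step is shown below: copying a mapped library image to a new, system-chosen address (standing in for the second, randomized address) and rendering the original image non-accessible. It omits relocation and import fix-up entirely; the function name and parameters are illustrative, not taken from the disclosure.

```cpp
// Hedged sketch of the memory operations behind a "shadow"/"stub" split, under
// the simplifying assumption that a straight byte copy of the mapped image is
// usable (real code would also handle relocations and import thunks). The
// function name and parameters are illustrative.
#include <windows.h>
#include <cstring>

void* CreateShadowCopy(void* originalBase, SIZE_T imageSize) {
    // Let the OS pick a fresh region; its address plays the role of the
    // "second address" that legitimate callers will be retargeted to.
    void* shadow = VirtualAlloc(nullptr, imageSize,
                                MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (!shadow) return nullptr;
    std::memcpy(shadow, originalBase, imageSize);

    DWORD oldProtect = 0;
    // Make the shadow copy executable so calls into it behave like the original.
    VirtualProtect(shadow, imageSize, PAGE_EXECUTE_READ, &oldProtect);

    // Turn the original image into a "stub": any access now faults, which the
    // scheme described above treats as a sign of injected code.
    VirtualProtect(originalBase, imageSize, PAGE_NOACCESS, &oldProtect);
    return shadow;
}
```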
  • A system is also described herein.
  • The system includes one or more processing units and a memory coupled to the one or more processing units, the memory storing software modules for execution by the one or more processing units.
  • The software modules include a runtime protector configured to load a library module for the computing process at a first address in the memory, copy the library module stored at the first address to a second address in the memory that is different than the first address, and modify the library module stored at the first address into a stub library module.
  • Code that accesses the library module stored at the second address is designated as being non-malicious code.
  • Code attempting to access the library module stored at the first address is designated as being malicious code.
  • A computer-readable storage medium having program instructions recorded thereon that, when executed by a processing device, perform a method for modifying a computing process is further described herein.
  • The method includes loading a library module for the computing process at a first address in the memory, copying the library module stored at the first address to a second, randomized address in the memory that is different than the first address, and modifying the library module stored at the first address into a stub library module.
  • Code that accesses the library module stored at the second address is designated as being non-malicious code.
  • Code attempting to access the library module stored at the first address is designated as being malicious code.
  • FIG. 1 depicts components of a computer system 100 in accordance with one embodiment that detects and/or neutralizes the execution of malicious code associated with a computing process executing thereon.
  • computer system 100 includes one or more processor(s) 102 (also called central processing units, or CPUs), a primary or main memory 104, and one or more secondary storage device(s) 106.
  • processor(s) 102, main memory 104, and secondary storage device(s) 106 are connected to a communication interface 108 via a suitable interface, such as one or more communication buses.
  • processor(s) 102 can simultaneously operate multiple computing threads, and in some embodiments, processor(s) 102 may each comprise one or more processor core(s).
  • Examples of main memory 104 include a random access memory (RAM) (e.g., dynamic RAM (DRAM), synchronous DRAM (SDRAM), dual-data rate RAM (DDRRAM), etc.).
  • Secondary storage device(s) 106 include, for example, one or more hard disk drives, one or more memory cards, one or more memory sticks, a floppy disk, magnetic tape, compact disk, DVD, optical storage disk, or any other computer data storage device.
  • main memory 104 stores an operating system 110.
  • Operating system 110 may manage one or more hardware components (e.g., processor(s) 102, main memory 104, secondary storage device(s) 106, etc.) and software executing on computer system 100.
  • Example hardware components of computer system 100 are described in detail below in reference to FIG. 9.
  • Operating system 110 may have one or more components that perform certain tasks relating to the execution of software on computer system 100.
  • One such component is process loader 112.
  • Process loader 112 is configured to initiate creation of a computing process (or "process") 114 in main memory 104.
  • Process 114 is an instance of a computer program being executed by processor(s) 102.
  • the computer program may comprise an application program (or "application”), a system program, or other computer program being executed by processor(s) 102.
  • the computer program is embodied in instructions and/or data included in a binary image (e.g., binary image 116).
  • process loader 112 loads (or "maps") binary image 116, which is stored in secondary storage device(s) 106, into an address space allocated for process 114 in main memory 104 based on information included in binary image 116.
  • the binary image mapped into main memory 104 is represented in FIG. 1 by mapped binary image 118.
  • Process loader 112 builds up an initial execution context of process 114.
  • Computer program execution (i.e., process 114) begins when processor(s) 102 commence executing the first instruction of the computer program.
  • Process 114 may comprise one or more threads of execution that execute a subset of instructions concurrently.
  • the execution context of process 114 may comprise information about such resource allocation, a current state of the program execution, an instruction that is to be executed next, and other information related to the program execution.
  • the computer program execution continues until processor(s) 102 execute a termination or halt instruction.
  • FIG. 2 is an example layout of binary image 116 in accordance with an embodiment.
  • Executable binary formats for binary image 116 include, but are not limited to, the portable executable (PE) format (e.g., files having an .exe, .dll, and a .sys extension), the Executable and Linkable Format (ELF), the Mach object (Mach-O) file format, etc.
  • binary image 116 may comprise one or more headers 202 and/or one or more sections 203, which process loader 112 uses to map binary image 116 (or portions thereof) into main memory 104.
  • Header(s) 202 may comprise information regarding the layout and properties of binary image 116 (e.g., the names, number and/or location of section(s) 203 within binary image 116). Header(s) 202 may also include a base address (also referred to as an image base) that specifies a default address at which binary image 116 is to be loaded into main memory. It is noted, however, that binary image 116 may be loaded at a different address. For example, if operating system 110 supports ASLR (which is a technique used to guard against buffer-overflow attacks by randomizing the location where binary images are loaded into main memory 104), the address at which binary image 116 is loaded into main memory 104 will be a randomized address.
  • Section(s) 203 of binary image 116 may comprise an executable code section 204, a data section 206, a resources section 208, an export data section 210, an import data section 212, and/or a relocation data section 214.
  • Executable code section 204 comprises instructions that correspond to the computer program to be executed by processor(s) 102.
  • the instructions may be machine code instructions that are to be executed by processor(s) 102 after binary image 116 is loaded into main memory 104.
  • Data section 206 comprises uninitialized data required for executing the computer program. Such data includes, but is not limited to, static and/or global variables.
  • Resources section 208 comprises resource information that comprises readonly data required for executing the computer program. Such read-only data includes, but is not limited to, icons, images, menus, strings, etc. The read-only data may be stored in one or more tables (i.e., resource table(s)).
  • Export data section 210 may include information about the names and/or references of procedures exportable to other binary image(s) (e.g., DLL(s)).
  • the export data may include an export directory that defines the names of exportable procedures included in binary image 116.
  • the addresses of the exportable procedures may be stored in a table (e.g., an export address table (EAT)).
  • The addresses of such exportable procedures may be provided to other binary images in response to the issuance by such other binary images of a procedure call (e.g., GetProcAddress) that identifies the procedure.
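  • For context, the following sketch (illustrative only) enumerates a module's exported procedure names directly from its export directory, the structure behind the EAT that GetProcAddress consults.

```cpp
// Illustrative sketch: enumerating a module's exported procedure names from
// its export directory, the structure behind the EAT that GetProcAddress
// consults. Only the first few names are printed.
#include <windows.h>
#include <cstdio>

void DumpExports(HMODULE module) {
    auto base = reinterpret_cast<BYTE*>(module);
    auto dos  = reinterpret_cast<IMAGE_DOS_HEADER*>(base);
    auto nt   = reinterpret_cast<IMAGE_NT_HEADERS*>(base + dos->e_lfanew);
    const IMAGE_DATA_DIRECTORY& dir =
        nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_EXPORT];
    if (dir.VirtualAddress == 0) return;  // module exports nothing

    auto exports = reinterpret_cast<IMAGE_EXPORT_DIRECTORY*>(base + dir.VirtualAddress);
    auto names   = reinterpret_cast<DWORD*>(base + exports->AddressOfNames);
    for (DWORD i = 0; i < exports->NumberOfNames && i < 5; ++i) {
        std::printf("%s\n", reinterpret_cast<char*>(base + names[i]));
    }
}

int main() {
    DumpExports(GetModuleHandleW(L"kernel32.dll"));
    return 0;
}
```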
  • Import data section 212 may include information about the names and/or references of procedures that are imported by binary image 116.
  • Import data section 212 may comprise an import directory, which includes information about other binary image(s) (e.g., DLL(s)) from which binary image 116 imports procedures.
  • the information may include a location (e.g., an address) or a pointer to a location of a binary image that includes at least one procedure to be imported.
  • the information may further include an import address table (IAT) that includes the name(s) of procedures to be imported and/or pointers to the procedures to be imported.
  • process loader 112 may check the import data (e.g., the IAT) to determine if one or more additional binary images (e.g., libraries, such as DLLs) are required for process 114.
  • Process loader 112 may map any such required binary image(s) into the address space of process 114.
  • Process loader 112 may recursively parse the respective IATs of each required binary image to determine if further binary image(s) are required and map these further binary image(s) into the address space of process 114.
  • Process loader 112 replaces the pointers in the respective IATs with the actual addresses at which the procedures are loaded into main memory 104 as the procedures are imported. By using pointers, process loader 112 does not need to change the addresses of imported procedures everywhere in code of the computer program that such imported procedures are called. Instead, process loader 112 simply has to add the correct address(es) to a single place (i.e., the IAT), which is referenced by the code.
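  • The sketch below (illustrative, not from the disclosure) walks the current module's import descriptors and prints each imported module's name; the FirstThunk field of each descriptor points at the IAT slots that the loader, as described above, fills with resolved procedure addresses, and that a runtime protector could later retarget.

```cpp
// Illustrative sketch (not from the disclosure): walking the current module's
// import descriptors. Each descriptor's FirstThunk points at the IAT slots the
// loader fills with resolved procedure addresses; a runtime protector could
// overwrite those slots with shadow-library addresses instead.
#include <windows.h>
#include <cstdio>

void DumpImports(BYTE* base) {
    auto dos = reinterpret_cast<IMAGE_DOS_HEADER*>(base);
    auto nt  = reinterpret_cast<IMAGE_NT_HEADERS*>(base + dos->e_lfanew);
    const IMAGE_DATA_DIRECTORY& dir =
        nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_IMPORT];
    if (dir.VirtualAddress == 0) return;

    auto desc = reinterpret_cast<IMAGE_IMPORT_DESCRIPTOR*>(base + dir.VirtualAddress);
    for (; desc->Name != 0; ++desc) {
        // desc->Name is an RVA to the imported module's name;
        // base + desc->FirstThunk is the start of that module's IAT slots.
        std::printf("imports from %s (IAT at %p)\n",
                    reinterpret_cast<char*>(base + desc->Name),
                    static_cast<void*>(base + desc->FirstThunk));
    }
}

int main() {
    DumpImports(reinterpret_cast<BYTE*>(GetModuleHandleW(nullptr)));
    return 0;
}
```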
  • Relocation data section 214 comprises relocation data that enables process loader 112 to modify addresses associated with code and data items (respectively included in executable code section 204 and data section 206) specified in binary image 116.
  • When a binary image is created (e.g., by a computer program, such as a linker), an assumption is made that the binary image is to be mapped to a base address, as described above.
  • The linker inserts the real addresses (relative to the base address) of code and data items in the binary image. If for some reason the binary image is loaded at an address other than the image base (e.g., in the event that the image base is already occupied or due to an ASLR scheme being in place), these real addresses will be invalid.
  • the relocation data enables process loader 112 to modify these addresses in binary image 116 so that they are valid.
  • the relocation data may include a relocation table, which includes a list of pointers that each point to a real address of a code and/or data item.
  • process loader 112 updates these pointers. Thereafter, process loader 112 initiates the computer program by passing control to the program code loaded into main memory 104.
  • system 100 is configured to neutralize and/or intercept runtime in-memory exploits of processes (e.g., exploits performed by malicious code). Such exploits are carried out by identifying the memory location of a specific known object (e.g., of a procedure or data object having a predetermined fixed address) in a process' address space in main memory and using this location to calculate the location of other procedures that are required to fulfill the exploit.
  • To neutralize such exploits, system 100 may include a modification engine 120.
  • Modification engine 120 may be configured to modify (or "morph") process 114 to include a runtime protector 122 that causes the location of the in-memory data and code segments to be changed upon being loaded into main memory 104 in a random manner and updates legitimate code segments (i.e., non-malicious code segments) with these changes, thereby preventing malicious code from accessing such data and code segments.
  • runtime protector 122 maintains the original in-memory data and code segments and intercepts any access to these segments to detect malicious activity.
  • modification engine 120 may be configured to intercept a process creation event issued by operating system 110 (or a component thereof) for process 114. Modification engine 120 may verify that process 114 is designated for protection. For example, modification engine 120 may check that process 114 is included in a list of processes that should be protected. In response to determining that process 114 is to be protected, modification engine 120 causes the creation of the process to be suspended and injects runtime protector 122 into process 114.
  • Runtime protector 122 may be a library (e.g., a DLL) that is injected into the address space of process 114.
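  • One conventional way such an injection could be performed is sketched below in C++; the patent does not prescribe this exact mechanism, and the DLL path "protector.dll" and the target executable are purely illustrative. The target is created suspended, the protector DLL is loaded into it via a remote thread, and the process is then resumed, mirroring the suspend/inject/release flow described above.

```cpp
// Hedged sketch of one conventional suspend/inject/resume flow; the patent does
// not prescribe this mechanism, and "protector.dll" and the target executable
// are illustrative placeholders.
#include <windows.h>
#include <cstring>

bool InjectDll(HANDLE process, const char* dllPath) {
    // Copy the DLL path into the target and have a remote thread call
    // LoadLibraryA on it. kernel32.dll is mapped at the same address in
    // processes of the same bitness, so the local address is usable remotely.
    SIZE_T len = std::strlen(dllPath) + 1;
    void* remote = VirtualAllocEx(process, nullptr, len,
                                  MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (!remote) return false;
    if (!WriteProcessMemory(process, remote, dllPath, len, nullptr)) return false;

    auto loadLibrary = reinterpret_cast<LPTHREAD_START_ROUTINE>(
        GetProcAddress(GetModuleHandleW(L"kernel32.dll"), "LoadLibraryA"));
    HANDLE thread = CreateRemoteThread(process, nullptr, 0, loadLibrary,
                                       remote, 0, nullptr);
    if (!thread) return false;
    WaitForSingleObject(thread, INFINITE);
    CloseHandle(thread);
    return true;
}

int main() {
    STARTUPINFOA si{};
    si.cb = sizeof(si);
    PROCESS_INFORMATION pi{};

    // Create the target suspended so the protector is in place before the
    // first instruction of the program runs.
    if (CreateProcessA("C:\\Windows\\System32\\notepad.exe", nullptr, nullptr,
                       nullptr, FALSE, CREATE_SUSPENDED, nullptr, nullptr,
                       &si, &pi)) {
        InjectDll(pi.hProcess, "protector.dll");  // illustrative DLL name
        ResumeThread(pi.hThread);                 // release the process
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    return 0;
}
```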
  • Runtime protector 122 may be configured to determine whether any library modules (e.g., DLLs) have already been loaded into the address space of process 114. In response to determining that library module(s) have already been loaded into the address space of process 114, runtime protector 122 copies the library module(s) into a different, random memory range (referred to as a "shadow" library). The library module(s) loaded into the original address space are modified into a stub library (also referred to as a "shallow library”), which provides stub procedures or functions. Runtime protector 122 updates the IAT mapped into the address space of process 114 with the addresses corresponding to the random memory range. Thereafter, modification engine 120 causes process loader 112 to be released to allow process loader 112 to finalize the process creation for process 114.
  • Runtime protector 122 may also be configured to create shadow and stub libraries for library module(s) that are loaded after process finalization (e.g., "late” libraries).
  • runtime protector 122 may be configured to hook memory mapping procedure calls (e.g., that map libraries to a particular section of main memory 104, such as NtMapViewOfSection) that load "late" library module(s) into main memory 104.
  • Upon intercepting such procedure calls, runtime protector 122 allows the call to be completed, thereby resulting in the library module(s) being loaded at their intended addresses in main memory 104.
  • Thereafter, runtime protector 122 creates shadow and stub libraries for such library module(s).
  • Runtime protector 122 modifies the library module(s) loaded into the original address space into stub libraries by causing operating system 110 to designate the original address spaces at which executable portions (e.g., executable code) of the library module(s) are located as being non-accessible regions.
  • Modification engine 120 may also inject an exception handler 124 into the address space of process 114, which intercepts an exception thrown by operating system 110 when code (e.g., malicious code) attempts to access the non-accessible region (i.e., the stub library).
  • Upon detecting such an exception, runtime protector 122 may be configured to redirect the malicious code to an isolated environment and/or kill a thread spawned by the malicious code.
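  • A hedged sketch of how such an exception handler might be wired up is shown below, using a Windows vectored exception handler to observe access violations that fall inside an assumed stub-library range; the globals and the response (logging and ending the thread) are illustrative simplifications of the isolate/kill behavior described above.

```cpp
// Hedged sketch of an exception-handler-based detector: a vectored exception
// handler observes access violations that land inside an assumed stub-library
// range. The globals and the response (log and end the thread) are
// illustrative simplifications of the isolate/kill behavior described above.
#include <windows.h>
#include <cstdio>

static BYTE*  g_stubBase = nullptr;  // illustrative: base of the stub library
static SIZE_T g_stubSize = 0;        // illustrative: size of the stub region

LONG CALLBACK StubAccessHandler(EXCEPTION_POINTERS* info) {
    const EXCEPTION_RECORD* rec = info->ExceptionRecord;
    if (rec->ExceptionCode == EXCEPTION_ACCESS_VIOLATION) {
        auto addr = reinterpret_cast<BYTE*>(rec->ExceptionInformation[1]);
        if (addr >= g_stubBase && addr < g_stubBase + g_stubSize) {
            // Only code unaware of the shadow copy ends up here, which the
            // scheme above treats as evidence of injected (malicious) code.
            std::printf("stub library touched at %p\n", static_cast<void*>(addr));
            ExitThread(1);  // e.g., kill the offending thread
        }
    }
    return EXCEPTION_CONTINUE_SEARCH;  // unrelated exceptions pass through
}

void InstallStubHandler() {
    // First-chance handler so the access is observed before other handlers.
    AddVectoredExceptionHandler(1, StubAccessHandler);
}
```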
  • In accordance with an embodiment, malicious code is detected by a user-configurable API firewall.
  • For example, a user or administrator may be enabled (e.g., using a graphical user interface (GUI)) to define, for any given process, a set of procedure calls that are prohibited under any circumstances.
  • System 100 may operate in various ways to neutralize runtime in-memory exploits of a process.
  • FIG. 3 depicts a flowchart 300 of an example method for neutralizing runtime in-memory exploits of a process, according to an example embodiment.
  • System 100 shown in FIG. 1 may operate according to flowchart 300.
  • flowchart 300 is described with reference to FIGS. 4A-4B.
  • FIGS. 4A-4B show block diagrams 400A and 400B of main memory 402, according to an embodiment.
  • Main memory 402 is an example of main memory 104 shown in FIG. 1.
  • operating system 404, process loader 406, modification engine 408, process 410 and runtime protector 412 are examples of operating system 110, process loader 112, modification engine 120, process 114 and runtime protector 122, as shown in FIG. 1. Further structural and operational embodiments will be apparent to persons skilled in the relevant art(s) based on the discussion regarding flowchart 300. Flowchart 300 and main memory 402 are described as follows.
  • Flowchart 300 begins with step 302.
  • In step 302, a determination is made that a process loader of an operating system has initiated the creation of a computing process.
  • For example, modification engine 408 determines that process loader 406 of operating system 404 has initiated the creation of process 410.
  • process loader 406 may initiate creation of process 410 in response to receiving a process creation event from operating system 404 (or from one or more other components of operating system 404).
  • Modification engine 408 is configured to detect such events (e.g., event 414).
  • modification engine 408 may verify that the corresponding process being created (e.g., process 410) is included in a list of processes to be protected.
  • modification engine 408 may query a database or search a file containing the list to determine whether the corresponding process is to be protected.
  • In a subsequent step, code that is configured to modify the computing process is injected in the computing process in response to determining that the process loader has initiated the creation of the computing process.
  • modification engine 408 issues a procedure call 416 to inject code (e.g., runtime protector 412) into process 410.
  • runtime protector 412 is a DLL injected into the address space of process 410.
  • The injected code is configured to modify the computing process in accordance with the steps described below.
  • runtime protector 412 is configured to determine that at least one library module of process 410 is to be loaded into main memory 402.
  • runtime protector 412 determines that at least one library module of process 410 is to be loaded into main memory 402 by hooking a procedure call 420 initiated by process loader 406.
  • Procedure call 420 may be configured to map at least one library module into main memory 402.
  • Procedure call 420 may identify the at least one library module and a section of main memory 402 at which the at least one library module is to be loaded.
  • procedure call 420 is an NtMapViewOfSection procedure call.
  • The at least one library module is stored at a first address in the memory.
  • For example, library module 422 is stored at a first address (0xXX).
  • The first address may be specified by procedure call 420.
  • The at least one library module stored at the first address is copied to a second address in the memory that is different than the first address.
  • For example, runtime protector 412 copies library module 422 stored at the first address to a second address (0xYY) in main memory 402 that is different than the first address (represented by shadow library module 424).
  • The second address is a randomized address determined by runtime protector 412.
  • The at least one library module stored at the first address is modified into a stub library module.
  • For example, runtime protector 412 modifies library module 422 (as shown in FIG. 4A) into a stub library module 422' (as shown in FIG. 4B) by causing one or more executable portions of library module 422 to be designated as being non-accessible.
  • For instance, runtime protector 412 may issue a command 426 to operating system 404 that causes operating system 404 to designate the executable portion(s) of library module 422 as non-accessible.
  • FIG. 5 depicts a flowchart 500 of an example method for handling procedure calls for procedures in the at least one library module after the computing process has been modified by the injected code, according to an example embodiment.
  • System 100 shown in FIG. 1 may operate according to flowchart 500.
  • flowchart 500 is described with reference to FIG. 6.
  • FIG. 6 shows a block diagram 600 of main memory 602, according to an embodiment.
  • Main memory 602 is similar to main memory 402, as shown in FIGS. 4A and 4B.
  • Operating system 604, process loader 606, modification engine 608, process 610, runtime protector 612, stub library module 622 and shadow library module 624 are examples of operating system 404, process loader 406, modification engine 408, process 410, runtime protector 412, stub library module 422 and shadow library module 424, as shown in FIGS. 4A and 4B. Further structural and operational embodiments will be apparent to persons skilled in the relevant art(s) based on the discussion regarding flowchart 500. Flowchart 500 and main memory 602 are described as follows.
  • Flowchart 500 begins with step 502.
  • a first procedure call for a procedure in the at least one library module is caused to reference the second address of the at least one library module, the first procedure call being included in at least one of a binary image from which the computing process is created and one or more other library modules imported for the computing process.
  • a first procedure call 630 for a procedure 632 is caused to reference a library module at the second address (i.e., shadow library module 624).
  • First procedure call 630 is initially included in at least one of a binary image (e.g., binary image 116 of FIG. 1) from which process 610 is created or another library module imported for the binary image.
  • First procedure call 630 is loaded into the address space of process 610 of main memory 602 during the binary image mapping process described above.
  • the first procedure call for the procedure in the at least one library module is caused to reference the second address of the at least one library module by updating a data structure that stores an address at which the at least one library module is loaded into the memory with the second address of the at least one library module, thereby causing the first procedure call to reference the second address of the at least one library module.
  • runtime protector 612 is configured to update data structure 634 with the second address of the at least one library module.
  • In accordance with an embodiment, the data structure is the IAT of the binary image (i.e., binary image 116, as shown in FIG. 1) that is mapped into the address space of process 610.
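  • As a hedged illustration of such an IAT update, the sketch below overwrites a single IAT slot so that legitimate callers resolve to a shadow-library address; "slot" and "shadowProcAddr" are hypothetical inputs, and locating the correct slot would require walking the import descriptors as sketched earlier.

```cpp
// Hedged sketch of retargeting a single IAT slot; "slot" and "shadowProcAddr"
// are hypothetical inputs, and locating the correct slot would require walking
// the import descriptors as sketched earlier.
#include <windows.h>

void RetargetIatSlot(void** slot, void* shadowProcAddr) {
    DWORD oldProtect = 0;
    // IAT pages are typically read-only once the loader has finished, so make
    // the slot writable, patch it, and restore the previous protection.
    VirtualProtect(slot, sizeof(void*), PAGE_READWRITE, &oldProtect);
    *slot = shadowProcAddr;
    VirtualProtect(slot, sizeof(void*), oldProtect, &oldProtect);
}
```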
  • a second procedure call for a procedure in the at least one library module is caused to reference the first address of the at least one library module, the second procedure call originating from malicious code that is injected into the computing process after loading of the binary image into memory is complete.
  • a second procedure call 636 for procedure 632 is caused to reference a library module at the first address (i.e., stub library module 622).
  • second procedure call 636 originates from malicious code 638.
  • Malicious code 638 is code that was injected into process 610 after binary image 116 (as shown in FIG. 1) was mapped into main memory 602.
  • As described above, certain executable portions of a library module stored at the first address may be designated as being non-accessible.
  • A malicious code attack is detected when malicious code attempts to access such non-accessible sections.
  • FIG. 7 depicts a flowchart 700 of an example method for detecting a malicious code attack, according to an example embodiment.
  • System 100 shown in FIG. 1 may operate according to flowchart 700.
  • flowchart 700 is described with reference to FIG. 8.
  • FIG. 8 shows a block diagram 800 of a main memory 802, according to an embodiment.
  • Main memory 802 is similar to main memory 602 shown in FIG. 6.
  • operating system 804, process loader 806, modification engine 808, process 810, runtime protector 812, stub library module 822, shadow library module 824 and malicious code 838 are examples of operating system 604, process loader 606, modification engine 608, process 610, runtime protector 612, stub library module 622, shadow library module 624 and malicious code 638, as shown in FIG. 6. Further structural and operational embodiments will be apparent to persons skilled in the relevant art(s) based on the discussion regarding flowchart 700. Flowchart 700 and main memory 802 are described as follows.
  • Flowchart 700 begins with step 702.
  • In step 702, an exception thrown by the operating system is detected, the exception being thrown as a result of malicious code attempting to access the library module stored at the first address.
  • An exception handler 840 may also be injected into main memory 802; in particular, into the address space of process 810. Exception handler 840 may be injected into process 810 by modification engine 808.
  • Exception handler 840 is configured to detect an exception 842 thrown by operating system 804. Exception 842 may be thrown in response to a procedure call 844 included in malicious code 838 attempting to access a procedure included in stub library module 822.
  • runtime protector 812 may be configured to determine that a malicious attack has occurred in response to exception handler 840 detecting exception 842. Upon detecting exception 842, runtime protector 812 may be configured to redirect malicious code 838 to an isolated environment and/or kill a thread spawned by malicious code 838.
  • In accordance with an embodiment, the location of the IAT and/or the EAT and/or the procedure name(s) included therein may be randomized by the runtime protector (e.g., runtime protector 122, as shown in FIG. 1).
  • By performing these randomizations, malicious code attacks that address system functionality via known locations of the IAT and/or EAT, or via accessing a specific entry using a procedure name included in at least one of the IAT and EAT, will fail. Consequently, an attacker would need to guess the location of the IAT and/or EAT in main memory and will not be able to use attack methods that access procedures included in the IAT and/or EAT based on their names.
  • In accordance with a further embodiment, one or more indices within the IAT and/or EAT that correspond to procedure name(s) included therein are randomized.
  • The indices may be randomized by runtime protector 122, as shown in FIG. 1. By doing so, attacks that access known system functionality via specific fixed indices corresponding to specific procedures will fail.
  • the IAT and/or EAT associated with the stub library module(s) of the stub library are not randomized.
  • At least one of a location at which the resource table(s) are loaded into main memory (e.g., main memory 104, as shown in FIG. 1), the names of resources included in the resource table(s), and/or references of resources included in the resource table(s) may be randomized.
  • The location of the resource table(s), the names, and/or references included in the resource table(s) may be randomized by the runtime protector (e.g., runtime protector 122, as shown in FIG. 1).
  • Some attack techniques use resource table(s) in order to get a relative orientation into the process that can serve as a basis for building a dynamic attack using learned addresses in memory. Randomization of the resource table(s) will eliminate those exploitation techniques.
  • a data structure used by the operating system to manage a process may also be loaded into the address space of the process.
  • the structure may contain context information for the process that enables the operating system to manage execution of the process.
  • Such a data structure may be referred to as a process environment block (PEB).
  • Another data structure that may be loaded in the address space of a process is a data structure used by the operating system to manage one or more threads associated with the process.
  • The data structure may contain context information for the thread(s) that enables the operating system to manage execution of the thread(s).
  • Such a data structure may be referred to as a thread environment block (TEB).
  • At least one of one or more elements, names and/or references included in the PEB and/or the TEB and/or the location of the PEB and/or the TEB may be randomized.
  • the at least one of the element(s), name(s), and/or reference(s) and/or the locations of the PEB and/or TEB may be randomized by the runtime protector (e.g., runtime protector 122, as shown in FIG. 1).
  • Malicious attacks that attempt to leverage information included in the PEB and/or TEB to determine locations of certain procedures, library modules, and/or tables (e.g., the IAT and/or EAT) will fail.
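  • To illustrate why the PEB is valuable to an attacker (and hence worth randomizing), the hedged sketch below walks the loader's module list reachable from the PEB to obtain library base addresses, using only the partially documented structures in <winternl.h>; exact structure layouts vary across Windows versions, so this is a sketch rather than production code.

```cpp
// Hedged sketch of why the PEB matters to an attacker: the loader data it
// points to lists every module mapped into the process, so walking it yields
// library base addresses without calling any API by name. Only the partially
// documented structures in <winternl.h> are used; exact layouts vary across
// Windows versions, so treat this as a sketch rather than production code.
#include <windows.h>
#include <winternl.h>
#include <cstdio>

int main() {
    // Locate this process's PEB via NtQueryInformationProcess.
    auto query = reinterpret_cast<decltype(&NtQueryInformationProcess)>(
        GetProcAddress(GetModuleHandleW(L"ntdll.dll"), "NtQueryInformationProcess"));
    PROCESS_BASIC_INFORMATION pbi{};
    if (!query || query(GetCurrentProcess(), ProcessBasicInformation,
                        &pbi, sizeof(pbi), nullptr) != 0) {
        return 1;
    }

    // Walk the in-memory-order module list hanging off PEB->Ldr.
    LIST_ENTRY* head = &pbi.PebBaseAddress->Ldr->InMemoryOrderModuleList;
    for (LIST_ENTRY* it = head->Flink; it != head; it = it->Flink) {
        auto entry = CONTAINING_RECORD(it, LDR_DATA_TABLE_ENTRY, InMemoryOrderLinks);
        std::wprintf(L"%p  %.*ls\n", entry->DllBase,
                     static_cast<int>(entry->FullDllName.Length / sizeof(WCHAR)),
                     entry->FullDllName.Buffer);
    }
    return 0;
}
```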
  • modification engine 808 may inject an exception handler 840 into the address space of a process 810.
  • the location at which exception handler 840 is injected may be randomized.
  • the location may be randomized by modification engine 808.
  • Some attack techniques abuse the exception handling mechanism embedded in processes to conduct malicious acts. Exception handlers are meant to serve program control in case of runtime errors. However, attackers abuse this capability by accessing the handlers at their known addresses and then injecting malicious code therein. The malicious code may cause a system error, which triggers the exception handler, thereby resulting in the malicious code taking control of the process. By randomizing the location of the exception handler, attackers will not be able to abuse it to obtain runtime code control.
  • The techniques and embodiments described herein may be implemented using well-known processing devices, telephones (land line based telephones, conference phone terminals, smart phones and/or mobile phones), interactive television, servers, and/or computers, such as a computer 900 shown in FIG. 9.
  • For example, computer 900 may represent computing devices, processing devices, traditional computers, and/or the like in one or more embodiments.
  • computing system 100 of FIG. 1, and any of the sub-systems, components, and/or models respectively contained therein and/or associated therewith may be implemented using one or more computers 900.
  • Computer 900 can be any commercially available and well known communication device, processing device, and/or computer capable of performing the functions described herein, such as devices/computers available from International Business Machines®, Apple®, Sun®, HP®, Dell®, Cray®, Samsung®, Nokia®, etc.
  • Computer 900 may be any type of computer, including a desktop computer, a server, etc.
  • Computer 900 includes one or more processors (also called central processing units, or CPUs), such as a processor 906.
  • Processor 906 is connected to a communication infrastructure 902, such as a communication bus.
  • processor 906 can simultaneously operate multiple computing threads, and in some embodiments, processor 906 may comprise one or more processors.
  • Computer 900 also includes a primary or main memory 908, such as random access memory (RAM).
  • Main memory 908 has stored therein control logic 924 (computer software), and data.
  • Computer 900 also includes one or more secondary storage devices 910.
  • Secondary storage devices 910 include, for example, a hard disk drive 912 and/or a removable storage device or drive 914, as well as other types of storage devices, such as memory cards and memory sticks.
  • Computer 900 may include an industry standard interface, such as a universal serial bus (USB) interface, for interfacing with devices such as a memory stick.
  • Removable storage drive 914 represents a floppy disk drive, a magnetic tape drive, a compact disk drive, an optical storage device, tape backup, etc.
  • Removable storage drive 914 interacts with a removable storage unit 916.
  • Removable storage unit 916 includes a computer useable or readable storage medium 918 having stored therein computer software 926 (control logic) and/or data.
  • Removable storage unit 916 represents a floppy disk, magnetic tape, compact disk, DVD, optical storage disk, or any other computer data storage device.
  • Removable storage drive 914 reads from and/or writes to removable storage unit 916 in a well- known manner.
  • Computer 900 also includes input/output/display devices 904, such as touchscreens, LED and LCD displays, monitors, keyboards, pointing devices, etc.
  • Computer 900 further includes a communication or network interface 920.
  • Communication interface 920 enables computer 900 to communicate with remote devices.
  • communication interface 920 allows computer 900 to communicate over communication networks or mediums 922 (representing a form of a computer useable or readable medium), such as LANs, WANs, the Internet, etc.
  • Network interface 920 may interface with remote sites or networks via wired or wireless connections.
  • Control logic 928 may be transmitted to and from computer 900 via the communication medium 922.
  • Any apparatus or manufacture comprising a computer useable or readable medium having control logic (software) stored therein is referred to herein as a computer program product or program storage device.
  • This includes, but is not limited to, computer 900, main memory 908, secondary storage devices 910, and removable storage unit 916.
  • Such computer program products having control logic stored therein that, when executed by one or more data processing devices, cause such data processing devices to operate as described herein, represent embodiments.
  • Techniques, including methods, and embodiments described herein may be implemented by hardware (digital and/or analog) or a combination of hardware with one or both of software and/or firmware.
  • Techniques described herein may be implemented by one or more components.
  • Embodiments may comprise computer program products comprising logic (e.g., in the form of program code or software as well as firmware) stored on any computer useable medium, which may be integrated in or separate from other components.
  • Such program code when executed by one or more processor circuits, causes a device to operate as described herein.
  • Devices in which embodiments may be implemented may include storage, such as storage drives, memory devices, and further types of physical hardware computer-readable storage media.
  • Examples of such computer-readable storage media include a hard disk, a removable magnetic disk, a removable optical disk, flash memory cards, digital video disks, random access memories (RAMs), read only memories (ROM), and other types of physical hardware storage media.
  • Further examples of such computer-readable storage media include, but are not limited to, a hard disk associated with a hard disk drive, a removable magnetic disk, a removable optical disk (e.g., CDROMs, DVDs, etc.), zip disks, tapes, magnetic storage devices, MEMS (micro-electromechanical systems) storage, nanotechnology-based storage devices, flash memory cards, digital video discs, RAM devices, ROM devices, and further types of physical hardware storage media.
  • Such computer-readable storage media may, for example, store computer program logic, e.g., program modules, comprising computer executable instructions that, when executed by one or more processor circuits, provide and/or maintain one or more aspects of functionality described herein with reference to the figures, as well as any and all components, capabilities, and functions therein and/or further embodiments described herein.
  • Such computer-readable storage media are distinguished from and non-overlapping with communication media (do not include communication media).
  • Communication media embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave.
  • The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • communication media includes wireless media such as acoustic, RF, infrared, and other wireless media, as well as wired media and signals transmitted over wired media. Embodiments are also directed to such communication media.
  • A device, as defined herein, is a machine or manufacture as defined by 35 U.S.C. § 101.
  • Devices may include digital circuits, analog circuits, or a combination thereof.
  • Devices may include one or more processor circuits (e.g., central processing units (CPUs), such as processor 906 of FIG. 9), microprocessors, digital signal processors (DSPs), and further types of physical hardware processor circuits, and may include transistors such as Bipolar Junction Transistors (BJTs), heterojunction bipolar transistors (HBTs), metal oxide field effect transistors (MOSFETs), and metal semiconductor field effect transistors (MESFETs).
  • Such devices may use the same or alternative configurations other than the configuration illustrated in embodiments presented herein.

Abstract

Various approaches are described herein for, among other things, detecting and/or neutralizing attacks by malicious code. For example, instance(s) of a protected process are modified upon loading by injecting a runtime protector that creates a copy of each of the process' imported libraries and maps the copy into a random address inside the process' address space to form a "randomized" shadow library. The libraries loaded at the original address are modified into a stub library. Shadow and stub libraries are also created for libraries that are loaded after the process creation is finalized. Consequently, when malicious code attempts to retrieve the address of a given procedure, it receives the address of the stub procedure, thereby neutralizing the malicious code. When the original program's code (e.g., the non-malicious code) attempts to retrieve the address of a procedure, it receives the correct address of the requested procedure (located in the shadow library).

Description

MALICIOUS CODE PROTECTION FOR COMPUTER SYSTEMS BASED ON
PROCESS MODIFICATION
BACKGROUND
Technical Field
[0001] Embodiments described herein generally relate to detecting and/or neutralizing malicious code or other security threats on computer systems.
Description of Related Art
[0002] Modern cyber attackers employ a variety of attack patterns, ultimately aimed at running the attacker's code on the target machine without being noticed. The traditional attack pattern requires an executable file that arrives at the target machine through email, through a download from a website, from a neighboring local host, or from some sort of removable media. When the malicious file gets executed, it spawns a full-fledged process of its own. Subsequently, the malicious process may inject some form of code into the memory-space of another running process.
[0003] Newer attack patterns are based on vulnerabilities found in various useful programs, such as ADOBE® ACROBAT®. In the case of ADOBE® ACROBAT®, the malicious code (or "payload") is embedded within a portable data file (PDF) document. The PDF document also contains a chunk of malformed data, designed to exploit the given vulnerability. This chunk is crafted to cause some kind of overflow or similar exception when the file is being read by the vulnerable program. When the program or the operating system seeks to recover, it returns, instead, to a tiny piece of machine code (or primary shellcode) supplied by the malformed data chunk. This primary shellcode takes control of the running program (i.e., the process), completing the so-called "exploit" of the given vulnerability. Subsequently, the primary shellcode loads whatever payload (special-purpose malicious code) is available, into the context of the running process.
[0004] In a so-called 'remote' attack, the vulnerable program is associated with some network port, either as a server or as a client. The exploit happens when the vulnerable program tries to process a chunk of malformed input, essentially in the same manner as described above. In this case, when the primary shellcode takes control of the running process, it may choose to download secondary shellcode or payload from the network. In both the local and the remote vulnerability-based attacks, the malicious code running within the originally breached process may proceed by injecting code into the running processes of other programs.
[0005] Traditional malware-detection tools, such as signature-based antivirus products, are ineffective against such attacks due to the fact that these attacks take form in memory, thereby resulting in no visible signature for the malicious file. Conventional runtime activity monitoring, based on the behavioral patterns of such attacks, fails to defend against attacks due to the fact that such attacks morph themselves and change their behavior, thereby making it difficult to define strict rules that lead to the identification of malicious behavior. Accordingly, conventional runtime activity monitoring has some major drawbacks, including: (a) it may miss a new, unknown pattern; (b) detection may occur too late for the monitoring program to take an effective preventive action; and (c) the required computational resources may affect the system's performance. In general, these tools rely on some prior knowledge of an attack pattern or a vulnerability, and will miss so-called "zero-day" attacks (new forms of attack, which exploit unknown vulnerabilities in the target software), whether the attack is remote or local.
[0006] Protective techniques such as Address Space Layout Randomization (ASLR) and Data Execution Prevention (DEP) are used in modern computerized systems to prevent malicious-code attacks. However, recent sophisticated attacks, such as attacks that are able to deduce the location of desired functionality based on relative addressing, have demonstrated the limitations of ASLR and DEP.
BRIEF SUMMARY
[0007] Methods, systems, and apparatuses are described for detecting and/or neutralizing malicious code or other security threats on computer systems, substantially as shown in and/or described herein in connection with at least one of the figures, as set forth more completely in the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] The accompanying drawings, which are incorporated herein and form part of the specification, illustrate embodiments of the present invention and, together with the description, further serve to explain the principles involved and to enable a person skilled in the relevant art(s) to make and use the disclosed technologies.
[0009] FIG. 1 depicts components of a computer system in accordance with an embodiment.
[0010] FIG. 2 depicts a layout of a binary image in accordance with an embodiment.
[0011] FIG. 3 depicts a flowchart of an example method for neutralizing runtime in- memory exploits of a process in accordance with an example embodiment.
[0012] FIG. 4A depicts a block diagram of a main memory including a process in accordance with an embodiment.
[0013] FIG. 4B depicts a block diagram of a main memory including a process in accordance with another embodiment.
[0014] FIG. 5 depicts a flowchart of an example method for handling procedure calls for procedures in a library module after a computing process has been modified by injected code in accordance with an embodiment.
[0015] FIG. 6 depicts a block diagram of a main memory including a process in accordance with another embodiment.
[0016] FIG. 7 depicts a flowchart of an example method for recovering execution of a computing process in response to a malicious code attack in accordance with an example embodiment.
[0017] FIG. 8 depicts a block diagram of a main memory in accordance with another embodiment.
[0018] FIG. 9 depicts a block diagram of a computer system that may be configured to perform techniques disclosed herein.
[0019] The features and advantages of the disclosed technologies will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the corresponding reference number.
DETAILED DESCRIPTION
I. Introduction
[0020] The following detailed description refers to the accompanying drawings that illustrate exemplary embodiments of the present invention. However, the scope of the present invention is not limited to these embodiments, but is instead defined by the appended claims. Thus, embodiments beyond those shown in the accompanying drawings, such as modified versions of the illustrated embodiments, may nevertheless be encompassed by the present invention.
[0021] References in the specification to "one embodiment," "an embodiment," "an example embodiment," or the like, indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Furthermore, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to implement such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
[0022] Numerous exemplary embodiments are now described. Any section/subsection headings provided herein are not intended to be limiting. Embodiments are described throughout this document, and any type of embodiment may be included under any section/subsection. Furthermore, it is contemplated that the disclosed embodiments may be combined with each other in any manner.
II. Example Embodiments
[0023] Malicious code (e.g., malware), including injected shellcode, relies on some basic assumptions regarding the runtime context of the target in order to initialize itself and to execute its payload properly. In general, shellcode injected into a running process has to perform some initial steps before it can proceed. It must perform at least part of the initialization steps that the system's default loader would normally perform when creating a running process from an executable file (e.g., a binary image). In particular, it is crucial for the injected code to obtain the addresses of certain shared libraries (e.g., dynamic-link libraries (DLLs)) as they are mapped into the address space of the running process, and to further obtain the addresses of the procedures (or functions) that it intends to use. In the case where the vulnerability resides inside a shared library, the injected code only needs to find the specific functionality within that library and does not need to locate the library itself.
[0024] Various approaches are described herein for, among other things, neutralizing and/or detecting attacks by such malicious code. This may be achieved, for example, by modifying one or more instances of a protected process upon loading by injecting a runtime protector that (a) creates a copy of each of the process' imported libraries and maps the copy into a random address inside the process' address space (to form a randomized "shadow" library), (b) replaces the procedure addresses within the original libraries, to point at a stub (thereby forming a "stub" library), and (c) intercepts procedure calls for late library loading and creates a shadow library and a stub library for such libraries.
[0025] The above technique is referred to herein as "morphing." In one implementation of this technique, the addresses of the shadow libraries (and procedures included therein) are randomized, ensuring that each process and each process instance obtains a unique protective shield. In accordance with an embodiment, morphing is performed dynamically during initialization of the process, where librar(ies) loaded during process initialization are morphed. In accordance with another embodiment, morphing is performed dynamically during runtime, where librar(ies) loaded during runtime (i.e., after process initialization is complete) are morphed.
[0026] In further accordance with this technique, when injected (e.g., malicious) code attempts to retrieve the address of a given procedure, it will be directed to the stub library (the library at the original address) and receive the address of the stub procedure. Consequently, the injected code will not be able to perform its malicious activities. Furthermore, its presence can be detected. However, when the original program's code (e.g., the non-malicious code) attempts to retrieve the address of a procedure, it will use the address of the shadow library and receive the correct address of the requested procedure. Consequently, the original program's code will proceed normally.
[0027] Various embodiments described herein offer at least the following additional advantages: (a) when the presence of malicious code is detected, the malicious code can be sandboxed or otherwise diverted to a secure environment, to deceive the attacker and/or to learn the malware's behavior and intentions; (b) a user or administrator can define, for a given process, a set of procedure calls that are prohibited under any circumstances (also referred to herein as an "API Firewall"); (c) the overall impact on the system's performance may be relatively low, particularly compared to runtime behavioral monitoring, which tries to detect malicious code rather than preempt it; and (d) no prior knowledge of the current malware is assumed, therefore prevention of new, unknown, or zero-day attacks is possible.
[0028] Furthermore, embodiments described herein overcome the limitations of ASLR and DEP, and can be applied in concert with those techniques to gain optimal protection.
[0029] For the sake of brevity, embodiments described herein are described in terms of the MICROSOFT® WINDOWS® Operating System (OS), published by Microsoft Corporation of Redmond, Washington. However, as should be clear to any person skilled in the art, this is just one possible embodiment. Similar embodiments may protect practically all kinds of modern operating systems, including LINUX® and other UNIX® variants, against a very wide array of malicious-code attacks, whether remote or local.
[0030] Additionally, embodiments described herein refer to morphing techniques associated with library(ies) for the sake of brevity. However, as should be clear to any person skilled in the art, this is just one possible embodiment. Similar embodiments may protect practically all kinds of codebase elements, including, but not limited to, DLL extensions, Component Object Models (COMs), etc.
[0031] In particular, a method is described herein. The method includes determining that a process loader of an operating system has initiated the creation of a computing process. In response to determining that the process loader has initiated the creation of the computing process, code is injected in the computing process that is configured to modify the computing process by determining that at least one library module of the computing process is to be loaded into memory, storing the at least one library module at a first address in the memory, copying the at least one library module stored at the first address to a second address in the memory that is different than the first address, and modifying the at least one library module stored at the first address into a stub library module.
[0032] A system is also described herein. The system includes one or more processing units and a memory coupled to the one or more processing units, the memory storing software modules for execution by the one or more processing units. The software modules include a runtime protector configured to load a library module for the computing process at a first address in the memory, copy the library module stored at the first address to a second address in the memory that is different than the first address, and modify the library module stored at the first address into a stub library module. The code that accesses the library module stored at the second address is designated as being non-malicious code. The code attempting to access the library module stored at the first address is designated as being malicious code.
[0033] A computer-readable storage medium having program instructions recorded thereon that, when executed by a processing device, perform a method for modifying a computing process is further described herein. The method includes loading a library module for the computing process at a first address in the memory, copying the library module stored at the first address to a second, randomized address in the memory that is different than the first address, and modifying the library module stored at the first address into a stub library module. The code that accesses the library module stored at the second address is designated as being non-malicious code. The code attempting to access the library module stored at the first address is designated as being malicious code.
III. Example Systems and Methods for Detecting and/or Neutralizing the Execution of Malicious Code
[0034] FIG. 1 depicts components of a computer system 100 in accordance with one embodiment that detects and/or neutralizes the execution of malicious code associated with a computing process executing thereon. As shown in FIG. 1, computer system 100 includes one or more processor(s) 102 (also called central processing units, or CPUs), a primary or main memory 104, and one or more secondary storage device(s) 106. Processor(s) 102, main memory 104, and secondary storage device(s) 106 are connected to a communication interface 108 via a suitable interface, such as one or more communication buses. In some embodiments, processor(s) 102 can simultaneously operate multiple computing threads, and in some embodiments, processor(s) 102 may each comprise one or more processor core(s). Examples of main memory 104 include a random access memory (RAM) (e.g., dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate RAM (DDR RAM), etc.). Secondary storage device(s) 106 include, for example, one or more hard disk drives, one or more memory cards, one or more memory sticks, a floppy disk, magnetic tape, compact disk, DVD, optical storage disk, or any other computer data storage device.
[0035] As shown in FIG. 1, main memory 104 stores an operating system 110.
Operating system 110 may manage one or more hardware components (e.g., processor(s) 102, main memory 104, secondary storage device(s) 106, etc.) and software executing on computer system 100. Example hardware components of computer system 100 are described in detail below in reference to FIG. 9.
[0036] Operating system 110 may have one or more components that perform certain tasks relating to the execution of software on computer system 100. One such component is process loader 112. Process loader 112 is configured to initiate creation of a computing process (or "process") 114 in main memory 104. Process 114 is an instance of a computer program being executed by processor(s) 102. The computer program may comprise an application program (or "application"), a system program, or other computer program being executed by processor(s) 102. The computer program is embodied in instructions and/or data included in a binary image (e.g., binary image 116).
[0037] To initiate creation of process 114, process loader 112 loads (or "maps") binary image 116, which is stored in secondary storage device(s) 106, into an address space allocated for process 114 in main memory 104 based on information included in binary image 116. The binary image mapped into main memory 104 is represented in FIG. 1 by mapped binary image 118. Process loader 112 builds up an initial execution context of process 114. Computer program execution (i.e., process 114) begins when processor(s) 102 commence executing the first instruction of the computer program. Process 114 may comprise one or more threads of execution that execute a subset of instructions concurrently.
[0038] As the program execution evolves, other component(s) of operating system 110 allocate various resources to process 114. The execution context of process 114 may comprise information about such resource allocation, a current state of the program execution, an instruction that is to be executed next, and other information related to the program execution. The computer program execution continues until processor(s) 102 execute a termination or halt instruction.
[0039] Additional information regarding the information included in binary image 116 and how process loader 112 maps binary image 116 into main memory 104 based on this information is described below with reference to FIG. 2.
[0040] FIG. 2 is an example layout of binary image 116 in accordance with an embodiment. Examples of executable binary formats for binary image 116 include, but are not limited to, the portable executable (PE) format (e.g., files having an .exe, .dll, or .sys extension), the Executable and Linkable Format (ELF), the Mach object (Mach-O) file format, etc.
[0041] As shown in FIG. 2, binary image 116 may comprise one or more headers 202 and/or one or more sections 203, which process loader 112 uses to map binary image 116 (or portions thereof) into main memory 104. Header(s) 202 may comprise information regarding the layout and properties of binary image 116 (e.g., the names, number and/or location of section(s) 203 within binary image 116). Header(s) 202 may also include a base address (also referred to as an image base) that specifies a default address at which binary image 116 is to be loaded into main memory. It is noted, however, that binary image 116 may be loaded at a different address. For example, if operating system 110 supports ASLR (which is a technique used to guard against buffer-overflow attacks by randomizing the location where binary images are loaded into main memory 104), the address at which binary image 116 is loaded into main memory 104 will be a randomized address.
[0042] Section(s) 203 of binary image 116 may comprise an executable code section 204, a data section 206, a resources section 208, an export data section 210, an import data section 212 and a relocation section 214. Executable code section 204 comprises instructions that correspond to the computer program to be executed by processor(s) 102. The instructions may be machine code instructions that are to be executed by processor(s) 102 after binary image 116 is loaded into main memory 104.
[0043] Data section 206 comprises uninitialized data required for executing the computer program. Such data includes, but is not limited to, static and/or global variables. Resources section 208 comprises resource information that comprises read-only data required for executing the computer program. Such read-only data includes, but is not limited to, icons, images, menus, strings, etc. The read-only data may be stored in one or more tables (i.e., resource table(s)).
[0044] Export data section 210 may include information about the names and/or references of procedures exportable to other binary image(s) (e.g., DLL(s)). The export data may include an export directory that defines the names of exportable procedures included in binary image 116. The addresses of the exportable procedures may be stored in a table (e.g., an export address table (EAT)). The addresses of such exportable procedures may be provided to other binary images in response to the issuance by such other binary images of a procedure call (e.g., GetProcAddress) that identifies the procedure.
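By way of illustration only, the following C++ sketch shows the standard WINDOWS® export lookup described above: GetProcAddress resolves a procedure by name through the export data of an already-loaded module. It is an independent example of the lookup mechanism, not part of the disclosed runtime protector.

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    // Obtain a handle to a module that is already mapped into this process.
    HMODULE k32 = GetModuleHandleW(L"kernel32.dll");
    if (k32 == nullptr) return 1;

    // Resolve one of its exports by name; the lookup is served from the
    // module's export data (export directory / EAT).
    FARPROC proc = GetProcAddress(k32, "VirtualAlloc");
    std::printf("kernel32 base: %p, VirtualAlloc: %p\n",
                static_cast<void*>(k32), reinterpret_cast<void*>(proc));
    return 0;
}
```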
[0045] Import data section 212 may include information about the names and/or references of procedures that are imported by binary image 116. Import data section 212 may comprise an import directory, which includes information about other binary image(s) (e.g., DLL(s)) from which binary image 116 imports procedures. The information may include a location (e.g., an address) or a pointer to a location of a binary image that includes at least one procedure to be imported. The information may further include an import address table (IAT) that includes the name(s) of procedures to be imported and/or pointers to the procedures to be imported.
[0046] During process loading, process loader 112 may check the import data (e.g., the IAT) to determine if one or more additional binary images (e.g., libraries, such as DLLs) are required for process 114. Process loader 112 may map any such required binary image(s) into the address space of process 114. Process loader 112 may recursively parse the respective IATs of each required binary image to determine if further binary image(s) are required and map these further binary image(s) into the address space of process 114.
[0047] Process loader 112 replaces the pointers in the respective IATs with the actual addresses at which the procedures are loaded into main memory 104 as the procedures are imported. By using pointers, process loader 112 does not need to change the addresses of imported procedures everywhere in the computer program's code where such imported procedures are called. Instead, process loader 112 simply has to add the correct address(es) to a single place (i.e., the IAT), which is referenced by the code.
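The minimal sketch below models that indirection with a hypothetical one-slot import table: the "loader" writes a resolved address into the slot once, and every call site calls through the slot at call time. A real loader patches IMAGE_IMPORT_DESCRIPTOR thunks inside the mapped image rather than a plain array; this is only a conceptual illustration.

```cpp
#include <windows.h>

// Hypothetical one-entry import table: a pointer slot the program calls through.
using MessageBoxW_t = int (WINAPI*)(HWND, LPCWSTR, LPCWSTR, UINT);
static FARPROC g_iat[1];  // slot 0: MessageBoxW

static void FixUpImports() {
    // The "loader" resolves the import once and writes it into the single slot.
    HMODULE user32 = LoadLibraryW(L"user32.dll");
    if (user32 == nullptr) return;
    g_iat[0] = GetProcAddress(user32, "MessageBoxW");
}

int main() {
    FixUpImports();
    if (g_iat[0] == nullptr) return 1;
    // The call site reads the current address from the slot, so changing the
    // slot changes where every such call goes.
    reinterpret_cast<MessageBoxW_t>(g_iat[0])(nullptr, L"Hello", L"IAT demo", MB_OK);
    return 0;
}
```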
[0048] Relocation data section 214 comprises relocation data that enables process loader 112 to modify addresses associated with code and data items (respectively included in executable code section 204 and data section 206) specified in binary image 116. When a binary image is created (e.g., by a computer program, such as a linker), an assumption is made that the binary image is to be mapped to a base address, as described above. Based on this assumption, the linker inserts the real addresses (relative to the base address) of code and data items in the binary image. If for some reason the binary image is loaded at an address other than the image base (e.g., in the event that the image base is already occupied or due to an ASLR scheme being in place), these real addresses will be invalid. The relocation data enables process loader 112 to modify these addresses in binary image 116 so that they are valid. For example, the relocation data may include a relocation table, which includes a list of pointers that each point to a real address of a code and/or data item. When binary image 116 is remapped to an address other than the image base, process loader 112 updates these pointers. Thereafter, process loader 112 initiates the computer program by passing control to the program code loaded into main memory 104.
[0049] Returning to FIG. 1, in accordance with an embodiment, system 100 is configured to neutralize and/or intercept runtime in-memory exploits of processes (e.g., exploits performed by malicious code). Such exploits are carried out by identifying the memory location of a specific known object (e.g., of a procedure or data object having a predetermined fixed address) in a process' address space in main memory and using this location to calculate the location of other procedures that are required to fulfill the exploit.
[0050] To neutralize such exploits, system 100 may include a modification engine 120, which executes in main memory 104. Modification engine 120 may be configured to modify (or "morph") process 114 to include a runtime protector 122 that causes the location of the in-memory data and code segments to be changed upon being loaded into main memory 104 in a random manner and updates legitimate code segments (i.e., non-malicious code segments) with these changes, thereby preventing malicious code from accessing such data and code segments. Furthermore, runtime protector 122 maintains the original in-memory data and code segments and intercepts any access to these segments to detect malicious activity.
[0051] For example, modification engine 120 may be configured to intercept a process creation event issued by operating system 110 (or a component thereof) for process 114. Modification engine 120 may verify that process 114 is designated for protection. For example, modification engine 120 may check that process 114 is included in a list of processes that should be protected. In response to determining that process 114 is to be protected, modification engine 120 causes the creation of the process to be suspended and injects runtime protector 122 into process 114. Runtime protector 122 may be a library (e.g., a DLL) that is injected into the address space of process 114.
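The disclosure does not tie the injection of runtime protector 122 to any particular mechanism. As one common possibility only, a protector DLL can be placed into a suspended target process using the well-known LoadLibraryW remote-thread technique sketched below; InjectDll and its parameters are illustrative names and not part of the disclosure.

```cpp
#include <windows.h>
#include <cwchar>

// One conventional way to load a DLL into another process: write the DLL path
// into the target's address space and start a remote thread at LoadLibraryW.
bool InjectDll(HANDLE process, const wchar_t* dllPath) {
    SIZE_T bytes = (wcslen(dllPath) + 1) * sizeof(wchar_t);
    void* remotePath = VirtualAllocEx(process, nullptr, bytes,
                                      MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (remotePath == nullptr) return false;
    if (!WriteProcessMemory(process, remotePath, dllPath, bytes, nullptr))
        return false;

    // kernel32.dll is mapped at the same base in every process of a boot
    // session, so the local address of LoadLibraryW is valid in the target.
    auto loadLibrary = reinterpret_cast<LPTHREAD_START_ROUTINE>(
        GetProcAddress(GetModuleHandleW(L"kernel32.dll"), "LoadLibraryW"));
    HANDLE thread = CreateRemoteThread(process, nullptr, 0, loadLibrary,
                                       remotePath, 0, nullptr);
    if (thread == nullptr) return false;
    WaitForSingleObject(thread, INFINITE);
    CloseHandle(thread);
    return true;
}
```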
[0052] Runtime protector 122 may be configured to determine whether any library modules (e.g., DLLs) have already been loaded into the address space of process 114. In response to determining that library module(s) have already been loaded into the address space of process 114, runtime protector 122 copies the library module(s) into a different, random memory range (referred to as a "shadow" library). The library module(s) loaded into the original address space are modified into a stub library (also referred to as a "shallow library"), which provides stub procedures or functions. Runtime protector 122 updates the IAT mapped into the address space of process 114 with the addresses corresponding to the random memory range. Thereafter, modification engine 120 causes process loader 112 to be released to allow process loader 112 to finalize the process creation for process 114.
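A minimal sketch of the shadow-copy step is shown below, assuming the module to be shadowed is already mapped into the current process. A complete implementation would also apply relocation fixups to the copy and rebuild its export data before redirecting callers to it; the sketch shows only the allocation and copy.

```cpp
#include <windows.h>
#include <psapi.h>   // GetModuleInformation
#include <cstring>

// Copy an already-loaded module image into a new region so that legitimate
// code can later be redirected to this "shadow" copy.
void* CreateShadowCopy(HMODULE original) {
    MODULEINFO info = {};
    if (!GetModuleInformation(GetCurrentProcess(), original, &info, sizeof(info)))
        return nullptr;

    // The allocator places the new region at an effectively unpredictable
    // address; a hardened implementation could add its own randomization.
    void* shadow = VirtualAlloc(nullptr, info.SizeOfImage,
                                MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
    if (shadow == nullptr) return nullptr;

    // Duplicate the mapped image into the shadow region.
    std::memcpy(shadow, info.lpBaseOfDll, info.SizeOfImage);
    return shadow;
}
```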
[0053] Runtime protector 122 may also be configured to create shadow and stub libraries for library module(s) that are loaded after process finalization (e.g., "late" libraries). For example, runtime protector 122 may be configured to hook memory mapping procedure calls (e.g., that map libraries to a particular section of main memory 104, such as NtMapViewOfSection) that load "late" library module(s) into main memory 104. Upon intercepting such procedure calls, runtime protector 122 allows the call to be completed, thereby resulting in the library module(s) being loaded at their intended addresses in main memory 104. Thereafter, runtime protector 122 creates shadow and stub libraries for such library module(s) in a similar manner as described above.
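A sketch of such a hook appears below. How the detour itself is installed (inline patching, import patching, etc.) is outside the sketch; NtMapViewOfSectionOriginal stands for the saved pointer to the unhooked routine, and MorphNewlyMappedModule is a hypothetical helper performing the shadow/stub processing described above.

```cpp
#include <windows.h>
#include <winternl.h>

// NtMapViewOfSection's native signature, with SECTION_INHERIT replaced by a
// plain DWORD to avoid relying on non-public headers.
using NtMapViewOfSection_t = NTSTATUS (NTAPI*)(
    HANDLE SectionHandle, HANDLE ProcessHandle, PVOID* BaseAddress,
    ULONG_PTR ZeroBits, SIZE_T CommitSize, PLARGE_INTEGER SectionOffset,
    PSIZE_T ViewSize, DWORD InheritDisposition, ULONG AllocationType,
    ULONG Win32Protect);

static NtMapViewOfSection_t NtMapViewOfSectionOriginal;  // saved by the hook installer

void MorphNewlyMappedModule(void* base);  // hypothetical shadow/stub processing

NTSTATUS NTAPI NtMapViewOfSectionHook(
    HANDLE section, HANDLE process, PVOID* base, ULONG_PTR zeroBits,
    SIZE_T commitSize, PLARGE_INTEGER offset, PSIZE_T viewSize,
    DWORD inherit, ULONG allocType, ULONG protect) {
    // Let the mapping complete at its intended address first...
    NTSTATUS status = NtMapViewOfSectionOriginal(
        section, process, base, zeroBits, commitSize, offset, viewSize,
        inherit, allocType, protect);
    // ...then morph the late-loaded module (pseudo-handle comparison covers
    // the common same-process case).
    if (status >= 0 && process == GetCurrentProcess() && base != nullptr && *base != nullptr)
        MorphNewlyMappedModule(*base);
    return status;
}
```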
[0054] Thus, when the original, non-malicious code attempts to retrieve a library module handle of a library module including the requested procedure and/or the address of the procedure in one of the library module(s), it will receive the library module handle of the shadow library module and/or the address of the procedure in the shadow library module. Consequently, the original program's code will proceed normally as planned. However, when malicious code attempts to retrieve the library module handle of the same library module including the same procedure and/or the address of the procedure in the library module, the malicious code will receive the library module handle of the stub library module and/or the address of a procedure in the stub library module. Consequently, the malicious code will not be able to perform its malicious activities.
[0055] In addition, the presence of the malicious code may be detected upon accessing the stub library. For example, in accordance with an embodiment, runtime protector 122 modifies the library module(s) loaded into the original address space into stub libraries by causing operating system 110 to designate the original address spaces at which executable portions (e.g., executable code) of the library module(s) are located as being non-accessible regions. Modification engine 120 may also inject an exception handler 124 into the address space of process 114, which intercepts an exception thrown by operating system 110 when code (e.g., malicious code) attempts to access the non-accessible region (i.e., the stub library). Upon detecting the exception, runtime protector 122 may be configured to redirect the malicious code to an isolated environment and/or kill a thread spawned by the malicious code.
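On WINDOWS®, one way to realize this combination is sketched below: VirtualProtect() marks the executable range of the original (stub) library as PAGE_NOACCESS, and a vectored exception handler observes the resulting access violations. The response taken in the handler is illustrative only; a real protector could instead divert the offending thread to an isolated environment, as noted above.

```cpp
#include <windows.h>
#include <cstdio>

// First-chance handler: any access to the non-accessible stub region raises
// EXCEPTION_ACCESS_VIOLATION, treated here as a malicious-code indicator.
LONG CALLBACK StubAccessHandler(PEXCEPTION_POINTERS info) {
    if (info->ExceptionRecord->ExceptionCode == EXCEPTION_ACCESS_VIOLATION) {
        std::fprintf(stderr, "Blocked access from %p\n",
                     info->ExceptionRecord->ExceptionAddress);
        ExitThread(1);  // illustrative response: kill the offending thread
    }
    return EXCEPTION_CONTINUE_SEARCH;
}

// Designate the executable portion of the original library as non-accessible
// and register the handler that intercepts attempts to reach it.
bool ConvertToStub(void* execBase, SIZE_T execSize) {
    DWORD oldProtect = 0;
    if (!VirtualProtect(execBase, execSize, PAGE_NOACCESS, &oldProtect))
        return false;
    return AddVectoredExceptionHandler(1 /* call first */, StubAccessHandler) != nullptr;
}
```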
[0056] In accordance with an embodiment, malicious code is detected by a user-configurable API firewall. For example, a user or administrator may be enabled (e.g., using a graphical user interface (GUI)) to define, for any given process, a set of procedure calls that are prohibited under any circumstances.
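A minimal sketch of such a policy check is given below, assuming the runtime protector already intercepts procedure-address lookups; the prohibited-call set and the ApiFirewall name are a hypothetical, user-supplied configuration and not part of the disclosure.

```cpp
#include <string>
#include <unordered_set>

// Per-process "API firewall": procedure calls named in the prohibited set are
// never resolved for the protected process, regardless of who requests them.
struct ApiFirewall {
    std::unordered_set<std::string> prohibited;

    bool Allows(const std::string& procName) const {
        return prohibited.count(procName) == 0;
    }
};

// Example policy: block common process- and thread-injection primitives.
// ApiFirewall fw{{"WinExec", "CreateRemoteThread", "WriteProcessMemory"}};
```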
[0057] Accordingly, in embodiments, system 100 may operate in various ways to neutralize runtime in-memory exploits of a process. For example, FIG. 3 depicts a flowchart 300 of an example method for neutralizing runtime in-memory exploits of a process, according to an example embodiment. System 100 shown in FIG. 1 may operate according to flowchart 300. For illustrative purposes, flowchart 300 is described with reference to FIGS. 4A-4B. FIGS. 4A-4B show block diagrams 400A and 400B of main memory 402, according to an embodiment. Main memory 402 is an example of main memory 104 shown in FIG. 1. Accordingly, operating system 404, process loader 406, modification engine 408, process 410 and runtime protector 412 are examples of operating system 110, process loader 112, modification engine 120, process 114 and runtime protector 122, as shown in FIG. 1. Further structural and operational embodiments will be apparent to persons skilled in the relevant art(s) based on the discussion regarding flowchart 300. Flowchart 300 and main memory 402 are described as follows.
[0058] Flowchart 300 begins with step 302. At step 302, a determination is made that a process loader of an operating system has initiated the creation of a computing process. For example, as shown in FIG. 4A, modification engine 408 determines that process loader 406 of operating system 404 has initiated the creation of process 410. For instance, process loader 406 may initiate creation of process 410 in response to receiving a process creation event from operating system 404 (or from one or more other components of operating system 404). Modification engine 408 is configured to detect such events (e.g., event 414). In response to detecting event 414, modification engine 408 may verify that the corresponding process being created (e.g., process 410) is included in a list of processes to be protected. For example, modification engine 408 may query a database or search a file containing the list to determine whether the corresponding process is to be protected.
[0059] At step 304, code is injected in the computing process that is configured to modify the process in response to determining that the process loader has initiated the creation of the computing process. For example, as shown in FIG. 4A, modification engine 408 issues a procedure call 416 to inject code (e.g., runtime protector 412) into process 410. In accordance with an embodiment, runtime protector 412 is a DLL injected into the address space of process 410.
[0060] The injected code is configured to modify the process in accordance with steps 306, 308, 310 and 312 as described below. At step 306, a determination is made that at least one library module of the computing process is to be loaded into memory. For example, as shown in FIG. 4A, runtime protector 412 is configured to determine that at least one library module of process 410 is to be loaded into main memory 402. In accordance with an embodiment, runtime protector 412 determines that at least one library module of process 410 is to be loaded into main memory 402 by hooking a procedure call 420 initiated by process loader 406. Procedure call 420 may be configured to map at least one library module into main memory 402. Procedure call 420 may identify the at least one library module and a section of main memory 402 at which the at least one library module is to be loaded. In accordance with an embodiment, procedure call 420 is an NtMapViewOfSection procedure call.
[0061] At step 308, the at least one library module is stored at a first address in the memory. For example, as shown in FIG. 4A, library module 422 is stored at a first address 0xXX. The first address may be specified by procedure call 420.
[0062] At step 310, the at least one library module stored at the first address is copied to a second address in the memory that is different than the first address. For example, as shown in FIG. 4A, runtime protector 412 copies library module 422 stored at the first address to a second address (0xYY) in main memory 402 that is different than the first address (represented by shadow library module 424). In accordance with an embodiment, the second address is a randomized address determined by runtime protector 412.
[0063] At step 312, the at least one library module stored at the first address is modified into a stub library module. For example, with reference to FIG. 4B, runtime protector 412 modifies library module 422 (as shown in FIG. 4A) into a stub library module 422' (as shown in FIG. 4B). In accordance with an embodiment, runtime protector 412 performs this modification by causing one or more executable portions of library module 422 to be designated as being non-accessible. For example, runtime protector 412 may issue a command 426 to operating system 404 that causes operating system 404 to designate the executable portion(s) of library module 422 as non-accessible.
[0064] FIG. 5 depicts a flowchart 500 of an example method for handling procedure calls for procedures in the at least one library module after the computing process has been modified by the injected code, according to an example embodiment. System 100 shown in FIG. 1 may operate according to flowchart 500. For illustrative purposes, flowchart 500 is described with reference to FIG. 6. FIG. 6 shows a block diagram 600 of main memory 602, according to an embodiment. Main memory 602 is similar to main memory 402, as shown in FIGS. 4A and 4B. Accordingly, operating system 604, process loader 606, modification engine 608, process 610, runtime protector 612, stub library module 622 and shadow library module 624 are examples of operating system 404, process loader 406, modification engine 408, process 410, runtime protector 412, stub library module 422' and shadow library module 424, as shown in FIGS. 4A and 4B. Further structural and operational embodiments will be apparent to persons skilled in the relevant art(s) based on the discussion regarding flowchart 500. Flowchart 500 and main memory 602 are described as follows.
[0065] Flowchart 500 begins with step 502. At step 502, a first procedure call for a procedure in the at least one library module is caused to reference the second address of the at least one library module, the first procedure call being included in at least one of a binary image from which the computing process is created and one or more other library modules imported for the computing process. For example, as shown in FIG. 6, a first procedure call 630 for a procedure 632 is caused to reference a library module at the second address (i.e., shadow library module 624). First procedure call 630 is initially included in at least one of a binary image (e.g., binary image 116 of FIG. 1) from which process 610 is created or another library module imported for the binary image. First procedure call 630 is loaded into the address space of process 610 of main memory 602 during the binary image mapping process described above.
[0066] In accordance with an embodiment, the first procedure call for the procedure in the at least one library module is caused to reference the second address of the at least one library module by updating a data structure that stores an address at which the at least one library module is loaded into the memory with the second address of the at least one library module, thereby causing the first procedure call to reference the second address of the at least one library module. For example, with reference to FIG. 6, runtime protector 612 is configured to update data structure 634 with the second address of the at least one library module. In accordance with an embodiment, the data structure is the IAT of the binary image (i.e., binary image 116, as shown in FIG. 1) that is mapped into the address space of process 610.
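The sketch below illustrates one way to perform such an update by walking the mapped image's import descriptors and rewriting IAT entries in place. RedirectToShadow is a hypothetical lookup, maintained by the protector, from an original procedure address to its shadow-copy address; entries it does not recognize are left untouched.

```cpp
#include <windows.h>

void* RedirectToShadow(void* originalProc);  // hypothetical original-to-shadow lookup

// Rewrite a module's import address table so that redirected entries point
// into the shadow copies instead of the original (stub) libraries.
void PatchIat(HMODULE module) {
    auto base = reinterpret_cast<BYTE*>(module);
    auto dos  = reinterpret_cast<PIMAGE_DOS_HEADER>(base);
    auto nt   = reinterpret_cast<PIMAGE_NT_HEADERS>(base + dos->e_lfanew);
    const IMAGE_DATA_DIRECTORY& dir =
        nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_IMPORT];
    if (dir.VirtualAddress == 0) return;

    auto imp = reinterpret_cast<PIMAGE_IMPORT_DESCRIPTOR>(base + dir.VirtualAddress);
    for (; imp->Name != 0; ++imp) {
        auto thunk = reinterpret_cast<PIMAGE_THUNK_DATA>(base + imp->FirstThunk);
        for (; thunk->u1.Function != 0; ++thunk) {
            void* shadow = RedirectToShadow(
                reinterpret_cast<void*>(thunk->u1.Function));
            if (shadow == nullptr) continue;  // entry not served from a shadow copy
            DWORD oldProtect = 0;
            // IAT pages are typically read-only once loading completes.
            VirtualProtect(&thunk->u1.Function, sizeof(thunk->u1.Function),
                           PAGE_READWRITE, &oldProtect);
            thunk->u1.Function = reinterpret_cast<ULONG_PTR>(shadow);
            VirtualProtect(&thunk->u1.Function, sizeof(thunk->u1.Function),
                           oldProtect, &oldProtect);
        }
    }
}
```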
[0067] At step 504, a second procedure call for a procedure in the at least one library module is caused to reference the first address of the at least one library module, the second procedure call originating from malicious code that is injected into the computing process after loading of the binary image into memory is complete. For example, as shown in FIG. 6, a second procedure call 636 for procedure 632 is caused to reference a library module at the first address (i.e., stub library module 622). As shown in FIG. 6, second procedure call 636 originates from malicious code 638. Malicious code 638 is code that was injected into process 610 after binary image 116 (as shown in FIG. 1) is mapped into main memory 602.
[0068] As described above, in accordance with an embodiment, certain executable portions of a library module stored at the first address (i.e., the stub library module) may be designated as being non-accessible. In accordance with such an embodiment, a malicious code attack is detected when malicious code attempts to access such non-accessible sections.
[0069] FIG. 7 depicts a flowchart 700 of an example method for detecting a malicious code attack, according to an example embodiment. System 100 shown in FIG. 1 may operate according to flowchart 700. For illustrative purposes, flowchart 700 is described with reference to FIG. 8. FIG. 8 shows a block diagram 800 of a main memory 802, according to an embodiment. Main memory 802 is similar to main memory 602 shown in FIG. 6. Accordingly, operating system 804, process loader 806, modification engine 808, process 810, runtime protector 812, stub library module 822, shadow library module 824 and malicious code 838 are examples of operating system 604, process loader 606, modification engine 608, process 610, runtime protector 612, stub library module 622, shadow library module 624 and malicious code 638, as shown in FIG. 6. Further structural and operational embodiments will be apparent to persons skilled in the relevant art(s) based on the discussion regarding flowchart 700. Flowchart 700 and main memory 802 are described as follows.
[0070] Flowchart 700 begins with step 702. At step 702, an exception thrown by the operating system is detected, the exception being thrown as a result of malicious code attempting to access the library module stored at the first address. For example, with reference to FIG. 8, an exception handler 840 may also be injected into main memory 802; in particular, into the address space of process 810. Exception handler 840 may be injected into process 810 by modification engine 808. Exception handler 840 is configured to detect an exception 842 thrown by operating system 804. Exception 842 may be thrown in response to a procedure call 844 included in malicious code 838 attempting to access a procedure included in stub library module 822.
[0071] At step 704, a determination is made that a malicious attack has occurred in response to detecting the exception. For example, as shown in FIG. 8, runtime protector 812 may be configured to determine that a malicious attack has occurred in response to exception handler 840 detecting exception 842. Upon detecting exception 842, runtime protector 812 may be configured to redirect malicious code 838 to an isolated environment and/or kill a thread spawned by malicious code 838.
IV. Additional Embodiments for Process Modification
[0072] The foregoing description describes systems and methods for modifying a process by creating a copy of each of a process' imported libraries and mapping the copy to a randomized address to form a shadow library and modifying the libraries at the original address into a stub library module. However, as described below, a process may be modified in ways in addition to, or in lieu of, the techniques described above.
A. Import Address Table (IAT) and Export Address Table (EAT) Randomization
[0073] In accordance with an embodiment, at least one of a location at which the IAT and/or EAT (that are associated with the library module(s) of the shadow library) is loaded into main memory (e.g., main memory 104, as shown in FIG. 1) and one or more procedure names stored in the IAT and/or EAT are randomized. The location and/or the procedure name(s) may be randomized by the runtime protector (e.g., runtime protector 122, as shown in FIG. 1). These randomizations cause the failure of malicious code attacks that address system functionality via known locations of the IAT and/or EAT, or that access a specific entry using a procedure name included in at least one of the IAT and EAT. Consequently, an attacker would need to guess the location of the IAT and/or EAT in main memory and will not be able to use attack methods that enable access to procedures included in the IAT and/or EAT based on their names.
[0074] In accordance with another embodiment, one or more indices within the IAT and/or EAT that correspond to procedure name(s) included therein are randomized. The indices may be randomized by runtime protector 122, as shown in FIG. 1. By doing so, attacks accessing known system functionality via specific fixed indices corresponding to specific procedures will fail.
[0075] It is noted that the IAT and/or EAT associated with the stub library module(s) of the stub library are not randomized.
B. Resource Table Randomization
[0076] In accordance with an embodiment, at least one of a location at which the resource table(s) are loaded into main memory (e.g., main memory 104, as shown in FIG. 1), names, and/or references of resources included in the resource table(s) are randomized. The location of the resource table(s), the names, and/or references included in the resource table(s) may be randomized by the runtime protector (e.g., runtime protector 122, as shown in FIG. 1). Some attack techniques use resource table(s) to obtain a relative orientation within the process that can serve as a basis for building a dynamic attack using addresses learned in memory. Randomization of the resource table(s) will eliminate those exploitation techniques.
C. Process Environment Block and Thread Environment Block Randomization
[0077] In addition to tables and libraries, other data structures may also be loaded into the address space of a process. For example, a data structure used by the operating system to manage a process may also be loaded into the address space of the process. The structure may contain context information for the process that enables the operating system to manage execution of the process. Such a data structure may be referred to as a process environment block (PEB).
[0078] Another data structure that may be loaded in the address space of a process is a data structure used by the operating system to manage one or more threads associated with the process. The data structure may contain context information for the thread(s) that enables the operating system to manage execution of the thread(s). Such a data structure may be referred to as a thread environment block (TEB).
[0079] In accordance with an embodiment, at least one of one or more elements, names and/or references included in the PEB and/or the TEB and/or the location of the PEB and/or the TEB may be randomized. The at least one of the element(s), name(s), and/or reference(s) and/or the locations of the PEB and/or TEB may be randomized by the runtime protector (e.g., runtime protector 122, as shown in FIG. 1). Malicious attacks that attempt to leverage information included in the PEB and/or TEB to determine locations of certain procedures, library modules, and/or tables (e.g., the IAT and/or EAT) will fail.
D. Exception Handler Randomization
[0080] As described above with reference to FIG. 8, modification engine 808 may inject an exception handler 840 into the address space of a process 810. In accordance with an embodiment, the location at which exception handler 840 is injected may be randomized. The location may be randomized by modification engine 808. Some attack techniques abuse the exception handling mechanism embedded in processes to conduct malicious acts. Exception handlers are meant to serve program control in case of runtime errors. However, attackers abuse this capability by accessing exception handlers at their known addresses and then injecting malicious code therein. The malicious code may cause a system error, which triggers the exception handler, thereby resulting in the malicious code taking control of the process. By randomizing the location of the exception handler, attackers will not be able to abuse it for obtaining runtime code control.
E. Other Randomizations
[0081] It is noted that tables and/or structures in addition to or in lieu of the tables and/or structures described above in subsections A-D may also be randomized to modify a process. However, such tables and/or structures are not described for the sake of brevity.
V. Example Computer System Implementation
[0082] The embodiments described herein, including systems, methods/processes, and/or apparatuses, may be implemented using well known processing devices, telephones (land line based telephones, conference phone terminals, smart phones and/or mobile phones), interactive television, servers, and/or computers, such as a computer 900 shown in FIG. 9. It should be noted that computer 900 may represent computing devices, processing devices, traditional computers, and/or the like in one or more embodiments. For example, computing system 100 of FIG. 1, and any of the sub-systems, components, and/or models respectively contained therein and/or associated therewith, may be implemented using one or more computers 900.
[0083] Computer 900 can be any commercially available and well known communication device, processing device, and/or computer capable of performing the functions described herein, such as devices/computers available from International Business Machines®, Apple®, Sun®, HP®, Dell®, Cray®, Samsung®, Nokia®, etc. Computer 900 may be any type of computer, including a desktop computer, a server, etc.
[0084] Computer 900 includes one or more processors (also called central processing units, or CPUs), such as a processor 906. Processor 906 is connected to a communication infrastructure 902, such as a communication bus. In some embodiments, processor 906 can simultaneously operate multiple computing threads, and in some embodiments, processor 906 may comprise one or more processors.
[0085] Computer 900 also includes a primary or main memory 908, such as random access memory (RAM). Main memory 908 has stored therein control logic 924 (computer software), and data.
[0086] Computer 900 also includes one or more secondary storage devices 910.
Secondary storage devices 910 include, for example, a hard disk drive 912 and/or a removable storage device or drive 914, as well as other types of storage devices, such as memory cards and memory sticks. For instance, computer 900 may include an industry standard interface, such as a universal serial bus (USB) interface for interfacing with devices such as a memory stick. Removable storage drive 914 represents a floppy disk drive, a magnetic tape drive, a compact disk drive, an optical storage device, tape backup, etc.
[0087] Removable storage drive 914 interacts with a removable storage unit 916.
Removable storage unit 916 includes a computer useable or readable storage medium 918 having stored therein computer software 926 (control logic) and/or data. Removable storage unit 916 represents a floppy disk, magnetic tape, compact disk, DVD, optical storage disk, or any other computer data storage device. Removable storage drive 914 reads from and/or writes to removable storage unit 916 in a well-known manner.
[0088] Computer 900 also includes input/output/display devices 904, such as touchscreens, LED and LCD displays, monitors, keyboards, pointing devices, etc.
[0089] Computer 900 further includes a communication or network interface 920.
Communication interface 920 enables computer 900 to communicate with remote devices. For example, communication interface 920 allows computer 900 to communicate over communication networks or mediums 922 (representing a form of a computer useable or readable medium), such as LANs, WANs, the Internet, etc. Network interface 920 may interface with remote sites or networks via wired or wireless connections.
[0090] Control logic 928 may be transmitted to and from computer 900 via the communication medium 922.
[0091] Any apparatus or manufacture comprising a computer useable or readable medium having control logic (software) stored therein is referred to herein as a computer program product or program storage device. This includes, but is not limited to, computer 900, main memory 908, secondary storage devices 910, and removable storage unit 916. Such computer program products, having control logic stored therein that, when executed by one or more data processing devices, cause such data processing devices to operate as described herein, represent embodiments.
[0092] Techniques, including methods, and embodiments described herein may be implemented by hardware (digital and/or analog) or a combination of hardware with one or both of software and/or firmware. Techniques described herein may be implemented by one or more components. Embodiments may comprise computer program products comprising logic (e.g., in the form of program code or software as well as firmware) stored on any computer useable medium, which may be integrated in or separate from other components. Such program code, when executed by one or more processor circuits, causes a device to operate as described herein. Devices in which embodiments may be implemented may include storage, such as storage drives, memory devices, and further types of physical hardware computer-readable storage media. Examples of such computer-readable storage media include a hard disk, a removable magnetic disk, a removable optical disk, flash memory cards, digital video disks, random access memories (RAMs), read only memories (ROM), and other types of physical hardware storage media. In greater detail, examples of such computer-readable storage media include, but are not limited to, a hard disk associated with a hard disk drive, a removable magnetic disk, a removable optical disk (e.g., CDROMs, DVDs, etc.), zip disks, tapes, magnetic storage devices, MEMS (micro-electromechanical systems) storage, nanotechnology-based storage devices, flash memory cards, digital video discs, RAM devices, ROM devices, and further types of physical hardware storage media. Such computer-readable storage media may, for example, store computer program logic, e.g., program modules, comprising computer executable instructions that, when executed by one or more processor circuits, provide and/or maintain one or more aspects of functionality described herein with reference to the figures, as well as any and all components, capabilities, and functions therein and/or further embodiments described herein.
[0093] Such computer-readable storage media are distinguished from and non-overlapping with communication media (do not include communication media). Communication media embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wireless media such as acoustic, RF, infrared, and other wireless media, as well as wired media and signals transmitted over wired media. Embodiments are also directed to such communication media.
[0094] The techniques and embodiments described herein may be implemented as, or in, various types of devices. For instance, embodiments may be included in mobile devices such as laptop computers, handheld devices such as mobile phones (e.g., cellular and smart phones), handheld computers, and further types of mobile devices, desktop and/or server computers. A device, as defined herein, is a machine or manufacture as defined by 35 U.S.C. § 101. Devices may include digital circuits, analog circuits, or a combination thereof. Devices may include one or more processor circuits (e.g., central processing units (CPUs) (e.g., processor 906 of FIG. 9), microprocessors, digital signal processors (DSPs), and further types of physical hardware processor circuits) and/or may be implemented with any semiconductor technology in a semiconductor material, including one or more of a Bipolar Junction Transistor (BJT), a heterojunction bipolar transistor (HBT), a metal oxide field effect transistor (MOSFET) device, a metal semiconductor field effect transistor (MESFET) or other transconductor or transistor technology device. Such devices may use the same or alternative configurations other than the configuration illustrated in embodiments presented herein.
VI. Conclusion
While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be understood by those skilled in the relevant art(s) that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined in the appended claims. Accordingly, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims

WHAT IS CLAIMED IS:
1. A method, comprising:
determining that a process loader of an operating system has initiated the creation of a computing process; and
in response to determining that the process loader has initiated the creation of the computing process, injecting code in the computing process that is configured to modify the computing process by:
determining that at least one library module of the computing process is to be loaded into memory;
storing the at least one library module at a first address in the memory;
copying the at least one library module stored at the first address to a second address in the memory that is different than the first address; and
modifying the at least one library module stored at the first address into a stub library module.
2. The method of claim 1, wherein determining that at least one library module of the computing process is to be loaded into memory comprises:
intercepting a procedure call initiated by the process loader to determine that at least one library module of the computing process is to be loaded into memory, the procedure call identifying the at least one library module of the computing process that is to be loaded.
3. The method of claim 1, further comprising:
causing a first procedure call for a procedure in the at least one library module to reference the second address of the at least one library module, the first procedure call being included in at least one of a binary image from which the computing process is created and one or more other library modules imported for the computing process; and
causing a second procedure call for a procedure in the at least one library module to reference the first address of the at least one library module, the second procedure call originating from malicious code that is injected into the computing process after loading of the binary image into memory is complete.
4. The method of claim 3, wherein said causing a first procedure call for a procedure in the at least one library module to reference the second address of the at least one library module comprises:
updating a data structure that stores an address at which the at least one library module is loaded into the memory with the second address of the at least one library module, thereby causing the first procedure call to reference the second address of the at least one library module.
5. The method of claim 4, wherein the data structure is an import address table.
6. The method of claim 5, further comprising:
randomizing at least one of the following:
a location at which the import address table is loaded into the memory;
one or more procedure names stored in the import address table; and
one or more indices within the import address table that correspond to the one or more procedure names.
7. The method of claim 1, wherein modifying the library module stored at the first address into a stub library module comprises:
causing one or more executable portions of the library module stored at the first address to be designated as non-accessible.
8. The method of claim 7, further comprising:
detecting an exception thrown by the operating system, the exception being thrown as a result of malicious code attempting to access the library module stored at the first address; and
determining that a malicious attack has occurred in response to detecting the exception.
9. The method of claim 1, further comprising:
randomizing the second address.
10. The method of claim 1, further comprising:
randomizing at least one of the following:
a location at which an export address table is loaded into the memory, the export address table including one or more addresses of one or more procedures that are exportable by the computing process;
one or more procedure names stored in the export address table; and
one or more indices within the export address table that correspond to the one or more procedure names.
11. The method of claim 1, further comprising:
randomizing at least one of one or more elements of a first data structure used by the operating system to manage the computing process and one or more elements of a second data structure used by the operating system to manage one or more threads associated with the computing process.
12. The method of claim 11, wherein the first data structure is a process environment block and the second data structure is a thread environment block.
13. The method of claim 1, further comprising:
randomizing one or more elements of one or more resource tables including resource information for the computing process.
14. A system, comprising:
one or more processing units; and
a memory coupled to the one or more processing units, the memory storing software modules for execution by the one or more processing units, the software modules comprising: a runtime protector configured to:
load a library module for the computing process at a first address in the memory;
copy the library module stored at the first address to a second address in the memory that is different than the first address, wherein code that accesses the library module stored at the second address is designated as being non-malicious code; and
modify the library module stored at the first address into a stub library module, wherein code attempting to access the library module stored at the first address is designated as being malicious code.
15. The system of claim 14, wherein the runtime protector is further configured to:
update an import address table that stores one or more addresses at which one or more library modules are loaded into the memory with the second address of the library module, thereby causing code originating from a binary image from which the computing process is created to access the library module at the second address instead of the first address.
16. The system of claim 14, wherein the runtime protector is further configured to:
cause one or more executable portions of the library module stored at the first address to be designated as non-accessible.
17. The system of claim 16, the software modules further comprising:
an exception handler configured to detect an exception thrown by an operating system, the exception being thrown as a result of malicious code attempting to access the library module stored at the first address, the runtime protector further configured to:
determine that a malicious attack has occurred in response to a determination that the exception handler has detected the exception.
18. The system of claim 17, wherein the runtime protector is further configured to:
randomize a location at which the exception handler is loaded into the memory.
19. The system of claim 14, wherein the runtime protector is further configured to:
randomize the second address.
20. A computer-readable storage medium having program instructions recorded thereon that, when executed by a processing device, perform a method for modifying a computing process, the method comprising:
loading a library module for the computing process at a first address in the memory;
copying the library module stored at the first address to a second, randomized address in the memory that is different than the first address, wherein code that accesses the library module stored at the second address is designated as being non-malicious code; and
modifying the library module stored at the first address into a stub library module, wherein code attempting to access the library module stored at the first address is designated as being malicious code.
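To make the claimed copy-and-stub flow concrete, the following is a deliberately simplified sketch, assuming a Windows user-mode protector; it omits relocation fix-ups, per-section page protections and most error handling, and copy_and_stub is a name invented for the sketch.

    #include <windows.h>
    #include <string.h>

    static BYTE *copy_and_stub(const wchar_t *dll_name, SIZE_T *image_size_out)
    {
        HMODULE original = LoadLibraryW(dll_name);            /* first address */
        if (original == NULL)
            return NULL;

        /* The image size is taken from the module's own PE headers. */
        IMAGE_DOS_HEADER *dos = (IMAGE_DOS_HEADER *)original;
        IMAGE_NT_HEADERS *nt  = (IMAGE_NT_HEADERS *)((BYTE *)original + dos->e_lfanew);
        SIZE_T image_size = nt->OptionalHeader.SizeOfImage;

        /* Second address: let the system choose a fresh, unpredictable region;
         * a real protector would also apply relocations and restore
         * per-section protections on the copy. */
        BYTE *copy = (BYTE *)VirtualAlloc(NULL, image_size,
                                          MEM_COMMIT | MEM_RESERVE,
                                          PAGE_EXECUTE_READWRITE);
        if (copy == NULL)
            return NULL;
        memcpy(copy, (const void *)original, image_size);

        /* Turn the original into a stub: any later access faults and can be
         * flagged as malicious (see the exception-handler sketch above). */
        DWORD old_protect;
        VirtualProtect((LPVOID)original, image_size, PAGE_NOACCESS, &old_protect);

        *image_size_out = image_size;
        return copy;                                          /* second address */
    }

Because the second address is chosen by the allocator at run time, code that only knows the first address (for example, injected code relying on hard-coded or leaked addresses) faults in the stub instead of reaching live library code, while code created from the process image, whose references have been updated, continues to run normally.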
PCT/IB2015/053394 2014-11-17 2015-05-08 Malicious code protection for computer systems based on process modification WO2016079602A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP15723305.7A EP3123311B8 (en) 2014-11-17 2015-05-08 Malicious code protection for computer systems based on process modification
US15/324,656 US10528735B2 (en) 2014-11-17 2015-05-08 Malicious code protection for computer systems based on process modification
IL249962A IL249962B (en) 2014-11-17 2017-01-08 Malicious code protection for computer systems based on process modification

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201462080841P 2014-11-17 2014-11-17
US62/080,841 2014-11-17

Publications (1)

Publication Number Publication Date
WO2016079602A1 true WO2016079602A1 (en) 2016-05-26

Family

ID=53189102

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2015/053394 WO2016079602A1 (en) 2014-11-17 2015-05-08 Malicious code protection for computer systems based on process modification

Country Status (4)

Country Link
US (1) US10528735B2 (en)
EP (1) EP3123311B8 (en)
IL (1) IL249962B (en)
WO (1) WO2016079602A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9898615B1 (en) * 2015-08-20 2018-02-20 Symantec Corporation Methods to impede common file/process hiding techniques
WO2018052510A1 (en) * 2016-09-13 2018-03-22 Symantec Corporation Systems and methods for detecting malicious processes on computing devices
WO2018193429A1 (en) * 2017-04-20 2018-10-25 Morphisec Information Security Ltd. System and method for runtime detection, analysis and signature determination of obfuscated malicious code
WO2019180667A1 (en) * 2018-03-22 2019-09-26 Morphisec Information Security 2014 Ltd. System and method for preventing unwanted bundled software installation
US10528735B2 (en) 2014-11-17 2020-01-07 Morphisec Information Security 2014 Ltd. Malicious code protection for computer systems based on process modification

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9710648B2 (en) 2014-08-11 2017-07-18 Sentinel Labs Israel Ltd. Method of malware detection and system thereof
US11507663B2 (en) 2014-08-11 2022-11-22 Sentinel Labs Israel Ltd. Method of remediating operations performed by a program and system thereof
US10102374B1 (en) 2014-08-11 2018-10-16 Sentinel Labs Israel Ltd. Method of remediating a program and system thereof by undoing operations
US10108798B1 (en) 2016-01-04 2018-10-23 Smart Information Flow Technologies LLC Methods and systems for defending against cyber-attacks
US9894085B1 (en) * 2016-03-08 2018-02-13 Symantec Corporation Systems and methods for categorizing processes as malicious
US10372909B2 (en) * 2016-08-19 2019-08-06 Hewlett Packard Enterprise Development Lp Determining whether process is infected with malware
US10482248B2 (en) * 2016-11-09 2019-11-19 Cylance Inc. Shellcode detection
US11695800B2 (en) 2016-12-19 2023-07-04 SentinelOne, Inc. Deceiving attackers accessing network data
US11616812B2 (en) 2016-12-19 2023-03-28 Attivo Networks Inc. Deceiving attackers accessing active directory data
EP3568790B1 (en) * 2017-01-11 2022-02-23 Morphisec Information Security 2014 Ltd. Protecting computing devices from a malicious process by exposing false information
US10783246B2 (en) 2017-01-31 2020-09-22 Hewlett Packard Enterprise Development Lp Comparing structural information of a snapshot of system memory
US10783239B2 (en) * 2017-08-01 2020-09-22 Pc Matic, Inc. System, method, and apparatus for computer security
EP3643040A4 (en) 2017-08-08 2021-06-09 SentinelOne, Inc. Methods, systems, and devices for dynamically modeling and grouping endpoints for edge networking
US10698752B2 (en) 2017-10-26 2020-06-30 Bank Of America Corporation Preventing unauthorized access to secure enterprise information systems using a multi-intercept system
US10754950B2 (en) * 2017-11-30 2020-08-25 Assured Information Security, Inc. Entity resolution-based malicious file detection
US11470115B2 (en) 2018-02-09 2022-10-11 Attivo Networks, Inc. Implementing decoys in a network environment
KR102186221B1 (en) * 2018-11-29 2020-12-03 한국전자통신연구원 Method for randomzing address space layout of embedded system based on hardware and apparatus using the same
US11061829B2 (en) * 2019-04-09 2021-07-13 Red Hat, Inc. Prefetch support with address space randomization
WO2020236981A1 (en) 2019-05-20 2020-11-26 Sentinel Labs Israel Ltd. Systems and methods for executable code detection, automatic feature extraction and position independent code detection
US11340915B2 (en) 2019-11-26 2022-05-24 RunSafe Security, Inc. Encaching and sharing transformed libraries
EP3916598A1 (en) * 2020-05-26 2021-12-01 Argus Cyber Security Ltd System and method for detecting exploitation of a vulnerability of software
US11579857B2 (en) 2020-12-16 2023-02-14 Sentinel Labs Israel Ltd. Systems, methods and devices for device fingerprinting and automatic deployment of software in a computing network using a peer-to-peer approach
CN115145571A (en) * 2021-03-31 2022-10-04 武汉斗鱼鱼乐网络科技有限公司 Method, apparatus and medium for hiding system function calls in program core code
US11681794B2 (en) * 2021-04-07 2023-06-20 Oracle International Corporation ASLR bypass
US11899782B1 (en) 2021-07-13 2024-02-13 SentinelOne, Inc. Preserving DLL hooks

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2102883A1 (en) * 1993-02-26 1994-08-27 James W. Arendt System and method for lazy loading of shared libraries
US6728963B1 (en) * 1998-09-09 2004-04-27 Microsoft Corporation Highly componentized system architecture with a loadable interprocess communication manager
US6499137B1 (en) * 1998-10-02 2002-12-24 Microsoft Corporation Reversible load-time dynamic linking
CA2447451C (en) 2000-05-12 2013-02-12 Xtreamlok Pty. Ltd. Information security method and system
US6941510B1 (en) * 2000-06-06 2005-09-06 Groove Networks, Inc. Method and apparatus for efficient management of XML documents
US7631292B2 (en) * 2003-11-05 2009-12-08 Microsoft Corporation Code individualism and execution protection
US7415702B1 (en) 2005-01-20 2008-08-19 Unisys Corporation Method for zero overhead switching of alternate algorithms in a computer program
US7591016B2 (en) 2005-04-14 2009-09-15 Webroot Software, Inc. System and method for scanning memory for pestware offset signatures
CA2604544A1 (en) 2005-04-18 2006-10-26 The Trustees Of Columbia University In The City Of New York Systems and methods for detecting and inhibiting attacks using honeypots
US8763103B2 (en) 2006-04-21 2014-06-24 The Trustees Of Columbia University In The City Of New York Systems and methods for inhibiting attacks on applications
US8689193B2 (en) 2006-11-01 2014-04-01 At&T Intellectual Property Ii, L.P. Method and apparatus for protecting a software application against a virus
WO2008074527A1 (en) * 2006-12-21 2008-06-26 International Business Machines Corporation Method, system and computer program for identifying interpreted programs through class loading sequences
US8245289B2 (en) 2007-11-09 2012-08-14 International Business Machines Corporation Methods and systems for preventing security breaches
US8341627B2 (en) 2009-08-21 2012-12-25 Mcafee, Inc. Method and system for providing user space address protection from writable memory area in a virtual environment
US9678747B2 (en) 2011-02-08 2017-06-13 Openspan, Inc. Code injection and code interception in an operating system with multiple subsystem environments
US8713679B2 (en) 2011-02-18 2014-04-29 Microsoft Corporation Detection of code-based malware
US9432298B1 (en) * 2011-12-09 2016-08-30 P4tents1, LLC System, method, and computer program product for improving memory systems
US8694738B2 (en) 2011-10-11 2014-04-08 Mcafee, Inc. System and method for critical address space protection in a hypervisor environment
US20150294114A1 (en) 2012-09-28 2015-10-15 Hewlett-Packard Development Company, L.P. Application randomization
US9135435B2 (en) 2013-02-13 2015-09-15 Intel Corporation Binary translator driven program state relocation
US9218467B2 (en) 2013-05-29 2015-12-22 Raytheon Cyber Products, Llc Intra stack frame randomization for protecting applications against code injection attack
US10311227B2 (en) * 2014-09-30 2019-06-04 Apple Inc. Obfuscation of an address space layout randomization mapping in a data processing system
US10528735B2 (en) 2014-11-17 2020-01-07 Morphisec Information Security 2014 Ltd. Malicious code protection for computer systems based on process modification

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8627305B1 (en) * 2009-03-24 2014-01-07 Mcafee, Inc. System, method, and computer program product for hooking code inserted into an address space of a new process

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10528735B2 (en) 2014-11-17 2020-01-07 Morphisec Information Security 2014 Ltd. Malicious code protection for computer systems based on process modification
US9898615B1 (en) * 2015-08-20 2018-02-20 Symantec Corporation Methods to impede common file/process hiding techniques
WO2018052510A1 (en) * 2016-09-13 2018-03-22 Symantec Corporation Systems and methods for detecting malicious processes on computing devices
US10049214B2 (en) 2016-09-13 2018-08-14 Symantec Corporation Systems and methods for detecting malicious processes on computing devices
WO2018193429A1 (en) * 2017-04-20 2018-10-25 Morphisec Information Security Ltd. System and method for runtime detection, analysis and signature determination of obfuscated malicious code
US11822654B2 (en) 2017-04-20 2023-11-21 Morphisec Information Security 2014 Ltd. System and method for runtime detection, analysis and signature determination of obfuscated malicious code
WO2019180667A1 (en) * 2018-03-22 2019-09-26 Morphisec Information Security 2014 Ltd. System and method for preventing unwanted bundled software installation
US11847222B2 (en) 2018-03-22 2023-12-19 Morphisec Information Security 2014 Ltd. System and method for preventing unwanted bundled software installation

Also Published As

Publication number Publication date
IL249962A0 (en) 2017-03-30
IL249962B (en) 2020-08-31
EP3123311B1 (en) 2020-04-08
EP3123311A1 (en) 2017-02-01
EP3123311B8 (en) 2021-03-03
US10528735B2 (en) 2020-01-07
US20170206357A1 (en) 2017-07-20

Similar Documents

Publication Publication Date Title
US10528735B2 (en) Malicious code protection for computer systems based on process modification
EP3230919B1 (en) Automated classification of exploits based on runtime environmental features
US11822654B2 (en) System and method for runtime detection, analysis and signature determination of obfuscated malicious code
RU2589862C1 (en) Method of detecting malicious code in random-access memory
US8099596B1 (en) System and method for malware protection using virtualization
US8904537B2 (en) Malware detection
JP2017527864A (en) Patch file analysis system and analysis method
US11171987B2 (en) Protecting computing devices from a malicious process by exposing false information
US10713357B2 (en) Detecting lateral movement using a hypervisor
US11847222B2 (en) System and method for preventing unwanted bundled software installation
US11914710B2 (en) System and method for application tamper discovery
RU2592383C1 (en) Method of creating antivirus record when detecting malicious code in random-access memory
US20220092171A1 (en) Malicious code protection for computer systems based on system call table modification and runtime application patching
EP4310707A1 (en) System and method for detecting malicious code by an interpreter in a computing device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15723305

Country of ref document: EP

Kind code of ref document: A1

REEP Request for entry into the european phase

Ref document number: 2015723305

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2015723305

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 15324656

Country of ref document: US

WWE Wipo information: entry into national phase

Ref document number: 249962

Country of ref document: IL

NENP Non-entry into the national phase

Ref country code: DE