AU2007221811A1 - Detecting an audio/visual threat - Google Patents

Detecting an audio/visual threat

Info

Publication number
AU2007221811A1
Authority
AU
Australia
Prior art keywords
entity
threat
activity
processing system
audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2007221811A
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
PC Tools Technology Pty Ltd
Original Assignee
PC Tools Technology Pty Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2006905486A external-priority patent/AU2006905486A0/en
Application filed by PC Tools Technology Pty Ltd filed Critical PC Tools Technology Pty Ltd
Priority to AU2007221811A priority Critical patent/AU2007221811A1/en
Publication of AU2007221811A1 publication Critical patent/AU2007221811A1/en
Abandoned legal-status Critical Current

Landscapes

  • Storage Device Security (AREA)

Description

Australian Patents Act 1990 Regulation 3.2
ORIGINAL COMPLETE SPECIFICATION
STANDARD PATENT

Invention Title: Detecting an audio/visual threat

The following statement is a full description of this invention, including the best method of performing it known to me/us:-

DETECTING AN AUDIO/VISUAL THREAT

Technical Field

[001] The present invention generally relates to a method, system, computer readable medium of instructions and/or computer program product for detecting and optionally restricting a threat which transmits audio and/or visual data indicative of user activity at a processing system.
Copyright

[002] A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent disclosure, as it appears in the Patent Office patent files or records, but otherwise reserves all copyright rights whatsoever.
Background Art

[003] As used herein a "threat" includes malicious software, also known as "malware" or "pestware", which includes software that is included or inserted in a part of a processing system for a harmful purpose. The term threat should be read to include possible, potential and actual threats. Types of malware can include, but are not limited to, malicious libraries, viruses, worms, Trojans, adware, malicious active content and denial of service attacks. In the case of invasion of privacy for the purposes of fraud or theft of identity, malicious software that passively observes the use of a computer is known as "spyware".
[004] A hook (also known as a hook procedure or hook function), as used herein, generally refers to a callback function provided by a software application that receives certain data before the normal or intended recipient of the data. A hook function can thus examine or modify certain data before passing on the data. Therefore, a hook function allows a software application to examine data before the data is passed to the intended recipient.
[005] An API ("Application Programming Interface") hook (also known as an API interception), as used herein as a type of hook, refers to a callback function provided by an application that replaces functionality provided by an operating system's API. An API generally refers to an interface that is defined in terms of a set of functions and procedures, and enables a program to gain access to facilities within an application. An API hook can be inserted between an API call and an API procedure to examine or modify function parameters before passing parameters on to an actual or intended function. An API hook may also choose not to pass on certain types of requests to an actual or intended function.
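By way of non-limiting illustration, the Python sketch below shows the general shape of such an API hook: a wrapper that receives the call before the intended procedure, may examine or modify the parameters, and may decline to pass the request on. All names are hypothetical and form no part of the specification.

```python
# Illustrative sketch only: all names here are hypothetical.

def real_open_input_stream(device_id):
    """Stands in for the operating system's intended API procedure."""
    return f"stream-handle-for-{device_id}"

def hooked_open_input_stream(device_id):
    """API hook inserted between the API call and the API procedure."""
    # The hook can examine or modify the parameters before passing them on.
    print(f"hook: intercepted a request to open {device_id}")
    if device_id in ("webcam0", "microphone0"):
        # The hook may also choose not to pass the request on at all.
        return None
    return real_open_input_stream(device_id)

print(hooked_open_input_stream("webcam0"))  # None: the request was not passed on
```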
[006] A process, as used herein, is at least one of a running software program or other computing operation, or a part of a running software program or other computing operation, that performs a task.
[007] A hook chain as used herein, is a list of pointers to special, application-defined callback functions called hook procedures. When a message occurs that is associated with a particular type of hook, the operating system passes the message to each hook procedure referenced in the hook chain, one after the other. The action of a hook procedure can depend on the type of hook involved. For example, the hook procedures for some types of hooks can only monitor messages, others can modify messages or stop their progress through the chain, restricting them from reaching the next hook procedure or a destination window.
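A hook chain of the kind described in [007] can be pictured as an ordered list of callback functions, each of which may monitor the message, modify it, or stop its progress through the chain. The following sketch is illustrative only, with invented hook procedures:

```python
# Illustrative hook chain: each hook procedure may monitor the message,
# modify it, or stop its progress through the chain.

def logging_hook(message):
    print(f"observed: {message}")
    return message                     # monitor only

def blocking_hook(message):
    if message.get("type") == "open_webcam":
        return None                    # stop the message reaching later hooks
    return message

hook_chain = [logging_hook, blocking_hook]

def dispatch(message):
    for hook in hook_chain:            # the OS passes the message to each hook in turn
        message = hook(message)
        if message is None:
            return None                # progress through the chain was restricted
    return message                     # the message reaches its destination

dispatch({"type": "open_webcam"})
```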
[008] In a networked information or data communications system, a user has access to one or more terminals which are capable of requesting and/or receiving information or data from local or remote information sources. In such a communications system, a terminal may be a type of processing system, computer or computerised device, personal computer, mobile, cellular or satellite telephone, mobile data terminal, portable computer, Personal Digital Assistant (PDA), pager, thin client, or any other similar type of digital electronic device. The capability of such a terminal to request and/or receive information or data can be provided by software, hardware and/or firmware. A terminal may include or be associated with other devices, for example a local data storage device such as a hard disk drive or solid state drive.
[009] An information source can include a server, or any type of terminal, that may be associated with one or more storage devices that are able to store information or data, for example in one or more databases residing on a storage device. The exchange of information (ie. the request and/or receipt of information or data) between a terminal and an information source, or other terminal(s), is facilitated by a communication means. The communication means can be realised by physical cables, for example a metallic cable such as a telephone line, semi-conducting cables, electromagnetic signals, for example radio-frequency signals or infra-red signals, optical fibre cables, satellite links or any other such medium or combination thereof connected to a network infrastructure.
[010] A system registry is a database used by operating systems, for example Windows platforms. The system registry includes information needed to configure the operating system. The operating system refers to the registry for information ranging from user profiles, to which applications are installed on the machine, to what hardware is installed and which ports are registered.
[011] An entity can include, but is not limited to, a file, an object, a class, a collection of grouped data, a library, a variable, a process, and/or a device.
[012] Local communication devices such as video cameras (also commonly referred to as "webcams") and microphones are becoming more commonplace in modern processing systems. For example, current laptop computers are provided with in-built webcams and microphones.
[013] Due to such devices becoming more popular, threats, such as malware, have recently been configured to utilise local communication devices for exploitation. Herein, this form of threat is referred to as an "audio/visual threat".
[014] In some instances audio/visual threats have been configured to spy on an unsuspecting user of a compromised processing system using a webcam or microphone. The visual and/or audio data recorded by a webcam can be transferred to a third party, wherein the third party may use the visual and/or audio data for exploitation, such as determining when a user has left their premises so that a robbery can be performed. In some instances the audio/visual data has simply been used for voyeuristic activities.
[015] In other instances, if the user has unsuspectingly left private information, such as details of their credit card, within visual range of the webcam, the visual data captured by the threat can be analysed by a third party to determine the details of the credit card for financial exploitation.
[016] In other instances, the webcam can be controlled by the threat to record typing performed by the user on the keyboard of the processing system, in order to determine secret information such as usernames and passwords.
[017] Recently, proof of concept computer programs have been developed which can utilise the sound of a user typing on the keyboard, recorded by a microphone, to determine keystrokes performed by the user within an acceptable accuracy. Again, secret information such as usernames and passwords can be determined using the audio data obtained by an appropriately configured threat controlling the microphone of the compromised processing system.
[018] Current approaches to detect audio/visual threats have involved using signature based detection software. Such software includes a database of signatures, wherein each signature generally represents a file size of the malware, a file name associated with the malware, a cryptographic hash or checksum value of the malware, and pseudocode which represents program flow of the threat.
[019] However, signature based approaches are becoming unsuitable as it can take a number of days for a vendor of such software to develop an appropriate signature which can detect and restrict the audio/visual threat. Between the time when the audio/visual threat compromises a user's processing system and the time when an appropriate signature is released by the vendor, the audio/visual threat can exploit audio/visual data obtained from the compromised processing system. Furthermore, unless a user continually updates signatures for their malware detection software, this compromised time period can also be unsatisfactorily extended.
[020] Other approaches to deal with audio/visual threats have been to unplug microphones and webcams from the processing system. In some instances, placing an object such as a container over the webcam or microphone has also been suggested in order to overcome the compromised time period prior to a signature being released. Not only is this unsightly, but it can sometimes be extremely difficult and inconvenient for users of processing systems where the webcam and/or the microphone is in-built, such as a laptop computer.
[021] Therefore, there exists a need for a method, system, computer readable medium of instructions, and/or a computer program product which can detect an audio/visual threat which has compromised a processing system and optionally restrict an audio/visual threat performing malicious activity in the processing system which overcomes or at least ameliorates at least one of the above mentioned disadvantages.
[022] The reference in this specification to any prior publication (or information derived from it), or to any matter which is known, is not, and should not be taken as an acknowledgment or admission or any form of suggestion that that prior publication (or information derived from it) or known matter forms part of the common general knowledge in the field of endeavour to which this specification relates.
Disclosure Of Invention

[023] In one broad form there is provided a method of detecting if a processing system has been compromised with an audio/visual threat, wherein the method includes: intercepting one or more requests to perform an activity associated with an audio and/or visual communication device of the processing system; and performing a behavioural analysis of the processing system to determine if the processing system exhibits behavioural characteristics indicative of the processing system having been compromised with an audio/visual threat.
[024] In one form, the method includes: determining, using the request to perform the activity, an entity associated with the activity; and performing the behavioural analysis in relation to the entity.
[025] In another form, performing the behavioural analysis includes applying one or more behavioural rules.
[026] In one embodiment, the one or more behavioural rules include at least one of: determining if the entity is indicative of at least one of audio signals and visual signals being obtained by the audio and/or visual communication device; determining if the entity is being interacted with via a graphical user interface currently displayed on the desktop of the processing system; determining if the entity is recording data indicative of at least one of audio data and visual data; determining if the entity was launched by the user; determining if the entity is attempting to connect to a remote network; and determining if the entity is requesting the activity to be performed at regular intervals.
[027] In another embodiment, a requesting entity requests the activity to be performed in relation to a target entity, wherein the method includes: determining, using a filter module, if the activity is suspicious or non-suspicious; and in response to determining that the activity is suspicious, analysing, using an analysis module, at least one of the activity, the requesting entity and the target entity.
[028] In one aspect, the filter module filters the activity according to the requesting entity and the target entity to determine if the activity is suspicious or non-suspicious.
[029] In another aspect, the analysis module includes a list of activity sequences indicative of an audio/visual threat, wherein analysing the suspicious activity includes comparing the suspicious activity and at least one of activities which occurred prior to the suspicious activity and activities which occurred after the suspicious activity to the list of activity sequences, wherein in response to a positive comparison, the activity is determined to be associated with an audio/visual threat.
[030] In one form, performing the behavioural analysis includes: determining an entity associated with the intercepted activity; determining an entity threat value for the entity, the entity threat value being indicative of a level of threat that the entity represents to the processing system, wherein the entity threat value is determined based on one or more characteristics of the entity; and comparing the entity threat value to an entity threat threshold to identify if the entity is malicious.
[031] In another form, each of the one or more characteristics of the entity is associated with a respective characteristic threat value, wherein the method includes calculating the entity threat value using at least some of the characteristic threat values for the one or more characteristics of the entity.
[032] In one embodiment, at least one of the one or more characteristics of the entity is associated with a characteristic threat value formula, wherein the method includes calculating, using the characteristic threat value formula, the characteristic threat value.
[033] In another embodiment, at least one characteristic threat value is temporally dependent, wherein the method includes calculating the at least one characteristic threat value for the entity using the characteristic threat value formula and a temporal value.
[034] In one aspect, the at least one characteristic is a behaviour associated with the entity, wherein the method includes calculating the at least one characteristic threat value for the entity using the characteristic threat value formula and a frequency of instances the behaviour has been performed.
[035] In another aspect, the one or more characteristics includes at least one of one or more legitimate characteristics indicative of non-malicious activity and one or more illegitimate characteristics indicative of malicious activity, wherein the method includes determining the entity threat value using characteristic threat values associated with the one or more legitimate characteristics and the one or more illegitimate characteristics of the entity.
[036] Optionally, the step of determining the entity threat value for an entity includes calculating a difference between the characteristic threat values for the one or more legitimate characteristics of the entity, and the characteristic threat values for the one or more illegitimate characteristics of the entity, wherein the difference is indicative of the entity threat value.
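As a worked illustration of paragraphs [030] to [036], the sketch below computes a hypothetical entity threat value as the difference between the summed characteristic threat values of the illegitimate and legitimate characteristics, and compares it against an entity threat threshold. All numeric values are invented:

```python
# Hypothetical entity threat value (ETV) calculation; all characteristic
# threat values and the threshold are invented for illustration.

illegitimate = {"hidden in memory": 40, "connects to remote network": 30}
legitimate = {"launched by user": 20}

# [036]: the ETV as the difference between illegitimate and legitimate totals.
entity_threat_value = sum(illegitimate.values()) - sum(legitimate.values())

ENTITY_THREAT_THRESHOLD = 50  # assumed threshold
print(entity_threat_value)                             # 50
print(entity_threat_value >= ENTITY_THREAT_THRESHOLD)  # True: entity identified as malicious
```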
[037] In an optional form, the method includes: determining one or more related entities to the activity, wherein each related entity has an associated entity threat value; and calculating the entity threat value for the activity using the entity threat value for at least some of the one or more related entities.
[038] In another optional form, the method includes: determining one or more related entities to the activity, wherein each related entity has an associated entity threat value; and calculating a group threat value for the activity and one or more related entities using the entity threat value for at least some of the one or more related entities and the activity.
[039] Optionally, the method includes weighting the entity threat value for at least one related entity according to a relatedness of the at least one related entity relative to the activity.
[040] In another broad form there is provided a system to detect if a processing system has been compromised with an audio/visual threat, wherein the system is configured to: intercept one or more requests in the processing system to perform an activity associated with an audio and/or visual communication device of the processing system; and perform a behavioural analysis of the processing system to determine if the processing system exhibits behavioural characteristics indicative of the processing system having been compromised with an audio/visual threat.
[041] In one form, the system is configured to: determine, using the request to perform the activity, an entity associated with the activity; and perform the behavioural analysis in relation to the entity.
[042] In another form, the system is configured to apply one or more behavioural rules to perform the behavioural analysis.
[043] In one embodiment, application of the one or more behavioural rules determines at least one of: if the entity is indicative of at least one of audio signals and visual signals being obtained by the audio and/or visual communication device; if the entity is being interacted with via a graphical user interface currently displayed on the desktop of the processing system; if the entity is recording data indicative of at least one of audio data and visual data; if the entity was launched by the user; if the entity is attempting to connect to a remote network; and if the entity is requesting the activity to be performed at regular intervals; wherein results of the application of the one or more behavioural rules are used to determine whether the processing system has been compromised with an audio/visual threat.

[044] In another embodiment, a requesting entity requests the activity to be performed in relation to a target entity, wherein the system is configured to: determine, using a filter module, if the activity is suspicious or non-suspicious; and analyse, using an analysis module, at least one of the activity, the requesting entity and the target entity in response to determining that the activity is suspicious.
[045] In one aspect, the filter module filters the activity according to the requesting entity and the target entity to determine if the activity is suspicious or non-suspicious.
[046] In another aspect, the analysis module includes a list of activity sequences indicative of an audio/visual threat, wherein the system is configured to analyse the suspicious activity by comparing the suspicious activity and at least one of activities which occurred prior to the suspicious activity and activities which occurred after the suspicious activity to the list of activity sequences, wherein in response to a positive comparison, the activity is determined to be associated with an audio/visual threat.
[047] In another form, the system is configured to: determine an entity associated with the intercepted activity; determine an entity threat value for the entity, the entity threat value being indicative of a level of threat that the entity represents to the processing system, wherein the entity threat value is determined based on one or more characteristics of the entity; and compare the entity threat value to an entity threat threshold to identify if the entity is malicious.
[048] In another form, each of the one or more characteristics of the entity is associated with a respective characteristic threat value, wherein the system is configured to calculate the entity threat value using at least some of the characteristic threat values for the one or more characteristics of the entity.
[049] In another broad form there is provided a computer program product including a computer readable medium having a computer program recorded therein or thereon, the computer program being configured to detect if a processing system has been compromised with an audio/visual threat, wherein the computer program product configures the processing system to: intercept one or more requests in the processing system to perform an activity associated with an audio and/or visual communication device of the processing system; and perform a behavioural analysis of the processing system to determine if the processing system exhibits behavioural characteristics indicative of the processing system having been compromised with an audio/visual threat.
Brief Description Of Figures

[050] An example embodiment of the present invention should become apparent from the following description, which is given by way of example only, of a preferred but non-limiting embodiment, described in connection with the accompanying figures.
[051] Figure 1A is a block diagram illustrating an example of a processing system;

[052] Figure 1B is a block diagram illustrating an example of a distributed system;

[053] Figure 2 is a block diagram illustrating an example request;

[054] Figure 3 is a flow diagram illustrating an example method of intercepting a request;

[055] Figure 4A is a flow diagram illustrating an example method to detect an audio/visual threat;

[056] Figure 4B is a block diagram illustrating an example system to detect an audio/visual threat;

[057] Figure 5 is a more detailed flow diagram illustrating an example method of detecting an audio/visual threat;

[058] Figure 6 is a block diagram illustrating an example of a filter module;

[059] Figure 7 is a block diagram illustrating an example of determining an order of filter rules of the filter module;

[060] Figure 8 is a block diagram illustrating an example process of determining filter ratings of filter rules of the filter module;

[061] Figure 9 is a block diagram illustrating an example of an analysis module;

[062] Figure 10 is a block diagram illustrating an example of a group of related entities;

[063] Figures 11A and 11B are a flow diagram illustrating an example method to determine related suspicious entities; and

[064] Figure 12 is a block diagram illustrating an example of a group of related entities with corresponding entity threat values (ETV).
Modes for Carrying Out The Invention

[065] The following modes, given by way of example only, are described in order to provide a more precise understanding of the subject matter of a preferred embodiment or embodiments.
[066] In the figures, incorporated to illustrate features of an example embodiment, like reference numerals are used to identify like parts throughout the figures.
[067] A particular embodiment of the present invention can be realised using a processing system, an example of which is shown in Figure 1A.
[068] In particular, the processing system 100 generally includes at least one processor 102, or processing unit or plurality of processors, memory 104, at least one input device 106 and at least one output device 108, coupled together via a bus or group of buses 110. The at least one input device can take the form of an audio/visual communication device such as a webcam or a microphone. In certain embodiments, input device 106 and output device 108 could be the same device. An interface 112 can also be provided for coupling the processing system 100 to one or more peripheral devices, for example interface 112 could be a PCI card or PC card. At least one storage device 114 which houses at least one database 116 can also be provided. The memory 104 can be any form of memory device, for example, volatile or non-volatile memory, solid state storage devices, magnetic devices, etc. The processor 102 could include more than one distinct processing device, for example to handle different functions within the processing system 100. Input device 106 receives input data 118 and can include, for example, a keyboard, a pointer device such as a pen-like device or a mouse, an audio receiving device for voice controlled activation such as a microphone, a data receiver or antenna such as a modem or wireless data adaptor, a data acquisition card, etc. Input data 118 could come from different sources, for example keyboard instructions in conjunction with data received via a network. Output device 108 produces or generates output data 120 and can include, for example, a display device or monitor in which case output data 120 is visual, a printer in which case output data 120 is printed, a port for example a USB port, a peripheral component adaptor, a data transmitter or antenna such as a modem or wireless network adaptor, etc. Output data 120 could be distinct and derived from different output devices, for example a visual display on a monitor in conjunction with data transmitted to a network. A user could view data output, or an interpretation of the data output, on, for example, a monitor or using a printer. The storage device 114 can be any form of data or information storage means, for example, volatile or non-volatile memory, solid state storage devices, magnetic devices, etc.
[069] In use, the processing system 100 can be adapted to allow data or information to be stored in and/or retrieved from, via wired or wireless communication means, the at least one database 116. The interface 112 may allow wired and/or wireless communication between the processing unit 102 and peripheral components that may serve a specialised purpose. The processor 102 receives instructions as input data 118 via input device 106 and can display processed results or other output to a user by utilising output device 108.
More than one input device 106 and/or output device 108 can be provided. It should be appreciated that the processing system 100 may be any form of terminal, server processing system, specialised hardware, or the like.
[070] Referring now to Figure 1B, there is shown a distributed system 150 which can also be used to implement particular embodiments, wherein the distributed system 150 includes one or more client processing systems 180 in data communication via a network 170 with one or more server processing systems 160. The one or more client processing systems 180 and the one or more server processing systems 160 are forms of processing system 100 illustrated in Figure 1A. Input data 118 and output data 120 can be communicated to other devices via the network 170. The transfer of information and/or data over the network 170 can be achieved using wired communications means or wireless communications means.
The server processing system 160 can facilitate the transfer of data between the network 170 and one or more databases. The server processing system 160 and one or more databases provide an example of an information source.
[071] Referring to Figure 2, there is shown a block diagram illustrating a request 200 to perform an activity 230. Generally, the request 200 is associated with an activity 230, a target entity 220 and a requesting entity 210. In particular, the requesting entity 210 causes the activity 230 to be performed in relation to the target entity 220.
[072] For example, an executable object in a processing system 100 may request 200 to obtain access to an input stream of a communication device such as a microphone or a webcam. In this example, the executable object would be considered the requesting entity 210, the activity 230 would be considered the act of obtaining access to an input stream, and the target entity 220 would be the input stream of the communication device. The requesting entity 210 is a starting point in the processing system, or network of processing systems, which requests 200 the activity 230 to be performed, and the target entity 220 is an end point in the processing system, or network of processing systems, on which the action 230 is performed.
[073] As will be described in more detail, a request 200 to perform an activity 230 can be analysed to determine at least one of the requesting entity 210 and the target entity 220. By determining at least one of the requesting entity 210 and the target entity 220, an accurate and efficient process of detecting a threat in a processing system 100 can be performed.
[074] Referring to Figure 3 there is shown an example of a method 300 of intercepting an activity in a processing system 100.
[075] At step 310, an event occurs in the processing system 100. The event can be a request 200 by a requesting entity 210 to perform an action 230 in relation to a target entity 220. At step 320, an operating system running in the processing system 100 registers the occurrence of the event. At step 330, the operating system passes the registered event to the hook chain. At step 340, the event is passed to each hook in the hook chain such that different applications, processes, and devices may be notified of the registered event. Once the event has propagated throughout the hook chain, the method 300 includes at step 350 an application receiving notification of the event being registered by the processing system 100.
[076] At step 360, the method 300 includes the application initiating an API call to an API procedure so as to carry out a response to the registered event, wherein the response may be the execution of the action 230 in relation to the target entity 220. If an API hook has been established between the API call and the API procedure, the API call is intercepted before it reaches the API procedure at step 370. Processing can be performed by an API hook function once the API call has been intercepted prior to the API procedure being called. The API call may be allowed to continue calling the API procedure at step 380 such that the action 230 is performed in relation to the target entity 220.
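The interception of steps 360 to 380 can be summarised in the following sketch, in which a hook function sits between the API call and the API procedure and either forwards or withholds the call. The analysis predicate and all names are assumptions made for illustration:

```python
# Sketch of steps 360 to 380: an API hook decides whether to forward a call.

def api_procedure(requesting_entity, target_entity, action):
    """Stands in for the operating system's intended API procedure."""
    return f"{requesting_entity} performed '{action}' on {target_entity}"

def api_hook(requesting_entity, target_entity, action, is_malicious):
    # Step 370: the API call is intercepted before the API procedure runs.
    if is_malicious(requesting_entity, target_entity, action):
        return None  # the call is withheld and the action never occurs
    # Step 380: the call is allowed to continue to the API procedure.
    return api_procedure(requesting_entity, target_entity, action)

result = api_hook("suspect.exe", "webcam input stream", "open",
                  lambda r, t, a: t.startswith("webcam"))
print(result)  # None: the request was not passed on
```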
[077] Referring to Figure 4A, there is shown an example method 400 of detecting an audio/visual threat.
[078] At step 410, the method 400 includes intercepting one or more requests to perform an activity associated with an audio and/or visual communication device of the processing system.
[079] At step 420, the method 400 includes performing a behavioural analysis of the processing system to determine if the processing system exhibits behaviour indicative of the processing system having been compromised with an audio/visual threat.
[080] The behavioural analysis allows for dynamic detection of an audio/visual threat. If a particular version of an audio/visual threat has been modified such that a threat signature for the version of the audio/visual threat does not detect the modified audio/visual threat, the behaviour exhibited by the compromised processing system can be detected during the behavioural analysis of the processing system to detect the modified audio/visual threat. A detailed explanation of detecting threatening and malicious activity based upon behavioural analysis is described in the Applicant's co-pending US Patent application 11/780,113 and co-pending Australian Patent application 2007203373 entitled "Detecting Malicious Activity", the content of which is herein incorporated by cross-reference.
[081] At step 430, in the event that the request 200 is determined to be associated with the audio/visual threat, the method 400 includes restricting the request 200 to perform the activity 230 in the processing system 100 at optional step 440. In the event that the request 200 is not associated with an audio/visual threat, the method 400 optionally proceeds to step 450 which includes allowing the request 200 to perform the activity 230 in the processing system 100.
[082] Referring now to Figure 4B, there is shown an example of a system 1 to detect an audio/visual threat which has compromised the processing system 100.
[083] In particular, the system 1 includes an interception module 470 configured to intercept one or more requests to perform an activity 230 associated with an audio and/or visual communication device of the processing system.
[084] The system 1 can optionally include a filter module 600 which filters intercepted requests to determine suspicious requests 475 requiring analysis.
[085] The system 1 also includes an analysis module 480 configured to execute a behavioural analysis of the processing system to determine if the processing system exhibits behaviour indicative of the processing system having been compromised with an audio/visual threat.
[086] The analysis module 480 can also include a number of sub-modules, which can be used to determine if the request 200 is associated with an audio/visual threat, as will be explained in more detail later in this document. The analysis module 480 can be configured to control the number of sub-modules to determine if the request is associated with an audio/visual threat.
[087] Optionally the system 1 can include a restriction module 490 which is configured to restrict the activity to be performed by the processing system 100 in the event that results of the behavioural analysis indicate that the request 200 is associated with an audio/visual threat. In an alternate form, the interception module 470 is configured to restrict the request 200 associated with an audio/visual threat, as will be explained in more detail later in this document.
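The cooperation of the interception module 470, filter module 600, analysis module 480 and restriction module 490 can be sketched as a simple pipeline. The class and method names below are illustrative only and are not mandated by this specification:

```python
# Illustrative skeleton of system 1: an intercepted request passes through a
# filter, then behavioural analysis, and restriction on a positive result.

class FilterModule:                      # stands in for filter module 600
    def is_suspicious(self, request):
        return request["target_entity"] == "webcam input stream"

class AnalysisModule:                    # stands in for analysis module 480
    def indicates_compromise(self, request):
        return request["requesting_entity"] not in {"videochat.exe"}

class RestrictionModule:                 # stands in for restriction module 490
    def restrict(self, request):
        print(f"restricting: {request}")

class InterceptionModule:                # stands in for interception module 470
    def __init__(self, filter_module, analysis_module, restriction_module):
        self.filter_module = filter_module
        self.analysis_module = analysis_module
        self.restriction_module = restriction_module

    def on_request(self, request):
        if not self.filter_module.is_suspicious(request):
            return "allow"               # non-suspicious requests proceed
        if self.analysis_module.indicates_compromise(request):
            self.restriction_module.restrict(request)
            return "restrict"
        return "allow"

system1 = InterceptionModule(FilterModule(), AnalysisModule(), RestrictionModule())
print(system1.on_request({"requesting_entity": "suspect.exe",
                          "target_entity": "webcam input stream"}))  # restrict
```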
[088] Referring now to Figure 5, there is shown a more detailed flow diagram of an example method of detecting an audio/visual threat.
[089] In particular, at step 510, the method 500 includes intercepting the request 200, which can be performed using the technique explained in relation to Figure 3. The request 200 can be in the form of an API call, and the interception module 470 can be provided in the form of an API hook function.
[090] Specific requests 200 can be intercepted which relate to one or more local communication devices, such as a webcam or a microphone. For example, requests 200 can be intercepted which attempt to obtain access to an input stream for one or more communication devices such as a webcam or a sound card input/line-in for a microphone.
[091] At step 520, the method 500 includes determining at least one of the requesting entity 210 and the target entity 220 of the request 200 to perform the activity 230. This can be performed by the interception module 470. The requesting entity 210 and the target entity 220 of the request 200 can be determined using one or more parameters which are passed to the hook function.
[092] The interception module 470 can be configured to record intercepted requests in an intercepted request log file. The interception module can also record associated data such as at least one of the target entity, the requesting entity, properties associated with one of the target entity and the requesting entity, the time/date that the request was intercepted, processing usage, and memory usage. As will be explained in more detail later in this document, the intercepted request log file can be used to analyse a trend in behaviour in the processing system.
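One possible shape for an entry in the intercepted request log file, carrying the associated data listed in [092], is sketched below; the field names are assumptions:

```python
# Hypothetical record for the intercepted request log file described in [092].
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class InterceptedRequestRecord:
    requesting_entity: str          # e.g. path of the requesting executable
    target_entity: str              # e.g. the device input stream
    activity: str                   # the requested activity
    timestamp: datetime = field(default_factory=datetime.now)
    cpu_usage: float = 0.0          # processing usage at interception
    memory_usage: int = 0           # memory usage at interception

record = InterceptedRequestRecord("suspect.exe", "microphone line-in", "open stream")
```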
[093] At steps 530 and 540, the method 500 can include using a filter module 600 to determine if the request 200 is suspicious. The filter module 600 is configured to apply one or more filter rules to minimise false positive analysis of requesting entities 210 and target entities 220 which are generally not associated with an audio/visual threat. The filter module 600 can also be configured to maximise analysis of requesting entities 210 and target entities 220 which are generally associated with an audio/visual threat. A more detailed explanation of the filter module 600 is provided later in this document.
[094] In the event that the target entity 220 and/or the requesting entity 210 of the request 200 are identified as being suspicious by the filter module 600, the method 500 proceeds to step 550. Otherwise the activity 230 of the request 200 is allowed to be performed.
[095] At step 550, the method 500 includes determining, using the analysis module 480, if the processing system exhibits behaviour associated with an audio/visual threat.
[096] The analysis module 480 is passed data indicative of at least one of the requesting entity 210 and the target entity 220. The analysis module 480 includes a behaviour analysis sub-module 910 having a plurality of behaviour rules which can be applied to at least one of the requesting entity 210 and the target entity 220 to determine if the processing system 100 exhibits illegitimate behaviour generally associated with an audio/visual threat. In an optional form, the behaviour analysis sub-module 910 includes a plurality of behaviour rules which, when applied, determine if the processing system exhibits legitimate behaviour which is not generally associated with an audio/visual threat. It will be appreciated that illegitimate and legitimate behaviour can be detected simultaneously using the behaviour rules.
[097] At step 560, the method 500 includes determining, based on the results of the behaviour analysis performed in step 550, whether the processing system 100 exhibits behaviour associated with an audio/visual threat.
[098] The analysis module 480 can be configured to determine if a threshold number of illegitimate behaviour rules are satisfied, indicating that the request 200 is associated with an audio/visual threat. In another form, if legitimate behaviour rules are also applied during the behaviour analysis, the difference between the number of satisfied illegitimate and legitimate behaviour rules can be compared to the threshold number to determine if the request 200 is associated with an audio/visual threat.
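The decision described in [098] reduces to a comparison of rule counts, for example (the counts and threshold below are invented):

```python
# Sketch of the decision in [098]: invented rule counts and threshold.
satisfied_illegitimate = 4   # illegitimate behaviour rules satisfied
satisfied_legitimate = 1     # legitimate behaviour rules satisfied
THRESHOLD = 2                # assumed threshold number

associated_with_threat = (satisfied_illegitimate - satisfied_legitimate) >= THRESHOLD
print(associated_with_threat)  # True: treat request 200 as an audio/visual threat
```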
[099] In another form, the analysis module 480 includes a threat assessment sub-module 990 which is configured to determine a threat value using at least the results of the behaviour analysis. The threat value can be used in comparisons to a threat threshold value to determine if the request 200 is associated with an audio/visual threat. The threat assessment sub-module 990 will be explained in more detail later in this document.
[0100] In response to a determination that the request 200 is associated with an audio/visual threat, the method 500 proceeds to step 570 where the request 200 to perform the activity 230 associated with the audio/visual threat is restricted.
[0101] Restricting the activity 230 can be performed by the interception module 470 by failing to call the API procedure. In another form, an operating system defined error code may be returned to the requesting entity 210. In an alternative form, audio and/or visual data may be modified or replaced with predefined, random and/or invalid data, and subsequently an operating system defined success code is returned to the requesting entity 210.
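The three forms of restriction described in [0101] can be sketched as follows, with placeholder codes and data standing in for the operating system defined values:

```python
# Placeholder sketch of the three restriction options described in [0101].
SUCCESS_CODE, ERROR_CODE = 0, 1   # stand-ins for OS-defined codes

def restrict_request(mode, real_frame):
    if mode == "fail":
        return None                                     # simply never call the API procedure
    if mode == "error":
        return ERROR_CODE, None                         # return an error code to the requester
    if mode == "decoy":
        return SUCCESS_CODE, b"\x00" * len(real_frame)  # invalid data plus a success code

print(restrict_request("decoy", b"real-webcam-frame"))
```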
[0102] In other forms, the restriction module 490 can be used to terminate the requesting entity 210 associated with the request 200. Additionally or alternatively, a main executable entity associated with the requesting entity 210 and/or target entity 220 may be removed from the processing system's memory. In an additional or alternate form, data indicative of the main executable entity and/or one or more related entities associated with the requesting entity 210 and/or target entity of the request 200 is transferred to a server processing system 160 for further analysis. It will be appreciated that a combination of the above approaches can be used to restrict the audio/visual threat.
[0103] In response to a negative determination, the method 500 proceeds to step 580 where the request 200 to perform the activity 230 is satisfied. This may include passing the parameters to the API procedure, as explained in Figure 3.
[0104] Optionally the method 500 can include informing a user of the processing system 100 of the detection of the audio/visual threat; prompting the user of the processing system 100 regarding the detected audio/visual threat and optionally receiving input from the user regarding steps to deal with the malicious activity (ie. deny the activity 230, or allow the activity 230). In the event that the processing system 100 in this example is a client processing system 810, the method 500 can optionally include reporting the detection of the audio/visual threat to the server processing system 840.
[0105] Referring to Figure 6, there is shown a block diagram illustrating an example of the filter module 600. The filter module 600 includes a number of lists of filter rules for filtering intercepted requests 200. The filter module 600 can include at least one of a susceptible target entity filter list 610, a non-susceptible target entity filter list 620, a trusted requesting entity filter list 630, and a non-trusted requesting entity filter list 640.
[0106] The susceptible target entity filter list 610 includes one or more target entity filter rules which, when applied to a target entity, determine if the target entity 220 relating to the intercepted request 200 is of interest, thereby identifying that the request 200 is suspicious. For example, a common back door entity in a processing system 100 may be known to be susceptible to an audio/visual threat. One of the target entity filtering rules may require a comparison of the name of the target entity 220 to the name of the common back door entity, and if the susceptible target entity rule is satisfied, the target entity 220 is considered of interest, therefore identifying the request 200 as suspicious.
[0107] The non-susceptible target entity filter list 620 includes one or more target entity filter rules which, when applied to a target entity, filter out target entities 220 which are not susceptible to malicious activity and thus are not considered of interest, thereby identifying the request 200 as non-suspicious. By using the non-susceptible target entity filter list 620, an activity 230 that occurs in relation to a non-susceptible target entity 220 can be dismissed as not being associated with an audio/visual threat, and thus analysis does not need to be performed in relation to the request 200.
[0108] The trusted requesting entity filter list 630 includes one or more requesting entity filter rules which, when applied, filter out trusted requesting entities 210 which are not considered of interest (ie. there is a high probability that the requesting entity 210 is not associated with a malicious request), thereby identifying that the request 200 is non-suspicious.
[0109] The non-trusted requesting entity filter list 640 is similar to the susceptible target entity filter list 610 except this list 640 includes one or more requesting entity filter rules to identify requesting entities 210 which are of interest (ie. there is a high probability that the requesting entity 210 is associated with a malicious request). By identifying a non-trusted requesting entity 210, the request 200 can generally be identified as being suspicious.
[0110] Each filter rule in each list can have an associated filter rule identity. When a filter rule is satisfied, an identity of the satisfied filter rule can be recorded. Over time, particular filter rules are satisfied more frequently than others. The frequency with which each rule is satisfied can be used to determine a filter rating which can be used to order the rules in each list. As will be described in more detail below, the filter rating can be used to determine an order in which a list of filter rules is applied to intercepted requests 200 such that, on average, the number of filter rules used, prior to a filter rule being satisfied, is reduced.
[0111] In some instances, a request 200 may have been identified as being non-suspicious using one of the lists of the filter module 600, whereas a different list of the filter module 600 may have identified the same request 200 as being suspicious. In this instance, the worst case scenario should be applied, which would be to identify the request 200 as suspicious. One approach is to use the susceptible target entity filter list 610 and the non-trusted requesting entity filter list 640 prior to the non-susceptible target entity filter list 620 and the trusted requesting entity filter list 630 such that the worst case scenario is given priority.
[0112] In other instances, a request 200 may fail to be identified as either suspicious or non-suspicious. In this instance, a default identification can be assigned to the request 200. The default identification may be to identify the request 200 as being suspicious. However, a more lenient approach may be to set the default identification as being non-suspicious. In one form, the default identification can be defined by the user of a processing system 100.
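A minimal sketch of the four filter lists of Figure 6, applying the worst-case ordering of [0111] and the default identification of [0112], might look as follows; the list contents are invented examples:

```python
# Illustrative filter module with the four lists of Figure 6.

susceptible_targets = {"webcam input stream", "microphone line-in"}
non_susceptible_targets = {"printer queue"}
trusted_requesters = {"videochat.exe"}
non_trusted_requesters = {"backdoor.exe"}

def classify(requesting_entity, target_entity, default="non-suspicious"):
    # Apply the "of interest" lists first so the worst case takes priority ([0111]).
    if target_entity in susceptible_targets or requesting_entity in non_trusted_requesters:
        return "suspicious"
    if target_entity in non_susceptible_targets or requesting_entity in trusted_requesters:
        return "non-suspicious"
    return default  # the default identification of [0112]

print(classify("backdoor.exe", "webcam input stream"))  # suspicious
```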
[0113] Referring now to Figure 7, there is shown a block diagram illustrating an example of ordering filter rules to facilitate efficient analysis of intercepted requests 200.
[0114] Figure 7 shows an example list 705 of filter rules 710, 720, 730, 740. Each filter rule has a respective associated filter rating 715, 725, 735, 745. Each filter rating is at least indicative of the frequency that the respective filter rule has been previously satisfied. In this example, "Rule 1" 710 has an associated filter rating 715 of "70" and "Rule 2" 720 has an associated filter rating 725 of "10". This indicates that "Rule 1" has been satisfied more frequently than "Rule 2".
[0115] As shown in ordered list 790, the filter rules are ordered in descending order according to the respective filter ratings for each filter rule in the list 705. Thus, "Rule 4" 740 has the highest filter rating and therefore this filter rule is positioned at the start 750 of the list. "Rule 1" has the next highest filter rating and is therefore positioned second 760 in the list, followed by "Rule 3" and then "Rule 2".
[0116] This process of determining an order of filter rules can be performed by a single processing system 100 or alternatively in a distributed system. A distributed system 150 advantageously allows the generation of the filter ratings and an order of the filter rules using a larger sample of feedback data obtained from a plurality of client processing systems 180. A single processing system 100 advantageously allows for the determination of filter ratings and an order of the filter rules which are customised for that particular processing system 100.
[0117] In a distributed system 150, order data 790 indicative of the order of the list 790 can be transferred to one or more client processing systems 180 such that the order indicated by the order data 790 can be used when applying the filter rules to determine suspicious requests 200. In one form, one of the client processing systems 180 in a distributed system 150 may transfer a request for an updated order of the filter rules, and in response, the server processing system 160 transfers the order data 790 to the requesting client processing system 180. In another additional or alternative form, the server processing system 160 may be scheduled to periodically transfer the order data to the plurality of the client processing systems 180.
[0118] Referring to Figure 8, there is shown a block diagram illustrating the determination of filter ratings.
[0119] As previously indicated, each filter rule has an associated frequency indicative of the number of times the filter rule has been satisfied. The frequency can be split into a number of portions. In this example, each frequency is split into two portions: a first portion 810, 830 being the frequency that the filter rule had been satisfied within the past ten days; and a second portion 820, 840 being the frequency that the filter rule had been satisfied outside the past ten days.
[0120] As seen from Figure 8, "Rule 1" 710 has been satisfied ten times within the past ten days and has also been satisfied one-hundred times outside the past ten days. "Rule 4" 740 has been satisfied forty-five times within the past ten days and has also been satisfied twenty times outside the past ten days.
[0121] This distribution of frequencies can indicate a trend of requests associated with audio/visual threats. For example, in regard to "Rule 1" 710, which may be a susceptible target entity filter rule, there may have been a threat signature which has been recently distributed amongst client processing systems that has resulted in "Rule 1" being satisfied less often compared to past frequencies that occurred outside the ten day period. In regard to "Rule 4" 740, which may also be a susceptible target entity filter rule, there may have been an outbreak of an audio/visual threat which is targeting particular susceptible entities, and accordingly "Rule 4" has recently been satisfied more often compared to past frequencies that occurred outside this ten day period, as indicated by the rise of the frequency within the past ten days.
[0122] In order to take into account trends in activity associated with an audio/visual threat, such as outbreaks of specific malicious requests and distributions of software patches, a filter rating formula 850 is used to weight the distribution of frequencies for each filter rule. In this example the filter rating formula is shown below:

FilterRating = 2 x recentFreq + 0.5 x olderFreq

where:
recentFreq = frequency of instances when the rule was satisfied within the last 10 days
olderFreq = frequency of instances when the rule was satisfied outside the last 10 days

[0123] It will be appreciated that different weights can be used. Furthermore, it will be appreciated that a larger breakdown of frequency distribution can be used.

[0124] As can be seen from the filter rating formula 850, the frequency of instances when the filter rule was satisfied within the past ten days is weighted more heavily in order to take into account recent trends of malicious requests. Thus, the filter ratings for "Rule 1" and "Rule 4" are calculated to be:

FilterRating(Rule 1) = 2 x 10 + 0.5 x 100 = 20 + 50 = 70
FilterRating(Rule 4) = 2 x 45 + 0.5 x 20 = 90 + 10 = 100
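The worked ratings above can be reproduced directly, and the resulting ordering matches Figure 7. In the sketch below the frequencies for "Rule 2" and "Rule 3" are invented so that their ratings fall below "Rule 1":

```python
# Reproducing the worked filter ratings of [0124].
def filter_rating(recent_freq, older_freq, recent_weight=2.0, older_weight=0.5):
    return recent_weight * recent_freq + older_weight * older_freq

ratings = {
    "Rule 1": filter_rating(10, 100),  # 2 x 10 + 0.5 x 100 = 70.0
    "Rule 2": filter_rating(2, 12),    # invented frequencies -> 10.0
    "Rule 3": filter_rating(5, 20),    # invented frequencies -> 20.0
    "Rule 4": filter_rating(45, 20),   # 2 x 45 + 0.5 x 20 = 100.0
}

# Ordering as in Figure 7: highest rating first, so "Rule 4" is applied first.
ordered = sorted(ratings, key=ratings.get, reverse=True)
print(ordered)  # ['Rule 4', 'Rule 1', 'Rule 3', 'Rule 2']
```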
[0126] When the list of filter rules is ordered, "Rule 4" 740 is ranked higher in the list compared to "Rule 1" 710 and therefore "Rule 4" 740 is used prior to "Rule 1" 710 when determining suspicious requests. On average, this ordering of the filter rules can reduce the number of applications of filter rules by the filter module 600, thereby resulting in an efficient filtering process.
[0127] Referring to Figure 9, there is shown a more detailed block diagram of an example of an analysis module 480. As previously described, the analysis module 480 includes a number of sub-modules which the analysis module 480 can control and use individually or in combination to determine if the processing system is compromised with an audio/visual threat.
[0128] The analysis module 480 can include a behaviour analysis sub-module 910, a property analysis sub-module 920, a cryptographic hash sub-module 930, a checksum submodule 940, a disassembly sub-module 950, a black-list/white-list sub-module 960, a pattern matching sub-module 970, a relationship analysis sub-module 980, and a threat assessment sub-module 990.
[0129] Data returned by the above sub-modules can be indicative of whether the one or more entities are associated with an audio/visual threat. However, data returned may require further processing by other sub-modules. Therefore, the analysis module 480 is configured to pass data requiring further processing onto the appropriate sub-module to thereby determine if the one or more entities are associated with an audio/visual threat.
C-27-
C.)
0 [0130] As previously indicated, the behaviour analysis sub-module 910 includes a plurality of behaviour rules. The analysis module 480 passes the behaviour analysis sub-module 910 one or more entities which require behaviour analysis.
00 C [0131] Generally, at least one of the requesting entity and the target entity of the suspicious request are passed to the behaviour analysis sub-module 910 for behaviour C analysis. However, other entities can be passed by the analysis module 480 to the behaviour analysis sub-module 910. For example, a group of related entities determined by the relationship analysis sub-module 980 can be passed by the analysis module 480 to the behaviour analysis sub-module 910 to determine if a group of related entities for the suspicious request 479 exhibits behaviour associated with an audio/visual threat.
[0132] The behaviour analysis sub-module 910 can include the following example behaviour rules: SIs the entity indicative of audio/visual signals being obtained by the one or more communication devices? Is the entity being interacted with via a graphical user interface currently displayed on the desktop of the processing system? Is the entity recording data indicative of audio and/or visual data? Was the entity launched by the user? SIs the entity attempting to connect to a remote network? SIs the entity requesting the activity 230 to be performed at regular intervals? [0133] The behaviour analysis sub-module 910 can return data to the analysis module 480 indicative of the behaviour rules which were satisfied. As will be explained in more detail below in relation to the threat assessment module 990, the number of satisfied behaviour rules, or threat values associated with satisfied behaviour rules can be used to determine whether the processing system is compromised with an audio/visual threat.
-28- [0134] The behaviour analysis sub-module 910 may also query the intercepted request log file to determine whether particular behaviour rules are satisfied. For example, the last example behaviour rule above may require a search to be performed of the intercepted request log file to determine if the requesting entity 210 is requesting the activity 230 to be 00 performed at regular intervals. Furthermore, the behaviour analysis sub-module 910 may CK, query the intercepted request log file to determine if a sequence of requests have been intercepted which are indicative of the processing system being compromised with an i audio/visual threat.
[0135] The property analysis sub-module 920 is configured to determine one or more properties of one or more entities. The property analysis sub-module 920 receives one or more entities from the analysis module 480 and applies one or more property rules to determine one or more properties of the one or more entities which can be used in determining if the processing system has been compromised with an audio/visual threat.
[0136] The property analysis sub-module 920 generally receives from the analysis module 480 at least one of the requesting entity 210 and the target entity 220 of a suspicious request 479. However, it will be appreciated that other entities can be passed the property analysis sub-module 920 such as a group of related entities determined by the relationship analysis sub-module 980.
[0137] Property rules can be configured to determine illegitimate properties of an entity which are generally associated with an audio/visual threat, and/or legitimate properties of an entity which are not generally associated with an audio/visual threat. The property analysis sub-module 920 can include the following example property rules:

- Is the entity configured to be hidden in the processing system memory?
- Is the entity located in a system directory of the operating system (ie. "C:\Windows\system32\")?
- Has the entity been modified recently?
- Does the entity have a tray icon?
- Does the entity have unlimited file permissions (ie. read, write, and execute)?
[0138] Data indicative of satisfied property rules can be returned to the analysis module 480. As will be explained in more detail regarding the threat assessment sub-module 990, the number of satisfied property rules or threat values associated with satisfied property rules can be used to determine whether the one or more entities are associated with an audio/visual threat.
[0139] The cryptographic hash sub-module 930 is configured to generate a cryptographic hash value of an entity received from the analysis module 480. As the cryptographic hash value can be used as an identity, the cryptographic hash value can be used in comparisons with the blacklist/whitelist sub-module to determine whether the target entity 220 and/or requesting entity 210 of the request 200 is associated with an audio/visual threat.
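By way of illustration only, a cryptographic hash value suitable for use as an identity can be generated over the entity's bytes. The following minimal Python sketch uses the standard hashlib module; SHA-256 is an illustrative choice of algorithm, as the specification does not mandate a particular hash function.

import hashlib

def cryptographic_hash(path, chunk_size=1 << 20):
    # Hash the entity in chunks so large files need not fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()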
[0140] Other entities such as a group of related entities determined by the relationship analysis sub-module 980 can also be passed to the cryptographic hash sub-module 930 to determine if one or more of the entities of the group of related entities is associated with an audio/visual threat. Data indicative of whether the one or more entities is associated with an audio/visual threat is returned to the analysis module 480. If the analysis module 480 receives data indicating that the one or more entities are associated with an audio/visual threat, the analysis module 480 initiates at least one of the interception module 470 and the restriction module 490 to restrict the request 200.
[0141] The checksum sub-module 940 is configured to determine a checksum of one or more entities of the processing system 100. The checksum can be compared to a database (blacklist/whitelist module) to determine whether the one or more entities received from the analysis module are malicious. Data indicative of whether the one or more entities is associated with an audio/visual threat is returned to the analysis module 480. If the analysis module 480 receives data indicating that the one or more entities are associated with an audio/visual threat, the analysis module 480 initiates at least one of the interception module 470 and the restriction module 490 to restrict the request 200.
[0142] The pattern matching sub-module 950 is configured to search one or more entities, received from the analysis module 480, for particular patterns of strings or instructions which are indicative of malicious audio/visual activity. The pattern matching sub-module 950 may operate in combination with the disassembly sub-module 960. Although strings of instructions can be compared by the pattern matching sub-module 950, the pattern matching sub-module 950 may be configured to perform functional comparisons of groups of instructions to determine whether the functionality of the one or more entities is indicative of an audio/visual threat. Data indicative of whether the one or more entities is associated with an audio/visual threat is returned to the analysis module 480. If the analysis module 480 receives data indicating that the one or more entities are associated with an audio/visual threat, the analysis module 480 initiates at least one of the interception module 470 and the restriction module 490 to restrict the request 200.
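By way of illustration only, string pattern matching over an entity's raw bytes could be sketched with regular expressions. In the following minimal Python sketch, the byte patterns are illustrative placeholders (names of Windows capture-related API strings that might appear in an entity) rather than signatures defined by the specification.

import re

# Illustrative placeholder patterns indicative of audio/visual capture.
SUSPICIOUS_PATTERNS = [
    re.compile(rb"capCreateCaptureWindow", re.IGNORECASE),
    re.compile(rb"waveInOpen", re.IGNORECASE),
]

def matches_av_patterns(data):
    # Return the patterns found in the entity's raw bytes.
    return [p.pattern for p in SUSPICIOUS_PATTERNS if p.search(data)]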
[0143] The disassembly sub-module 960 is configured to disassemble binary code of one or more entities received from the analysis module 480, such that the disassembly sub-module 960 determines processing system instructions for the entity. The processing system instructions of the one or more entities can then be used by the pattern matching sub-module 950 to determine whether the one or more entities is associated with an audio/visual threat. Data indicative of the disassembled instructions is returned to the analysis module 480, wherein the analysis module 480 transfers the disassembled instructions to the pattern matching sub-module 950 to determine whether the one or more disassembled instructions of the one or more entities is associated with an audio/visual threat.
[0144] The blacklist/whitelist sub-module 970 includes a list of malicious and/or non-malicious entities associated with an audio/visual threat. The blacklist/whitelist sub-module 970 may be provided in the form of a table or database which includes data indicative of malicious and non-malicious entities. The table may include checksums and cryptographic hash values for malicious and non-malicious entities. The data stored in the blacklist/whitelist sub-module can be used to determine whether one or more entities received from the analysis module 480 is malicious or non-malicious. Data indicative of whether the one or more entities is associated with an audio/visual threat is returned to the analysis module 480. If the analysis module 480 receives data indicating that the one or more entities are associated with an audio/visual threat, the analysis module 480 initiates at least one of the interception module 470 and the restriction module 490 to restrict the request 200.
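By way of illustration only, the blacklist/whitelist comparison then reduces to a table lookup keyed by the identity values produced by the cryptographic hash sub-module 930 and the checksum sub-module 940. In the following minimal Python sketch, the table contents are hypothetical and a whitelist match is given precedence; both choices are illustrative assumptions.

import zlib

# Hypothetical tables; in practice these would be populated from a database.
BLACKLIST = {"sha256": set(), "crc32": set()}
WHITELIST = {"sha256": set(), "crc32": set()}

def classify_entity(sha256_hex, data):
    crc = zlib.crc32(data) & 0xFFFFFFFF  # checksum from the checksum sub-module
    if sha256_hex in WHITELIST["sha256"] or crc in WHITELIST["crc32"]:
        return "non-malicious"
    if sha256_hex in BLACKLIST["sha256"] or crc in BLACKLIST["crc32"]:
        return "malicious"
    return "unknown"  # pass to the other sub-modules for further analysis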
[0145] The relationship analysis sub-module 980 can be used to determine related entities relative to a starting entity 1000. As shown by example in Figure 10, once a request 200 has been identified as a suspicious request 479 using the filter module 600, the target entity 220 and/or requesting entity 210 of the suspicious request 200 can be treated as a starting entity 1000, and then, using the relationship analysis sub-module 980, a group of related entities 1060 (resembling a web of entities) relative to the starting entity 1000 can be determined.
A detailed explanation of detecting one or more related entities is described in the Applicant's co-pending US Patent application 11/707,425 and co-pending Australian Patent application AU2007200605 entitled "Determination of related entities", the content of which is herein incorporated by cross-reference.
[0146] Generally, threats, such as malware, include a bundle of malicious entities. By only considering a single entity by itself, it may not be possible to accurately determine if a target entity 220 and/or requesting entity 210 is malicious. However, by determining a group of related entities 1060 relative to the target entity 220 and/or requesting entity 210, a more accurate assessment can be made in relation to whether or not the request 200 is malicious.
[0147] Furthermore, removing a single malicious entity may not necessarily prevent the audio/visual threat from performing some malicious activity. Some forms of threat can repair themselves when a single malicious entity is removed or disabled. Therefore, detecting a group of related entities 1060 can be beneficial for disabling the threat.
[0148] Referring now to Figures 11A and 11B, there is shown a method 1100 of determining related entities relative to the starting entity 1000. The method represents the operation of the relationship analysis sub-module 980. Method 1100 determines a group of suspicious related entities relative to the starting entity 1000. However, it will be appreciated that method 1100 can be adapted to determine any form of related entities, such as trusted related entities relative to the starting entity 1000.
[0149] At step 1110, the method 1100 includes recording the starting entity 1000. This generally includes the processing system 100 recording at least one of the target entity 220 and/or the requesting entity 210 as the starting entity 1000 in the processing system memory, such as a data store. The starting entity 1000 may be stored in the form of a table or list.
[0150] At step 1120, the method 1100 includes determining an entity property associated with the starting entity 1000. The entity property may be an entity type of the entity, such as whether the starting entity 1000 is an executable entity, a run key entity or a dynamic linked library entity. The entity property may also be a time that the starting entity 1000 was created or modified. The entity property may include the directory which the starting entity 1000 is contained within. The entity property may also be a vendor name associated with the starting entity 1000. The entity property may also be a particular network address from which the starting entity 1000 was downloaded.
[0151] It will be appreciated that more than one entity property may be determined for the starting entity 1000. However, for the purposes of simplicity, throughout this example it will be assumed that one entity property has been determined for the starting entity 1000.
[0152] At step 1130, the method 1100 includes selecting, based on the entity property of the starting entity 1000, one or more related entity rules. In this particular example, the one or more related entity rules take the form of one or more rules for determining suspicious entities related to the starting entity 1000.
[0153] Step 1130 can include selecting, based on the entity property, the one or more related entity rules from a larger set of related entity rules. Each related entity rule is associated with a particular entity property, and as such, a selection of related entity rules can be performed based on the entity property of the starting entity 1000. An example list of entity properties and corresponding related entity rules is shown below in List 1.
(i) if the starting entity includes a vendor name, the at least one suspicious related entity is one or more entities including the same vendor name;
(ii) if the starting entity includes a product name, the at least one suspicious related entity is one or more entities including the same product name;
(iii) if the starting entity includes a version name, the at least one suspicious related entity is one or more entities including the same version name;
(iv) if the starting entity was created at a particular time in the one or more processing systems, the at least one suspicious related entity is one or more entities which were created at a similar time to that of the starting entity;
(v) if the starting entity accesses a particular network address or network address range or network address names, the at least one suspicious related entity is one or more entities which also access the same particular network address or network address range or network address names;
(vi) if the starting entity accesses a particular network address or network address range, the at least one suspicious related entity is the particular network address or network address range or network address names;
(vii) if the starting entity causes another process to execute, the at least one suspicious related entity is one or more entities which was executed by it;
(viii) if the starting entity was executed by a process, the at least one suspicious related entity is one or more entities which executed the starting entity;
(ix) if the starting entity creates or modifies an entity, the at least one suspicious related entity is one or more entities which it creates or modifies;
(x) if the starting entity is found in a directory not in a list of whitelist directories, the at least one suspicious related entity is one or more entities which also exist in the same directory;
(xi) if the starting entity is downloaded from the Internet, the at least one suspicious related entity is one or more entities which were downloaded at the same time or by the same process or from the same particular network address or network address range or network address names.

List 1: Example of entity properties and corresponding related entity rules

[0154] It will be appreciated that a more detailed list of entity properties and corresponding related entity rules can be obtained using the above general rules. An example of a more detailed list of entity properties and corresponding related entity rules is provided below.
Entity Property | Related Entity Rule
trigger entity | The one or more suspicious related entities are triggerable entities which are triggerable by the run-key entity
executable entity | The one or more suspicious related entities are one or more files in an INF file associated with the starting entity
executable entity | The one or more suspicious related entities are one or more trigger entities which trigger the starting entity
executable entity | The one or more suspicious related entities are one or more favourites which trigger the starting entity
executable entity | The one or more suspicious related entities are one or more items of embedded executable content inside the starting entity
executable entity | The one or more suspicious related entities are one or more instances of windows created by the executable entity
executable entity | The one or more suspicious related entities are one or more desktop link files (short cuts) which trigger the executable entity
executable entity | The one or more suspicious related entities are one or more modules loaded by the starting entity
executable entity | The one or more suspicious related entities are one or more classids or guids associated with the starting entity
executable entity | The one or more suspicious related entities are one or more network addresses or network address ranges or network address names associated with the starting entity
classid/guid entity | The one or more suspicious related entities are one or more BHO or TOOLBAR names associated with the classid/guid
classid/guid entity | The one or more suspicious related entities are one or more class names associated with the classid/guid
classid/guid entity | The one or more suspicious related entities are one or more network addresses or network address ranges or network address names associated with the starting entity
classid/guid entity | The one or more suspicious related entities are one or more executable entities related to the classid/guid
module entity | The one or more suspicious related entities are one or more executable entities that are loaded by the module entity
network address/network address range/network address name | The one or more suspicious related entities are one or more files associated with the network address or network address range or network address name
network address/network address range/network address name | The one or more suspicious related entities are one or more links or short cuts associated with the network address or network address range or network address name
network address/network address range/network address name | The one or more suspicious related entities are one or more classids associated with the starting entity
network address/network address range/network address name | The one or more suspicious related entities are one or more favourites associated with the starting entity
network address/network address range/network address name | The one or more suspicious related entities are one or more executable entities related to the starting entity
network address/network address range/network address name | The one or more suspicious related entities are one or more start pages related to the starting entity
network address/network address range/network address name | The one or more suspicious related entities are one or more cookies related to the starting entity
BHO Tool Bar entity | The one or more suspicious related entities are one or more classids associated with the starting entity
BHO Tool Bar entity | The one or more suspicious related entities are one or more names associated with the starting entity
BHO Tool Bar entity | The one or more suspicious related entities are one or more executable entities executed by the starting entity
Favourites entity | The one or more suspicious related entities are one or more network addresses or network address ranges or network address names
Favourites entity | The one or more suspicious related entities are one or more executable entities executed by the starting entity
Links entity | The one or more suspicious related entities are one or more network addresses or network address ranges or network address names
Links entity | The one or more suspicious related entities are one or more executable entities executed by the starting entity
Cookie entity | The one or more suspicious related entities are one or more network addresses or network address ranges or network address names associated with the starting entity
windows instance entity | The one or more suspicious related entities are one or more executable entities that create the starting entity
Directory (not in a whitelist) entity | The one or more suspicious related entities are one or more entities that exist in that same directory
INF entity | The one or more suspicious related entities are one or more entities referenced in the starting entity
Archive entity | The one or more suspicious related entities are one or more entities within the archive entity
Archive entity | The one or more suspicious related entities are one or more entities in the same directory as the archive entity which fail to appear in a whitelist
vendor name of entity | The one or more suspicious related entities are one or more entities which share the same vendor name as the starting entity
product name entity | The one or more suspicious related entities are one or more entities which share the same product name as the starting entity
version name entity | The one or more suspicious related entities are one or more entities which share the same version name as the starting entity
Creation/Modification time of entity | The one or more suspicious related entities are one or more entities which have a similar creation/modification time

Table 1: Further example of entity properties and corresponding related entity rules

[0155] It will be appreciated that a starting entity having a trigger entity property could be any one of the following entities: run keys, Appinit, Uninstall Key, Service, Hooks, protocol filter, and a startup list. It will further be appreciated that a starting entity having an executable entity property could be any one of the following entities: executables, dynamic linked libraries, and other modules.
[0156] It will be appreciated from List 1 that the general entity properties and related entity rules can be extended to specific entity types, such as the entity types shown in Table 1, for example the INF entities, Cookie entity, windows instance entity and the like shown above. The more specific rules in Table 1 allow for a more specific selection of rules based on the more specific entity property, which can therefore result in accurately determining the relevant related entity rules.
[0157] It will also be appreciated from Table 1 that more than one related entity rule can be obtained based on the one or more entity properties of the starting entity. As shown above in Table 1, if the entity property indicates that the starting entity is an executable entity, then nine separate types of related entity rules can be applicable for determining the related entities to the starting entity which are considered suspicious.
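By way of illustration only, the selection of related entity rules based on an entity property could be sketched as a lookup table in the spirit of Table 1. The following minimal Python sketch uses short rule identifiers in place of the full rule text; the identifiers and the table contents are illustrative assumptions.

RULES_BY_PROPERTY = {
    "executable entity": [
        "files_in_associated_inf", "trigger_entities", "favourites",
        "embedded_executable_content", "window_instances", "desktop_links",
        "loaded_modules", "classids_or_guids", "network_addresses",
    ],
    "vendor name of entity": ["same_vendor_name"],
    "Creation/Modification time of entity": ["similar_creation_time"],
}

def select_related_entity_rules(entity_properties):
    # Gather every rule applicable to any property of the starting entity.
    selected = []
    for prop in entity_properties:
        selected.extend(RULES_BY_PROPERTY.get(prop, []))
    return selected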
[0158] Additionally or alternatively, in a distributed system 150, the client processing system 160 may transfer, to a server processing system 180, one or more entity properties of the starting entity 1000, and receive, from the server processing system 180, the one or more related entity rules. In this step, the server processing system 180 may select the one or more related entity rules using the entity property from a server set of related entity rules, and then transfer the one or more related entity rules to the processing system 100.
[0159] At step 1140, the method 1100 includes determining, using the one or more related entity rules, the at least one related entity 1010, 1020. In this particular example the related entity rules determine related suspicious entities. For simplicity purposes, the following example is presented using one related entity rule. However, it will be appreciated that more than one related entity rule can be used. Using an example entity of "Spywarz.exe" which has an entity property of a vendor name equalling "Spywarz Software Enterprises", the following related entity rule can be obtained: "The one or more related entities have a vendor name equalling 'Spywarz Software Enterprises'."

[0160] This related entity rule is then used to determine one or more entities in the processing system 100 which satisfy this rule. Once a scan has been performed using the related entity rule, it is determined that "Spywarz.dll" also shares a vendor name of 'Spywarz Software Enterprises'. As the related entity rule has been satisfied, 'Spywarz.dll' is considered a related entity to the starting entity 'Spywarz.exe'. As such, a group of suspicious related entities has been determined which includes 'Spywarz.exe' and 'Spywarz.dll'.
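By way of illustration only, applying the vendor name rule of this example amounts to scanning the recorded entities for a matching property. The following minimal Python sketch assumes entities are represented as dictionaries with a "vendor" field; this representation is an assumption for illustration only.

def related_by_vendor(starting_entity, recorded_entities):
    vendor = starting_entity.get("vendor")
    if not vendor:
        return []
    # For example, Spywarz.dll shares the vendor name
    # 'Spywarz Software Enterprises' with the starting entity Spywarz.exe.
    return [e for e in recorded_entities
            if e is not starting_entity and e.get("vendor") == vendor]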
[0161] Optionally, weighted values may be associated with the related entity rules.
[0162] Steps 1110 to 1140 represent a single iteration to determine a group of suspicious related entities 1000, 1010, 1020. However, if a more detailed group of related entities is required, it is possible to perform multiple iterations of steps 1110 to 1140, as will now be discussed.
[0163] At step 1150, the at least one related entity 1010, 1020 is recorded. This may involve adding the at least one related entity 1010, 1020 to a list or a table which includes the starting entity 1000 recorded at step 1110. Furthermore, the list or table may include data indicative of the relationship between the at least one related entity 1010, 1020 and entities which have been previously recorded.
[0164] At step 1160, the method 1100 includes determining if an end condition has been met. For example, the end condition may be satisfied when no new related entities are determined; when a period of time or a number of processing cycles have elapsed; when the current starting entity has an entity type which is indicative of the end condition; and/or when a selected number of repetitions have been performed. If the end condition has not been met, the method continues to step 1170.
[0165] At step 1170, the method 1100 includes setting the at least one related entity 1010, 1020 as the starting entity 1000. This may be performed in memory by reassigning the value of the starting entity 1000. By setting the at least one related entity 1010, 1020 as the starting entity 1000, steps 1120 to 1150 can be repeated until an end condition is met. After step 1170, the method proceeds back to step 1120 to perform the next iteration, therefore determining the related entities for the newly set starting entity. As such, a web or network of related entities is determined until the end condition is met.
[0166] Once the end condition is satisfied, the determination of the group of suspicious related entities 1060 has been completed. At step 1180, at least some of the related entities can be quarantined, as will be discussed in more detail below.
[0167] Optionally, data indicative of direct or indirect links between entities in the group can be recorded. For example, 'Spywarz.exe' and 'Spywarz.dll' for the above example would have a direct link. However, if a subsequent related entity to 'Spywarz.dll' was determined to be a system variable 'SPYWARZ_VARIABLE', then there would be an indirect link between 'Spywarz.exe' and 'SPYWARZ_VARIABLE'. The number of links between the original starting entity 1000 and other related entities in the group is referred to herein as the "link distance".
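By way of illustration only, the iteration of steps 1120 to 1170, together with the link distances described in paragraph [0167], can be sketched as a breadth-first expansion from the starting entity. In the following minimal Python sketch, find_related stands in for applying the selected related entity rules, and the depth limit is one possible end condition; both are illustrative assumptions.

from collections import deque

def build_related_group(starting_entity, find_related, max_depth=3):
    # Map each discovered entity to its link distance from the start.
    distances = {starting_entity: 0}
    queue = deque([starting_entity])
    while queue:
        current = queue.popleft()
        if distances[current] >= max_depth:  # one possible end condition
            continue
        for related in find_related(current):  # apply related entity rules
            if related not in distances:  # record newly found entities only
                distances[related] = distances[current] + 1
                queue.append(related)
    return distances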
[0168] Once a group of suspicious entities has been obtained using the relationship analysis sub-module 980, the group of related entities 1060 can be returned to the analysis module. The analysis module can then pass the group of related entities to one or more of the other sub-modules to determine if the group of related entities is associated with an audio/visual threat. Optionally, data indicative of link distances may be also returned to the analysis module.
[0169] The threat assessment sub-module 990 is configured to determine, using the received data from the analysis module 480, a threat value indicative of the risk which the one or more suspicious entities represents to the processing system. A detailed explanation of the threat assessment sub-module is described in the Applicant's co-pending US Patent application 11/829,592 and co-pending Australian Patent application 2007203543 entitled "Threat Identification", the content of which is herein incorporated by cross-reference.
[0170] The threat assessment sub-module 990 receives, from the analysis module 480, data indicative of one or more satisfied behaviour rules for one or more suspicious entities, and one or more satisfied property rules for one or more suspicious entities. The one or more suspicious entities may be a group of related entities 1060. The one or more suspicious entities can be the target entity 220 and/or the requesting entity 210 of a suspicious request 479. Additional data may be received by the threat assessment sub-module 990 indicative of the original starting entity 1000 of the group of related entities 1060, and the relatedness of related entities in the group relative to the starting entity 1000. The relatedness of entities may be provided in the form of link distances.

[0171] The determined threat value can then be compared to a threshold to determine if the one or more suspicious entities are malicious. Data indicative of whether the one or more suspicious entities is malicious is returned to the analysis module 480, wherein the analysis module 480 passes control and results of the analysis to the interception module 470 and/or the restriction module 490, where the request 200 is either restricted or allowed to be performed in accordance with the results of the analysis.
[0172] The threat value can take three different forms: an entity threat value (ETV), a relational entity threat value (RETV), and a group threat value (GTV). Each of these values, and a method for calculating each, will be discussed in more detail below.
[0173] An ETV is indicative of the threat that a single suspicious entity represents to the processing system.
[0174] The threat assessment sub-module 990 can be configured to determine a characteristic threat value (CTV) for each satisfied behaviour rule and/or property rule for the suspicious entity. The threat assessment sub-module 990 can include a CTV formula associated with each behaviour rule and/or property rule used by the behaviour analysis sub-module 910 and the property analysis sub-module 920. If a behaviour or property rule has been satisfied, as indicated by the received data, the corresponding CTV formula is used to calculate the CTV for the respective behaviour or property rule for the entity. The CTVs are then used by the threat assessment sub-module 990 to determine the ETV for the suspicious entity.
[0175] Some CTV formulas can be configured to assign a constant CTV for the satisfied behaviour rule or property rule. For example, if the suspicious entity has a hidden property, the associated CTV formula may assign a constant value indicative of a level of threat that the hidden property represents to the processing system 100, as shown below:

CTV = 0.3

[0176] In additional or alternative forms, CTV formulas can be configured to use a recorded frequency as an input when calculating the CTV. For example, if one of the satisfied behaviour rules indicates that the suspicious entity has caused the processing system to connect to a remote network address on ten occasions, the CTV is adjusted according to the frequency of the behaviour, as shown below:

CTV = 0.01 x freq = 0.01 x 10 = 0.1

[0177] The frequency may also be determined for a period of time. For example, if the suspicious entity has connected to the remote network address on ten instances within the past five minutes, then the CTV is adjusted accordingly for this frequency within this period of time. The frequency over a period of time may be determined by the analysis module using the intercepted request log file.
[0178] In further additional or alternative forms, at least one CTV is temporally dependent. The CTV formula can be configured to calculate the CTV using a temporal value. For example, a suspicious entity may have connected to a remote network ten days ago. Again, the temporal value may be determined using the intercepted request log file.
This period of time is used by the CTV formula in determining the CTV, as shown below, where T is the time in days since the behaviour was observed:

CTV = 0.1 x e^(1/T) = 0.1 x e^(1/10) = 0.1 x 1.11 = 0.11

[0179] In the event that the suspicious entity caused the processing system 100 to connect to the remote network address one day ago, the CTV would be calculated as:

CTV = 0.1 x e^(1/T) = 0.1 x e^(1/1) = 0.1 x 2.72 = 0.27

[0180] As can be seen from the above CTVs, the CTV formulas can be configured to determine a CTV according to how malicious the satisfied behaviour or property rule is considered for the processing system.
[0181] As previously discussed, behaviour and property rules can be indicative of non-malicious and malicious activity. CTVs for legitimate characteristics and illegitimate characteristics can be calculated using the associated CTV formulas. In one form, illegitimate characteristics have a positive CTV, and legitimate characteristics have a negative CTV. However, it will be appreciated that this is not essential.

[0182] Once CTVs for the satisfied behaviour and property rules have been determined, the threat assessment sub-module 990 determines an ETV for the suspicious entity using the determined CTVs.
[0183] For example, a suspicious entity may have the following CTVs:

CTV1 = 0.1
CTV2 = 0.5
CTV3 = 0.7
CTV4 = -0.4

[0184] Referring to the above CTVs, four characteristics of the suspicious entity have been determined. Three of the characteristics are illegitimate (as indicated by the positive CTVs) and one of the characteristics is legitimate (as indicated by the negative CTV). The ETV can be determined by summing the CTVs for the suspicious entity. In this example the ETV would be calculated as:

ETV = CTV1 + CTV2 + CTV3 + CTV4 = 0.1 + 0.5 + 0.7 - 0.4 = 0.9

[0185] In some instances an ETV may have been previously calculated for the suspicious entity and recorded in the memory of the processing system 100. In this event, the new ETV can be determined using the CTVs and the previously stored ETV. The previously stored ETV can be weighted accordingly.
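By way of illustration only, the CTV formulas and the ETV summation described above can be expressed directly in code. In the following minimal Python sketch, the formula set mirrors the worked examples above; the rule names, the context fields and the blending weight applied to a previously stored ETV are illustrative assumptions (the specification says only that the previous ETV can be weighted accordingly).

import math

CTV_FORMULAS = {
    "hidden_property": lambda ctx: 0.3,  # constant CTV
    "remote_connections": lambda ctx: 0.01 * ctx["freq"],  # frequency based
    "days_since_connection": lambda ctx: 0.1 * math.exp(1.0 / ctx["days"]),
}

def entity_threat_value(satisfied_rules, ctx, previous_etv=None, weight=0.5):
    # Sum the CTVs of the satisfied rules; legitimate characteristics
    # would contribute negative CTVs.
    etv = sum(CTV_FORMULAS[name](ctx) for name in satisfied_rules)
    if previous_etv is not None:
        # Blend with a previously recorded ETV, weighted accordingly.
        etv = weight * previous_etv + (1.0 - weight) * etv
    return etv

# entity_threat_value(["remote_connections"], {"freq": 10}) returns 0.1,
# matching the worked example; the entity is flagged when the ETV >= ETT.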
[0186] The threat assessment sub-module 990 is configured to compare the ETV of the suspicious entity to an entity threat threshold (ETT) to determine if the suspicious entity of the suspicious request is considered malicious. In one form, if the ETV is greater than or equal to the ETT, the suspicious entity is identified as being malicious.

[0187] In this example the ETV equals 0.9, which is greater than the ETT. Therefore, the suspicious entity is identified as being a malicious entity, thereby identifying that the processing system has been compromised with an audio/visual threat.
[0188] An RETV is a threat value for a single suspicious entity which is calculated according to one or more adjusted ETVs of related suspicious entities relative to the single suspicious entity. In this instance, the single suspicious entity is a starting entity in the group of related entities.
[0189] Referring to Figure 12, there is shown a group of related entities 1200 and corresponding ETVs for each entity in the group. The RETV can be calculated by summing the ETVs for each entity in the group, with each ETV adjusted according to the relatedness of the respective entity relative to the starting entity. In one form, the link distance is used to adjust the ETVs for each entity in the group.
[0190] Therefore, a related entity which has a direct link (ie. a low link distance) to the starting entity 1000 is given more weight compared to a related entity which has an indirect link (ie. a higher link distance) to the starting entity 1000. The higher the link distance, the less weight is provided for the respective ETV when calculating the RETV. An example RETV formula to calculate the RETV is provided below:

RETV = Σ (ETV x 0.5^LinkDist)

where LinkDist is the link distance.

[0191] For example, the RETV for the group of related entities 1200 illustrated in Figure 12 would be calculated as:

RETV = 0.9 x 0.5^0 + (0.2 - 0.4) x 0.5^1 + (0.6 + 0.3 - 0.7) x 0.5^2
RETV = 0.9 - 0.1 + 0.05
RETV = 0.85

[0192] The RETV can then be compared to a relational entity threat threshold (RETT) to determine whether the suspicious entity 1000, based at least partially on the related entities 1010, 1020, 1030, 1040, 1050, is malicious. In this example, the RETV is greater than the RETT, thereby identifying the starting entity 1000 as a malicious entity, and thereby identifying that the processing system is compromised with an audio/visual threat.
[0193] The GTV can be calculated by summing the ETVs for each entity 1000, 1010, 1020, 1030, 1040, 1050 in the group of related entities 1200, and then averaging the sum over the number of entities in the group 1200. An example GTV formula to calculate the GTV is provided below:

GTV = (Σ ETV) / n

where n is the number of entities in the group of related entities 1200.

[0194] Referring to the group of related entities 1200 shown in Figure 12, the GTV would be calculated as:

GTV = (Σ ETV) / n = 1.3 / 6 = 0.22

[0195] The GTV can then be compared to a group threat threshold (GTT) to determine whether the group of related entities 1200 is malicious, or whether at least a portion of the related entities 1200 is malicious. In this instance, the GTV is more than the GTT, which indicates that the group of related entities 1200 is malicious, thereby identifying that the processing system has been compromised with an audio/visual threat.
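By way of illustration only, the RETV and GTV calculations can be sketched from the link distances produced during the determination of the group of related entities. In the following minimal Python sketch, etvs maps each entity to its ETV and distances maps each entity to its link distance; both structures are illustrative assumptions.

def relational_entity_threat_value(etvs, distances):
    # Each ETV is halved once per link of distance from the starting entity.
    return sum(etv * 0.5 ** distances[entity] for entity, etv in etvs.items())

def group_threat_value(etvs):
    # Average the ETVs over the whole group of related entities.
    return sum(etvs.values()) / len(etvs)

# The first result is compared against the RETT and the second against
# the GTT to decide whether the processing system is compromised.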
[0196] Example pseudo-code to implement an embodiment of the present invention is provided below:

main PROCEDURE
BEGIN
    CALL function to set up interception of system APIs
    IF a call to intercepted system APIs THEN
        execute interceptEvent PROCEDURE
END

interceptEvent PROCEDURE
BEGIN
    CALL behaviour analysis procedure
    IF behaviour analysis procedure returns OK THEN
    BEGIN
        pass parameters to intended system API
        return result
    END
    ELSE
    BEGIN
        IF mode = prompt THEN
        BEGIN
            Action = CALL prompt user function
        END
        ELSE
        BEGIN
            Action = CALL determine action function
        END
        CALL perform action function (Action)
        CALL notify user with action details
    END
END
[0197] Optionally, the one or more client processing systems 180 may receive one or more updated behaviour rules, property rules, filter rules and/or related entity rules. The one or more client processing systems 180 may receive updated rules from the server processing system 160 or via a data store such as a compact disk or the like.
[0198] Optionally, the one or more client processing systems 180 may receive one or more updated formulas. The updated formulas can include one or more updated CTV formulas, ETV formulas, RETV formulas and GTV formulas.
[0199] In another optional form, the one or more client processing systems 180 may receive one or more updated thresholds. The updated thresholds can include one or more updated ETT, RETT and GTT.
[0200] In one form, statistical processes, fuzzy logic processes and/or heuristical processes can be used in combination with filter rules, the related entity rules and/or the malicious assessment rules to determine whether a rule has been satisfied.
[0201] The embodiments discussed may be implemented separately or in any combination as a software package or component. Such software can then be used to pro-actively notify, restrict, and/or prevent malicious activity being performed. Various embodiments can be implemented for use with the Microsoft Windows operating system or any other modern operating system.
[0202] In one optional form, although four types of filter lists 610, 620, 630, 640 have been herein described for the filter module 600, these filter lists 610, 620, 630, 640 can be used separately or in combination.
[0203] In another optional form, the user may define user defined filter rules. For example, there may be an activity 230 in the client processing system 180 which is being analysed by the analysis module 480; however, the user is aware that the activity 230 is not associated with an audio/visual threat. As such, the user is able to define a user defined rule to prevent the request 200 from being analysed by the analysis module 480. In one form, user defined filter rules are applied prior to the other filter rules.
[0204] In optional forms, a mode of operation of an entity may be used to weight the ETV, the RETV or the GTV. For example, an entity may have been operating in an administrative mode when it was recorded connecting to a remote network address. The entity is therefore considered a higher threat, and the ETV for the entity is weighted accordingly to indicate this higher risk.
[0205] In other optional forms, the method of installation for an entity, or installation files associated with an entity, can be analysed to determine one or more characteristics of an entity to allow the identification of a malicious entity. Such analysis may include determining: whether an installation file was automatically executed without user input; whether the installation file is designed to delete itself after execution; whether the installation file is not an executable file; whether the installation file does not create a new sub-directory in the processing system 100; whether the installation file does not install itself in "add and remove wizards" of the operating system; whether the installation file uses hidden or deceptive methods to install the entity, such as using run keys; whether the installation file is configured to install the entity in a directory which includes a large number of other entities; whether the installation file was not initially downloaded using an Internet browser; whether the installation file does not download ongoing updates using an Internet browser and/or requesting user input; and whether the installation file uses social engineering to install the entity (ie. "SCVHOST.exe" instead of "SVCHOST.exe").
[0206] Other characteristics that can be determined regarding an entity can include: where the entity was downloaded from (ie. which country); run-key changes performed by the entity; contents of the entity; whether the entity creates auto-startup points; and the type of packer/compression means used in relation to the entity. Associated CTV formulas can be used to calculate an appropriate CTV indicative of the severity of the threat which the characteristic represents to the processing system 100. For example, if the entity was downloaded from the US, a small CTV may be calculated; this contrasts with an entity downloaded from Russia, which may result in a large CTV being calculated, as entities downloaded from Russia may be considered to represent a more severe threat to the processing system 100.
[0207] In another form, when particular criteria are met in the processing system, one or more local communication devices are locked from operating. For example, if the processing system 100 has not been used for a period of time, and a screen saver is activated, the method may include deactivating one or more of the communication devices.
This may be performed by intercepting an API call to initiate the screen saver, wherein the intercepting hook function deactivates one or more of the communication devices as the screen saver is being activated.
[0208] The embodiments described throughout can be implemented via hardware, software or a combination of both.
[0209] Optional embodiments of the present invention may also be said to broadly consist in the parts, elements and features referred to or indicated herein, individually or collectively, in any or all combinations of two or more of the parts, elements or features, and wherein specific integers are mentioned herein which have known equivalents in the art to which the invention relates, such known equivalents are deemed to be incorporated herein as if individually set forth.
[0210] Although a preferred embodiment has been described in detail, it should be understood that various changes, substitutions, and alterations can be made by one of ordinary skill in the art without departing from the scope of the present invention.

Claims (21)

1. A method of detecting if a processing system has been compromised with an audio/visual threat, wherein the method includes: intercepting one or more requests to perform an activity associated with an audio and/or visual communication device of the processing system; and performing a behavioural analysis of the processing system to determine if the processing system exhibits behavioural characteristics indicative of the processing system having been compromised with an audio/visual threat.
2. The method according to claim 1, wherein the method includes: determining, using the request to perform the activity, an entity associated with the activity; and performing the behavioural analysis in relation to the entity.
3. The method according to claim 2, wherein performing the behavioural analysis includes applying one or more behavioural rules.
4. The method according to claim 3, wherein the one or more behavioural rules includes at least one of: determining if the entity is indicative of at least one of audio signals and visual signals being obtained by the audio and/or visual communication device; determining if the entity is being interacted with via a graphical user interface currently displayed on the desktop of the processing system; determining if the entity is recording data indicative of at least one of audio data and visual data; determining if the entity was launched by the user; determining if the entity is attempting to connect to a remote network; and determining if the entity is requesting the activity to be performed at regular intervals.

5. The method according to any one of claims 1 to 4, wherein a requesting entity requests the activity to be performed in relation to a target entity, wherein the method includes: determining, using a filter module, if the activity is suspicious or non-suspicious; and in response to determining that the activity is suspicious, analysing, using an analysis module, at least one of the activity, the requesting entity and the target entity.
6. The method according to claim 5, wherein the filter module filters the activity according to the requesting entity and the target entity to determine if the activity is suspicious or non-suspicious.
7. The method according to claim 5 or 6, wherein the analysis module includes a list of activity sequences indicative of an audio/visual threat, wherein analysing the suspicious activity includes comparing the suspicious activity and at least one of activities which occurred prior to the suspicious activity and activities which occurred after the suspicious activity to the list of activity sequences, wherein in response to a positive comparison, the activity is determined to be associated with an audio/visual threat.
8. The method according to any one of claims 1 to 7, wherein performing the behavioural analysis includes: determining an entity associated with the intercepted activity; determining an entity threat value for the entity, the entity threat value being indicative of a level of threat that the entity represents to the processing system, wherein the entity threat value is determined based on one or more characteristics of the entity; and comparing the entity threat value to an entity threat threshold to identify if the entity is malicious.
9. The method according to claim 8, wherein each of the one or more characteristics of the entity is associated with a respective characteristic threat value, wherein the method includes calculating the entity threat value using at least some of the characteristic threat values for the one or more characteristics of the entity.

10. The method according to claim 9, wherein at least one of the one or more characteristics of the entity is associated with a characteristic threat value formula, wherein the method includes calculating, using the characteristic threat value formula, the characteristic threat value.
11. The method according to claim 10, wherein at least one characteristic threat value is temporally dependent, wherein the method includes calculating the at least one characteristic threat value for the entity using the characteristic threat value formula and a temporal value.
12. The method according to claim 10 or 11, wherein the at least one characteristic is a behaviour associated with the entity, wherein the method includes calculating the at least one characteristic threat value for the entity using the characteristic threat value formula and a frequency of instances the behaviour has been performed.
13. The method according to any one of claims 9 to 12, wherein the one or more characteristics includes at least one of one or more legitimate characteristics indicative of non-malicious activity and one or more illegitimate characteristics indicative of malicious activity, wherein the method includes determining the entity threat value using characteristic threat values associated with the one or more legitimate characteristics and the one or more illegitimate characteristics of the entity.
14. The method according to claim 13, wherein the step of determining the entity threat value for an entity includes calculating a difference between the characteristic threat values for the one or more legitimate characteristics of the entity, and the characteristic threat values for the one or more illegitimate characteristics of the entity, wherein the difference is indicative of the entity threat value.

15. The method according to any one of claims 1 to 14, wherein the method includes: determining one or more related entities to the activity, wherein each related entity has an associated entity threat value; and calculating the entity threat value for the activity using the entity threat value for at least some of the one or more related entities.
16. The method according to claim 15, wherein the method includes: determining one or more related entities to the activity, wherein each related entity has an associated entity threat value; and calculating a group threat value for the activity and one or more related entities using the entity threat value for at least some of the one or more related entities and the activity.
17. The method according to claim 15 or 16, wherein the method includes weighting the entity threat value for at least one related entity according to a relatedness of the at least one related entity relative to the activity.
18. A system to detect if a processing system has been compromised with an audio/visual threat, wherein the system is configured to: intercept one or more requests in the processing system to perform an activity associated with an audio and/or visual communication device of the processing system; and perform a behavioural analysis of the processing system to determine if the processing system exhibits behavioural characteristics indicative of the processing system having been compromised with an audio/visual threat.
19. The system according to claim 18, wherein the system is configured to perform the method of any one of claims 1 to 17.
20. A computer program product including a computer readable medium having a computer program recorded therein or thereon, the computer program being configured to detect if a processing system has been compromised with an audio/visual threat, wherein the computer program product configures the processing system to: intercept one or more requests in the processing system to perform an activity associated with an audio and/or visual communication device of the processing system; and perform a behavioural analysis of the processing system to determine if the processing system exhibits behavioural characteristics indicative of the processing system having been compromised with an audio/visual threat.
21. The computer program product according to claim 20, wherein the computer program product configures the processing system to perform the method of any one of claims 1 to 17.
22. A method of detecting if a processing system has been compromised with an audio/visual threat, the method being substantially herein before described.
23. A system to detect if a processing system has been compromised with audio/visual threat, the system being substantially herein before described with reference to the accompanying drawings.
24. A computer program product, the computer program product being substantially herein before described with reference to the accompanying drawings.
AU2007221811A 2006-10-04 2007-10-03 Detecting an audio/visual threat Abandoned AU2007221811A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2007221811A AU2007221811A1 (en) 2006-10-04 2007-10-03 Detecting an audio/visual threat

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
AU2006905486 2006-10-04
AU2006905486A AU2006905486A0 (en) 2006-10-04 Detecting audio/visual malware
AU2007221811A AU2007221811A1 (en) 2006-10-04 2007-10-03 Detecting an audio/visual threat

Publications (1)

Publication Number Publication Date
AU2007221811A1 true AU2007221811A1 (en) 2008-04-24

Family

ID=39399283

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2007221811A Abandoned AU2007221811A1 (en) 2006-10-04 2007-10-03 Detecting an audio/visual threat

Country Status (1)

Country Link
AU (1) AU2007221811A1 (en)

Similar Documents

Publication Publication Date Title
US7941852B2 (en) Detecting an audio/visual threat
US8196201B2 (en) Detecting malicious activity
US11343280B2 (en) System and method for identifying and controlling polymorphic malware
US7877806B2 (en) Real time malicious software detection
US8392996B2 (en) Malicious software detection
US7801840B2 (en) Threat identification utilizing fuzzy logic analysis
US8769674B2 (en) Instant message scanning
US7926111B2 (en) Determination of related entities
US7650639B2 (en) System and method for protecting a limited resource computer from malware
US7676845B2 (en) System and method of selectively scanning a file on a computing device for malware
US9117075B1 (en) Early malware detection by cross-referencing host data
US20080022378A1 (en) Restricting malicious libraries
US20130061325A1 (en) Dynamic Cleaning for Malware Using Cloud Technology
CN109891422B (en) Dynamic reputation indicator for optimizing computer security operations
US20240045954A1 (en) Analysis of historical network traffic to identify network vulnerabilities
Yadav et al. A complete study on malware types and detecting ransomware using API calls
US9396328B2 (en) Determining a contributing entity for a window
AU2007221811A1 (en) Detecting an audio/visual threat
AU2007204089A1 (en) Malicious software detection
AU2007216638A1 (en) Instant message scanning
Deep et al. Security In Smartphone: A Comparison of Viruses and Security Breaches in Phones and Computers
AU2007203543A1 (en) Threat identification
AU2007203373A1 (en) Detecting malicious activity
AU2007200605A1 (en) Determination of related entities
AU2007203534A1 (en) Real time malicious software detection

Legal Events

Date Code Title Description
MK1 Application lapsed section 142(2)(a) - no request for examination in relevant period