WO2007117705A2 - Software enabled video and sensor interoperability system and method


Info

Publication number
WO2007117705A2
Authority
WO
WIPO (PCT)
Prior art keywords
sensor
data
system
Application number
PCT/US2007/008918
Other languages
French (fr)
Other versions
WO2007117705A9 (en)
WO2007117705A3 (en)
Inventor
Sandeep Gulati
Vijay Daggumati
Robin S. Hill
Dan Mandutianu
Tong Ha
Original Assignee
Vialogy Corp.
Priority to US 60/790,767
Application filed by Vialogy Corp.
Publication of WO2007117705A2
Publication of WO2007117705A9
Publication of WO2007117705A3


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 19/00 Programme-control systems
    • G05B 19/02 Programme-control systems electric
    • G05B 19/418 Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM]
    • G05B 19/4185 Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM] characterised by the network communication
    • G05B 19/4186 Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM] characterised by the network communication by protocol, e.g. MAP, TOP
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance or administration or management of packet switching networks
    • H04L 41/02 Arrangements for maintenance or administration or management of packet switching networks involving integration or standardization
    • H04L 41/0246 Arrangements for maintenance or administration or management of packet switching networks involving integration or standardization exchanging or transporting network management information using Internet, e.g. aspects relating to embedding network management web servers in network elements, web service for network management purposes, aspects related to Internet applications or services or web-based protocols, simple object access protocol [SOAP]
    • H04L 41/0273 Arrangements for maintenance or administration or management of packet switching networks involving integration or standardization exchanging or transporting network management information using Internet, e.g. aspects relating to embedding network management web servers in network elements, web service for network management purposes, aspects related to Internet applications or services or web-based protocols, simple object access protocol [SOAP] involving the use of web services for network management, e.g. SOAP
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance or administration or management of packet switching networks
    • H04L 41/08 Configuration management of network or network elements
    • H04L 41/0893 Assignment of logical groupings to network elements; Policy based network management or configuration
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network-specific arrangements or communication protocols supporting networked applications
    • H04L 67/12 Network-specific arrangements or communication protocols supporting networked applications adapted for proprietary or special purpose networking environments, e.g. medical networks, sensor networks, networks in a car or remote metering networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance or administration or management of packet switching networks
    • H04L 41/02 Arrangements for maintenance or administration or management of packet switching networks involving integration or standardization
    • H04L 41/0246 Arrangements for maintenance or administration or management of packet switching networks involving integration or standardization exchanging or transporting network management information using Internet, e.g. aspects relating to embedding network management web servers in network elements, web service for network management purposes, aspects related to Internet applications or services or web-based protocols, simple object access protocol [SOAP]
    • H04L 41/0266 Arrangements for maintenance or administration or management of packet switching networks involving integration or standardization exchanging or transporting network management information using Internet, e.g. aspects relating to embedding network management web servers in network elements, web service for network management purposes, aspects related to Internet applications or services or web-based protocols, simple object access protocol [SOAP] involving management internet meta-data, objects or commands, e.g. by using mark-up language

Abstract

A novel method and software system design are presented for interchanging sensors from different vendors in a plug-and-play manner to drive mission-critical applications within a sensor-based architecture, policy-based framework and event-based architecture. This technique is applicable to sensors directly attached to a computer, sensors attached to a network, or sensors attached to a controller node which is itself attached to a network or directly attached to a computer.

Description

SOFTWARE ENABLED VIDEO AND SENSOR INTEROPERABILITY SYSTEM AND

METHOD

Sandeep Gulati
Vijay Daggumati
Robin Hill
Dan Mandutianu
Tong Ha

Priority Claim

This application claims priority under 35 USC 119(e) and 120 to U.S. Provisional Patent Application Serial No. 60/790,767, filed on April 8, 2006 and entitled "Software Enabled Video and Sensor Interoperability," which is incorporated herein by reference.

Appendices

Appendix A (9 pages) contains an example of a policy of the system; Appendix B (17 pages) contains the grammar for the Sensor Policy Engine; Appendix C (40 pages) contains information about a sensor policy framework. All of the appendices form part of the specification.

Field of the Invention

The invention relates generally to a sensor network and in particular to a system and method for providing internet protocol (IP) and software enabled video and sensor interoperability.

Background of the Invention

In March 2006, a new version of a web services policy framework (WS-Policy Framework V 1.2) was published (see http://schemas.xmlsoap.org/ws/2004/09/policy/ for more details of this framework) that defined various aspects of a policy framework. The Web Services Policy Framework (WS-Policy) provides a general purpose model and corresponding syntax to describe the policies of a Web Service. WS-Policy defines a base set of constructs that can be used and extended by other Web services specifications to describe a broad range of service requirements and capabilities.

The entire WS-Policy terminology and hierarchy is applicable to sensor-based policies. In fact, a great deal of theoretical work exists for a sensor policy framework system, including DARPA (DAML, CoABS, CoAX), Nomads (Oasis and Spring) and KAoS. However, no one has developed a productizable and scalable sensor-based policy system. The theoretical work typically provided a generalized implementation and could not be scaled for productization. It would be desirable to provide in-network, in-stream, real-time intrusive content analysis as sensor data moves through a system, and it is to this end that the present invention is directed.

Summary of the Invention

A software enabled video and sensor interoperability system and method are provided, embodying a novel method and software system design for interchanging sensor data and information from different vendors in a plug-and-play manner to drive mission-critical applications within a sensor-based architecture, policy-based framework and event-based architecture. The sensors used in the system may include any information source or information proxy regardless of the underlying device physics, physical measurement, or the format in which the physical measurement is communicated to a computer. In accordance with the invention, the computational steps are specified for hiding all the hardware details from the user or implementer. The process allows replacement of one sensor from one manufacturer with another sensor making the same measurement from another manufacturer, with no loss of performance or functional utility. This policy based framework is denoted as the Sensor Policy Manager (SPM). It is a generalized network-centric policy-based platform for delivering sensor, video and geographical information system (GIS) interoperability. This technique is applicable to sensors directly attached to a computer, sensors attached to a network, or sensors attached to a controller node which is itself attached to a network or directly attached to a computer. The system and method are extremely scalable and can be applied to sensor deployments ranging from a few sensors to millions of sensory nodes. The system can be implemented in a centralized manner, distributed over a number of computers, or embedded within a broader processing architecture.

The system also provides systematic, reusable, scalable introduction of different types of sensors within the client policy framework. The system is therefore capable of addressing the needs of multiple vertical markets, such as security systems, supply chain systems, etc., with varying sensor deployments. The system also provides sensor-based policy primitives and sensor-based policy actions for implementing domain specific policies at the user level.

Thus, in accordance with the invention, a sensor interoperability system is provided that has one or more sensors that each generate a piece of sensor data and one or more sensor proxies, each sensor proxy coupled to one of the one or more sensors, wherein each sensor proxy processes the piece of sensor data from the sensor to create a sensor system piece of data wherein the sensor system piece of data has a common format. The system also has a policy unit coupled to the one or more sensor proxies over a communications path, wherein the policy unit further comprises one or more policies, wherein each policy further comprises a condition and an action, wherein the condition tests a sensor system piece of data and the action is an action that occurs when the condition is satisfied, and a policy engine that applies the one or more policies to the sensor system piece of data.

In accordance with another aspect of the invention, a method for monitoring a system using a sensor network is provided. In the method, a piece of sensor data is generated at each of one or more sensors and each piece of sensor data of each sensor is processed using a sensor proxy to create a sensor system piece of data wherein the sensor system piece of data has a common format. Then, the system determines if the sensor system piece of data triggers a policy contained in a policy engine wherein the policy is triggered when the sensor system piece of data meets a condition of one or more policies stored in the policy engine. The system then performs the action of the triggered policy.

Brief Description of the Drawings

Figures 1-2 illustrate an example of a web-based sensor and video interoperability system and a deployment of that system, respectively;

Figure 3 illustrates more details of a policy engine of the system shown in Figures 1 and 2;

Figure 4 illustrates more details of a policy of the system shown in Figures 1 and 2;

Figure 5 illustrates more details of a sensor proxy of the system shown in Figures 1 and 2;

Figure 6 illustrates an example of an application for the system shown in Figures 1 and 2;

Figure 7 is a diagram illustrating an example of another deployment of the web-based sensor and video interoperability system;

Figures 8-11 illustrate examples of the operation of the web-based sensor and video interoperability system when processing a sensor;

Figures 12-16 illustrate examples of the user interface of the web-based sensor and video interoperability system;

Figure 17 illustrates more details of the inputs and outputs from the policy engine;

Figure 18 is a metadata class diagram of the policy engine;

Figure 19 illustrates more details of the fastset and namedset classes;

Figure 20 illustrates more details of the fastarray class;

Figure 21 illustrates more details of the AttributeDataRepository class;

Figure 22 illustrates more details of the AttributeData class;

Figure 23 illustrates a file loader/writer class of the policy engine;

Figure 24 illustrates an I/O handler object model of the policy engine;

Figure 25 illustrates a dialog box/command object class of the policy engine;

Figure 26 illustrates a statistics command inheritance hierarchy of the policy engine;

Figure 27 illustrates a statistics dialog box/command containment hierarchy of the policy engine;

Figure 28 illustrates a file load/write commands and data managers of the policy engine;

Figure 29 illustrates a first command class of the policy engine;

Figure 30 illustrates a second command class of the policy engine;

Figure 31 illustrates a third command class of the policy engine;

Figure 32 illustrates a view class model of the policy engine;

Figure 33 illustrates a modeless dialog box class of the policy engine;

Figure 34 illustrates a modal dialog box class of the policy engine;

Figure 35 illustrates a graphing module class of the policy engine;

Figure 36 illustrates a profile module class of the policy engine;

Figure 37 illustrates an observer base class of the policy engine;

Figures 38A and 38B illustrate the SPM distributed architecture;

Figure 39 illustrates the sensor proxy framework protocol support; and

Figure 40 illustrates more details of the policy engine.

Detailed Description of Exemplary Embodiments

The invention is particularly applicable to a web-based, software enabled sensor and video interoperability system and method using the sensors described below, and it is in this context that the invention will be described. It will be appreciated, however, that the system and method in accordance with the invention have greater utility since the system may be used with a variety of sensors not specifically disclosed below and may be implemented using other architectures than those specifically described below.

Figures 1-7 illustrate an example of an implementation of a web-based sensor and video interoperability system 30 which may be known as VSIS. As shown in Figure 1, the system 30 may be interposed between a client network 32 and a set of sensors 34 wherein the set of sensors may provide raw as well as derived and correlated data from sensors 34 connected to the client network. For example, the set of sensors may include one or more of real time location system (RTLS) sensors including passive and active radio frequency identification (RFID) readers, building management system (BMS) sensors including programmable logic controller (PLC) systems, distributed control systems (DCS) and supervisory control and data analysis (SCADA) systems, biometric sensors including automatic fingerprint identification systems and other biometric systems, chemical sensors, biosensors, nuclear sensors, video sensors including low resolution (less than 1 megapixel) visible spectrum sensors, high resolution visible spectrum sensors and thermal (infrared) sensors, and geographical information system (GIS) sensors that pull from a CAD format or a GIS database. The client network includes one or more client installations (for example, clients 36a and 36b in Figure 1) wherein each client installation may interact with the system 30 using a client application 38 connected to a database 40 or a web server 42 as shown.

The system 30 is a product that works within the client architecture to provide user access to raw as well as derived and correlated data from sensors 34 connected to the client network. The sensors will be integrated in a manner that will completely hide from the user all the specifics of the sensors (protocols, data format, and network connection). Alerts and alarms are some examples of derived data, but derived data can be as complex as reports or statistical correlations. The derived data are generated by executing policies that can be triggered either by events or scheduled at predefined times. The policies can be triggered by arbitrarily complex conditions and the actions are structures composed of a large set of primitive commands. There is no limitation on the number of sensors and/or policies. In the system, a policy is a prescribed plan or course of action of an entity intended to influence and determine decisions, actions and other matters. The policies permit an automated response to and control of a complex system. The sensors permit construction of semantically rich policy representations that can reduce human error and address complex events. The sensor interoperability provided by the system allows administrators and users to reuse the policy management framework and modify system behavior (by modifying the policies or the sensor proxies) without changing the source code of the system every time a sensor is added, removed or changed.

The system 30 leverages existing standards which makes it easy to support new sensors as well as to add more primitive commands. The system is not specific to any domain of application as it is a framework that could be used with minimum effort to deploy and write applications in different domains. A rich set of generic primitives together with the system tools will assist the user in deploying and writing applications in different domains. The system is also a repository for domain specific expertise (experience) by allowing the user to instantiate new sensor proxies (drivers) and primitive (or derived) commands and store them in domain specific libraries. A new application in the same domain can build upon such libraries.

The system 30 may include one or more components (implemented in software, a combination of software and hardware, or hardware such as a field programmable gate array (FPGA)) including a policy engine unit 50 known as VIPER, a configuration unit 52, a repository unit 54 and a set of sensor proxy units 56. The system may also optionally include a messaging middleware unit 58. In a preferred embodiment, the system 30 consists of a suite of distributed servers (for example, a VIPER server, a configuration server, etc.) covering different aspects of sensor data processing. Each unit is accessible through standard protocols that allow easy access of the clients to all system functionality as well as (network) location independence. The system 30 can be configured so that any number of servers can be started as needed to optimize the traffic and reduce overall response time, and the system is therefore not limited to any particular number or configuration of the servers. A client API (shown in Figure 2) enables integration with any other Java client that may want access to the sensor network. In accordance with the invention, the system 30 is a distributed system since the processes of the system (described below in more detail) can be executed on different machines as long as they are on the same network with no firewall (or with the port open) to permit communication between the machines.

Figure 2 illustrates an example of the deployment of the system 30. This deployment may contain two VIPER servers 50a, 50b wherein the server 50b is configured to do early processing of raw data from the three sensors and it incorporates the corresponding sensor proxies 56. The second VIPER server 50a does further processing but has no sensor proxy. As shown, each VIPER server may further include a policy engine 50a1, 50b1, a configuration API (50a2, 50b2) and a messaging API (50a3, 50b3). The configuration server 52 also has a configuration API 52a. The client 36a with the client application 38 may include a configuration API 38a that is used to configure and monitor the two servers 50a, 50b (as well as the configuration server 52) and a messaging API 38b that is used to subscribe to one or more data channels. The client 36a and the VSIS servers 50a, 50b can be hosted by the same machine or they can be distributed on different machines depending on availability, the expected traffic and how much CPU time it takes to run the policies. The system may have a proxy 56 for each type of sensor wherein the sensors are wrapped into objects (proxies) that will create a standard interface and implement the protocol used by all the other components in the system. Thus, the proxies will insulate the system from the sensor specifics and create a framework for quick integration of new sensors into the system. In accordance with the invention, the system can integrate all kinds of sensors including: 1) read-only sensors (these sensors publish data spontaneously, and do not implement any command); 2) read-write sensors (these sensors support configuration and polling commands and publish data on demand or spontaneously); and 3) write-only sensors (not really sensors because they do not publish data; they are devices that can be controlled).

The policy engine 50a1 or 50b1 in Figure 2 may store and implement a set of policies. The system 30 relies on policies to describe the processing that takes place within the system. Generally, a policy is a (condition, action) pair where a particular condition (if true) causes a particular action to occur. The policies are hosted and executed in the policy engines 50a1, 50b1 in Figure 2. Each policy engine loads a different set of policies depending on the function it has in the system and the sensors it needs to support (to process the information published by the sensor) wherein the set of policies for each policy engine may be loaded from the repository 54. In the system, the user does not need to know where or how policies are being executed since the system will try to propose a distribution that is optimal, meeting constraints on computing and network resources. While the clients use a remote procedure call (RPC) mechanism to execute configuration and control commands, the data is acquired using an asynchronous communication paradigm. In particular, the data producers (sensor proxies and policies) publish data and the consumers (clients and policy engines) subscribe and get notified when data become available (they are event driven). For example, the conditions that are supported by the system may include one or more of a redline (a threshold condition for an engine/motor), a focus (if and when in range), an alarm, a track (if and when acquired), a watchlist (identifier for an object and a condition list), locate (geolocate, LBS ID, RTLS spot), correlate (auto and cross-correlate), assert (true, fail, unknown, ID, breach), rate (count, bin, frequency), count, spot (report, miss, hit, ID), confirm (success or fail), deny, trend, subscribe channel, export channel, encrypt channel, record channel and group channel. The actions supported by the system may include one or more of alert, alarm/silent alarm, locate/geolocate, geofence, enable/disable, activate/reset, syndicate, track, mobilize, log/archive, confirm, tag, deny/shutdown/lockdown, message, broadcast (mobilize, evacuate, quarantine) and acknowledge. The invention, however, is not limited to the exemplary list of conditions or actions set forth above.
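By way of illustration, the following minimal Java sketch models the (condition, action) pairing described above using a redline (threshold) condition; the class and field names (Policy, SensorReading) are illustrative assumptions made for the example and are not part of the actual product API.

    import java.util.function.Consumer;
    import java.util.function.Predicate;

    // Minimal model of a (condition, action) policy as described above.
    // The names SensorReading and Policy are illustrative, not the actual API.
    class SensorReading {
        final String sensorId;
        final double value;
        SensorReading(String sensorId, double value) {
            this.sensorId = sensorId;
            this.value = value;
        }
    }

    class Policy {
        private final Predicate<SensorReading> condition; // e.g. a redline test
        private final Consumer<SensorReading> action;     // e.g. raise an alert

        Policy(Predicate<SensorReading> condition, Consumer<SensorReading> action) {
            this.condition = condition;
            this.action = action;
        }

        // The action fires only when the condition evaluates to true.
        void evaluate(SensorReading reading) {
            if (condition.test(reading)) {
                action.accept(reading);
            }
        }
    }

    class RedlinePolicyExample {
        public static void main(String[] args) {
            Policy redline = new Policy(
                r -> r.value > 90.0,                       // redline condition
                r -> System.out.println("ALERT: " + r.sensorId + " = " + r.value));
            redline.evaluate(new SensorReading("engine-temp-1", 95.2)); // triggers
            redline.evaluate(new SensorReading("engine-temp-1", 70.0)); // ignored
        }
    }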

The system may also support user authentication and authorization as well as encryption on all the data feeds to provide a secure system. The system also permits the selective logging of the different data sources. In addition, the system will enable access to the historic data for any client application that might need to do forensic analysis. Now, each unit of the system (the VIPER, the configuration unit, the messaging middleware unit and the sensor proxies) will be described in more detail. It should be understood that each unit can be deployed independently and the location of each component is controlled through the configuration.

Figure 3 illustrates more details of the VIPER unit 50 of the system shown in Figures 1 and 2. The VIPER unit 50 may further comprise a policy scheduler 60 and a policy knowledge base 62 wherein the policy knowledge base contains one or more policies 62a, 62b, 62c that are executed to perform the processing of the sensor data if the condition of the particular policy is met. VIPER is a central processing unit for the sensor data wherein VIPER subscribes to data sources over one or more input channels 64, such as input channels 64a, 64b shown in this example, and publishes data over one or more output channels 66, such as output channels 66a, 66b, to other data sources. The subscription to data and publication of data is done by the VIPER 50 based on the policies 62 executing on the particular VIPER. The VIPER 50 guarantees that a particular policy is triggered when the condition(s) for the particular policy are met. The system 30 shown in Figures 1 and 2 may include multiple instances of VIPER 50 in any deployment wherein each VIPER would load a different set of policies in its knowledge base. The decision of what policies are being used in each VIPER 50 can be made based on various criteria, such as the proximity of the sensors or other load balancing criteria. The system 30 can provide assistance on configuring the different VIPER 50 instances by suggesting default configurations but the final decision will be made by the system administrator. The configuration of each VIPER is loaded at start up time from the persistent repository 54 of the system. The configuration contains all the policies that execute within each instance as well as the definitions of the channels that use these policies. In accordance with the invention, since the configuration for each VIPER is loaded at start-up time, the operation and functions of each VIPER 50 in a system 30 can be easily altered.

Figure 4 illustrates more details of a policy 70 of the system. A policy is the basic building block for the system 30 and the policies are hosted and executed by a policy engine 50. A deployment of the system 30 may come with some default utility policies and some domain specific policies but the administrator can add more or customize the existing policies. Each policy 70 may be a data consumer (subscribing to one or more data sources through one or more channels) and a data producer (publishing generated/raw data to one or more data sources through one or more channels). Each policy is a pair: (<condition>, <action>) where the condition is a Boolean expression over the input channels 64 and the action is a sequence of primitive commands that may (among other things) publish data on output channels 66. The execution of the action part of a policy is controlled by the condition so that, only if the condition evaluates to true will the action part be triggered.

The condition of the policy may use logical operators which are the typical logical operators (AND, OR, NOT, exclusive OR) with no depth limitation. The operands they act on are either channel fields or they are generated by comparison operators such as less than (LT), less than or equal to (LE), greater than (GT), greater than or equal to (GE), equal to (EQ) and not equal (NE) applied to arbitrary numeric or string expressions. A function invocation can be another source of Boolean operands. An example of a policy is attached hereto as Appendix A that is a part of the specification and is incorporated herein by reference. The numeric expressions used in comparisons for a condition can use any/all arithmetic operators as well as predefined functions that return numeric values. When a function is invoked, the actual parameters can be general expressions of any type (numeric, string, Boolean). The condition can be an expression as complex as one can imagine (given the operators) but caution is recommended because of the way the conditions are used. The Policy Engine will evaluate a condition every time an event that might affect it is posted on the input channels. In an application with high frequency data reads, the Policy Engine may become a processing power hog just for checking the conditions, even if the action parts of the policies are never executed. The condition can use the current (most recent) values on the input channels or special statistical functions can be applied to historical data. For example a function can be used to return true if the values on a temperature channel have dropped more than 20 degrees within the last 10 minutes. Another function can return true if there is any jump of more than 5 degrees from one reading to the next (assuming a one Hertz reading frequency). The action part of the policy is executed when the values on the input channels make the condition true. The action part of the policy may include a primitive action and/or a composite action. The primitive actions are predefined functions that perform the following categories of activities: 1) send commands to the sensors (read a value, set/change reading frequency, lock); 2) perform intermediate data processing on input data and publish results on output channels; 3) generate alerts or other signals for end user consumption or to trigger other policies; and 4) perform management and monitoring functions (generate trace information). In addition to the actual functions available in a library (deployed with the system) all the primitive actions also have associated metadata that describe their signature. A mechanism will be in place to allow easy addition of new entries in the library of predefined actions. This will allow creating a richer set of primitives that best serve a specific domain by reducing the effort to write applications.
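The historical-window example mentioned above (a drop of more than 20 degrees within the last 10 minutes) could, for instance, be evaluated over buffered channel readings as in the following Java sketch; the class and field names and the millisecond timestamps are illustrative assumptions and do not represent the actual condition grammar of Appendix B.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Sketch of the historical-window condition used as an example above:
    // return true if the values on a temperature channel have dropped more
    // than 20 degrees within the last 10 minutes.
    class TemperatureDropCondition {
        private static final long WINDOW_MS = 10 * 60 * 1000; // 10 minutes
        private static final double DROP_THRESHOLD = 20.0;

        // Recent (timestamp, value) samples kept for the sliding window.
        private final Deque<double[]> samples = new ArrayDeque<>();

        // Called for each new reading posted on the input channel.
        boolean onReading(long timestampMs, double value) {
            samples.addLast(new double[] { timestampMs, value });
            // Discard samples older than the window.
            while (!samples.isEmpty() && timestampMs - samples.peekFirst()[0] > WINDOW_MS) {
                samples.removeFirst();
            }
            // The condition is true when the maximum value seen in the window
            // exceeds the current value by more than the threshold.
            double max = Double.NEGATIVE_INFINITY;
            for (double[] s : samples) {
                max = Math.max(max, s[1]);
            }
            return max - value > DROP_THRESHOLD;
        }
    }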

The composite actions are created when the primitive actions are combined to create complex actions to be used in the policies. Sequences, conditionals and different kinds of loops are examples of patterns used to create composite actions. The composite actions can in turn be combined to create actions of any complexity. Each policy may start with a policy template since, in the process of writing an application, the user must first define the policy templates as generic rules acting upon input parameters (specified only by name and data types) and generating output parameters (also specified by name and data types) with no reference to specific data channels. These templates look very much like library functions for a programming language wherein only the function signature is known. The functions can be called with any actual parameters provided they have the appropriate (or compatible) type, and a policy template specifies the condition and the action operating on the types of the input and output channels with no commitment to any actual channels to be used. In accordance with the invention, an application consists of a set of policy instances loaded into one (or multiple) Policy Engines 50. In summary, the Sensor Policy Manager (SPM) is a generalized network-centric, policy-based platform for delivering sensor, video and GIS interoperability. Specifically, the SPM provides IP-based sensor interoperability, standards-based sensor event notifications, sensor virtualization in a "product-format", integration of multiple sensors from multiple vendors providing data in multiple formats to a policy engine, an extensive and complete policy framework for addressing multiple vertical requirements, capability for real-time response to mission critical events and guaranteed timeliness in sensor event processing.

The primitive actions have associated metadata describing their signature which makes them behave like templates. The types of the parameters can be scalar or structured (see the description below of the channels for more details of the supported types). In the system, no assumption is made on what the parameters actually are: entire messages or just fields in a composite message and the same applies to all policy templates. Both the expressions and the actions in a template use one or more parameter(s) that are described by their types only although all the parameters must also have a name. The templates can also specify that certain parameters are attributes of the same structure by using a distinguished name as described below.

An instance of a policy is a template used in a particular context created by real input and output channels of an application. The process of identifying the channels to be used by a certain policy is called binding. A policy instance is nothing more than a template and its bindings and a template can be used in multiple instances. An example of the grammar for the Sensor Policy Engine is contained in Appendix B.

The binding is a map from the set of formal parameters of the policy to the set of actual channels (or their attributes). No formal parameter should be left unmapped (unbound). Multiple parameters can be mapped to the same channel (attribute).
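The relationship between a template, its formal parameters and a binding can be pictured with the following Java sketch; the types and names shown are illustrative assumptions rather than the actual representation used by the Sensor Policy Engine.

    import java.util.HashMap;
    import java.util.Map;

    // Sketch of a policy template and its binding: the template names its
    // formal parameters, and the binding maps every formal parameter to an
    // actual channel (or channel attribute).
    class PolicyTemplate {
        final String name;
        final String[] formalParameters; // e.g. { "temperature", "alertOut" }
        PolicyTemplate(String name, String... formalParameters) {
            this.name = name;
            this.formalParameters = formalParameters;
        }
    }

    class PolicyInstance {
        final PolicyTemplate template;
        final Map<String, String> binding = new HashMap<>(); // formal -> channel

        PolicyInstance(PolicyTemplate template) {
            this.template = template;
        }

        void bind(String formalParameter, String channelId) {
            binding.put(formalParameter, channelId);
        }

        // A binding is complete only when no formal parameter is left unbound.
        boolean isFullyBound() {
            for (String p : template.formalParameters) {
                if (!binding.containsKey(p)) {
                    return false;
                }
            }
            return true;
        }
    }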

The policy scheduler 60 shown in Figure 3 performs a policy scheduling strategy. In particular, every time a message is posted on a channel, all policies that have the condition bound to it are candidates for execution. Thus, those conditions are evaluated and if satisfied, the one or more policy actions are triggered. There is no predefined order in which the policies are tried. In accordance with the invention, channels with high frequency updates can generate a lot of activity in the policy engine hosting policies that depend on that channel so caution must be used when the policies are designed in this case. One possible solution is to have a high frequency channel used by a single (simple) policy that filters the messages and dispatches them on different channels (with lower update frequency) based on criteria that would make the data more specific (relevant) to second level policies.
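The scheduling strategy described above can be pictured with the following Java sketch, which builds on the illustrative Policy and SensorReading types shown earlier; it simply dispatches each posted message to every candidate policy bound to that channel, in no predefined order.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Sketch of the scheduling strategy described above: when a message is
    // posted on a channel, every policy whose condition is bound to that
    // channel becomes a candidate and is evaluated.
    class PolicyScheduler {
        // channel id -> policies whose conditions are bound to that channel
        private final Map<String, List<Policy>> candidates = new HashMap<>();

        void register(String channelId, Policy policy) {
            candidates.computeIfAbsent(channelId, k -> new ArrayList<>()).add(policy);
        }

        // Invoked for every message posted on a channel.
        void onMessage(String channelId, SensorReading reading) {
            for (Policy policy : candidates.getOrDefault(channelId, List.of())) {
                policy.evaluate(reading); // the action runs only if the condition is true
            }
        }
    }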

The policy engine 50 may have a set of interfaces, such as an XML mediation layer, a SOAP layer, a socket layer (for secure messages) and a web services interface, that permit the policy engine 50 to communicate with the other components of the system 30. The policy engine 50 may further comprise a secure data syndication and caching unit, a command, control and reporting unit, a metadata layer, a business intelligence layer and a model layer. Now, the sensor proxies in accordance with the invention are described in more detail.

Figure 5 illustrates more details of the sensor proxy 56 of the system 30. The only way a sensor can participate with the system 30 is by wrapping the sensor and its data into a sensor proxy 56. The sensor proxy creates a level of abstraction that will hide all hardware details from the other components in the system and from the client. Each sensor proxy is preferably a piece of software/software code resident on a machine local to the sensor. In particular, the unique aspects for each type of sensor are handled by the proxy sensor that is configured for the particular type of sensor so that the sensor data output from the sensor proxy has a uniform format, protocol and the like. In addition, the sensor proxies mean that a new type of sensor can be easily added into the system 30 at any time by simply creating a new sensor proxy for the new type of sensor.

The sensor proxy 56 supports command and control calls through a well known simple object access protocol (SOAP) interface. Each sensor proxy 56 may be set by the system using a set() command, and the sensor proxy may get data from a sensor using a get() command or set the sensor parameters using a set() command. The sensor proxy 56 may then publish the data from the sensor (once the sensor data has been converted into a common format and protocol) using a publish() command to one or more output channels 66 as shown in Figure 5. The interface of the sensor proxy is defined for each sensor class/type without going into the details of the specific sensor. The commands may refer to setting typical features of the sensor (like sampling rate or precision) or they may enhance the sensor with in-situ functionality like data filtering or conversion. A simple example of filtering would be to instruct the sensor proxy for a temperature sensor not to publish data unless the temperature gets over a certain low threshold. The proxy could also be instructed to publish, in addition to the actual reading, an alert (an event on an "alert" channel) when the temperature exceeds a high threshold level. Another feature that can be controlled at the proxy level is the units of the temperature. For example, the proxy can be configured to always publish the temperature in degrees Fahrenheit no matter what the physical thermometer is actually reading. The important point here is that the proxy can enable a sensor to comply with a higher standard (sophisticated, configurable) as long as the basic data is available. There may be sensors that can read both Celsius and Fahrenheit, and sensors that support low and high thresholds as they come from the manufacturer, but, using the sensor proxy, even a primitive sensor can do the same things as the more sophisticated sensor. In accordance with the invention, due to the sensor proxy, there is no system-visible difference between the two different sensors.
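As an illustration of the in-situ filtering, alerting and unit-normalization behavior described above, the following Java sketch models a hypothetical temperature sensor proxy; the method names, channel names and threshold values are assumptions made for the example and are not the proxy interface of the system.

    // Sketch of the temperature-proxy behavior described above: the proxy
    // suppresses readings below a low threshold, publishes an alert event
    // when a high threshold is exceeded, and always publishes in degrees
    // Fahrenheit regardless of what the physical thermometer reports.
    class TemperatureSensorProxy {
        private double lowThresholdF = 32.0;   // do not publish below this
        private double highThresholdF = 120.0; // publish an alert above this

        // set(): configuration commands issued through the proxy interface.
        void setThresholds(double lowF, double highF) {
            this.lowThresholdF = lowF;
            this.highThresholdF = highF;
        }

        // Called with a raw reading from the hardware, which may be Celsius.
        void onRawReading(double value, boolean isCelsius) {
            double fahrenheit = isCelsius ? value * 9.0 / 5.0 + 32.0 : value;
            if (fahrenheit < lowThresholdF) {
                return; // filtered in-situ, nothing is published
            }
            publish("temperature", fahrenheit);
            if (fahrenheit > highThresholdF) {
                publish("alert", fahrenheit); // event on an "alert" channel
            }
        }

        // publish(): in a deployment this would post to a typed channel; here
        // it simply prints the normalized value.
        private void publish(String channel, double valueF) {
            System.out.println(channel + ": " + valueF + " F");
        }
    }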

Some of the sensors used in the system are read-only so that those sensors publish data periodically and do not accept any commands. For these read-only sensors, any configuration would be done using the hardware features of the sensor itself (knobs, switches). Some sensors are read-write so that they accept commands to change parameters (like sampling rate or precision) or to return data (they have to be polled because they do not publish data spontaneously).

There is yet another category of devices that are not sensors (we can call them degenerate sensors) but are supported by the system 30 because they may play an important role in sensor related applications. These degenerate sensors are write-only devices since they do not provide any information about the environment but they can change the environment. Examples of these degenerate sensors are locks and switches. In most cases, these degenerate sensors will also confirm the success of an operation (the turn of a switch, for example), but the lack of the ability to confirm the success of the operation does not prevent the degenerate sensor from being integrated within the system 30 using a sensor proxy that is customized for the degenerate sensor.

The sensor proxy 56 may also provide another benefit since the sensor proxy can provide the physical location of the sensor, whether a mobile sensor that is GPS enabled or a fixed sensor. For those mobile sensors, the sensor proxy will simply make the location available (publish it) to a channel. The connection type and the protocol used by the sensor are also normalized by the sensor proxy 56. The sensor proxy is described in more detail in Figures 38-40 below and in Appendix C which is incorporated herein by reference. In accordance with the invention, all sensors appear to the system 30 as having an IP address and a port (the connection type). As for the protocol, all sensor proxies will publish data using the known sensor model language (SensorML) which is an XML-based language for describing the geometric, dynamic, and radiometric properties of dynamic in-situ and remote sensors. More details of this known sensor model language are provided at http://www.opengeospatial.org/. In accordance with the invention, regardless of whether the sensor spontaneously pushes data or it has to be polled, the sensor proxy will make the sensor look like it is always publishing data to channels of the system. In accordance with the invention, any number of sensor proxies can be configured in the system 30. The sensor proxy configuration is loaded at startup time and controls all the aspects of its execution. The configuration can be subsequently changed through commands issued by the policies or by the client (the Administration Console, for example, as described below in more detail).
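For illustration only, the following Java sketch shows a proxy formatting a reading as a simplified, SensorML-flavored XML payload before posting it to a channel; the element names are placeholders chosen for the example and do not constitute a complete or valid SensorML document.

    // Sketch of a proxy formatting a reading as a simplified, XML-based
    // message before posting it to a channel, illustrating the idea of a
    // uniform payload per reading regardless of the underlying hardware.
    class SensorMessageFormatter {
        static String toXml(String sensorId, String quantity, double value,
                            String units, long timestampMs) {
            return "<Observation>"
                 + "<sensor>" + sensorId + "</sensor>"
                 + "<quantity>" + quantity + "</quantity>"
                 + "<value uom=\"" + units + "\">" + value + "</value>"
                 + "<time>" + timestampMs + "</time>"
                 + "</Observation>";
        }

        public static void main(String[] args) {
            System.out.println(toXml("thermo-12", "temperature", 72.4, "degF",
                                     System.currentTimeMillis()));
        }
    }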

The sensor proxies 56 may have a set of interfaces, such as an XML mediation layer, a SOAP layer, a socket layer (for secure messages) and a web services interface, that permit the sensor proxy 56 to communicate with the other components of the system 30. Each sensor proxy 56 also has a secure data syndication unit, a command, control and reporting unit, a metadata layer, a business intelligence layer, a hardware abstraction layer and one or more sensor interfaces. Now, the channels of the system are described in more detail.

The channels of the system (shown in the above figures) are the mechanism by which all data communication takes place asynchronously in the system. The channels in the system 30 are strongly typed so that each channel has a declared type and all the messages posted in the channel comply with that declared type. This type is defined by an XML schema that governs the format for each data packet on a specific channel. If for any reason a channel is meant to allow multiple types of messages to be posted, a catch-all type (untyped or "any") will be declared, but that puts the burden on the consumers to parse the content. In accordance with the invention, all messages will have an XML schema associated with them. For the untyped messages, the XML schema can be referenced in each message but no assumption can be made about the fields (attributes), so the message will be delivered as a block to the consumer as opposed to a collection of fields. Messages can also have MIME encoded attachments, as is the case with IP based camera video feeds which can be syndicated frame by frame such that each frame would arrive as an attachment to a message. The channels may be scalar channels or structured channels. The simplest type of channel is one that carries messages with a single scalar. The following scalar types are supported for channels: 1) integer; 2) double; 3) Boolean; and 4) string. There is no need to name the field of a scalar channel as the ID of the channel is enough to name it. Most of the channels will have more than one field and are therefore structured channels. In fact, even if the actual (live) data is a scalar (temperature or pressure), the sensor will likely include a few other qualifiers such as, for example, timestamp, location and/or units of measure. The attributes of a message have a unique ID (within the type scope) and a type, scalar or structured. The structured attributes can be arrays and other structures. An array is a collection of homogenous attributes with the types being scalar or structured. The type of a structured message can be of any depth. The elements of a structure are referred to by name (within the scope) and the elements of an array are referred to by position. A simple illustration of such a structured message is sketched below, after which the configuration unit of the system is described in more detail.
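The following Java sketch shows a scalar temperature reading carried together with the qualifiers mentioned above (timestamp, location and units of measure); the field names are assumptions made for the example, and in the system the actual structure would be governed by the channel's XML schema.

    // Sketch of a structured channel message: the live measurement is a
    // single scalar, but the message carries a few qualifiers alongside it.
    class StructuredTemperatureMessage {
        final double value;        // the scalar measurement
        final String units;        // e.g. "degF"
        final long timestampMs;    // when the reading was taken
        final double latitude;     // location qualifiers
        final double longitude;

        StructuredTemperatureMessage(double value, String units, long timestampMs,
                                     double latitude, double longitude) {
            this.value = value;
            this.units = units;
            this.timestampMs = timestampMs;
            this.latitude = latitude;
            this.longitude = longitude;
        }

        public static void main(String[] args) {
            StructuredTemperatureMessage m = new StructuredTemperatureMessage(
                72.4, "degF", System.currentTimeMillis(), 34.0522, -118.2437);
            System.out.println(m.value + " " + m.units + " at ("
                + m.latitude + ", " + m.longitude + ")");
        }
    }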

Referring again to Figures 1 and 2, the configuration unit (which may be a server in a preferred embodiment) has the role of mediating the access to the metadata repository 54 for all clients (among which the administration console plays an important role) as well as the system components that may need to modify the configuration. The policy engine 50 may execute commands that dynamically change the configuration and those changes are mediated by the configuration unit to guarantee consistency. At startup time, all system components will load their configuration from the database 54 directly. The configuration server will support a checkpoint call to allow the user to roll back changes to a stable (tested) configuration.

The system components will negotiate with the configuration server to get their dynamic configuration updates while in operation and at startup. The configuration server also provides security services since it checks the identity of the users at login time and authorizes access to the system resources (metadata, policies, and channels). Some of the objects that are persistently stored in the repository and accessed through the configuration server may include users, groups, permissions, policy templates and instances (described above), channels (definitions), sensor proxy configurations and policy engine configurations of the system. The configuration server will expose a SOAP interface that can be used by both end users (the Administration Console described below) and by the other components in the system. The policy engine 50 and the sensor proxy 56 will both read their configuration from the repository by using API calls to the configuration server. The configuration unit may have one or more interfaces, such as an XML mediation layer, the SOAP layer and a web services interface. Now, the messaging middleware will be described in more detail.

Referring again to Figures 1 and 2, the messaging middleware unit 58 manages the asynchronous communications data transfer paradigm (even though the configuration and control commands use the RPC SOAP mechanism to access the corresponding services). The asynchronous data transfer paradigm enables the de-coupling between the data producers (sensor proxies and policy engines) and the data consumers (clients subscribing to data and other policy engines). The asynchronous communication allows the consumers to register observers to the channels of interest and get notified when the data becomes available without having to poll the data sources, so that consumers are event driven. In accordance with the invention, any number of consumers can subscribe to the same channel. This multicast feature can easily be implemented without the data producer having to do anything for that, and the multicast feature can be controlled by the configuration. Another important advantage of using messaging middleware software comes from its ability to store the data in a persistent repository 54 before delivering it. Since the system 30 is a distributed system, the connection between the data producers and the data consumers can be temporarily broken. If guaranteed delivery is needed for some sensor readings (or derived results) then the corresponding channels should be configured as persistent. Using this mechanism, once the data has been successfully published to the repository it will be available to any client (subscriber) even if the sensor is not online at the time of the publication. A sketch of this subscription pattern follows, after which the administration console is described in more detail.
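The observer-based subscription pattern described above can be pictured with the following Java sketch; the DataChannel type and its methods are illustrative assumptions made for the example, and persistence and guaranteed delivery are deliberately omitted.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Consumer;

    // Sketch of asynchronous, observer-based channel subscription: consumers
    // register observers and are notified when data becomes available, and
    // any number of consumers can subscribe to the same channel.
    class DataChannel<T> {
        private final String id;
        private final List<Consumer<T>> observers = new ArrayList<>();

        DataChannel(String id) {
            this.id = id;
        }

        // A consumer (client or policy engine) registers interest in the channel.
        void subscribe(Consumer<T> observer) {
            observers.add(observer);
        }

        // A producer (sensor proxy or policy) publishes; every observer is
        // notified without the producer knowing who the consumers are.
        void publish(T message) {
            for (Consumer<T> observer : observers) {
                observer.accept(message);
            }
        }

        public static void main(String[] args) {
            DataChannel<Double> temperature = new DataChannel<>("temperature-lab-3");
            temperature.subscribe(v -> System.out.println("client sees " + v));
            temperature.subscribe(v -> System.out.println("policy engine sees " + v));
            temperature.publish(72.4);
        }
    }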

Referring to Figure 2, the client 38 may include an administration console which is a browser based graphical user interface (GUI) that allows the user to configure and access all the resources in the system 30. The administration console may perform configuration and monitoring functions of the system. The administration console may be used to define and configure various types of resources including a channel type, a channel instance, a policy template, a policy instance, a policy engine configuration and a sensor proxy configuration. The console may also be used to monitor the activities of the system 30 which may include a subscription to channels and/or watching the traffic, checking the state of different components of the system and/or monitoring the traffic and the policies being triggered at each Policy Engine.

The channels can carry either raw data from the sensor (proxies) or derived data generated by policies, wherein the policies can be written to publish on dedicated channels any information, from the simple fact that the policy itself has been triggered (publishing the policy ID) to complex logging. The monitoring capability relies entirely on the subscription to channels. As for the actual content of the channels, the administrator will be responsible for defining (and making sure they get triggered) the right policies.

Figure 6 illustrates an example of an application 80 for the system shown in Figures 1 and 2. The application 80 can be seen as a puzzle that can be grown indefinitely using the two functional patterns created by the sensor proxy 56 and the policies, wherein the underlying idea is that the channel is the only way data is being communicated and there are no back doors, special interfaces or protocols that different components can use to transfer data. As shown in Figure 6, the application shows channels being shared (subscribed to) by both policies and clients and it also illustrates policies publishing data to multiple channels. It should be pointed out, though, that the figure represents the logical structure of the system and does not reveal anything about the mapping of the functional entities to physical resources.

Resource allocation is important for system optimization and it becomes very important for large systems where poor performance can make them impractical. It is also a major concern in the system 30 where scalability is important. To this end, the system 30 will suggest resource allocation strategies that could be used to drive (to some degree) the resource mapping since the resource allocation can definitely be done in an automatic way to improve (not necessarily optimize) the performance of the system. The performance is measured in terms of network load and CPU load. For example, one can think of the policy engine as being a process that can easily be loaded on any platform. If there are a number of policies sharing the same channels, one can bundle all of them in the same knowledge base and have them driven by the same policy engine. This will reduce the traffic by moving only one copy of the data to that machine instead of replicating it to multiple machines hosting the policies. Configuration optimization can become a difficult problem when policies are defined with a particular resource allocation scheme in mind. For example, if the pieces of information that some of the rules need are bundled in a blob, then a single policy can be defined so that the blob is processed only once instead of being moved around. For example, an image processing (pattern matching) algorithm should be executed early on by the first policy that gets hold of a picture instead of letting it wander around for other policies that may need pieces of it (timestamp, location, or other details). Other issues like event incidence (frequency) and level of importance can also be factors that influence the resource allocation.

Figure 7 is a diagram illustrating an example of another deployment of the web-based sensor and video interoperability system 30 that has a primary location 90, a remote administration location 92, a remote user location 94 and first and second remote sites 96, 98 wherein the sensors 34 at the remote sites 96, 98 are controlled by the policy engine 50. The primary location 90 may include the policy engine 50, an administration agent 100 (the configuration unit 52) and one or more sensor proxies 56 connected to sensors 34 which are all coupled together by a bus 102 such as a secure message bus in the preferred embodiment. The primary location 90 may also include a client unit 104 and a user console unit 106 wherein each has a set of APIs 108. The client unit further comprises a client stack 110 and an administration console 112 while the user console 106 further comprises a user console application 114. The remote administration unit 92 permits a user remote from the primary location to perform administration operations and functions from the remote location. The remote user unit 94 permits a remote user to take actions that a user might be able to take at the primary location. One remote site 96 may include a phantom switch 116 and an administration agent 100. The one or more remote sites 96, 98 are coupled to the primary location 90, the remote administration unit 92 and the remote user unit 94 over a link 118 such as a virtual private network link in the preferred embodiment of the system.

Figures 8-11 illustrate examples of the operation of the web-based sensor and video interoperability system when processing a sensor. In particular, Figure 8 illustrates a method 120 for processing a sensor signal. In step 122, a chemical sensor is triggered with chlorine bleach at a location in which a chemical sensor connected to the system is located. In this example, the chemical sensor generates a value indicating the level of the chlorine sensed by the sensor. In step 124, the sensor proxy for the particular chemical sensor (typically at the location of the sensor) performs its functions (including the conversion of the sensor data format into a common format and protocol) and processes the sensor signal values. In step 126, the policy engine executes a policy associated with the sensor output (the condition) and performs an action based on the condition. In this example, the policy sends an alert to the client 36 over the communications link. In step 128, the client may distribute the alert to one or more users based on a set of group level policies that dictate which users receive the particular alert based on a triggering of the chemical sensor.
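The policy at the heart of this example could be as simple as the following Java sketch, in which the condition tests the reported chlorine concentration against a threshold and the action sends an alert toward the client; the threshold value, channel and method names are illustrative assumptions rather than the system's actual configuration.

    // Sketch of the chlorine example above: condition = concentration over a
    // threshold, action = send an alert that the client can distribute to users.
    class ChlorineAlertExample {
        static final double CHLORINE_ALERT_PPM = 0.5; // hypothetical threshold

        static void onChemicalReading(String sensorId, double ppm) {
            if (ppm > CHLORINE_ALERT_PPM) {              // condition
                sendAlertToClient(sensorId, ppm);        // action
            }
        }

        static void sendAlertToClient(String sensorId, double ppm) {
            // In a deployment this would publish to an alert channel that the
            // client subscribes to; here it simply prints the alert.
            System.out.println("ALERT: chlorine " + ppm + " ppm at " + sensorId);
        }

        public static void main(String[] args) {
            onChemicalReading("chem-lobby-1", 0.8); // triggers the alert
            onChemicalReading("chem-lobby-1", 0.1); // below threshold, no alert
        }
    }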

Figure 9 illustrates a method 130 for processing a different sensor signal in accordance with the invention wherein a client can trigger a sensor event. In this example, in step 132, a client sends an RFID trigger to the policy engine. In step 134, the policy engine and the sensor proxy (for the particular type of sensor) command an IP video camera (the sensor) to take a snapshot. In step 136, the sensor proxy gathers the images from the video camera and writes the images to a location (storage unit) for the client that requested the triggering of the video camera. In step 138, the policy engine executes a policy associated with the sensor output (the condition) and performs an action based on the condition. In this example, the policy sends an alert to the client 36 over the communications link about the completed video camera images. In step 140, the client may distribute the alert to one or more users based on a set of group level policies that dictate which users receive the particular alert based on a triggering of the video camera. Figure 10 illustrates a method 150 for processing a sensor signal of another type of sensor. In step 152, a radiation sensor located at a site of uranium ore is triggered wherein the radiation sensor is connected to the system. In this example, the radiation sensor generates a value indicating the level of the radiation sensed by the sensor. In step 154, the sensor proxy for the particular sensor (typically at the location of the sensor) performs its functions (including the conversion of the sensor data format into a common format and protocol) and processes the sensor signal values. In step 156, the policy engine executes a policy associated with the sensor output (the condition) and performs an action based on the condition. In this example, the policy sends an alert to the client 36 over the communications link. In step 158, the client may distribute the alert to one or more users based on a set of group level policies that dictate which users receive the particular alert based on a triggering of the radiation sensor. Figure 11 illustrates a similar method 160 in which the output from a bio sensor for airborne non-toxic contaminants is sensed, processed and distributed using similar steps as shown in Figure 10. Now, examples of the user interface of the system shown in Figures 1-2 and 7 will be shown in more detail.

Figures 12-16 illustrate examples of the user interface of the web-based sensor and video interoperability system and in particular the administration console 112 and user console 114. The consoles provide an administrator and user, respectively, with a user interface to interact with the system. For example, Figure 12 illustrates a user interface screen 170 by which an authorized user or an administrator is able to create, modify or delete the policies of the system. For each new policy, the user interface permits the user to select a type of sensor that is part of the policy, a type of operation, a conditional or logical operator and an action associated with the policy. The user interface may also provide the user with one or more condition templates and one or more action templates. From Figure 12, it is apparent that the system 20 permits an authorized user of the system to easily create, modify or delete a policy of the system.

Figure 13 shows an example of a user interface 180 for configuring the sensor proxies (depicted here as sensor adaptors) and sensors of the system 30 shown in Figures 1, 2 and 7. The user interface may include a list of sensor proxies that can be adjusted and modified by a user with the appropriate authority. The user interface also lists one or more sensors wherein each sensor has a sensor type, a location, a sample rate, a data filter and a syndication policy associated with the particular sensor. As shown in Figure 13, an authorized user can change the sample time of a sensor, change the data filter of the sensor and change the syndication policy of each sensor. The syndication policy is the policy (executed by the policy engine) that determines to which users or groups of users an event notification about a sensor will be sent.

Figure 14 shows an example of a user interface 190 that shows the real-time log of the web-based sensor and video interoperability system. Figure 15 shows an example of a user interface 200 showing the results of the processing of one or more sensors of the system wherein the user can configure which sensor outputs/results form part of the user interface. In the example shown, the reporting user interface is displaying video information in combination with other sensor information (temperature in a room and CPMs) to the user. For the exemplary video data shown in Figure 15, the video data includes IR data and visible spectrum data. Thus, the system is able to receive data from video and other sensors and combine that data together.

Figure 16 shows an example of a user interface 210 of a group management screen of the system 30. The group management screen permits an authorized user, such as an administrator, to create, modify or delete a group of users and/or colleague institutions. The colleague institutions are a group of all users for a particular organization together with the access privileges for those users of the institution. For each group, the user interface permits the authorized user to select one or more users for the group as well as the access privileges for the particular group. Thus, the system permits the authorized user to specify a plurality of groups wherein each group may have a different or the same group of users and different or the same access privileges for the group so that the system permits various levels of granularity of the access privileges for each user of the system.

Figure 17 illustrates more details of the inputs and outputs from the policy engine 50 known as VIPER which is a very high-density policy encoder. The policy engine receives input files 220 produced by any one of several commercial applications as well as data from the sensor proxies 56. The input files contain a variety of fields that typically include call information, signal intensity, and statistical information such as mean, median and standard deviations of intensity. There are many other fields for which support is also provided. The policy engine provides at least the following functions and operations: 1) a direct comparison of results from different applications (since the data is in a common format); 2) performing complex set operations on results (intersections and unions); 3) providing complex mathematical filters on result information; 4) allowing the user to view result information and carry out multi-level sorts and filters on viewed results information; 5) allowing the user to set up persistent file sections that allow the user to view and/or write only selected fields from a particular set of results; 6) allowing multiple views of the same or different sets of results to co-exist at any one time; 7) providing file output capability for any of the sets of results originally input or any results that have been modified by VIPER operations; 8) providing the capability to generate graphs based upon any two file columns; 9) calculating some basic statistics on a gene-by-gene basis wherein these statistics are mean, median, coefficient of variance, standard deviation, maximum and minimum; and 10) performing simple calculations between different columns of data.
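By way of illustration of the set operations listed above, the following minimal C++ sketch shows how an intersection of two result sets keyed on an identifier could be computed once the data is in a common format. The names ResultRow and intersectById are hypothetical and are not taken from the policy engine itself.

#include <map>
#include <string>
#include <vector>

// Hypothetical row of a results file after conversion to the common format.
struct ResultRow {
    std::string id;     // e.g. a gene or sensor identifier
    double signal;      // signal intensity
    double mean;        // a statistical field carried along with the row
};

// Intersection of two result sets keyed on the id field: only rows whose id
// appears in both inputs are kept (values are taken from the first set).
std::vector<ResultRow> intersectById(const std::vector<ResultRow>& a,
                                     const std::vector<ResultRow>& b)
{
    std::map<std::string, ResultRow> index;
    for (const ResultRow& row : b)
        index[row.id] = row;

    std::vector<ResultRow> out;
    for (const ResultRow& row : a)
        if (index.count(row.id))
            out.push_back(row);
    return out;
}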

Thus, the policy engine 50 receives both input files 220a and input files 220b. The invention, however, is not limited to any particular input files or file types as the system may be used with other input files and file types. As shown in Figure 17, the policy engine 50 may generate one or more outputs/output files such as a results file 222, a graph 224 and a report 226. Now, the metadata associated with the policy engine will be described in more detail.

Figure 18 is a metadata class diagram 230 of the policy engine 50. In particular, the policy engine is preferably implemented in JAVA using object oriented programming in which the data structures of the policy engine are objects and object classes such as those shown in Figure 18. The classes shown in Figure 18 are the classes used by the policy engine to perform the functions and operations of the policy engine. The classes may include an AttributeData class 230a wherein all arrays and results files for the policy engine 50 are stored in memory as a named set of AttributeData objects. The classes may also include a fastset class 230b that is similar to the Standard Template Library set and is used because it requires less memory resources than its STL counterpart (about 40% less) and a namedset class 230c that is a subclass of fastset and adds the capability to name the set and to provide a free-format string as user data. These classes are shown in more detail in Figure 19. The classes may also include a subattributes class 230e that stores one or more sub-attributes of the system data. The classes may also include an AttributeDataRepository class 230f that contains the objects that store the arrays and results files.

The classes further include a set of helpers to the main metadata classes that may include a sortcriteria class 230d, a uniquenamehandler class 230g, a supportedvariables class 230h, a logicaloperation class 230i and a fieldoperation class 230j. The SortCriteria is an elementary container class held by every AttributeData object and used for sorting and filtering. The UniqueNameHandler is used to format a key (name) for each AttributeData. The SupportedVariables are another container that is passed to each AttributeData object as a part of the initialization and used by the object to find out what ViaCollectable objects it supports and to allocate the memory for these. The LogicalOperation and FieldOperation objects are used by AttributeData during Gene Call and Mathematical Filter operations. The classes may further include a viacollectable class 230k which may further include a viacollectablearray class 230l, a viacollectableint class 230m, a viacollectabledouble class 230n, a viacollectablestring class 230o and a fastarray class 230p. A single instance of a ViaCollectable represents an individual attribute for a particular gene in an array, e.g. signal, p-value, etc. ViaCollectable is an abstract class and its subclasses which may be instantiated are ViaCollectableArray, ViaCollectableDouble, ViaCollectableString and ViaCollectableInt. ViaCollectable is a very lightweight class holding only an id (integer) and a value of the type corresponding to its own kind, e.g. a ViaCollectableArray holds a single fastarray<double>.
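The lightweight id-plus-value structure described above may be sketched in C++ as follows. The sketch is illustrative only: the class names follow the description, but the exact member names and signatures are assumptions rather than the actual implementation.

#include <vector>

// Abstract base: holds only an integer id; the value lives in the subclass.
class ViaCollectableSketch {
public:
    explicit ViaCollectableSketch(int id) : m_id(id) {}
    virtual ~ViaCollectableSketch() {}
    int id() const { return m_id; }
private:
    int m_id;
};

// Concrete subclass for a single integer attribute value.
class ViaCollectableIntSketch : public ViaCollectableSketch {
public:
    ViaCollectableIntSketch(int id, int v) : ViaCollectableSketch(id), m_value(v) {}
    int value() const { return m_value; }
private:
    int m_value;
};

// Concrete subclass for a single double attribute value.
class ViaCollectableDoubleSketch : public ViaCollectableSketch {
public:
    ViaCollectableDoubleSketch(int id, double v) : ViaCollectableSketch(id), m_value(v) {}
    double value() const { return m_value; }
private:
    double m_value;
};

// The array variant holds a single array of doubles (a std::vector stands in
// for fastarray<double> in this sketch).
class ViaCollectableArraySketch : public ViaCollectableSketch {
public:
    ViaCollectableArraySketch(int id, const std::vector<double>& v)
        : ViaCollectableSketch(id), m_value(v) {}
    const std::vector<double>& value() const { return m_value; }
private:
    std::vector<double> m_value;
};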

Figure 20 illustrates more details of the fastarray class 230p. Fastarray is a better-performing implementation of the STL valarray and replaces it. In addition, it contains some methods for sorting and for making elementary mathematical calculations.
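As an illustration only, a replacement for valarray with added sorting and elementary statistics might take the following shape; the name FastArraySketch and its methods are assumptions and do not reproduce the real fastarray interface.

#include <algorithm>
#include <numeric>
#include <vector>

// Illustrative stand-in for fastarray<T>: a thin wrapper over a contiguous
// buffer with sorting and elementary statistics added.
template <typename T>
class FastArraySketch {
public:
    explicit FastArraySketch(std::size_t n = 0, T init = T()) : m_data(n, init) {}

    T&       operator[](std::size_t i)       { return m_data[i]; }
    const T& operator[](std::size_t i) const { return m_data[i]; }
    std::size_t size() const { return m_data.size(); }

    // Sort the elements in place in ascending order.
    void sortAscending() { std::sort(m_data.begin(), m_data.end()); }

    // Elementary mathematical calculations.
    T sum() const { return std::accumulate(m_data.begin(), m_data.end(), T()); }
    double mean() const {
        return m_data.empty() ? 0.0 : static_cast<double>(sum()) / m_data.size();
    }

private:
    std::vector<T> m_data;
};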

Figure 21 illustrates more details of the AttributeDataRepository class 230f which is a singleton repository class. It contains all array and result files loaded or generated by the policy engine. Figure 22 illustrates more details of the AttributeData class 230a. The AttributeData class contains one gene entry in an array; sometimes this corresponds to a single line within a text file. The object is able to hold a collection of its own kind, thereby allowing the concept of sub-attribute data.

Figure 23 illustrates a file loader/writer class 240 of the policy engine. Each file loader/writer class derives from an abstract class AttributeDataMgr 242. The file loader/writer classes operate to permit the system to interface with external applications and systems. In one embodiment, the system may have six loaders which include: a PL loader (GeneAttributePLDM 244a); a ViaAmp cell loader (ViaAmpCellDM 244b) that supports ViaAmp files where there is cell (feature) level information; a results loader (AttributeResultsDM 244c) for results files produced after an intersection or union operation in PL mode; a statistics loader (AttributeStatisticsDM 244d) for files generated from the statistics modules; a data picture loader (AttributeDataGenePixDM 244e) for Pix files; and a ViaAMP loader (ViaAmpDM 244f) for ViaAmp files and files where there are multiple datapoints in a single file.

For loading, the virtual parseHeaderInfo method is invoked to map the input file columns to identifiers and to provide any other information required to load the file. A loadResults method is invoked in order to load up the array information and store it in the AttributeDataRepository singleton object. Each DataManager class has a contained AsciiFileParser object 246, used to read the file and parse it into fields. A setOutputData method is invoked to determine the columns (ids) to print out and to obtain any other information required to print out a file. A writeOutputFile method is invoked to write out the file. A FieldIds object 248 contains a static collection of all identifiers used to identify a particular column, including the associated strings (column headers) and types (floating point, integer, string, array).
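The loading/writing life cycle described above can be expressed as an abstract base class. The following sketch uses the virtual method names from the text; the parameter types and the IOHandlerSketch helper are assumptions made for illustration.

#include <string>

// Hypothetical handle abstraction standing in for the IO handler classes.
class IOHandlerSketch;

// Abstract data manager: every concrete loader/writer follows the same
// four-step life cycle described in the text.
class AttributeDataMgrSketch {
public:
    virtual ~AttributeDataMgrSketch() {}

    // Map input file columns to identifiers before loading.
    virtual void parseHeaderInfo(IOHandlerSketch& io) = 0;

    // Load the array information and store it in the repository singleton.
    virtual void loadResults(IOHandlerSketch& io) = 0;

    // Decide which column ids are printed on output.
    virtual void setOutputData(const std::string& columnSpec) = 0;

    // Write the selected columns out through the supplied handler.
    virtual void writeOutputFile(IOHandlerSketch& io) = 0;
};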

Figure 24 illustrates an I/O handler object model 250 of the policy engine. An IO handler subclass must be passed into the DM class in order to successfully read or write a file. The IOHandler itself is abstract and provides the pure virtual methods that must be invoked. The classes may include a FileHandler object 252 and a CFileHandler 254 that encapsulate the GUI CFile and CArchive objects respectively. A NonGUIFileHandler 256 uses the standard "C" library calls rather than the GUI and a NullFileHandler 258 is a do-nothing handler that is used if there is a need to determine the list for sorting or anything else of this ilk but no requirement to print anything. If a ViewerHandler 259 is passed to a DM class and a file write is carried out, then the information will appear in a grid rather than in a file output. The ViewerHandler is supported only for file writes since reading the information in a grid is considered to be of low value (since it is easier to access the underlying metadata object).

Figure 25 illustrates a dialog box/command object class 260 of the policy engine. The policy engine consists of a series of services encapsulated within self-contained dialog boxes. Where appropriate, these dialog boxes are modeless. Typically, such a service consists of a GUI dialog class containing a non-GUI command class (as shown in the diagram below for cDNA statistics). The GUI class has the duty of loading user-selected GUI information into its contained command object. It then calls the virtual Execute method on the command object and loads the results back into the GUI. The command class has the duty of carrying out all processing not intimately connected to the GUI itself, but the command class has no knowledge of the GUI dialog box object containing it, which is the command design pattern. An example of this is shown in Figure 26 for the statistics module.
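A minimal C++ sketch of this dialog/command split is shown below. The StatisticsDialogSketch and StatisticsCommandSketch names are hypothetical; the sketch simply demonstrates the command design pattern described above, in which the GUI object loads its contained command, calls Execute and reads the results back.

#include <iostream>
#include <vector>

// Non-GUI command: carries out the processing and knows nothing about the
// dialog box that owns it.
class StatisticsCommandSketch {
public:
    void setInput(const std::vector<double>& values) { m_values = values; }

    // Virtual in the real design; plain here to keep the sketch short.
    void execute() {
        double sum = 0.0;
        for (double v : m_values) sum += v;
        m_mean = m_values.empty() ? 0.0 : sum / m_values.size();
    }

    double mean() const { return m_mean; }

private:
    std::vector<double> m_values;
    double m_mean = 0.0;
};

// GUI-side dialog stand-in: loads the user selection into its contained
// command, calls execute(), then reads the results back for display.
class StatisticsDialogSketch {
public:
    void onOk(const std::vector<double>& userSelection) {
        m_command.setInput(userSelection);
        m_command.execute();
        std::cout << "mean = " << m_command.mean() << "\n";
    }
private:
    StatisticsCommandSketch m_command;
};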

Figure 26 illustrates a statistics command inheritance hierarchy 270 of the policy engine. In accordance with the invention, all command objects inherit from the abstract base class "command" 272. For statistics, there are two further levels of inheritance. The first level of StatisticsBaseCommand 274 is also abstract and contains generalized methods. The final level of inheritance contains the individual concrete statistics commands 276, 278. Figure 27 illustrates a statistics dialog box/command containment hierarchy 280 of the policy engine that has two GUI-dialog classes that both inherit directly from CDialog. Each class contains a command of the appropriate type.

The Command Pattern as related to file loading and writing is now described in more detail. The DataManager (DM) classes act as helper classes to the FileWrite and FileOpen command classes and are also encapsulated by these commands as shown in Figure 28. Figures 29-31 show the other command classes of the system wherein there is a command class corresponding to every significant operation carried out by the policy engine.

Figure 32 illustrates a view class model 290 of the policy engine. The policy engine is a single document interface (SDI) GUI project and this architecture imposes a document, view and mainframe class at the top level of the policy engine. In the policy engine, almost all of the real work is carried out by dialog boxes which are the next level down.

This leaves the top-level classes with only shell functionality, i.e. They provide the means to invoke the various dialog boxes but do very little else. The purpose of adopting this as a strategy is to allow the various policy engine dialog box based utilities to be easily invoked from applications other than the policy engine itself without any dependencies on these top-level classes.

Figure 33 illustrates a modeless dialog box class 300 of the policy engine. All Dialog boxes are derived from the GUI CDialog class (not shown in the diagram.) All of the dialog boxes that carry out complex tasks are modeless. Simple ones such as File Open remain modal. A ModelessDialogHandler 302 is a singleton class containing all of the modeless dialog box objects used by the policy engine. For most dialog boxes, provision is made for only a single instantiation within a given policy engine session with detached grid and graph dialog boxes being the exception since there may be multiple instantiations of these two dialog boxes. To invoke a modeless dialog box, the associated "Launch" method is invoked on the modeless dialog handler, which then checks to see the state of the requested dialog box wherein the states of the modeless dialog box can be "Not yet created", "hidden" or "already displayed."

For the multiple instance dialog boxes, the policy engine keeps a list of those that have already been used so that, when a request is received to invoke a dialog box of the same type, the Modeless Dialog Handler searches its list for one that is not in use. If it finds one, it displays it, if not then it creates a new one, displays it and adds it to its list. The modeless dialog boxes are not destroyed until the particular policy engine session is shut down.
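The reuse-or-create behavior of the Modeless Dialog Handler described above can be sketched as follows; the class and member names are illustrative only and assume a simple in-use flag on each dialog.

#include <memory>
#include <vector>

// Stand-in for a multi-instance modeless dialog; only the state the handler needs.
class GridDialogSketch {
public:
    bool inUse = false;
    void show() { inUse = true; }
};

// Singleton owning the multi-instance dialogs: reuse a free one, otherwise
// create, display and remember a new one.
class ModelessDialogHandlerSketch {
public:
    static ModelessDialogHandlerSketch& instance() {
        static ModelessDialogHandlerSketch handler;
        return handler;
    }

    GridDialogSketch& launchGridDialog() {
        for (auto& dlg : m_gridDialogs)
            if (!dlg->inUse) { dlg->show(); return *dlg; }   // reuse a free dialog
        m_gridDialogs.push_back(std::unique_ptr<GridDialogSketch>(new GridDialogSketch));
        m_gridDialogs.back()->show();                        // otherwise create a new one
        return *m_gridDialogs.back();
    }

private:
    ModelessDialogHandlerSketch() {}
    std::vector<std::unique_ptr<GridDialogSketch>> m_gridDialogs;
};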

Figure 34 illustrates the modal dialog box classes 310 of the policy engine. For these modal dialog boxes, all dialog boxes inherit from the GUI CDialog class (not shown in the diagram). All of the complex operations are now modeless while the operations shown in Figure 34 are relatively simple and remain modal. These dialog boxes are typically invoked directly from the two "Views" classes.

A graphing module class and a profile module class of the policy engine are shown in Figures 35-36 and Figure 37 illustrates an observer base class of the policy engine. The profile module is used to store file display and file merge profiles. These profiles are read from a flat file at session start up and written back when the session ends. The metadata classes used for storing array and result file information are not appropriate for storing profiles because they do not really support ordering. File display profiles specify the order in which the columns are to be displayed. ProfileManager is a singleton class and acts as the repository for the individual profile objects.

Figures 38A - 40 illustrate more details of the Sensor Profile and Proxy manager of the system. In particular, Figures 38A and 38B illustrate the SPM distributed architecture. The Sensor Proxy Framework has been designed to allow the user to develop new sensor proxies very rapidly by establishing some standard patterns and hence allowing for a very high reuse of the existing code.

The Sensor Proxy Framework accomplishes the above by doing one or more of the following functions/operations:

• Each sensor proxy communicates with an individual sensor or mediator and normalizes the data received.

• Data is published via channels through JMS, SOAP or locally. JMS is a real-time messaging hub that allows all components in SPM to send and receive real-time messages. SOAP is used to navigate through a firewall or where a high performance is required.

• The SPM supports a completely distributed architecture. JMS and SOAP output channels can be inputs to other SPMs.

• The normalized format of all messages is standards-based. This format is SensorML for regular data and CAP 1.1 for alerts and alarms.

• Each Sensor Proxy can publish four kinds of messages: namely data, alert, communications alarm and equipment alarm.

• The writer of a new sensor proxy is isolated from the details of this, including the underlying protocol supported by the channel. Any sensor proxy can be configured to have one or more of any of these channel types.

• The messages passed through each channel are configurable.

• These message types are configured so as to provide meaningful information about the underlying sensor.

• A special support is provided for sensors with many heads or for mediators where many different kinds of sensor are supported. Each of the heads has its own set of channels.

• Simple filters may also be configured for a sensor proxy so that only data satisfying certain criteria is forwarded on a channel.

• There is in-built support for a wide range of IP-based protocols in the sensor proxy framework. This means that the writer of a new sensor proxy is isolated from the underlying protocol and can concentrate on the customization required for the particular sensor proxy rather than the transport.

• Each sensor proxy also has the capability to receive commands from the policy engine or from other sensor proxies. These commands can be local (on-process) or remote (forwarded to another SPM over SOAP.)

• There are a set of tools built into the sensor proxy framework that assist the developer in writing a new sensor proxy. Examples are a token parser for binary protocols and a generic XML parser which converts an arbitrary XML document into a normalized format for use within the SPM.

• Each Sensor Proxy supports a simple simulation mode where the proxy forwards data based upon its internal configuration instead of establishing communications with a real sensor and sending live data.

Figure 39 illustrates the Sensor Proxy Framework and its protocol support. The sensor proxy framework provides base classes which support a wide range of protocols. The writer of a new sensor proxy derives from one of these classes, which effectively allows the writer to concentrate on the customizations that are required for the particular sensor. The workings of the underlying transport or application layer protocol are automatically taken care of by the framework. The protocols supported in the exemplary embodiment may include: TCP client and server, UDP, HTTP client and server, SOAP client and server, Telnet client, SNMP Manager and Agent, and/or XML-RPC.

Figure 40 illustrates the policy engine of the system. The Policy Engine is a sophisticated rules engine which lies at the heart of the SPM. The Policy Engine contains a collection of individual policies and each policy contains a condition template and an action template. The input to a policy is a channel, usually an output channel from a sensor proxy, but it could also be a channel from another policy. A condition template is used to determine whether the policy should fire or not. At its most elementary, a condition template could provide a simple threshold, e.g. when the concentration of a gas monitored by sensor "A" exceeds threshold "X", the policy fires. However, even for a simple case like this, the condition template will often need to be more complex. For example, one might have a rule to prevent the policy from firing more than once per minute in order to avoid "event storms." This means that condition templates also have the capability to evaluate some very complex rules.
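A condition template combining a simple threshold with a once-per-minute suppression rule might be sketched as below. The class is purely illustrative and does not reproduce the policy engine's actual condition template classes.

#include <ctime>

// Illustrative condition: fire when a reading exceeds a threshold, but never
// more often than a configured minimum gap, to avoid "event storms".
class ThresholdConditionSketch {
public:
    ThresholdConditionSketch(double threshold, int minSecondsBetweenFirings)
        : m_threshold(threshold),
          m_minGap(minSecondsBetweenFirings),
          m_lastFired(0) {}

    // Returns true when the policy should fire for this reading.
    bool evaluate(double gasConcentration) {
        if (gasConcentration <= m_threshold)
            return false;                                   // below threshold: no event
        std::time_t now = std::time(nullptr);
        if (m_lastFired != 0 && std::difftime(now, m_lastFired) < m_minGap)
            return false;                                   // suppress event storm
        m_lastFired = now;
        return true;
    }

private:
    double      m_threshold;
    int         m_minGap;
    std::time_t m_lastFired;
};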

When a policy fires (by the conditions being matched), the action template is executed. The Action Template consists of a sequence of actions. Examples of actions are:

• Transform the input message(s) to a derived message type and publish to JMS on an output channel (such as publishing an alert.)

• Issue a command to a command-type sensor proxy. (A specific example would result in a Text message appearing or a WEB page being displayed.)

• Play back the videos obtained for the past 30 seconds.

• Call a list of persons and deliver an audio message.

• Start video retrieval occurring and publish it on a channel.

While the foregoing has been with reference to a particular embodiment of the invention, it will be appreciated by those skilled in the art that changes in this embodiment may be made without departing from the principles and spirit of the invention, the scope of which is defined by the appended claims.

APPENDIX A POLICY EXAMPLE

default_message_server vialinux2JmsServer

//########## Subtypes used by other types: type AlternateLocationType {

Alternate_location_x double

Alternate_location_y double

Alternate_location_z double } type BuildingLocationType {

Location_building string

Location_floor string

Location_room string

} type GeoLocationType { longitude double latitude double altitude double } type LocationType {

Location_description string

Location_street_address string Location_city string

Location_state string

Location_zip_code string

Location_country string

Location_building string Location_floor string

Location_room string

}

//########## END Subtypes used by other types.

//########## Message Types: type CommAlarmMessageType {

Timestamp string

Description string

AlertLevel string

SensorName string

SensorType string buildingLocation BuildingLocationType alternateLocation AlternateLocationType geoLocation GeoLocationType

} type RichardsZetaBinaryValueData {

Timestamp string

BinaryValue integer buildingLocation BuildingLocationType alternateLocation AlternateLocationType geoLocation GeoLocationType

} type RichardsZetaPercentData {

Timestamp string

Percent double buildingLocation BuildingLocationType alternateLocation AlternateLocationType geoLocation GeoLocationType

} type RichardsZetaPowerData {

Timestamp string

Power double buildingLocation BuildingLocationType alternateLocation AlternateLocationType geoLocation GeoLocationType

} type RichardsZetaTemperatureData {

Timestamp string

Temperature double buildingLocation BuildingLocationType alternateLocation AlternateLocationType geoLocation GeoLocationType

} type VideoMessageType {

Timestamp string

VideoData string buildingLocation BuildingLocationType alternateLocation AlternateLocationType geoLocation GeoLocationType

} type AlertMessageType {

Timestamp string alert string buildingLocation BuildingLocationType alternateLocation AlternateLocationType geoLocation GeoLocationType

} type wprDataTivellaControlMessageType { UUID string

Timestamp string

HTTPGetURL string buildingLocation BuildingLocationType alternateLocation AlternateLocationType geoLocation GeoLocationType

}

//########## END Message Types.

configuration server myConfigServer { host "64.210.18.81" :10000 } local_message_server MyLocalMsgServer { channel LocalCommAlarmChannel type CommAlarmMessageType } jms_message_server vialinux2JmsServer { host "vialinux2" : 1099

//########## All the channels hosted by the message server (with the corresponding message types):
channel BMS.Alert.RZ.Meter.kw type AlertMessageType persistent
channel BMS.Alert.RZ.VAV.Cool type AlertMessageType persistent
channel BMS.Alert.RZ.VAV.Hot type AlertMessageType persistent
channel BMS.Alert.RZ.DR.Init type AlertMessageType persistent
channel BMS.Alert.RZ.Shed.Target type AlertMessageType persistent
channel BMS.Alert.RZ.Emerg.PanicButton type AlertMessageType persistent
channel BMS.Alert.RZ.VAV.SS type AlertMessageType persistent
channel BMS.Alert.RZ.Lighting.ltg_lvl1 type AlertMessageType persistent
channel BMS.Alert.RZ.Lighting.ltg_lvl2 type AlertMessageType persistent
channel BMS.Alert.RZ.Access.AccessGranted type AlertMessageType persistent
channel BMS.Alert.RZ.Console type AlertMessageType persistent
channel BMS.Alert.RZ.Pager type AlertMessageType persistent
channel BMS.Data.RZ.VAV.Cool type RichardsZetaTemperatureData
channel BMS.Data.RZ.VAV.Hot type RichardsZetaTemperatureData
channel BMS.Data.RZ.Meter.kw type RichardsZetaPowerData
channel BMS.Data.RZ.Shed.Target type RichardsZetaPowerData
channel BMS.Data.RZ.Emerg.PanicButton type RichardsZetaBinaryValueData
channel BMS.Data.RZ.DR.Init type RichardsZetaBinaryValueData
channel BMS.Data.RZ.Lighting.ltg_lvl1 type RichardsZetaPercentData
channel BMS.Data.RZ.Lighting.ltg_lvl2 type RichardsZetaPercentData
channel BMS.Data.RZ.Access.AccessGranted type RichardsZetaBinaryValueData
channel BMS.Data.RZ.FanStatus type RichardsZetaBinaryValueData
channel BMS.Data.RZ.LightStatus type RichardsZetaBinaryValueData
channel BMS.Data.RZ.VAV.SS type RichardsZetaBinaryValueData
channel BMS.Data.Lenel.BrokenWindow type RichardsZetaBinaryValueData
channel BMS.Data.Lenel.OccupationThreshold type RichardsZetaBinaryValueData
channel BMS.Data.Lenel.AccessGranted type RichardsZetaBinaryValueData
channel WPR.Data.Tivella.Group1 type wprDataTivellaControlMessageType
channel WPR.Data.Tivella.Group2 type wprDataTivellaControlMessageType
channel WPR.Data.Tivella.Group3 type wprDataTivellaControlMessageType
channel WPR.Data.Tivella.Group4 type wprDataTivellaControlMessageType
channel WPR.Data.Tivella.Group5 type wprDataTivellaControlMessageType
channel WPR.Data.Tivella.Group6 type wprDataTivellaControlMessageType
channel BMS.Video.Frames.IPCam.1 type VideoMessageType
channel BMS.Video.Frames.BrokenWindow type VideoMessageType
channel Chem.Video.Frames.DevLabWideAngle type VideoMessageType
//########## END All the channels hosted by the message server (with the corresponding message types).

} // END message server vialinux2JmsServer

//########## Policies: policy_template ACCESS_GRANTED_CONTROL_TIVELLA_template { condition true action publish
HTTPGetURL : constant AccessGrantedTivellaURL reference channels (BMS.Data.Lenel.AccessGranted) on channels (WPR.Data.Tivella.Group1) , set TIVELLA_ON_TIME : register TIMESTAMP, set TIVELLA_ON_DEFAULT_PAGE : false } policy ACCESS_GRANTED_CONTROL_TIVELLA { template ACCESS_GRANTED_CONTROL_TIVELLA_template constants AccessGrantedTivellaURL :
"http://64.210.18.91/common/jsp/SPM/RealTimeDisplayGen2/realTimeBadgeAccessReportviewer.jsp" channel bindings [
BMS.Data.Lenel.AccessGranted : vialinux2JmsServer::BMS.Data.Lenel.AccessGranted
WPR.Data.Tivella.Group1 : vialinux2JmsServer::WPR.Data.Tivella.Group1 ] register bindings [
TIMESTAMP : TIMESTAMP TIVELLA_ON_TIME : TIVELLA_ON_TIME
TIVELLA_ON_DEFAULT_PAGE : TIVELLA_ON_DEFAULT_PAGE ] } policy_template TURN_ON_LIGHT_template { condition channel (turn_on_light_input_channel input) action control

RZAttr : "1"

RZNode : "/interfaces/relay2" sensors (RichardZetaLightRelay) , set LIGHT_ON_TIME : register TIMESTAMP } policy BROKEN_WINDOW_TURN_ON_LIGHT { template TURN_ON_LIGHT_template channel bindings [ turn_on_light_input_channel : vialinux2JmsServer::BMS.Data.Lenel.BrokenWindow ] register bindings [

LIGHT_ON_TIME : LIGHT_ON_TIME TIMESTAMP : TIMESTAMP

] sensor bindings [

RichardZetaLightRelay : BMS_ViperServer : :RichardZetaLightRelay ]

} policy_template TURN_OFF_LIGHT_template { condition channel (turn_off_light_input_channel input). action if

(TIMESTAMP_GAP_SEC (register TIMESTAMP, register LIGHT_ON_TIME) > 10) &

(channel (light_status input) == 1) then control RZAttr : "0"

RZNode : "/interfaces/relay2" sensors (RichardZetaLightRelay) else fi } policy TURN_OFF_LIGHT { template TURN_OFF_LIGHT_template channel bindings [ turn_off_light_input_channel : vialinux2JmsServer::BMS.Data.RZ.DR.Init light_status : vialinux2JmsServer::BMS.Data.RZ.LightStatus->BinaryValue ] register bindings [

LIGHT_ON_TIME : LIGHT_ON_TIME TIMESTAMP : TIMESTAMP

] sensor bindings [

RichardZetaLightRelay : BMS_ViperServer: :RichardZetaLightRelay ]

} policy_template PLAY_BUFFERED_VIDEO_template { condition channel (play_buffered_video_input_channel input) action control

PublishAlert : "ln sensors (CameraForBMSBrokenWindowDemo) } policy PLAY_BUFFERED_VIDEO { template PLAY_BUFFERED_VIDEO_template channel bindings [ play_buffered_video_input_channel : vialinux2JmsServer : :BMS.Data . Lenel . BrokenWindow J sensor bindings [

CameraForBMSBrokenWindowDemo : BMS_ViperServer : :CameraForBMSBrokenWindowDemo

] } //##### END Policies.

//########## Sensor adapters: sensor_adapter_template ArecontCameraTemplate { message_types data_message VideoMessageType comm_alarm_message CommAlarmMessageType protocol AreCont properties

Enabled integer

IPAddress string

HTTPService string cameraResolution string x0 integer y0 integer x1 integer y1 integer doublescan integer quality integer acquisitionBufferSize integer

PollingRate double

ResizeDivisor integer

BufferEnabled integer

SecondsInBuffer integer

SecondsToPublishVideoAlert integer

GeoLocation GeoLocationType

} sensor_adapter_instance CameraForBMSBrokenWindowDemo { template ArecontCameraTemplate channels data_channels

BMS.Video. Frames .IPCam.1,

Chem.Video. Frames. DevLabWideAngle alert_channels

BMS .Video . Frames . BrokenWindow comm_alarm_channels

MyLocalMsgServer::LocalCommAlarmChannel properties

Enabled : 1

IPAddress : "64.210.18.13

HTTPService : "image" cameraResolution : "FULL" xO ; 0 yO 0 xl : 2592 yi : 1936 doublescan : 0 quality : 20 acquisitionBufferSize : 5000000

PollingRate : 0.25

ResizeDivisor : 4

BufferEnabled : 1

SecondsInBuffer : 10

SecondsToPublishVideoAlert : 20 geoLocation.longitude : -71.648122 geoLocation.latitude : 42.71956 geoLocation.altitude : 0

} sensor_adapter_template RichardZetaLightRelay_template { message_types data_message RichardsZetaBinaryValueData comm_alarm_message CommAlarmMessageType protocol RichardZeta properties

Enabled integer

SimulatorMode integer

SessionMaster integer

IPAddress string

Port integer

PollingRate double appletName string nodeName string userName string password string } sensor_adapter_instance RichardZetaLightRelay { template RichardZetaLightRelay_template channels data_channels

BMS.Data.RZ.LightStatus comm_alarm_channels

MyLocalMsgServer: : LocalCommAlarmChannel properties

Enabled : 1

SimulatorMode : 0
SessionMaster : 1
IPAddress : "68.111.41.170"
Port : 80
PollingRate : 1.0 appletName : "/xmlrpc" nodeName : "/interfaces/relay2" userName : "mpxadmin" password : "mpxadmin"

}

//########## END Sensor adapters.

//########## Viper servers: viper_server BMS_ViperServer { soap_server_port 10001 message_servers vialinux2JmsServer registers TIMESTAMP : string = "00000000000",

LIGHT_ON_TIME : string = "00000000000", TIVELLA_ON_TIME : string = "00000000000", TIVELLA_ON_DEFAULT_PAGE : boolean = true sensor_adapters

CameraForBMSBrokenWindowDemo, RichardZetaLightRelay policies

BROKEN_WINDOW_TURN_ON_LIGHT, PLAY_BUFFERED_VIDEO, TURN_OFF_LIGHT } // END viper server BMS_ViperServer

APPENDIX B Grammar for Sensor Policy Engine

0 $accept: spm_description "end of file"

1 spm_description: default_message_server spm_components

2 I spm_components

3 spm_components: spm_component

4 I spm_component spm_components

5 spm_component: type

6 I config_server_decl

7 I message_server_decl

8 | sensor_adapter_template

9 I sensor_adapter_instance

10 I policy_template

11 I policy_instance

12 I viper_server

13 message_server_decl: "jms_message_server" distinguished_name "{" "host" "const_string" ":" "const_integer" channels "}"

14 I "soap_message_server" distinguished name " {" "host" "const_string" 11 :" "const integer" channels "}"

15 I "local_message_server" distinguishedjname "{" channels "}"

16 I error distinguished_name "{" "host" "const_string" ":" "const_integer" channels "}"

17 I "jms_message_server" error "{" "host" "const_string" ":" "const_integer" channels "}"

18 I "jms_message_server" distinguished name error "host" "const_string" ":" "const_integer" channels "}"

19 I "jms_message_server" distinguished_name "{" error "const_string" ":" const_integer" channels "}" 20 I "jms_message_server" distinguishedjiame "{" "host" error ":" "const_integer" channels "}"

21 I "jms_message_server" distinguished_name "{" "host" "const_string" error "const_integer" channels "}"

22 I "jms_message_seτver" distinguished_name "{" "host" "const_string" ":" error channels "}"

23 I "jms_message_server" distinguished_name "{" "host" "const_string" ":" "const_integer" error "}"

24 I "jms_message_server" distinguished_name "{" "host" "const_string" ":" "const integer" channels error

25 I "jms_message_server" error "}"

26 I "soap_message_server" error "{" "host" "const string" ":" "const integer" channels "}"

27 I "soap_message_server" distinguished name error "host" "const_string" ":" "const_integer" channels "}"

28 I "soap_message_server" distinguished_name "{" error "const_string" ":" "const_integer" channels "}"

29 I "soap_message_server" distinguished_name "{" "host" error ":" "const_integer" channels "}"

30 I "soap_message_server" distinguished_name " {" "host" "const_string" error "const_integer" channels "}"

31 I "soap_message_server" distinguished_name " {" "host" "const_string" ":" error channels "}"

32 I "soap_message_server" distinguished_name " {" "host" "const_string" ":" "const integer" error "}"

33 I "soap_message_server" distinguished_name " {" "host" "const_string" ":" "const_integer" channels error

34 I "soap_message_server" error "}"

35 I "local_message_server" error " {" channels "}"

36 I "local_message_server" distinguished name error channels "}" 37 I "local_message_server" distinguished name "{" error "}"

38 I "local_message_server" distinguished_name "{" channels error

39 I "local_message_server" error "}"

40 default_message_server: "default_message_server" distinguished_name

41 I "default_message_server" error

42 config_server_decl: "configuration" "server" distinguished_name "{" "host" "const string" ":" "const integer" "}"

43 type: enum_type

44 I structured_type

45 enum_type: "enum" distinguished_name "{" enum_list "}"

46 enum_list: "const_string"

47 I enum_list "const_string"

48 structured_type: "type" distinguished_name "{" type_attributes "}"

49 I error distinguished_name " {" type_attributes "} "

50 type attributes: scalar_type_attribute

51 I structured_type_attribute

52 I enum_type

53 I structured_type

54 I type_attributes scalar_type_attribute

55 I type_attributes structured_type_attribute 56 I type_attributes enum_type

57 I type_attributes structured_type

58 scalar_type_attribute: "identifier" scalar_type

59 I "identifier" scalar_type ":" constant

60 I "identifier" "enum" "identifier"

61 I "identifier" "enum" "identifier" ":" "const_string"

62 scalar_type: "integer"

63 I "double"

64 I "boolean"

65 I "string"

66 structured_type_attribute: distinguished_name distinguished_name

67 channels: channel

68 I channel channels

69 channel: "channel" distinguished name "type" distinguished_name

70 I "channel" distinguished name "type" distinguished_name "persistent"

71 viper_server: "viper_server" distinguished_name "{" "soap_server_port" "const_integer" message_servers registers sensor_adapters policies "}"

72 message servers: /* empty */

73 I "message_servers" message_server_list 74 message_server_list: distinguished_name

75 I message_server_list "," distinguished_name

76 registers: /* empty */

77 J "registers" register_type_list

78 register_type_list: register_name ":" register_type register_size

79 I register_type_list "," register_name ":" register_type register_size

80 register_name: distinguished_name

81 register_type: distinguished_name

82 I scalar_type

83 I scalar_type "=" constant

84 register_size: /* empty */

85 I "(" "const_integer" »)"

86 sensor adapters: /* empty */

87 I "sensor_adapters" sensor_adapter_instance_list

88 sensor_adapter_instance_list: distinguished_name

89 I sensor_adapter_instance_list "," distinguished_name

90 sensor_adapter_instance: "sensor_adapter_instance" distinguished_name sensor_adapter_instance_description 91 sensor_adapter_instance_description: "{" "template" distinguished_name sensor_channels filter_description sensor_adapter_instance_props "}"

92 I " {" error distinguished name sensor_channels filter_description sensor_adapter_instance_props "}"

93 I " {" "template" error sensor_channels filter_description sensor_adapter_instance_props "} "

94 I " {" error "}"

95 sensor_channels: /* empty */

96 I "channels" sensor_channels_list

97 I error sensor_channels_list

98 sensor_channels_list: /* empty */

99 I sensor_channels_list sensor_channels_pair

100 sensor_channels_pair: "data_channels" channel_list

101 I "alert_channels" channel_list

102 | "input channels" channel list

103 I "comm_alarm_channels" channel_list

104 I "equiprnent_alarm_channels" channel_list

105 I error channeMist

106 I "data channels" error

107 I "alert channels" error

108 I "input channels" error

109 I "comm_alarm_channels" error

110 I "equipment_alarm_channels" error

111 | error

112 filter_start: "filter"

113 filter_description: /* empty */

114 I filter_start expression

115 sensor_adapter_instance_props: /* empty */

116 I "properties" name_value_list

117 I "properties" error

118 name value list: distinguished_name ":" constant

119 I name_value_list distinguished_name ":" constant

120 I distinguished_name error constant

121 I distinguished_name ":" error

122 condition_template: "template" distinguished_name

123 I expression

124 action_template: "template" distinguished_name

125 I action_list

126 policy_template: "policy_template" distinguished_name "{" "condition" condition_template "action" action_template "}"

127 expression: factor

128 I expression "+" expression

129 I expression "-" expression

130 I expression "|" expression 131 I expression "*" expression

132 | expression "/" expression

133 I expression "&" expression

134 I expression " * * " expression

135 I expression "~-&" expression

136 I expression "~|" expression

137 I expression "A|Λ" expression

138 I expression "=" expression

139 I expression "!=" expression

140 I expression ">" expression

141 I expression ">=" expression

142 I expression "<" expression

143 I expression "<=" expression

144 I expression "II" expression

145 I expression "-<" expression

146 I expression ">-" expression

147 factor: constant

148 I reference

149 I function_call_expression

150 I "-" factor

151 I "-" factor

152 I "(" expression ")"

153 function_call_expression: distinguished_name "(" expression_list ")"

154 expression_list: expression 155 I expression_list "," expression

156 constant: "const_integer"

157 I "const_double"

158 I boolean_constant

159 I "const_string"

160 boolean_constant: "true"

161 I "false"

162 input_qualifier: /* empty */

163 I "input"

164 I "output"

165 depth qualifier: /* empty */

166 I "depth" "const_integer"

167 timespan_qualifier: /* empty */

168 I "timespan" "const_integer" "ms"

169 channel_reference: "channel" "(" qualified_channel_name input_qualifier depth_qualifier timespan_qualifier ")"

170 reference: channel_reference

171 I "register" qualified_register_name

172 I "sensor" qualified_sensor_name

173 | "constant" "identifier"

174 | distinguished_name

175 qualified_channel_name: distinguished_name

176 I distinguished_name "::" distinguished_name

177 I distinguished name "->" distinguished_name

178 I distinguished name "::" distinguished_name "->" distinguished_name

179 qualified_register_name: distinguished_name

180 I distinguished_name "->" distinguished name

181 qualified_sensor_name: distinguished_name

182 I distinguished_name "::" distinguished_name

183 I distinguished_name "->" distinguished name

184 I distinguished_name "::" distinguished_name "->" distinguished name

185 distinguished_name: "identifier"

186 I distinguished_name "." "identifier"

187 I distinguished name "." "const integer"

188 policies: /* empty */

189 I "policies" policy_instance_list

190 policy_instance_list: distinguished_name

191 I policy_instance_list "," distinguished_name

192 policy_instance: "policy" distinguished_name "{" "template" distinguished_name constant_declaration channel_bindings register_bindings sensor_bindings "}" 193 I "policy" distinguished_name

194 constant_declaration: /* empty */

195 I "constants" constant_declaration_list

196 constant_declaration_list: "identifier" ":" constant

197 I constant_declaration_list "," "identifier" ":" constant

198 channel bindings: /* empty */

199 I "channel" "bindings" "[" channel_name_pairs "]"

200 register_bindings: /* empty */

201 I "register" "bindings" "[" register_name_pairs "]"

202 sensor_bindings: /* empty */

203 I "sensor" "bindings" "[" sensor_name_pairs "]"

204 channel_name_pairs: qualified_channel_name ":" qualified channel name

205 I channel_name_pairs qualified_channel_name ":" qualified_channel_name

206 register_name_pairs: distinguished_name ":" distinguished_name

207 I register_name_pairs distinguished name ":" distinguished_name

208 sensor_name_pairs: qualified_sensor_name ":" qualified_sensor_name

209 I sensor_name_pairs qualified_sensor_name ":" qualified_sensor_name 210 sensor_adapter_template: "sensor_adapter_template" distinguished_name "{" sensor_adapter_template_description " } "

211 I error distinguished_name " {" sensor_adapter_template_description

212 | "sensor_adapter_template" error "{" sensor_adapter_template_description "}"

213 | "sensor_adapter_template" distinguished_name error sensor_adapter_template_description "}"

214 I "sensor_adapter_template" distinguished_name "{" error "}"

215 sensor_adapter_template_description: sensor_message_types "protocol" distinguished_name sensor_adapter_template_props

216 sensor_message_types: /* empty */

217 I "message_types" sensor_message_type_list

218 I error sensor_message_type_list

219 sensor_message_type_list: /* empty */

220 I sensor_message_type_list sensor_message_type_pair

221 sensor_message_type_pair: "data message" distinguished name

222 I "alert_message" distinguished_name

223 I "comm_alarm_message" distinguished_name

224 I "equipment_alarm_message" distinguished name

225 I error distinguished_name

226 I "data_message" error

227 I "alert message" error

228 I "comm_alarm_message" error 229 I "equipment_alarm_message" error

230 I error

231 sensor_adapter_template_props: /* empty */

232 I "properties" type_attributes

233 I "properties" error

234 action list: action

235 I action list "," action

236 action: primitive action

237 I conditional_action

238 primitive_action: publish_action

239 I control_action

240 I function_call_expression

241 I assign_action

242 assign_action: "set" qualified_register_name ":" expression

243 I "set" qualified_register_name ":" "[" named_expression_list "]"

244 conditional_action: "if" expression "then" action_list "else" action_list "fi"

245 I "if expression "else" actionjist "fi"

246 I "if expression "then" actionjist "fi"

247 publish_action: "publish" star_named_expression_list "reference" "channels" "(" channeljist ")" "on" "channels" "(" channeljist ")" 248 star_named_expression_list: "*" ":" qualified_channel_name

249 I named_expression_list

250 named_expression_list: named_expression_pair

251 I named expression list "," named_expression_pair

252 named_expression_pair: distinguished name ":" expression

253 I distinguished_name ":" "[" named_expression_list "]"

254 channel_list: qualified_channel_name

255 I channel_list "," qualified_channel_name

256 control_action: "control" named_expression_list "sensors" "(" sensor_list ")"

257 sensor_list: qualified_sensor_name

258 | sensor_list "," qualified_sensor_name

APPENDIX C

Sensor Proxy Framework

User and Programmer Guide

Table of Figures

FIGURE 1 SPM COMMUNICATIONS INFRASTRUCTURE
FIGURE 2 SENSOR PROXY FRAMEWORK - DEPENDENCY PACKAGE DIAGRAM
FIGURE 3 PROTOCOL CLASS MODEL
FIGURE 4 VIASENSORPROXYPROTOCOL
FIGURE 5 SERVICE DATA OBJECT CLASS MODEL
FIGURE 6 DATA OBJECT CLASS
FIGURE 7 OBSERVER PATTERN CLASSES
FIGURE 8 COMPOSITE SENSOR PROXY EXAMPLES
FIGURE 9 SPM / SPM EDGE CONFIGURATION

1 Introduction

The purpose of this document is to provide a description of the sensor proxy framework and the information needed to allow an engineer to develop and configure a new sensor proxy using the framework.

The intended audience is software engineers and other technical staff using the sensor proxy framework to develop new sensor proxies.

It is not intended to be a full design document but does give a detailed account of the set of tools in the framework and of how to configure a sensor proxy using XML. I.e. the stress is on the exposed programming API, not on the underlying architecture.

The rationale surrounding the design of many of the features is discussed. Where a feature is currently flagged for enhancement then this is noted. The protocols currently supported by the sensor proxy framework include:

• TCP Client

• TCP Server

• UDP

• Telnet Client

• HTTP Post Client

• HTTP Post Server

• HTTP Get Client

• SNMP Manager

• SNMP Agent

• SOAP

• TCP Modbus as well as various proprietary protocols. All of these belong to the TCP/IP suite of protocols, or are application layers running above TCP/IP.

A basic understanding of IP-based protocols is required as is a grasp of "C++" and UML notation.

An understanding of XML and of design patterns is helpful.

2 SPM Communication Infrastructure


Figure 1 SPM Communications Infrastructure

The diagram above shows an extremely simplified representation of the communications infrastructure of the SPM.

The major components of the SPM are the policy engine and the sensor proxy framework. These are autonomous subsystems.

Both of these components are able to publish asynchronous data by means of channels. Presently, there are three kinds of channel, namely JMS, SOAP and local channels.

Local channels are used when the source and end points of the channel are both on process.

SOAP channels are typically used where the footprint of the process is important and JMS is not supported.

In addition to publishing, there is a command architecture which allows the Policy Engine to send commands to individual sensor proxies.

The command architecture is a part of the sensor proxy framework.

The command is forwarded locally if the target sensor proxy is on-process or remotely over SOAP if it is running on a remote process. This is transparent to the Policy Engine.

Sensor proxies are also able to use this command architecture to send commands to each other. 3 Component Dependencies


Figure 2 Sensor proxy framework - Dependency Package Diagram

The diagram above shows the dependencies of the various components associated with the sensor proxy framework at compile time. It is not intended to suggest any order of invocation of methods, initialization or anything of this nature.

• The sensor proxy framework and the policy engine are autonomous subsystems within the SPM.

• The sensor proxy framework has no direct dependencies upon the policy engine whatsoever.

• The policy engine has a single dependency upon the sensor proxy facade to issue commands to individual sensor proxies.

• The sensor proxy facade is the only interface into the sensor proxy framework that should be used by any component outside of the framework itself. To date, the only functions required are initialization of the framework, shutdown of the framework and the issuing of commands to individual sensor proxies.

• The sensor proxy facade is the only SPM component that has a direct dependency upon the individual sensor proxies. There is work underway to create dynamic libraries for the individual sensor proxies and load them dynamically. Once this is in place, this dependency will disappear.

There is a separate project (package) for every individual sensor proxy.

Individual sensor proxies have no compile time dependencies between each other. (In most cases, there are no run-time dependencies either. There is an exception to this for composite and single-headed sensors.)

The ViaSensorProxy component is the heart of the sensor proxy framework, containing the base classes and utilities which are used in order to write new sensor proxies. Most of the remainder of this document will consist of the discussion of ViaSensorProxy.

Both ViaSensorProxy and the policy engine have dependencies on the channel architecture, and use channels to publish data!

ViaServiceDataObject is a metadata library, loosely based upon the "Service Data Object" specification. It was developed for use with the sensor proxy framework but is also used by other components (not shown in the diagram.)

ViaAce is a wrapper around the ACE framework. ACE is a cross-platform toolkit used for managing threads and sockets. ViaAce is used by several other components besides ViaSensorProxy (not shown in the diagram.)

4 TCP/IP Communications Stack

The sensor proxy framework supports sensors that are IP-enabled. It does not support serial interfaces such as RS-232 and RS-485.

This section gives a very simplified overview of the TCP/IP stack and how it relates to the sensor proxy framework.

4.1 TCP/IP Stack


Each protocol layer uses the services of the layer directly below it. The first three protocol layers are encapsulated by the transport layer within the programming API. The sensor proxy framework deals only with the transport and application layers.

4.2 Connection-Oriented and Connectionless Transport Protocols

The usage of some of the tools within the sensor proxy framework depends upon whether the underlying protocols are connection-oriented or connectionless. A brief discussion of this subject follows.

A connection-oriented protocol is defined as a protocol where a connection has to be established between a client and a server before an exchange of data can occur. When the data interchange is complete, the connection is closed.

A connectionless protocol is defined as one where no connection has to be established and data can be exchanged at any time. Where connectionless protocols are used, the distinction between client and server is less clear-cut than in the connection-oriented case.

The Ethernet and IP layers are connectionless. The transport layer offers a choice between a connection-oriented and connectionless protocol.

TCP is the connection-oriented stream protocol. When a connection is established, the ordering of data sent and received is guaranteed as is delivery at the receipt point. Data packets can be concatenated or segmented.

UDP is the connectionless datagram protocol. The ordering of packets received is not defined and some packets may be lost altogether. Packets are always discrete. I.e. there is no concatenation or segmentation.

HTTP is a hybrid between the two, but for most purposes it can be treated as connectionless within the sensor-proxy framework.

HTTP runs above TCP (connection-oriented), but after establishing a connection, there is a single data exchange and then the connection is closed.

This means that the transmission of each HTTP Request is stateless (unlike raw TCP but similar to UDP.)

5 Sensor Proxy Framework Protocol Classes

5.1 Class Model


Figure 3 Protocol Class Model

The main task in writing a new sensor proxy is to derive and write a protocol class. All protocol classes inherit from one of the base classes above.

One of the purposes of these protocol classes is to protect the engineer from having to deal directly with the underlying transport protocol used by the sensor proxies.

It is also the intention to provide support for standard application protocols within these generic protocol objects. So far, this has only been achieved for Telnet and for SNMP. SNMP is discussed in another section.

The base class, ViaSensorProxyProtocol provides the sole interface to the channel architecture, the command architecture and basic filter logic.

5.2 Using the Protocol Classes to write a Sensor Proxy

The first step is to identify the underlying transport protocol to be used by the new sensor proxy and use the table below to decide on the inheritance.


A Sensor Array consists of a set of similar sensors that share a resource (such as a TCP Connection) and so have a common polling sensor proxy. E.g. The ltrans gas sensor is double-headed: it has two gas detectors, one of which detects carbon monoxide and the other methane. Each of the heads is represented by a class derived from ViaSensorProxySingleHeadProtocol which depends for its data feed on an underlying polling sensor proxy.

A Sensor Group consists of a set of dissimilar sensors that do not share resources, but have some logical dependencies upon each other so that a higher-level (compound) sensor proxy manages them as a group. E.g. A trigger to a compound sensor proxy requires it to read GPS and compass data, move a camera to point in the right direction and start acquiring and publishing photographs.

Both of these types are discussed in greater depth in a later section.

5.3 ViaSensorProxyProtocol Base Class


Figure 4 ViaSensorProxyProtocol

The ViaAceSocketThreadAction class is included here only because it provides some of the virtual methods that the programmer must override. A broader discussion of the ViaAce library is outside of the scope of this document.

All of the methods are described in the "C++" header files themselves. This section concentrates upon the ones that are of interest to the programmer.

This table should be used for reference only.

5.4 Writing a Sensor Proxy using the Protocol Classes

The methods to override depend upon the underlying transport protocol. This section provides more details.

5.4.1 In general:

• Decide which Sensor Proxy base class to derive from. Your choice should be based upon the underlying transport protocol. See the table in the last section.

• If your protocol is not supported or you need some custom behavior then inherit from ViaSensorProxyProtocol and override the Execute() method.

• Override initialize() if you need any extra initialization steps prior to entering the main loop.

• The getType method must always be overridden. A new enumerated type must be added to the ViaSensorProxyEnums in the header.

• The Clone() pure virtual constructor method must be overridden.

• acceptRunValues() is the virtual method used to load values from the configuration object SDO.

• If your sensor proxy can receive commands then override acceptUpdate().

• Set up the configuration file. This is discussed in the next section.

In addition, you must override certain methods depending upon the underlying transport protocol.

5.4.2 TCP Client:

Sensors may be active or passive.

For both types, override the method receiveFromServer(). This is used to process incoming data.

If your sensor is active (requires polling) then also override sendPoll().
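As an illustration, an active TCP client proxy might implement these two overrides roughly as follows. The class name, the poll string, the Send helper and the publishResponse call are assumptions made for the sketch; Receive is used as in the packet assembler example later in this document.

// Hypothetical sketch: overrides for a polling TCP client sensor proxy.
void ViaMyTcpClientProtocol::sendPoll()
{
    // Send the device-specific poll request once per poll period (device command assumed).
    const char *poll = "STATUS?\r\n";
    Send((const unsigned char *)poll, strlen(poll));
}

void ViaMyTcpClientProtocol::receiveFromServer()
{
    int size = 0;
    unsigned char *data = Receive(size);
    if ( (size > 0) && (data) )
    {
        // Parse the device response and publish it to the data channel (hypothetical helper).
        publishResponse(data, size);
    }
}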

5.4.3 TCP Server:

Override receiveFromClient() and sendPoll() if the sensor is passive.

5.4.4 HTTP Post Client:

• Override sendPost() if extra actions are required. (By default, this method sends the post string to the HTTP server once every poll period.)

• Override initializePostString(). This sets up the HTTP post string based upon the configuration file.

• Override receiveFromServer() to process the HTTP response.

5.4.5 HTTP Get Client:

HTTP Get strings take the general form: hostname:port/applet?parm1;parm2;parm3, etc.

The pure virtual method constructURL() must be overridden. This constructs the HTTP Get string based upon entries in the configuration file.
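For illustration, a Get-based proxy might build that string from its configuration values along these lines; the member names and the returned type are assumptions made for the sketch.

#include <sstream>
#include <string>

std::string ViaMyHttpGetProtocol::constructURL()
{
    // Assemble "hostname:port/applet?parm1;parm2;parm3" from configuration values.
    std::ostringstream url;
    url << m_hostName << ":" << m_port << "/" << m_applet << "?"
        << m_param1 << ";" << m_param2 << ";" << m_param3;
    return url.str();
}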

5.4.6 Telnet Client:

Handle in the same way as for TCP Clients.

6 Service Data Objects

6.1 Service Data Object Class Model


Figure 5 Service Data Object Class Model

Service Data Objects (SDOs) are used within the sensor proxy framework as metadata in two different ways.

Firstly, SDOs are used to describe the messages that are sent out on data channels and on alert channels, and used to publish data internally within the process. (At the process boundary, SDOs are converted to SensorML.)

Secondly, the configuration of each sensor proxy instance is described by an SDO. The ViaSdoDataObject class uses the composite design pattern and so has the capability to contain a collection of ViaSdoDataObjects to any depth.

ViaSdoDataObject also contains a collection of ViaSdoProperty which can be any one of the supported types shown in the diagram.

ViaSdoProperty can contain a single value or an array of values. Where there is an array, all of the values can be extracted into a ViaSdoPropertyList object.
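As an illustration of this composite structure, the following sketch builds a small SDO tree. The accessor names (addProperty, addChild) and the ViaSdoProperty constructors are assumptions made for the example, not signatures taken from the header files.

// Hypothetical sketch of composing an SDO for a gas data message.
ViaSdoDataObject gasMessage;
gasMessage.addProperty(ViaSdoProperty("Timestamp", "000000000000"));
gasMessage.addProperty(ViaSdoProperty("NameofdetectedGas", "CO_Carbon_Monoxide"));
gasMessage.addProperty(ViaSdoProperty("Concentration", 35.0));

// Child SDOs can be nested to any depth (composite design pattern).
ViaSdoDataObject location;
location.addProperty(ViaSdoProperty("Latitude", 34.1478));
location.addProperty(ViaSdoProperty("Longitude", -118.1445));
gasMessage.addChild(location);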

6.2 Data Object Class


Figure 6 Data Object Class

7 Multi-headed sensors: Single-headed Sensor Proxies

7.1 Observer Design Pattern:

More information regarding the observer pattern can be obtained from the Gang-of-Four Design Patterns book.


Figure 7 Observer Pattern classes

Essentially, a model observer can register interest in a model object without publishing its own interface to the model object. When the changed method is invoked on the model object, either by the model object itself or externally, then the virtual method updateFromModel() is invoked on each of the model object's registered observers. updateFromModel() is overridden for each observer so that it gets the information it needs from the model and updates itself based upon the change.

This pattern is used where the model objects should not know much about who their observers are. All sensor proxy protocol objects are both models and observers and the observer pattern itself is used in several different ways within the sensor proxy framework.
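A reduced sketch of the pattern, with illustrative names, is shown below. The framework classes add thread handling and registration bookkeeping on top of this basic idea, so this is a sketch of the concept rather than the library implementation.

#include <vector>

class ModelObserver
{
public:
    virtual ~ModelObserver() {}
    // Overridden by each observer to pull what it needs from the model.
    virtual void updateFromModel(void *clientData) = 0;
};

class ModelObject
{
public:
    // An observer registers interest without publishing its own interface to the model.
    void registerInterest(ModelObserver *observer) { m_observers.push_back(observer); }

    // Invoked by the model itself or externally when the model changes.
    void changed(void *clientData = 0)
    {
        for (size_t i = 0; i < m_observers.size(); ++i)
            m_observers[i]->updateFromModel(clientData);
    }

private:
    std::vector<ModelObserver *> m_observers;
};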

7.2 Example of a Multi-headed sensor proxy: iTrans

The iTrans sensor is a gas sensor which has two heads. One head tests for carbon monoxide; the other for methane.

However, the two heads are not entirely separate since they share a single TCP connection.

• If we model this as a single sensor proxy, each head will publish to different channels and the proxy has to apply some highly custom logic to decide on what to publish and where. Suppose we have another sensor with twelve heads; this soon becomes unmanageable.

• If we model this as two separate proxies we need to introduce unwanted coupling between the two in order to share the TCP socket.

• The solution is to model an underlying polling sensor proxy that establishes the TCP connection and retrieves the data. This is a standard TCP client type sensor proxy, except that it does not publish to channels.

• We model two single-head sensor proxies which set up observer relationships with the TCP client proxy and each has their own channels.

• The single-head sensor proxies each apply standard filters which filter out data not for them. (Both filters are notified by the TCP client when any data is received.)

• When the single-head sensors receive data that passes their internal filters they publish to their own channels.

Unlike almost every other type of sensor proxy, single-headed sensors are not run on their own threads. They receive notifications on the thread of their model object (the TCP client) and action the data synchronously.

Single-headed sensor proxies have a do-nothing execute loop which exits immediately, since they depend upon the Execute loop of the associated model object.
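The overall shape of such a proxy is sketched below before the required overrides are listed. The class name, enumerated value and the publish helper are assumptions made for the sketch; the filter call mirrors the passFilter usage shown later in the utilities section.

// Hypothetical sketch of a single-headed sensor proxy head.
class ViaMyGasHeadProtocol : public ViaSensorProxySingleHeadProtocol
{
public:
    virtual int getType() { return VIA_SP_MY_GAS_HEAD; }            // assumed enum value
    virtual ViaSensorProxyProtocol *Clone() { return new ViaMyGasHeadProtocol(); }
    virtual void acceptRunValues(ViaSdoDataObject &sdo);             // load filter and channel settings

    // Invoked synchronously on the thread of the model object (the TCP client proxy).
    virtual void updateFromModel(void *clientData)
    {
        ViaSdoDataObject *message = (ViaSdoDataObject *)clientData;  // assumed payload type
        if (message && m_filterMgr.passFilter(*message))
        {
            // Transform the data here if required, then publish to this head's own channels.
            publish(*message);                                       // hypothetical publish helper
        }
    }
};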

To create a single-headed sensor proxy, inherit from ViaSensorProxySingleHeadProtocol.

Override these methods:

• getType()

• acceptRunValues()

• Clone()

• updateFromModel() - See if the data passes our filter and publish if it does. Some single-headed sensor proxies also have to transform the data here.

8 XML Configuration Files

An XML configuration file is used to specify what sensor proxies to run and with what parameters.

Work is in progress to supersede this, although we will still need some equivalent. This section describes the current state of play.

Since we took the iTrans sensor proxy as an example in the last section, we will take the corresponding configuration file entries here in order to illustrate the usage.

In order to create a new XML tag for use in the configuration file, one first has to add it to the XML schema, as a part of the subSectionEntry "choice".

<xs:complexType name="subSectionEntryType">
  <xs:sequence maxOccurs="unbounded">
    <xs:choice>
      <!-- Properties. -->
      <xs:element name="localIPAddress" type="xs:string"/>
      <xs:element name="remoteIPAddress" type="xs:string"/>
      <xs:element name="localIPPort" type="xs:int"/>
      <xs:element name="remoteIPPort" type="xs:int"/>
      <xs:element name="SensorName" type="xs:string"/>
    </xs:choice>
  </xs:sequence>
</xs:complexType>

The following sections must be present in the configuration file.

8.1 Sensor Proxy Types:

<configSection>
  <sectionName>SensorProxyTypes</sectionName>
  <sectionEntry>
    <subSection>
      <subSectionName>SensorProxyTypes</subSectionName>
      <subSectionEntry>
        <SP_Type>ThermoElectron</SP_Type>
        <SP_Type>ITrans</SP_Type>
        <SP_Type>ITransChannel</SP_Type>
        <SP_Type>AreCont</SP_Type>
      </subSectionEntry>
    </subSection>
  </sectionEntry>
</configSection>

This section must contain the names of all sensor proxy types to be run within the application.

ITrans is the type name for the iTrans TCP client sensor proxy. ITransChannel is the type name for the iTrans single-headed sensor proxies.

8.2 Message Type

Any message types used by the sensor proxies must be specified in the configuration file. For example, iTrans uses GasData (shown below) as its data type.

<configSection>
  <sectionName>DataTypes</sectionName>
  <sectionEntry>
    <subSection>
      <subSectionName>GasData</subSectionName>
      <subSectionEntry>
        <Timestamp>000000000000</Timestamp>
        <NameofdetectedGas>Unknown</NameofdetectedGas>
        <Concentration>0.00</Concentration>
      </subSectionEntry>
    </subSection>
  </sectionEntry>
</configSection>

8.3 Sensor Proxy Entries

The entry for the TCP client sensor proxy is:

<configSection>
  <sectionName>ITrans</sectionName>
  <sectionEntry>
    <subSection>
      <subSectionName>iTransSP</subSectionName>
      <subSectionEntry>
        <SensorName>ITransSP</SensorName>
        <remoteIPAddress>64.210.18.21</remoteIPAddress>
        <remoteIPPort>2101</remoteIPPort>
        <PollingRate>1</PollingRate>
        <Enabled>1</Enabled>
        <DataMessageType>GasData</DataMessageType>
        <AlertMessageType>NullAlert</AlertMessageType>
      </subSectionEntry>
    </subSection>
  </sectionEntry>
</configSection>

The polling rate is in seconds.

The IP address and port specify the socket to which the client opens a connection.

The sensor name is the name which is used to address this sensor proxy for sending commands.

The enabled flag is standard. 0 = disabled, 1 = enabled, 2 = suspended.

The data message type and alert message type names must match entries in the Message Types section.

The entries for the two single-head sensor proxies are:

<configSection>
  <sectionName>ITransChannel</sectionName>
  <sectionEntry>
    <subSection>
      <subSectionName>ITransChan0</subSectionName>
      <subSectionEntry>
        <SensorName>ITransChan0</SensorName>
        <FilterAttribute>NameofdetectedGas</FilterAttribute>
        <FilterValue>CO_Carbon_Monoxide</FilterValue>
        <FilterMode>OR</FilterMode>
        <FilterType>2</FilterType>
        <Enabled>1</Enabled>
        <SP_Model>ITransSP</SP_Model>
        <DataMessageType>NullData</DataMessageType>
        <AlertMessageType>NullAlert</AlertMessageType>
        <jmsTopic>ITrans.Gas.CarbonMonoxide</jmsTopic>
      </subSectionEntry>
    </subSection>
    <subSection>
      <subSectionName>ITransChan1</subSectionName>
      <subSectionEntry>
        <SensorName>ITransChan1</SensorName>
        <Enabled>1</Enabled>
        <SP_Model>ITransSP</SP_Model>
        <FilterAttribute>NameofdetectedGas</FilterAttribute>
        <FilterValue>CH4_Methane</FilterValue>
        <FilterMode>OR</FilterMode>
        <FilterType>2</FilterType>
        <DataMessageType>NullData</DataMessageType>
        <AlertMessageType>NullAlert</AlertMessageType>
        <jmsTopic>ITrans.Gas.Flammables</jmsTopic>
      </subSectionEntry>
    </subSection>
  </sectionEntry>
</configSection>

The filter values are applied by these sensor proxies to evaluate whether the data passed in is for them.

The "SP_Model" tag gives the name of the TCP client sensor proxy with which these register with to set up the observer relationship.

Both of these proxies are configured to publish to a JMS channel.

9 Adding a new sensor proxy to the application

Work is in progress to convert the current sensor proxy static libraries to DLLs and to dynamically load them.

Once this work is complete, this section will be superseded. Integration of new sensor proxies will be by configuration file only.

The current steps are these:

• Add a new enumerated value to ViaSensorProxyEnums as the type of the new sensor proxy.

• Add the new type to the ViaSensorProxyNameConverter class. This associates the enumerated values with the strings used in the XML configuration file.

• In the sensor proxy facade, include the header for the new sensor proxy type in the source file and register the new sensor proxy type in SensorProxyFacade::registerProxies() (see the sketch after this list).

• Update the include paths for ViaSensorProxyFacade so that it finds the header file for the new sensor proxy.

• Update the configuration file.
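As an illustration, the registration step might look roughly like the fragment below. The registerProxy call, the enumerated value and the header name are assumptions; only the SensorProxyFacade::registerProxies() method name comes from the steps above.

#include "ViaMyNewSensorProtocol.h"   // header for the new sensor proxy (assumed name)

void SensorProxyFacade::registerProxies()
{
    // ... existing sensor proxy registrations ...

    // Associate the new enumerated type with a prototype instance so the
    // framework can Clone() it for each proxy declared in the configuration file.
    registerProxy(VIA_SP_MY_NEW_SENSOR, new ViaMyNewSensorProtocol());
}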

10 SNMP Sensor Proxies

10.1 SNMP Manager (Trap Collector)

The SNMP Manager is a specialized sensor proxy that binds to the SNMP Trap Port (UDP 162.) Only one SNMP Manager can run on a particular station, but it is able to receive SNMP traps from any number of sensors.

The processing of these traps is specific to the kind of sensor that originated them and so the scenario here is one sensor proxy that receives the data from many sources and individual sensor proxies that transform and publish the data.

In other words, this is another situation where the use of single-headed sensors is appropriate.

The SPM currently supports just two kinds of device that send SNMP traps, namely a UPS sensor and a Seismic sensor.

Hence, we have a total of three SNMP sensor proxies that receive traps.

The first is the SNMP Manager, which binds to the SNMP trap socket and receives the traps.

The second and third are the UPS and Seismic sensor proxies which are both single-headed sensor proxies and register interest with the SNMP Manager.

As the number of SNMP devices we support grows, it is expected that there will be many more of these single-headed sensor proxies.

10.2 SNMP Agent (Trap Generator)

The SNMP Agent does no polling and receives no data from outside of the process. Nor does it publish any data.

It is essentially a "Command" type of proxy which waits to receive a command from the policy engine or from another sensor proxy.

The command specifies the SNMP Trap to be sent. This must be a trap that is specified in the SPM MIB file.

Both the SNMP Manager and Agent derive from ViaSensorProxyProtocol and use the UDP mode of operation.

11 Composite Sensor Proxies

11.1 Overview

The purpose of composite sensor proxies is to manage a group of dissimilar sensors that form a logical group, requiring some common processing.

Composite sensor proxies do not have any underlying protocols of their own but rely upon the individual proxies comprising the group in order to obtain data from the physical sensors.

11.2 Example

Figure imgf000080_0001

Figure 8 Composite Sensor Proxy Examples

This is a slightly simplified example of an actual scenario supported by the SPM.

On receiving a trigger (command), the Image Acquisition sensor proxy retrieves the latest GPS position and compass bearing from two individual sensor proxies and issues a command to a camera sensor proxy to point the camera in the right direction and begin image acquisition.

The Image Acquisition sensor proxy already has the GPS and compass readings required since it has registered to receive this data via local channels.

It uses the standard command architecture in order to control the camera.

By using the standard command architecture and using local channels, we avoid the pitfall of introducing coupling between these four different sensor types.

The GPS and compass sensor proxies have no intelligence about the Image Acquisition proxy. They only publish to channels.

Similarly, the Camera sensor proxy only has the intelligence to check its internal queue to see if there are any commands. It knows nothing about the source of the commands.

The Image Acquisition sensor proxy has to understand something about the fields in the normalized messages it receives from the GPS and Compass sensor proxies and the format of the commands it sends to the Camera. It has no knowledge about the sensor proxies themselves and the underlying protocols are completely hidden from it.

11.3 How to write a Composite Sensor Proxy

The new composite proxy protocol object inherits from ViaSensorProxyCompositeProtocol and uses its execute loop.

The usual methods are overridden:

• getType()

• Clone()

• Initialize()

• acceptRunValues()

Composite proxies are not associated with a real protocol. They receive their data by command and by input channel.

Override the following methods (a sketch follows the list):

• onReceiveData() - invoked when data from an input channel is received.

• onReceiveCommand() - invoked when a command is received from the policy engine or from another sensor proxy.
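Returning to the Image Acquisition example, these two overrides might be sketched as follows. The field names, command properties and helper calls (hasProperty, computePan, sendCommand) are assumptions used only to illustrate the flow described in the previous section.

// Hypothetical sketch of a composite (Image Acquisition) proxy.
void ViaImageAcquisitionProtocol::onReceiveData(ViaSdoDataObject &data)
{
    // Cache the latest readings published on the local GPS and compass channels.
    if (data.hasProperty("Latitude"))
        m_lastGps = data;
    if (data.hasProperty("Bearing"))
        m_lastCompass = data;
}

void ViaImageAcquisitionProtocol::onReceiveCommand(ViaSdoDataObject &command)
{
    // On a trigger, work out where the camera should point from the cached GPS and
    // compass data, then command the camera proxy through the standard command architecture.
    ViaSdoDataObject cameraCommand;
    cameraCommand.addProperty(ViaSdoProperty("Pan", computePan(m_lastGps, m_lastCompass)));
    cameraCommand.addProperty(ViaSdoProperty("Mode", "AcquireImages"));
    sendCommand(m_cameraName, cameraCommand);
}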

Registering interest in local channels is a configuration option requiring no coding. Here is some sample XML.

<configSection>
  <sectionName>ImageAcquisition</sectionName>
  <sectionEntry>
    <subSection>
      <subSectionName>BorderSecuritySP</subSectionName>
      <subSectionEntry>
        <SensorName>BorderSecuritySP</SensorName>
        <PollingRate>1</PollingRate>
        <Enabled>1</Enabled>
        <localInputChannel>GPS1Chan</localInputChannel>
        <localInputChannel>Compass1n</localInputChannel>
        <Camera1Name>QuickSet1SP</Camera1Name>
      </subSectionEntry>
    </subSection>
  </sectionEntry>
</configSection>

The "locallnputChannel" names must match the names of the local channels that the compass and GPS sensor proxies publish to which is also specified in the configuration.

12 Sensor Proxy Framework - Utilities

12.1 ViaSensorProxyPacketAssembler

ViaSensorProxyPacketAssembler is used by proxies with connection-oriented protocols to undo packet concatenation and segmentation.

Example:

The physical sensor is a TCP server.

It sends data asynchronously as a response to polls.

However, responses get concatenated together so that several responses can be sent as a part of a single TCP packet.

Also, the beginning and end of the TCP packet cannot be assumed to align with the beginning and end of responses from the sensor.

Each response starts with an ACK character (hex 6) or a NAK character (hex 15) and ends with an ETX character (hex 3.)

The sensor proxy physically contains a packet assembler object. The constructor looks like this:

// Pass the first start token to the packet assembler.
fastarray<unsigned char> startTok;
startTok.resize(1);
startTok[0] = ACK;
m_assembler.setStartToken(startTok);

// Pass the second start token to the packet assembler.
startTok[0] = NAK;
m_assembler.setStartToken(startTok);

// Pass the end token to the packet assembler.
fastarray<unsigned char> endTok;
endTok.resize(1);
endTok[0] = ETX;
m_assembler.setEndToken(endTok);

// Register interest. Set up an observer relationship.
m_assembler.registerInterest(this);

When the sensor proxy receives data from the sensor, the receiveFromServer method is invoked. The data received is not processed by the proxy but is passed directly to the packet assembler.

void ViaMyProtocol::receiveFromServer()
{
    int size;
    unsigned char *data = Receive(size);
    if ( (size > 0) && (data) )
    {
        m_assembler.handleInput(data, size);
    }
}

Each time the packet assembler parses a complete response, it notifies the sensor proxy via the updateFromModel method, which is then able to process it.

void ViaMyProtocol::updateFromModel(void *clientData)
{
    // For now we take this as a status response.
    int index = m_assembler.getStartTokenIndex();
    fastarray<unsigned char> message = m_assembler.getMessage();
    // ... process the message ...
}

This utility is suitable for both binary and ASCII protocols.

12.2 Simple Filters

Simple filters can be set up within the configuration file.

This filter operates on a Service Data Object (SDO), usually the data message SDO or the alert SDO.

Here is some sample XML.

<sectionEntry>
  <subSection>
    <subSectionName>MySP</subSectionName>
    <subSectionEntry>
      <FilterAttribute>AlarmType</FilterAttribute>
      <FilterValue>4</FilterValue>
      <FilterAttribute>AlarmSubType</FilterAttribute>
      <FilterValue>2</FilterValue>
      <FilterMode>AND</FilterMode>
      <FilterType>2</FilterType>
      <jmsTopic>ITrans.Gas.CarbonMonoxide</jmsTopic>
    </subSectionEntry>
  </subSection>
</sectionEntry>

The filter tags above have the following meanings:

FilterAttribute: The name of the SDO property to apply the filter to.
FilterValue: The value to compare the attribute against.
FilterMode: The valid values are AND and OR.
FilterType: 0 = less than, 1 = greater than, 2 = equality.

The filter outlined above gives us an AND filter which is passed when the AlarmType field = 4 AND the AlarmSubType = 2.

From a sensor proxy, a filter is evaluated with the following lines of code:

ViaSdoDataObject sdo;
bool isPass = m_filterMgr.passFilter(sdo);

If the sensor proxy is not configured with a filter then this will return "true." i.e. A null filter is always passed.

If one or more of the SDO properties is not present in the SDO input parameter then this statement will return "true."
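To illustrate the semantics above (AND/OR combination, the three comparison types, and the pass-by-default behavior for null filters and missing properties), a simplified evaluation routine is sketched below. This is a sketch of the logic only, not the framework's filter implementation.

#include <string>
#include <vector>

struct FilterTerm { std::string attribute; double value; };

// andMode: true = AND, false = OR. type: 0 = less than, 1 = greater than, 2 = equality.
// lookup returns false when the named property is not present in the SDO.
bool passFilterSketch(const std::vector<FilterTerm> &terms, bool andMode, int type,
                      bool (*lookup)(const std::string &attribute, double &value))
{
    if (terms.empty())
        return true;                              // a null filter is always passed
    bool result = andMode;                        // AND starts true, OR starts false
    for (size_t i = 0; i < terms.size(); ++i)
    {
        double actual = 0.0;
        if (!lookup(terms[i].attribute, actual))
            return true;                          // missing property: the filter passes
        bool pass = (type == 0) ? (actual <  terms[i].value)
                  : (type == 1) ? (actual >  terms[i].value)
                  :               (actual == terms[i].value);
        result = andMode ? (result && pass) : (result || pass);
    }
    return result;
}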

12.3 Sensor Proxy Group Manager

This is a utility which allows sensor proxies of the same type to be assigned to groups.

The utility is driven from the configuration file. Here is some sample XML:

<configSection>
  <sectionName>SensorGroups</sectionName>
  <sectionEntry>
    <subSection>
      <subSectionName>Tivella1</subSectionName>
      <subSectionEntry>
        <SensorGroupName>Tivella1</SensorGroupName>
        <SensorGroupType>Tivella</SensorGroupType>
        <SensorName>TivellaSP1</SensorName>
      </subSectionEntry>
    </subSection>
    <subSection>
      <subSectionName>Tivella2</subSectionName>
      <subSectionEntry>
        <SensorGroupName>Tivella2</SensorGroupName>
        <SensorGroupType>Tivella</SensorGroupType>
        <SensorName>TivellaSP2</SensorName>
        <SensorName>TivellaSP3</SensorName>
      </subSectionEntry>
    </subSection>
    <subSection>
      <subSectionName>INova1</subSectionName>
      <subSectionEntry>
        <SensorGroupName>INova1</SensorGroupName>
        <SensorGroupType>Inova</SensorGroupType>
        <SensorName>INovaSP1</SensorName>
      </subSectionEntry>
    </subSection>
  </sectionEntry>
</configSection>

Each sensor group is assigned a group name and a type. The type must match one of the entries in the "SensorProxyTypes" section of the configuration file.

The sensor names must match the corresponding names in the sections where the sensor proxies themselves are specified.

No coding effort is required to load the sensor groups. The following code fragment shows how to retrieve sensor groups from the Group Manager Singleton object.

ViaSensorProxyGroupManager *spg = ViaSensorProxyGroupManager::Instance();

const map< string, vector<string> > *tivellaGroups = spg->getGroupsForType("Tivella");
const map< string, vector<string> > *inovaGroups = spg->getGroupsForType("Inova");

13 SPM Edge Sensor Proxies


Figure 9 SPM / SPM Edge Configuration

The diagram above shows what will surely become a typical configuration for SPM and SPM Edge.

Each instance of the SPM Edge runs on a separate workstation and manages a set of sensors using the sensor proxy framework.

The SPM server has the capability of managing sensors directly, but for this configuration, the SPM server runs only the policy engine and acts as a hub.

The yellow arrows represent two-way communication between the SPM server and SPM Edge processes, and the remainder of this section provides a discussion of how this is accomplished.

13.1 Publishing Data from the SPM Edge to the SPM server

Each of the sensor proxies running on the SPM Edge is configured to publish data to SOAP channels. This is a configuration option.

On the SPM server, one instance of the SPM Edge Server sensor proxy is run. This receives the SOAP packets from the SPM Edge and republishes to JMS channels or local channels.

13.2 Sending commands from the SPM server to the SPM Edge

The SPM server runs one instance of the SPM Edge Command sensor proxy. When the SPM server needs to send a command to a sensor proxy, the command consists of the name of the sensor proxy and an SDO specifying the command details.

The command is passed to the sensor proxy framework.

If the sensor proxy is running locally on the SPM server, the command is passed to the proxy directly.

Otherwise, the command is passed to the SPM Edge Command sensor proxy. This proxy holds a table of sensor proxy names together with the SOAP end point for each (contained in the configuration file.)

The Command proxy looks up the SOAP end point for the sensor proxy name it has been given and sends the command to the SPM edge.

13.3 Rationale

The SPM has the capability to manage and interact with other sensor management systems such as the Richards-Zeta mediator and Lenel On-Guard.

Using the two SPM Edge sensor proxies, the SPM server is able to interact with and manage remote SPM Edge processes in much the same way. The only two differences are that the publishing and command patterns have been separated, and that the SPM Edge proxies are more flexible, in that SPM server and SPM Edge processes are interchangeable in the interactions that are possible.

Claims

1. A sensor interoperability system, comprising: one or more sensors that each generate a piece of sensor data; one or more sensor proxies, each sensor proxy coupled to one of the one or more sensors wherein each sensor proxy processes the piece of sensor data from the sensor to create a sensor system piece of data wherein the system piece of data has a common format; and a policy unit coupled to the one or more sensor proxies over a communications path, wherein the policy unit further comprises one or more policies wherein each policy further comprises a condition and an action wherein the condition tests a sensor system piece of data and the action is an action that occurs when the condition is satisfied and a policy engine that applies the one or more policies to the sensor system piece of data.
2. The system of claim 1, wherein the one or more sensor proxies further comprises a first sensor proxy for a first type of sensor having a first set of characteristics and a second sensor proxy for a second type of sensor having a second set of characteristics.
3. The system of claim 1, wherein each sensor further comprises an information source.
4. The system of claim 3, wherein the one or more sensors further comprises one of a read-only sensor, a read-write sensor and a degenerative sensor.
5. The system of claim 4, wherein the one or more sensors further comprises one of a real time location system sensor, a building management system sensor, a distributed control system sensor, a supervisory control and data analysis (SCADA) system sensor, a biometric sensor, a chemical sensor, a biosensor, a nuclear sensor, a video sensor and a geographical information system sensor.
6. The system of claim 5, wherein the video sensor further comprises one of a low resolution visible spectrum sensor, a high resolution visible spectrum sensor and a thermal sensor.
7. The system of claim 5, wherein the geographical information system sensor further comprises a sensor that pulls data from a CAD format and a sensor that pulls data from a geographical information system database.
8. The system of claim 1, wherein the policy unit further comprises a field programmable gate array on which one or more pieces of software code are stored that implement the functions of the policy unit.
9. The system of claim 1, wherein the policy unit further comprises one or more pieces of software code wherein the one or more pieces of software code implement the functions of the policy unit.
10. The system of claim 1, wherein the common format further comprises a SensorML format.
11. A method for monitoring a system using a sensor network, comprising: generating a piece of sensor data at each of one or more sensors; processing each piece of sensor data of each sensor using a sensor proxy to create a sensor system piece of data wherein the sensor system piece of data has a common format; determining if the sensor piece of data triggers a policy contained in a policy engine wherein the policy is triggered when the sensor system piece of data meets a condition of one or more policies stored in the policy engine; and performing an action of the triggered policy.
12. The method of claim 11, wherein the processing of each piece of sensor data further comprises processing a piece of sensor data from a first type of sensor having a first set of characteristics into the common format and processing a piece of sensor data from a second type of sensor having a different set of characteristics into the common format.
13. The method of claim 11, wherein generating the piece of sensor data further comprises generating a piece of data from an information source.
14. The method of claim 13, wherein generating the piece of sensor data further comprises generating a piece of data from one of a read-only sensor, a read-write sensor and a degenerative sensor.
15. The method of claim 14, wherein generating the piece of sensor data further comprises generating a piece of data from one of a real time location system sensor, a building management system sensor, a distributed control system sensor, a supervisory control and data analysis
(SCADA) system sensor, a biometric sensor, a chemical sensor, a biosensor, a nuclear sensor, a video sensor and a geographical information system sensor.
16. The method of claim 11, wherein the common format further comprises a SensorML format.
PCT/US2007/008918 2006-04-08 2007-04-09 Software enabled video and sensor interoperability system and method WO2007117705A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US79076706P true 2006-04-08 2006-04-08
US60/790,767 2006-04-08

Publications (3)

Publication Number Publication Date
WO2007117705A2 true WO2007117705A2 (en) 2007-10-18
WO2007117705A9 WO2007117705A9 (en) 2008-07-10
WO2007117705A3 WO2007117705A3 (en) 2008-09-18

Family

ID=38581700

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2007/008918 WO2007117705A2 (en) 2006-04-08 2007-04-09 Software enabled video and sensor interoperability system and method

Country Status (1)

Country Link
WO (1) WO2007117705A2 (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3018543A1 (en) * 2014-11-05 2016-05-11 FAZ Technology Limited Sensor information transparency system and method
WO2016166594A1 (en) * 2015-04-16 2016-10-20 Faz Technology Limited Remotely adapted virtual sensor optimization system and method
EP3109720A1 (en) * 2015-06-22 2016-12-28 Rockwell Automation Technologies, Inc. Active report of event and data
US9641432B2 (en) 2013-03-06 2017-05-02 Icu Medical, Inc. Medical device communication method
WO2017060776A3 (en) * 2015-10-08 2017-08-03 King Fahd University Of Petroleum And Minerals Methods and apparatus to design collaborative automation systems based on data distribution service middleware
US9971871B2 (en) 2011-10-21 2018-05-15 Icu Medical, Inc. Medical device update system
US10042986B2 (en) 2013-11-19 2018-08-07 Icu Medical, Inc. Infusion pump automation system and method
US10238799B2 (en) 2014-09-15 2019-03-26 Icu Medical, Inc. Matching delayed infusion auto-programs with manually entered infusion programs
US10238801B2 (en) 2009-04-17 2019-03-26 Icu Medical, Inc. System and method for configuring a rule set for medical event management and responses
US10242060B2 (en) 2006-10-16 2019-03-26 Icu Medical, Inc. System and method for comparing and utilizing activity information and configuration information from multiple medical device management systems
US10311972B2 (en) 2013-11-11 2019-06-04 Icu Medical, Inc. Medical device system performance index
US10314974B2 (en) 2014-06-16 2019-06-11 Icu Medical, Inc. System for monitoring and delivering medication to a patient and method of using the same to minimize the risks associated with automated therapy
US10434246B2 (en) 2003-10-07 2019-10-08 Icu Medical, Inc. Medication management system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6589831B2 (en) * 2001-12-12 2003-07-08 Samsung Electronics Co., Ltd. Transistor structure using epitaxial layers and manufacturing method thereof
US20040252053A1 (en) * 2003-06-13 2004-12-16 Harvey A. Stephen Security system including a method and system for acquiring GPS satellite position
US7020701B1 (en) * 1999-10-06 2006-03-28 Sensoria Corporation Method for collecting and processing data using internetworked wireless integrated network sensors (WINS)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7020701B1 (en) * 1999-10-06 2006-03-28 Sensoria Corporation Method for collecting and processing data using internetworked wireless integrated network sensors (WINS)
US6589831B2 (en) * 2001-12-12 2003-07-08 Samsung Electronics Co., Ltd. Transistor structure using epitaxial layers and manufacturing method thereof
US20040252053A1 (en) * 2003-06-13 2004-12-16 Harvey A. Stephen Security system including a method and system for acquiring GPS satellite position

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10434246B2 (en) 2003-10-07 2019-10-08 Icu Medical, Inc. Medication management system
US10242060B2 (en) 2006-10-16 2019-03-26 Icu Medical, Inc. System and method for comparing and utilizing activity information and configuration information from multiple medical device management systems
US10238801B2 (en) 2009-04-17 2019-03-26 Icu Medical, Inc. System and method for configuring a rule set for medical event management and responses
US9971871B2 (en) 2011-10-21 2018-05-15 Icu Medical, Inc. Medical device update system
US10333843B2 (en) 2013-03-06 2019-06-25 Icu Medical, Inc. Medical device communication method
US9641432B2 (en) 2013-03-06 2017-05-02 Icu Medical, Inc. Medical device communication method
US10311972B2 (en) 2013-11-11 2019-06-04 Icu Medical, Inc. Medical device system performance index
US10042986B2 (en) 2013-11-19 2018-08-07 Icu Medical, Inc. Infusion pump automation system and method
US10314974B2 (en) 2014-06-16 2019-06-11 Icu Medical, Inc. System for monitoring and delivering medication to a patient and method of using the same to minimize the risks associated with automated therapy
US10238799B2 (en) 2014-09-15 2019-03-26 Icu Medical, Inc. Matching delayed infusion auto-programs with manually entered infusion programs
EP3018543A1 (en) * 2014-11-05 2016-05-11 FAZ Technology Limited Sensor information transparency system and method
WO2016166594A1 (en) * 2015-04-16 2016-10-20 Faz Technology Limited Remotely adapted virtual sensor optimization system and method
US10165095B2 (en) 2015-06-22 2018-12-25 Rockwell Automation Technologies, Inc. Active report of event and data
EP3109720A1 (en) * 2015-06-22 2016-12-28 Rockwell Automation Technologies, Inc. Active report of event and data
US10146216B2 (en) 2015-10-08 2018-12-04 King Fahd University Of Petroleum And Minerals Autonomous process interface systems based on data distribution service middleware
US9874867B2 (en) 2015-10-08 2018-01-23 King Fahd University Of Petroleum And Minerals Clustered automation platform based on data distribution service middleware
WO2017060776A3 (en) * 2015-10-08 2017-08-03 King Fahd University Of Petroleum And Minerals Methods and apparatus to design collaborative automation systems based on data distribution service middleware
US10185311B2 (en) 2015-10-08 2019-01-22 King Fahd University Of Petroleum And Minerals Methods and apparatus to design collaborative automation systems based on data distribution service middleware
US10429825B2 (en) 2015-10-08 2019-10-01 King Fahd University Of Petroleum And Minerals Collaborative automation platform
US10082786B2 (en) 2015-10-08 2018-09-25 King Fahd University Of Petroleum And Minerals Distributed autonomous process interface systems based on data distribution service middleware

Also Published As

Publication number Publication date
WO2007117705A3 (en) 2008-09-18
WO2007117705A9 (en) 2008-07-10

Similar Documents

Publication Publication Date Title
Mahnke et al. OPC unified architecture
Uszok et al. KAoS policy management for semantic web services
Wang et al. A comprehensive ontology for knowledge representation in the internet of things
US7267275B2 (en) System and method for RFID system integration
US6941557B1 (en) System and method for providing a global real-time advanced correlation environment architecture
US6751657B1 (en) System and method for notification subscription filtering based on user role
US20090049518A1 (en) Managing and Enforcing Policies on Mobile Devices
CN103891201B (en) For the exploitation of the application and service based on sensing data and the calculating platform of deployment
US20050119905A1 (en) Modeling of applications and business process services through auto discovery analysis
US20070112574A1 (en) System and method for use of mobile policy agents and local services, within a geographically distributed service grid, to provide greater security via local intelligence and life-cycle management for RFlD tagged items
JP2007519075A (en) Scalable synchronous and asynchronous processing of monitoring rules
US8234704B2 (en) Physical access control and security monitoring system utilizing a normalized data format
US20060026193A1 (en) Dynamic schema for unified plant model
US20020032762A1 (en) System and method for remotely configuring testing laboratories
EP2191388B1 (en) System and method for deploying and managing intelligent nodes in a distributed network
Leitner et al. OPC UA–service-oriented architecture for industrial applications
US20060069995A1 (en) Personalised process automation
US9323647B2 (en) Request-based activation of debugging and tracing
US7676560B2 (en) Using URI&#39;s to identify multiple instances with a common schema
US8620851B2 (en) System and method for determining fuzzy cause and effect relationships in an intelligent workload management system
Katasonov et al. Smart Semantic Middleware for the Internet of Things.
Chen et al. Solar: An open platform for context-aware mobile applications
Khalaf et al. Business processes for Web Services: Principles and applications
JP2013513860A (en) Cloud computing monitoring and management system
Mohamed et al. A survey on service-oriented middleware for wireless sensor networks

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07775164

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07775164

Country of ref document: EP

Kind code of ref document: A2