CA2296136A1 - System for processing and distributing service and status data from diverse sources


Info

Publication number
CA2296136A1
Authority
CA
Canada
Prior art keywords
data
client
sources
format
endpoint
Legal status
Abandoned
Application number
CA 2296136
Other languages
French (fr)
Inventor
Mark T. Ponder
Ralph C. Brindle
Charles H. Stevens
Edward W. Bruggeman
Lawrence E. Schwartz
Current Assignee
C3 COMMUNICATIONS Inc
Original Assignee
C3 COMMUNICATIONS Inc
Application filed by C3 COMMUNICATIONS Inc
Publication of CA2296136A1


Abstract

A system is provided for processing and delivering service usage and status data collected from diverse sources. The sources are typically various endpoint service monitoring devices owned by one or more clients.
The data collected from the endpoint service monitoring devices is first parsed into a common format. When requested or in accordance with a schedule, data associated with a given client is then translated from the common format into a format specified by the client (typically a format compatible with the client's existing information system). The translated data is then delivered to the client.

Description

SYSTEM FOR PROCESSING AND DISTRIBUTING
SERVICE AND STATUS DATA FROM DIVERSE SOURCES
BACKGROUND OF THE INVENTION
Technical Field
The present invention relates generally to automated meter reading (AMR) systems and, more particularly, to a system for collecting, processing and delivering service usage and status data from diverse sources.
Description of the Related Art
AMR systems are used by utilities for collecting service usage and status data from endpoint service monitoring devices over a communications network. The service usage and status data typically includes individual customer service records, which contain consumption, service tampering, outage and other information.
The endpoint service monitoring devices and other AMR products are provided by many different vendors. There are no standardized methods of collecting service usage and status data from these diverse devices and then delivering the data to utility clients for use in their existing utility information systems. Consequently, gathering service usage and status data from diverse sources in useful form is an inefficient and costly process.

Thus a need exists for a vendor-neutral utility data system that can be used to interface diverse input service monitoring devices to various client information processing systems.
BRIEF SUMMARY OF THE INVENTION
A primary object of the present invention is to provide a vendor-neutral utility data system that can be used to interface diverse input AMR devices to diverse client data systems.
A further object of the invention is to provide a utility data system that gathers endpoint service usage and status data from diverse sources and delivers it to various clients in formats compatible with the clients' existing information systems.
Another object of the invention is to provide a utility data system having a rules-based design for determining how, when and what data is to be delivered to a client.
A further object of the invention is to provide such a system that supports substantially all automated endpoint service monitoring devices (including those for water and gas metering).
These and other objectives are accomplished by a system for processing and delivering service usage and status data collected from diverse sources. The sources are typically various endpoint service monitoring devices owned by one or more clients. The data collected from the endpoint service monitoring devices is first parsed into a common format. When requested or in accordance with a schedule, data associated with a given client is then translated from the common format into a format specified by the client (typically a format compatible with the client's existing information system). The translated data is then delivered to the client.
The system processes are preferably controlled by rules-based logic processing wherein rules can be modified to configure how, when and what data is delivered to a client. Accordingly, modification of system code can be avoided when client requirements change.
The foregoing has outlined some of the more pertinent objects and features of the present invention. These objects should be construed to be merely illustrative of some of the more prominent features and applications of the invention. Many other beneficial results can be attained by applying the disclosed invention in a different manner or modifying the invention as will be described. Accordingly, other objects and a fuller understanding of the invention may be had by referring to the following Detailed Description of the Preferred Embodiment.
BRIEF DESCRIPTION OF THE DRAWINGS
For a more complete understanding of the present invention and the advantages thereof, reference should be made to the following Detailed Description taken in connection with the accompanying drawings in which:
FIGURE 1 is a block diagram illustrating the general context in which the inventive system is implemented;
FIGURE 2 is a block diagram illustrating various processes of the system;
FIGURE 3 is a block diagram illustrating one possible system topology;
FIGURE 4 is a block diagram illustrating an alternative topology;
FIGURE 5 is a block diagram illustrating a further possible system topology;
FIGURE 6 is a block diagram illustrating the collect endpoint data process of the system in greater detail;
FIGURE 7 is a block diagram illustrating the parse process of the system in greater detail;
FIGURE 8 is a block diagram illustrating the common format database of the system in greater detail;
FIGURE 9 is a block diagram illustrating the translate process of the system in greater detail;
FIGURE 10 is a block diagram illustrating the rules engine of the system in greater detail;
FIGURE 11 is a block diagram illustrating the scheduler process of the system in greater detail;
FIGURE 12 is a block diagram illustrating the deliver to client process of the system in greater detail;
FIGURE 13 is a block diagram illustrating the manage tasks process of the system in greater detail; and
FIGURE 14 is a block diagram illustrating the SNMP agent process of the system in greater detail.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
FIGURE 1 illustrates the general context in which the inventive utility data system or solution 10 (referred to in some of the drawings as the 'UDS') is implemented.
(In each of the figures, a bubble represents a software process that performs a function preferably independent of other processes with a well-defined interface through which it communicates with other processes.
Processes that share an interface are indicated by dataflow arrows.) Briefly, the system 10 collects data from endpoint service monitoring devices 12, parses it, stores it in a common format database, translates it into a format desired by a client 14, and delivers it to the client, where it is integrated into the client's existing information system. The data might be meter readings collected, e.g., on a daily or hourly basis throughout a month, then delivered to the client early in the following month or as otherwise desired. The system operates over a communications network.
The data collections are made by programming the endpoint meters to initiate contact to the system periodically or by contacting the meters on a scheduled basis.
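To make the end-to-end flow concrete, the following Python sketch models the collect, parse, translate, and deliver stages as simple functions. It is a minimal illustration only; all function names, the raw record layout, and the "csv" client format are assumptions, not details taken from this disclosure.

```python
# Minimal sketch of the collect -> parse -> store -> translate -> deliver
# pipeline. All names and formats here are illustrative.

def collect(endpoint_id):
    # Device-specific raw reading pulled from (or pushed by) a meter.
    return {"device_id": endpoint_id, "raw": "KWH=1532"}

def parse(raw_record):
    # Convert vendor-specific raw data into the common format.
    metric, value = raw_record["raw"].split("=")
    return {"device_id": raw_record["device_id"],
            "metric": metric, "value": float(value)}

def translate(record, client_format):
    # Reformat common-format data into a client-specified layout.
    if client_format == "csv":
        return "{device_id},{metric},{value}".format(**record)
    return str(record)

def deliver(client, payload):
    print(f"deliver to {client}: {payload}")

common_format_db = []  # stands in for the Common Format Database
for endpoint in ("meter-001", "meter-002"):
    common_format_db.append(parse(collect(endpoint)))
for rec in common_format_db:  # e.g. on the monthly delivery schedule
    deliver("utility-client", translate(rec, "csv"))
```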
The endpoint devices 12, which typically comprise metering equipment from a variety of vendors, are sources of real-time data for the system 10. The data can include meter readings, time of use, pricing and other information, as well as automated outage detection information.
Representative endpoint devices include, without limitation, a Teledata™ AMR collection device, an American Innovations™ AMR collection device, ABB Electric™ AMR metering devices, Landis & Gyr Electric™ AMR metering devices, or a wireless AMR system providing a data stream into the system.
The client 14 is the destination of data that has been parsed and translated by the system 10. The client 14 can purchase the services of the system to gather and/or translate data and specifies the desired format and delivery method. Representative client devices include, without limitation, Lucent 5ESS and other telephone company phone service switches, frame relay equipment, and a telephone company usage collection system providing a data stream into the system.
In addition to endpoint devices, an external data source 16 can provide data to the system preferably in machine form (either database or flat file) that requires translation. The data may be previously collected field data from an archive, field data collected by another system, or some other form of non-live data for which some useful translation is desired.
An operator requester 18 is a person who interacts with the system typically to gather status information, provide configuration, or examine data. It may be a system employee or an employee of a client that is authorized to use the system. Different operators can be given different capabilities to access the system so an operator could be a device installer, an administrator, or a client. The system preferably includes a graphical user interface (GUI), which is described below, for the operator.
A machine requester 20 can also be linked to the system. The machine requester is a communicating computer program that can gather or provide the same type of information as an operator, but does so via a machine-oriented interface. The existence of a machine requester interface can also allow the development of a user interface to the system other than the GUI developed specifically for the operator requester.
An SNMP Manager 22, a software package that uses the Simple Network Management Protocol, preferably reports network conditions to a network administrator not directly associated with the system.
FIGURE 2 illustrates the major processes of the system 10. The system includes a Collect Endpoint Data process 24, which collects data from endpoint devices 12 and makes it available as device-specific raw data preferably with a device-independent common header that can be used to later identify the parsing rules to be applied to the data.
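A minimal sketch of how such a device-independent header could be used to select parsing rules: the device type and raw format ID key into a table of parser functions. The device names, payload layouts, and parser functions here are hypothetical.

```python
# Sketch: the common header's device type and raw format ID select the
# parsing rules to apply. Device names and payloads are hypothetical.

def parse_abb_v1(payload):
    return {"metric": "kWh", "value": int.from_bytes(payload, "big")}

def parse_landis_v2(payload):
    return {"metric": "kWh", "value": int(payload.decode())}

PARSING_RULES = {
    ("abb-electric", 1): parse_abb_v1,
    ("landis-gyr", 2): parse_landis_v2,
}

def parse_raw(header, payload):
    # The header, not the payload, determines which rules apply.
    rule = PARSING_RULES[(header["device_type"], header["raw_format_id"])]
    return rule(payload)

print(parse_raw({"device_type": "abb-electric", "raw_format_id": 1}, b"\x05\xfc"))
```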
The system includes a Parse process 26, which converts data from an external source 16 or endpoint 12 into a common format.

The system also includes a Translate process 28, which accepts data in the common format from the Parse process 26. The common format allows the Translate process 28 to create output data for many disparate clients based on a single input expectation, and stores the data over time so that periodically delivered data can summarize a period of time for the client. The common format, the rules that govern parsing into the common format, and the rules that govern translation from the common format to the client format can be changed without changing the system code.
The Common Format Database 30 is a representation of information typically from the many diverse sources in an application-oriented format independent of the source. It includes provisioning data that allows associations to be made against the collected data, such as the association of data with an owner (or client).
The main data path via the Collect Endpoint Data, Parse, and Translate processes 24, 26, 28 should provide high speed delivery of priority data (described below). Accordingly, the use of databases in this data path is minimized to avoid introducing unnecessary delays.
The system also includes a Handle Requests process 32, which provides the interface to the system 10 for both operator 18 (via a GUI 33) and machine requesters 20. It is responsible for imposing access restrictions on outsiders, providing the information they request, and storing the information they provide within the system. For most requests, Handle Requests 32 acts essentially as a router that forwards the request to the appropriate process, so that each process can exert control over its own data and the methods available for viewing and modifying it.
The Handle Requests process 32 enforces access restrictions on a connection basis. It preferably supports multiple connections, requiring a login verification before it will accept requests from the connection.
The system 10 also includes a Manage Tasks process 34, which is responsible for monitoring the health of tasks during operation, and for collecting error and informational messages. It makes reports available to either the Handle Requests process 32 or the SNMP manager 22.
Messages 36 and Process Status information 38 can come from any process in the system 10. (They are represented in FIGURE 2 as incoming flows with no source to avoid cluttering the drawing with flows from every other process.)
The typical data flow path in the system is from the endpoint devices 12 to Collect Endpoint Data 24 to Parse 26 to Common Format Database 30 to Translate 28 to the client 14. Data from the endpoint devices 12 may be pulled into Collect Endpoint Data 24 (dial-out) or pushed onto Collect Endpoint Data 24 (dial-in). Parse 26 preferably pulls data from Collect Endpoint Data 24 and pushes data onto the Common Format Database 30. Translate 28 preferably pulls data from Common Format Database 30.
In use, the system 10 collects data from endpoints 12, parses it as quickly as possible, and stores it in the Common Format Database 30. The data remains in the Common Format Database 30 until it is time to make a client delivery. The data is then extracted, translated into a format desired by the client, and delivered. The data might be meter readings collected, e.g., on a daily or hourly basis throughout a month, then delivered to the client 14 early in the following month. The data collections are made by programming the meters 12 to initiate contact periodically or by contacting the meters 12 on a scheduled basis. The delivery to the client 14 can be in the form of a flat file that is copied to a website or a client machine.
A Data Available signal 40 is preferably sent by the Collect Endpoint Data process 24 to the Parse process 26 to promote the immediate parsing of data as soon as it is collected, without requiring the Parse process 26 to poll for data.
A Translate Now signal 42 is preferably sent by the Parse process 26 to the Translate process 28 to handle priority outage detection data.
Meters or power line monitors initiate contact when an outage is detected, and the data is parsed and stored in the Common Format Database 30. As soon as it arrives, however, it is delivered to the client 14. The Translate Now signal 42 notifies the Translate process 28 that there is priority data available to be delivered. In this case, the delivery might be a sequence of individual outage records written to a continuous connection to a client machine. Each outage record is accordingly delivered as soon as it arrives.
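A simplified sketch of this priority path, assuming an in-memory queue stands in for the Common Format Database and a direct function call stands in for the Translate Now signal:

```python
# Sketch of the priority outage path: parsing a priority record fires a
# Translate Now style notification so delivery happens immediately.
import queue

priority_store = queue.Queue()  # stands in for priority data in the database

def on_translate_now(client_id):
    # Deliver each pending outage record as soon as it arrives.
    while not priority_store.empty():
        print(f"immediate delivery to {client_id}: {priority_store.get()}")

def parse_outage(raw):
    record = {"device_id": raw["device_id"], "event": "outage"}
    priority_store.put(record)       # stored, then delivered at once
    on_translate_now(raw["owner"])   # Translate Now: priority data waiting

parse_outage({"device_id": "meter-007", "owner": "utility-client"})
```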
The Parse process 26 preferably also supports another mode of operation, in which data is pulled from the External Data Source 16 rather than from the Collect Endpoint Data process 24. This mode is typically used when data is to be pulled from an outside collection system or an archive of previously collected data, or for data that has undergone some external analysis. Such a parsing activity is ordinarily scheduled or directed to occur by an operator.
The Parse and Translate processes 26, 28 preferably operate either on a scheduled basis or on a signaled basis. This allows several other modes of data path operation, though they are not utilized as often. For example, an external database could be pulled in on a scheduled basis, then delivered to the client immediately, using a Translate Now signal 42.
Alternatively, data could be allowed to pool in Collect Endpoint Data 24 and parsed periodically on a scheduled basis, ignoring the Data Available signal 40.

The system 10 preferably supports the archiving of data that accumulates in the system. Accordingly, a permanent record of the data can be kept without using up system storage resources. The process of archiving data and deleting it from the system stores is under system control via translation rules. In order to support this archiving activity, all data to be archived preferably resides in an ODBC-compliant database.
The data to be archived is specified by gather criteria. The rules define the destination and format of the archive, and the marking or deletion of the original system data. For example, the data for a particular owner and date range might be selected for archiving, delivered to an external flat file, and deleted from the system database. The provisioning of an owner of data preferably includes a minimum data retention time so that the conditions for selecting data to be archived can include this minimum retention time.
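A sketch of rule-driven archiving under these assumptions, with sqlite3 standing in for the ODBC-compliant database and a CSV flat file as the archive destination; the schema, table name, and retention value are illustrative:

```python
# Sketch of rule-driven archiving: select an owner's records older than
# the owner's minimum retention time, write them to a flat file, then
# delete them from the system store. Schema and names are assumptions.
import csv, sqlite3, time

MIN_RETENTION_SECONDS = 30 * 24 * 3600  # provisioned per owner

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE service_records (owner TEXT, collected_at REAL, value REAL)")
db.execute("INSERT INTO service_records VALUES ('acme', ?, 1532.0)",
           (time.time() - 60 * 24 * 3600,))  # a 60-day-old record

cutoff = time.time() - MIN_RETENTION_SECONDS
query = ("SELECT owner, collected_at, value FROM service_records "
         "WHERE owner = 'acme' AND collected_at < ?")
with open("archive_acme.csv", "w", newline="") as f:
    csv.writer(f).writerows(db.execute(query, (cutoff,)))  # archive to flat file

db.execute("DELETE FROM service_records WHERE owner = 'acme' AND collected_at < ?",
           (cutoff,))
db.commit()
```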
Scalability
In order to provide scalability in terms of both size and performance, the system can optionally be partitioned onto multiple machines. Each machine can provide additional storage or processing hardware to improve system capability. The machine layout of the system can be determined at installation time, and can preferably be changed without rebuilding the system source code.

Data storage capacity can also be scaled if desired. For scaling of data storage capacity, it is preferred to store such data in a database, and rely on an underlying database technology that supports splitting a single logical database across multiple machines. This allows the scaling to occur via tools supplied with the database without any knowledge by the system using the database that it may be resident on multiple machines.
System performance may be enhanced if desired by splitting different processes onto separate machines so their activities do not need to share processing power. Performance may also be enhanced by creating multiple instances of a single process, each running on a separate machine, where coordinating the activities of such multiple instances is beneficial.
FIGURES 3-5 illustrate some examples of possible machine topologies. In the drawings, processes are shown in bubbles, and machines are shown as boxes surrounding a set of processes implemented on that machine. Processes that are implemented on a per-machine basis are replicated on each machine, and are not shown in the topology drawings.
In the drawings, the communications paths for system data flow are shown between the processes/machines. Actual communications are preferably made via a secure wide area or local area network so that any process could theoretically communicate with any other.
FIGURE 3 shows a simple machine topology comprising a single machine 44 hosting an entire system having one of each of the Collect Endpoint Data, Parse and Translate processes 24, 26, 28.
FIGURE 4 shows a system 46 with an additional instance of the Collect Endpoint Data process 48 running on a separate machine 50.
The FIGURE 5 example illustrates a tree-type topology of a highly distributed system. As processes are added, they are related by the communications paths required to be k-way trees from a root at a Common Format data store 30A, which in this case occurs only once per system.
Even though the Common Format store 30A may be implemented across multiple machines, any of those machines can be viewed as a location for the root of the machine trees. There is a tree of fan-in from leaves at Collect Endpoint Data 24, through branches at Parse 26, to the root at Common Format 30A. There is a tree of fan-out from the root at Common Format to leaves at Translate 28. The example shown is highly distributed, but it should be noted that the maximum distribution is limited only by how big a tree can be provisioned.
Each process preferably has its own provisioning. In the highly distributed topology shown, each instance of Collect Endpoint Data 24 is separately provisioned to gather its own endpoint data without knowledge of the other endpoints. Each instance of Parse 26 has its own parsing rule sets, designed to handle the data being collected by its subordinates. Each instance of Translate 28 has its own schedules for reporting data to its own set of clients, separate from the clients served by any other instance of Translate.
Message/Error Reporting
Each process preferably generates messages conveying activity and any error information. Each message preferably has a message code, message text, and provisioning that includes a presentation text, a message category, whether the message requires acknowledgment, whether the message represents a persistent condition with a repetition rate, a priority level, and where the message should be reported (message log or SNMP or both). The priority level and message category associated with a message provide a means of filtering messages. Each process preferably constantly reports the progress of all transactions, as well as any errors that occur, so that great levels of detail are available if desired for troubleshooting and analysis. Most of the time, however, an operator will be interested in viewing only a small subset of the available information. A per-user verbosity selection can be used to select the priority level of messages that are displayed to that user. A similar per-user message category can be used to limit the messages presented to those in a category.
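A minimal sketch of this per-user filtering, assuming a verbosity threshold on priority level and a set of permitted categories; the field names are illustrative:

```python
# Sketch of per-user message filtering by priority level and category.
from dataclasses import dataclass

@dataclass
class Message:
    code: int
    text: str
    category: str    # e.g. "collection", "delivery"
    priority: int    # lower number = higher priority

def visible_to(user, messages):
    # The per-user verbosity selects the priority threshold; the per-user
    # category set limits which message categories are presented.
    return [m for m in messages
            if m.priority <= user["verbosity"]
            and m.category in user["categories"]]

log = [Message(101, "session complete", "collection", 5),
       Message(500, "endpoint unreachable", "collection", 1)]
operator = {"verbosity": 2, "categories": {"collection"}}
for m in visible_to(operator, log):
    print(m.code, m.text)   # only the high-priority collection message
```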
Security
The Handle Requests process 32 can grant access to the system 10, preferably on a connection basis, based on a user name and password.
Each user preferably has a specific set of allowed activities conveyed to processes internal to the system. Each process in the system preferably determines for itself what individual permissions may be granted for its subsystem, and what the default state is for each permission. Each process that serves requests verifies that the user has permission to be served for the given request.
For the sake of robustness under loss of connectivity, user permissions are preferably stored with each process or each machine that has user interface processes. This permits the system to continue operating when machines are lost. Any processes or machines still operating and accessible can serve requests with proper access permission control.
Permissions are required to allow an operation to be performed.
Once that operation has been allowed, additional permissions may be invoked to further restrict the operation.
Methods by which data is read are generally controlled by both the operation and data level permissions. The user may be restricted from reading at all or may be allowed to read only certain data. For example, a privileged user might be allowed to create a rule set, and tag it with an owner. A less privileged user might not be allowed to create rule sets, but could read rule sets for certain owners. An even less privileged user might not be allowed to even read rule sets.
In order to apply restrictions with regard to ownership, a means of associating a user with a list of owners is provided. A user is then defined to qualify with regard to ownership if the owner of the data matches any of the owners on the list of owners for the user.
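A sketch of that ownership test, with a hypothetical user-to-owners table:

```python
# Sketch of the ownership check: a user qualifies for a piece of data
# if the data's owner appears on the user's owner list.
user_owners = {"alice": ["acme-utility"],
               "bob": ["acme-utility", "city-water"]}

def qualifies(user, data_owner):
    return data_owner in user_owners.get(user, [])

print(qualifies("alice", "city-water"))  # False: not on alice's owner list
print(qualifies("bob", "city-water"))    # True
```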
Decomposition Data Flow Diagrams
FIGURE 6 illustrates the Collect Endpoint Data process 24 in greater detail. As previously mentioned, the Collect Endpoint Data process 24 is responsible for collecting data from endpoint devices 12 and making it available as device-specific raw data preferably with a device-independent common header that can be used to later identify the parsing rules to be applied to the data. It is also responsible for hiding from other parts of the system differences between various types of endpoint devices.
The Collect Endpoint Data process 24 includes an Endpoint Communication Triplet 52, comprising a collection of three drivers: a Link Driver 54, a Connection Driver 56, and a Protocol Driver 58. The Link Driver 54 is responsible for communications transport mechanisms associated with a given communications medium. For example, TCP/IP and direct serial communications are each supported by a separate Link Driver. The Link Driver 54 hides the communications medium and presents a uniform interface to a connection driver.
The Connection Driver 56 in combination with the Link Driver 54 presents to the Protocol Driver 58 a two-way session with a single endpoint device 12. Examples of hardware controlled by a Connection Driver 56 are a network access switch or a modem. The Connection Driver 56 may be omitted if the endpoint 12 has a generally permanent connection.
The Protocol Driver 58 provides communication with the endpoint device 12. It contains specific knowledge about command structure and message contents for a particular device type. It generally supports the gathering of all the information the device can provide and all pushback data that can be stored in the device. For data collected from an endpoint device, the Protocol Driver 58 determines a raw format for representing the data. It preferably attaches a standard header, which includes a "processed" flag, a device type, a raw format ID, a device ID, the time the data was collected, the duration of the session, and the priority of the data, to the data collected from the endpoint device 12 and places it in a Priority Sensitive Buffer process 60. The Protocol Driver 58 looks in a DPI Specific database 62 for pushback data waiting to be sent to the device 12 whenever a session with the device 12 is active, sends any such pushback data, and marks the data as sent after success.
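The standard header lends itself to a simple record structure. The sketch below uses the fields listed above; the types and field names are assumptions:

```python
# Sketch of the standard header attached by the Protocol Driver.
from dataclasses import dataclass

@dataclass
class StandardHeader:
    processed: bool        # "processed" flag, cleared until parsed
    device_type: str       # later selects the parsing rules
    raw_format_id: int     # identifies the raw record layout
    device_id: str         # system-unique endpoint identifier
    collected_at: float    # time the data was collected
    session_duration: float
    priority: int          # priority of the data

header = StandardHeader(False, "abb-electric", 1, "abb-electric:SN12345",
                        1_000_000_000.0, 4.2, 0)
```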
The Protocol Driver 58 preferably supports the complete command repertoire of the device type it supports. Pushback data comprises a list of individual commands to be sent at the next opportunity. The Protocol Driver 58 determines how to deal with any data returned from each of the supported commands. Simple acknowledgments can be discarded upon receipt, and result in an error if not received. Response data that contains information is packaged in a raw record with a raw format ID unique to the command, and forwarded through a normal Raw Data channel 64.
The Protocol Driver 58 also determines a set of commands to issue and data to gather on every connection with the endpoint 12, and a record format for delivering such data to the Raw Data channel 64. This "normal" raw data record preferably contains everything of possible interest to be learned from the endpoint device 12, within reasonable size limits, so that data not previously of interest can be supported solely by the creation of parsing rules. Data not expected to be useful to collect on a periodic basis can be omitted from the "normal" record, since it is still possible to gather it on a request basis via the pushback data mechanism.
The DPI Specific database 62 contains device-specific communications configuration parameters (e.g., connection address, connection timeout value and retry count) and data to be pushed back to the endpoint. When the Protocol Driver 58 consumes one of the pushback data records, it will generate zero or one record of Raw Data. A piece of pushback data is preferably processed only once, and at least the last pushback data sent to each endpoint for each supported device operation is kept for reference. Toward this end, pushback data is marked as sent after it has been sent, and deleted when a newer piece of pushback for the same endpoint and device operation has been sent. Messages generated during the processing of pushback data can then act as an activity trail.
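A sketch of this pushback lifecycle, with an in-memory list standing in for the DPI Specific database; the record structure and names are assumptions:

```python
# Sketch of the pushback lifecycle: a record is processed once, marked
# as sent, and a superseded record for the same endpoint and device
# operation is deleted once a newer one has been sent.
pushback_store = []  # stands in for the DPI Specific database

def queue_pushback(endpoint, operation, command):
    pushback_store.append({"endpoint": endpoint, "operation": operation,
                           "command": command, "sent": False})

def send_pushback(endpoint, send):
    for rec in [r for r in pushback_store
                if r["endpoint"] == endpoint and not r["sent"]]:
        send(rec["command"])
        # Delete the previously sent pushback for this endpoint/operation...
        pushback_store[:] = [r for r in pushback_store
                             if not (r["endpoint"] == endpoint
                                     and r["operation"] == rec["operation"]
                                     and r["sent"])]
        rec["sent"] = True  # ...and keep the newest one, marked as sent.

queue_pushback("meter-001", "set-interval", b"\x10\x0f")
send_pushback("meter-001", lambda cmd: print("sending", cmd))
```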
The Listener process 64 connects to a port, waiting for connection from a dial-in endpoint 12. Upon detecting connection from an endpoint device 12, the Listener 64 instantiates an Endpoint Communications Triplet 52, passing along the initial connection configuration. The Listener 64 then resumes its connection to the port waiting for the next dial-in endpoint. For each supported device type able to connect to each port, a listener will be instantiated.

A DPI Provisioning database 66 generally contains the complete configuration information necessary to support a DPI. This information includes the device type, the name of the Endpoint Communications Triplet binary, the name of the Listener binary, and a list of ports on which the Listeners are to be instantiated at run time.
The Device Driver Control process 68 writes pushback records through a Validate DPI Specific process 70 to the DPI Specific store 62 (and issues a message to the message log as to the contents of the write), and the Protocol Driver 58 reads records through Validate DPI Specific 70 from the DPI Specific Store 62. When the device exchange completes, a message is sent to the message log and the pushback data record is marked as sent. There is preferably one type of pushback record per "command" supported by the device (or protocol driver).
The Validate DPI Specific process 70 provides validation of device-type specific pushback data and communications configuration parameters both before they are stored in the DPI Specific data store 62 and upon retrieval from that data store. The first instance verifies that incoming pushback data and configuration parameters from the requester interfaces are valid for the device-type prior to storage. Invalid data is preferably immediately signaled to the source requester and the data is rejected. The second instance of Validate DPI Specific 70 verifies that all records directly inserted into the data store by an external agent are valid prior to retrieval by the Protocol Driver 58 for processing. Invalid records detected upon retrieval from the store are logged to the error log and deleted from the data store.
The Device Driver Control process 68 is responsible for creating the Endpoint Communications Triplet necessary to collect data from a given endpoint and for coordinating all requests for endpoint communication.
At times specified in the Collection Schedules, it will instantiate an Endpoint Communication Triplet 52 and instruct it whether to collect endpoint data or just perform a data push back. At start up, the Device Driver Control 68 will instantiate one or more listeners for each communications link that is set up to accept incoming communications.
The Device Driver Control 68 acts on all Device Requests directed at Collect Endpoint Data. These include: adding, deleting, or modifying the Endpoint Provisioning Info Store 72 for an endpoint, scheduling endpoint data collection or pushback, requesting the last sent pushback data for a device, requesting an immediate endpoint transaction, activating or deactivating a DPI, stopping all Collect Endpoint Data activity, or restarting. When Collect Endpoint Data is stopped, any activities in progress are preferably completed, but no new activities are started.
Collection Schedules preferably do not generate Actions while stopped.

Incoming calls or connection attempts are not answered while stopped.
Data may still be pulled from the Priority Sensitive Buffer, but no Data Available signals are generated while stopped.
There are no Device Requests supported to examine data in the Priority Sensitive Buffer. This data can be examined using a database tool, since it can be accessed using an ODBC interface.
The Device Driver Control 68 process is responsible for checking access permissions on Device Requests to insure that the user making the request operates on or receives only permitted data. This may involve ownership restrictions relating to the ownership of endpoints or schedules.
From the perspective of Collect Endpoint Data 24, Device Driver Control 68 attempts to restrict access based on ownership, but uses service functions provided in Common Format Database to determine whether access should be granted with regard to ownership.
The Endpoint Provisioning Information database 72 preferably contains information on Device ID (serial number, Device type), Protocol type, Connection type, Link type, and directionality (dial-in, dial-out, or both). Device ID is an identifier of an endpoint device that is unique across the system. It preferably relies on a serial number or some type of identification provided by the endpoint device itself, but adds information to assure uniqueness such as the device type.

The Endpoint Communications Triplet 52 is a high level concept that describes the relationship between the Link Driver 54, Connection Driver 56, and Protocol Driver 58. It is up to Device Driver Control 68 to create running instances of the Endpoint Communications Triplet 52 to handle communications from any device configured in the system or to delegate this responsibility to the appropriate listener. This configuration is derived from the Endpoint Provisioning Info and DPI Provisioning stores 72, 66.
For example, consider a Listener 64 that waits for a connection to be established, and can support multiple connections. Such a Listener 64 would be activated by Device Driver Control 68 with instructions about which Endpoint Communications Triplet to use. Upon establishment of a connection, the Listener 64 activates the Endpoint Communications Triplet 52 to handle the call, and continues to listen for further connections.
Upon completion of the session, the Endpoint Communications Triplet 52 dissolves.
As another example, consider a need for dialing out to establish a connection with a single device. An Endpoint Communications Triplet 52 would be activated by Device Driver Control 68 with instructions about its configuration. Upon completion of the session, the Endpoint Communications Triplet 52 also dissolves.

The Device Driver Executables database 74 contains the executable files for the Listeners, Link Drivers, Connection Drivers, and Protocol Drivers. This data store is provisioned by a system administrator by copying the files into system defined directories on the computer(s) running Collect Endpoint Data.
As a separate activity from the installation of the software, an operator can preferably activate or deactivate an installed Device-Type Protocol Interface. This capability is useful because it allows a subset of the possible DPIs to be made available for use when not all of them are of interest to a given installation. In addition, it provides a point at which a permission to add a DPI may be granted or denied (such restrictions may not be possible to apply to the installation procedures). It also insures that bogus DPIs cannot appear simply by altering installation files on the disk.
The Scheduler process 76 is programmed by Device Driver Control 68 with Schedule Requests containing actions to be performed at a scheduled time and alerts Device Driver Control 68 when actions must be performed. The Scheduler 76 supports a Done signal to avoid missing events that were in progress when the system goes down.
The Priority Sensitive Buffering process 60 allows high priority data to be processed before lower priority data, but all data with a given priority to be processed in the order received. It provides a high speed data transfer mechanism with a potentially large scalable capacity since the endpoint data is pulled from it, not pushed onto the next processing step.
It is responsible for generating a Data Available signal 78 whenever new data becomes available, and for regenerating such a signal at a tunable period as long as data remains available to insure the signal was not missed. In order to keep from signaling too often in the presence of rapid incoming data, a tunable minimum delay between signals is also supported.
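A simplified sketch of Priority Sensitive Buffering: FIFO queues per priority, pull from the highest priority first, and an optional Data Available callback throttled by a tunable minimum delay (the periodic regeneration of the signal is omitted for brevity; all names are illustrative):

```python
# Sketch of Priority Sensitive Buffering: higher-priority data is
# pulled first, data within a priority stays in arrival order, and a
# Data Available signal is throttled by a tunable minimum delay.
import collections, time

class PrioritySensitiveBuffer:
    def __init__(self, min_signal_gap=0.5, on_data_available=None):
        self.queues = collections.defaultdict(collections.deque)
        self.min_signal_gap = min_signal_gap  # tunable minimum delay
        self.on_data_available = on_data_available
        self._last_signal = None

    def push(self, priority, record):
        self.queues[priority].append(record)  # FIFO within each priority
        now = time.monotonic()
        recently = (self._last_signal is not None
                    and now - self._last_signal < self.min_signal_gap)
        if self.on_data_available and not recently:
            self.on_data_available(priority)  # optional Data Available signal
            self._last_signal = now

    def pull(self):
        # Highest priority first; arrival order within a priority.
        for priority in sorted(self.queues, reverse=True):
            if self.queues[priority]:
                return self.queues[priority].popleft()
        return None

buf = PrioritySensitiveBuffer(
    on_data_available=lambda p: print("data available, priority", p))
buf.push(0, "normal reading")
buf.push(9, "outage record")
print(buf.pull())  # the outage record comes out first
```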
The Data Available signal 78 is made available when a receptor for it can be found (such as when Parse 26 is present), but is not considered an error otherwise. It conveys the priority of the data available to be pulled.
It is intended to enhance performance by notifying an instance of Parse 26 when data is available, so Parse 26 does not need to spend time polling for the data. By making the use of the signal optional, Parse can choose whether to poll for data or be notified, according to whether it supplies a receptor for the signal.
Output Endpoint Data 80 is provided via an ODBC Interface to make the endpoint data appear to an outsider as if it is stored in a database, for ease of integration with customer systems. Using an ODBC interface also allows an incremental development strategy. A degenerate form of Priority Sensitive Buffering, which merely writes data to an ODBC-compliant database, can be used. Arrival time and priority are combined to create a composite key for the database. These columns are required by the "standard header." Alternatively, the database can be replaced with the high performance buffering, with separate storage spaces for each priority, and an ODBC server driver to make the data appear to come from a database, without changing the interface that outsiders see.
Parse
FIGURE 7 illustrates the Parse process 26 in greater detail. The Parse process 26 includes a Scheduler 82 responsible for generating Actions that convey Parse Gather Criteria and Parse Rules to be performed on a time-based schedule, or on an immediate basis either when a Data Available signal arrives or on request by an operator. The Actions provided by the Scheduler 82 convey the Parse Gather Criteria 84 and Parse Rules 86 to be used, and a time period to allow a Rules Engine 88 to remain instantiated after it has returned a Done signal. The Scheduler 82 is also responsible for formatting schedules for storage and for detecting errors occurring in the schedule requested.
The Parse Thread Control process 90 creates an instance of Rules Engine 88 running in its own thread to handle each parsing activity, and is then free to accept further Actions and create further instances of Rules Engines 88. Parse Thread Control 90 is responsible for recognizing when a given instance of Rules Engine 88 has completed its delivery, via its Done signal, and for returning a corresponding Done indication to the Scheduler after any specified timeout has been served. Note that there may be parsing activities that never terminate (for immediate delivery to a client), as long as new data continues to arrive before the timeout.
The Data Available signal 92 indicates the source of the request, and the priority of the records available. All records moved from data collection into the Parse function are on an "immediate" basis, but records in the "priority" store are to be moved before any that may exist in the "normal" store. The Parse process 26 preferably handles multiple priorities to provide for flexibility. The Immediate Actions store 94 contains actions with references to the Parse Gather Criteria store, which identifies the type of data to be pulled over, and the ODBC data connection from which to pull it. The actions also refer to Parse Rules 86 to be applied, which identify the starting point for the desired transformation of individual records. The Parse Schedules store 96 contains actions with similar references to Parse Gather Criteria 84 and Parse Rules 86, but are associated with a scheduled time when they are permitted to transfer data.
When the Rules Engine 88 of Parse is presented with a set of Actions to be performed, it takes responsibility for pulling data from an external source, through an ODBC interface, transforming it into Common Format Data, and storing it via an ODBC interface. The Rules Engine 88 pulls records in based upon the location specified in the Parse Gather Criteria store, for as many records as match the Parse Gather Criteria 84.
The Rules Engine 88 is capable of issuing an optional Translate Now signal 98, which is intended to speed the processing of priority data by the Translate process. The generation of this signal 98 is under control of the rules by which the Rules Engine operates. The signal conveys not only when to perform the translation but also which client translation is desired.
When the requested actions have been performed, the Rules Engine 88 notifies the Scheduler 82 via the Done signal 100, so that it can mark the event generating the actions, if any, as completed.
The Handle Parse Requests process 102 provides a means for changing provisioning data directly in the Parse Rules and Parse Gather Criteria stores, and indirectly in the Parse Schedules and Immediate Actions stores 96, 94 through the Scheduler 82. The requester interface for Parse is contained in Handle Parse Requests, to provide:
a) Parse schedule changes indirectly through the Scheduler: set event date, time, action references, start date, end date, and delta time if appropriate;
b) Immediate action requests indirectly through the Scheduler: set action references;
c) Parse Gather Criteria directly: set data source location based upon data type (priority level, external ODBC source), and qualifiers for data to pull in;
d) Parse Rules directly into the store: set rules to use on records as they are pulled in, based upon action references; and
e) Stop/Start capability: while stopped, any activities in progress are completed, but no new activities are begun. The Scheduler does not generate Actions from Parse Schedule and ignores (but holds pending) incoming Data Available signals.
Common Format Database
FIGURE 8 illustrates the Common Format Database 30 in greater detail. The Common Format Data store 30A contains information useful for a client 14 and is used as a means of selecting data to be delivered to the client. This includes collected data (service records), but also includes provisioning information that relates a client owner to an endpoint, and information about a client.
The data path for collected data generally involves direct ODBC access to the Common Format Data store 30A, pushing service records into the store (Common Data), or pulling data of interest to a particular client (Client Data). The schema of the service record data is controlled by the system operator so all reads and writes of service records occur via Gather Criteria and Rules, which are also controlled by the system operator.
The Handle Provisioning Requests process 104 services requests to view or modify owner provisioning. It also provides service functions used within the system for determining whether a user has permission to access a certain endpoint or piece of provisioning data based on its ownership.
The goal is to house all owner relationships in the Common Format Data store, and wrap them with service functions for validating ownership. In this way, the owner relationships are also available to Translation Gather Criteria.
Translate
FIGURE 9 illustrates the Translate process 28 in greater detail. The Translate process 28 includes a Scheduler process 106, which is responsible for generating Actions to be performed, either from Immediate Delivery Actions 108 (when a Translate Now signal arrives), from a Delivery Schedule, or on request by an operator. The Actions stored in the Scheduler 106 represent a set of Client Gather Criteria 110, Translation Rules 112, and a Client Delivery Method 114 to be used by the Deliver To Client process 116. For Immediate Delivery Actions 108, the Translate Now signal 98 preferably conveys the same information since there may be Immediate Actions stored for several clients.

For actions generated on a Delivery Schedule 110, the scheduled event of interest is generally the delivery of data to the client 14, and whatever delay transpires to translate the data is acceptable, so that independent scheduling of translation and delivery is not required.
The Deliver To Client process 116 is responsible for retrieving service records from the Common Format Data store, then translating and delivering the data in the desired format to the client site.
The Handle Client Requests process 118 provides a means for changing provisioning data, and a Stop/Start capability. While stopped, any activities in progress are continued until completion, but no new activities are begun. Also, the Scheduler 106 does not generate Actions from Delivery Schedule 110 and ignores (but holds pending) incoming Translate Now signals 98.
Rules Engine
FIGURE 10 illustrates the Rules Engine 88, which is common to both Parse 26 and Translate 28, in greater detail. The Rules Engine 88 is responsible for pulling data from an external ODBC data source when signaled, transforming that data, and delivering it to a data sink, along with an optional signal. When the data has been delivered to the data sink, the source record(s) can be updated as "processed" or deleted at the discretion of the rules.

The Rules Engine 88 is preferably run in its own thread, which is managed by its creator. It preferably does not deal with thread creation or management.
The Actions supplied to the Rules Engine convey the specific Gather Criteria 84 and set of Rules to be used.
The Query Database process 120 requests data from the ODBC source using an SQL query as defined by the Gather Criteria 84 specified with the Actions. Data is retrieved until no more data is available (one signal could result in many records being retrieved). These Gather Criteria 84 include: the ODBC data source identifier, table and row structure and selection information (from which the actual SQL query is created), and information on how to mark the source record as processed (update or delete information). Actions may arrive many times during the period of time that a Rules Engine 88 is instantiated. This allows immediate transformation to be supported for data that arrives occasionally, without forcing a new instantiation of the Rules Engine to be created each time a piece of data arrives.
The Actions also specify Rule identification information, and destination and signal specifications. Upon retrieving the first record, the Query Database 120 creates an instance of the Record Engine 122, and passes this information in Record Eng Init. As Query Database 120 retrieves "records" (data from "joined" rows across several tables) from the data source, it indicates Record Available to the Record Engine 122.
The passing mechanism is implemented as a "pull" type interface, in which the Record Engine 122 is called for each record, after which the Record Engine 122 asks for each particular field's data as it is needed. (This is preferable to a "push" type interface, in which the table data is physically merged into a single data block and sent to the Record Engine 122 along with the block structure.) Once the Record Engine 122 has completed and returned a Done indication, the Query Database optionally updates the source database to mark the record as processed or to delete it.
The Record Engine 122 pulls a single source data record, applies its rule set to the data, and pushes zero or more destination records. There can be more than one destination for output records, to support, in Translate, the generation of accounting records to a different location than the client data.
The Record Engine 122 generates an optional signal based on the Rules. Signals are intended to give immediate notification to a follow-on process that data is available. They are intended for use by data paths that remain instantiated indefinitely, handling data as it arrives and signaling its availability to the next stage of processing. A given set of rules generates one signal per record. There is preferably only one signal available to be generated, but the value it conveys and the process to which it is sent is determined at the time the Record Engine 122 is instantiated, so each instantiation can have a unique signal. When the Record Engine 122 generates a signal, it waits for a Done from the signal before returning Done to the Record Available indication that started it.
The rules define how each field is modified during this process.
There can be many source records that result in modification of internal variables without generating any destination records. There can be one source record that generates many destination records. There can be one source record that results in a single destination record. Even when there is a one-to-one relationship between source records and destination records, the destination records may not all have the same format. There may be a special destination record that precedes all others, or a special destination record that follows all others. Fields can be mathematically modified (by a constant), have characters added or removed, and have Boolean operations applied. The rules can contain simple conditional testing and branching in the form of nestable If-Else-Endif statements.
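A sketch of one such rule set applied by a hypothetical Record Engine, showing modification by a constant, character removal, and a simple If-Else-Endif test; the rule representation is an assumption (Python 3.9+ for removeprefix):

```python
# Sketch of a rule set applied per source record: scale a field by a
# constant, remove characters, and branch with If-Else-Endif logic.

def apply_rules(source_record):
    out = {}
    out["device_id"] = source_record["device_id"].removeprefix("meter-")  # remove characters
    out["kwh"] = source_record["raw_kwh"] * 0.001                         # modify by a constant
    if out["kwh"] > 100.0:          # If
        out["flag"] = "high-usage"
    else:                           # Else
        out["flag"] = "normal"      # Endif
    return [out]  # zero or more destination records per source record

print(apply_rules({"device_id": "meter-001", "raw_kwh": 153200}))
```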
The ultimate destination of data generated by the Record Engine may be either ODBC or flat file, but preferably is not determined by the Record Engine. The Record Engine generally produces Field Data output that includes the destination specification (database name or filename), table name, field name, and data. This form of output makes it immediately usable when the destination is ODBC. When flat file output has been specified, the receiver of the data is expected to write it sequentially to the output flat file, without regard to any destination table or field names specified.
On a given data path instantiation, records are handled sequentially from source to the next store. In this way, the source record is not marked as processed until any destination records have been safely delivered to the next nonvolatile store in the processing path. Depending on the database technology, the ODBC interface may or may not handle queuing requests with later performance guaranteed. From the perspective of the Record Engine, it waits for each record to be written before proceeding.
Scheduler
FIGURE 11 illustrates the Scheduler process, which is common to the Parse, Translate, and Collect Endpoint Data processes 26, 28, 24, in greater detail. The Scheduler handles both time-based schedule events and immediate events initiated by a signal. In order to be applied in each of its different contexts, the Scheduler disseminates a set of Actions that have been stored along with the schedule or signal information. Actions may include processes to be activated, data gathering criteria, rules, or delivery methods. They are preferably represented in a form whereby the Scheduler can retrieve them and pass them on, without understanding what they represent.
The Maintain Time-Based Schedule process 124 accepts Time-based Schedule Requests 126, which include requests to add or delete an event from the schedule. It stores Time-based Schedules 126 in the store provided by its context, along with any state information it needs to control the processing of the time-based events. A time-based event can either be specified as a one-time event or a recurring event.
When a recurring event is added to the schedule, the following information is preferably supplied:
Recurring Delta Time (minutes, hours, days, weeks, months, or years between repetitions);
Actions to be passed on to other processes;
Start Date/Time; and
End Date/Time.
The Maintain Time-Based Schedule process 124 copies the Start Date/Time to the Next Occurrence Date/Time for a recurring event when it is initially stored, and updates the Next Occurrence Date/Time thereafter whenever the event recurs.

When a recurring event is read from the schedule, the following information is preferably supplied:
Recurring Delta Time (minutes, hours, days, weeks, months, or years between repetitions);
Actions to be passed on to other processes;
Start Date/Time;
End Date/Time; and
Next Occurrence Date/Time.
Preferably, the Start Date/Time is ignored and not needed after initial storage, but is retained for informational purposes. Also, if a Start Date/Time well in the past is supplied, many events may be generated right away, until the Next Occurrence Date/Time lands in the future.
When a one-time event is added to the schedule or read from the schedule, the following information is preferably supplied:
Date/Time; and
Actions to be passed on to other processes.
The Maintain Time-Based Schedule process 128 is responsible for generating Actions whenever a time-based event comes due (is in the past). It generates Actions for multiple events without waiting for a Done indication from any of them. When a set of Actions indicates Done, the Maintain Time-Based Schedule either removes the schedule entry (for one-time events) or calculates the Next Occurrence Date/Time (for recurring events). If the Next Occurrence Date/Time for a recurring event falls past the End Date/Time, the event is removed. The purpose of the Done signal is to insure that each event has been completed before removing it. If the Scheduler's machine should fail before the event is complete, the event Actions must be reissued the next time the Scheduler is started.
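A sketch of this recurring-event handling, using fixed-length datetime deltas for simplicity; note how a Start Date/Time well in the past fires repeatedly until the event expires, as described above. All names are illustrative:

```python
# Sketch of recurring-event handling: on Done, the Next Occurrence
# Date/Time advances by the Recurring Delta Time, and the event is
# removed once it falls past the End Date/Time.
from datetime import datetime, timedelta

schedule = [{"next": datetime(2024, 1, 1), "delta": timedelta(days=1),
             "end": datetime(2024, 1, 4), "actions": ["collect meter-001"]}]

def on_done(ev):
    ev["next"] += ev["delta"]        # advance to the next occurrence
    if ev["next"] > ev["end"]:
        schedule.remove(ev)          # recurring event has expired

def tick(now):
    for ev in list(schedule):
        while ev in schedule and ev["next"] <= now:  # event has come due
            print("issue actions:", ev["actions"])
            on_done(ev)  # in the real system, Done arrives asynchronously

tick(datetime(2024, 1, 5))  # a start well in the past fires repeatedly
```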
There may be a great number of time-based events (e.g., endpoint collection schedules), so the efficiency of locating events that are candidates to come due, and of locating events to be marked as completed, is a prime consideration.
The Maintain Time-Based Schedule process 128 accepts stop and start requests as part of Time-based Schedule Requests 126. While stopped, no new scheduled Actions are generated. Any outstanding Done signals are still processed when they arrive. Schedules can still be modified. On a start request, Maintain Time-Based Schedule 128 resumes its normal operation, generating any Actions whose time has come due.
The Maintain Immediate Schedule process 124 accepts Immediate Schedule Requests 130, which include requests to add or delete Actions to be performed in response to an Immediate Signal. It stores Immediate Actions 130 in the store provided by its context, along with any state information it needs to control the processing of the Actions.

When an Immediate Action 130 is added or read, the following information is supplied:
Value of the Immediate Signal on which to generate the Actions; and
Actions to be passed on to other processes.
The Maintain Immediate Schedule process 124 is responsible for generating Actions whenever an Immediate Signal 132 with the proper value arrives. It generates an error if a signal arrives for which it has no Actions. It generates multiple Actions, preferably without waiting for a Done indication from any of them. The Maintain Immediate Schedule 124 ensures that there is only one set of outstanding Actions for a given signal value. When Actions for a particular signal value have been issued, further occurrences of the same signal value are recognized but held pending until the Done indication for that signal value arrives, at which time the Actions are generated again.
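A sketch of this one-outstanding-set rule under stated assumptions (the class shape and callback names are illustrative, not the patent's interfaces):

```python
class ImmediateSchedule:
    def __init__(self, actions_by_value, emit):
        self.actions_by_value = actions_by_value  # signal value -> Actions
        self.outstanding = set()   # signal values with Actions in flight
        self.pending = set()       # values that recurred while outstanding
        self.emit = emit           # callback that issues the Actions

    def on_signal(self, value):
        if value not in self.actions_by_value:
            raise KeyError(f"no Actions provisioned for signal {value!r}")
        if value in self.outstanding:
            self.pending.add(value)          # recognized, but held pending
        else:
            self.outstanding.add(value)
            self.emit(self.actions_by_value[value])

    def on_done(self, value):
        self.outstanding.discard(value)
        if value in self.pending:            # re-issue for the held signal
            self.pending.discard(value)
            self.on_signal(value)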
Immediate Actions preferably are not removed automatically; they are added and deleted by request.
The Maintain Immediate Schedule process 124 accepts stop and start requests as part of Immediate Schedule Requests 130. While stopped, Immediate Signals are ignored and no new Actions are generated. Any outstanding Actions are allowed to complete, but Done signals that arrive do not cause any further Actions to be generated. Immediate Actions can still be modified. On a start request, Maintain Immediate Schedule 124 resumes its normal operation, generating Actions when Immediate Signals arrive, but it does not generate Actions for any signals that arrived while it was stopped.
Deliver to Client Process
FIGURE 12 illustrates the Deliver to Client process 116 in greater detail. The information specifying what to do to accomplish a specific client delivery arrives at a Delivery Thread Control process in the form of Actions, which include the Client Gather Criteria 110, Translation Rules 112, the Delivery Method to be used, and an amount of time to allow the delivery mechanics to remain instantiated before dissolving them. The Delivery Thread Control process creates an instance of a Delivery Manager 134 process running in its own thread to handle a complete delivery, and is then free to accept further Actions and create further instances of Delivery Manager. The Delivery Thread Control process 136 recognizes, via its Done signal, when a given instance of Delivery Manager 134 has completed its delivery, and returns a corresponding Done indication to the source of the Actions after any specified timeout has been served.
There may be deliveries that never terminate (for a continuous connection to a client) as long as data continues to arrive within the timeout period.

The Delivery Manager 134 preferably deletes the contents of any flat file that is to be used as a buffer for collecting data to be later copied to the client. Such a buffering situation is determined by the Delivery Method. By deleting the contents of such a buffer file prior to creating a Rules Engine to fill it, a single named buffer file can support a "live" continuous connection with intermittent deliveries. Data files that may reside directly at the client are generally never deleted or emptied; data is appended to them.
The Delivery Manager 134 instantiates a Rules Engine 138. The Rules Engine 138 is given the Actions to be performed as a means of identifying the Client Gather Criteria 110 and Translation Rules 112 to be used. Based on the Delivery Method, the Delivery Manager 134 selects either a flat file or an ODBC output mechanism to supply to the Rules Engine 138. For flat files, the file name is part of the information in the Delivery Method.
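A sketch of how this output-mechanism choice might look, assuming a small dictionary-shaped Delivery Method; the field names, and the use of the third-party pyodbc module for the ODBC case, are assumptions for illustration:

```python
import pyodbc  # third-party ODBC bindings; assumed available for the ODBC case

def make_output_mechanism(delivery_method):
    """Return a callable the Rules Engine can use to emit one record."""
    if delivery_method["kind"] == "flat_file":
        # The file name travels inside the Delivery Method; buffer files are
        # emptied by the Delivery Manager before the Rules Engine starts.
        out = open(delivery_method["file_name"], "a", encoding="ascii")
        return lambda record: out.write(str(record) + "\n")
    if delivery_method["kind"] == "odbc":
        cursor = pyodbc.connect(delivery_method["dsn"]).cursor()
        # Each record is assumed to be a parameter tuple for provisioned SQL.
        return lambda record: cursor.execute(delivery_method["insert_sql"], record)
    raise ValueError(f"unsupported Delivery Method: {delivery_method['kind']}")
```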
The Rules Engine 138 gathers the desired data, translates it, and delivers output using the output mechanism supplied to it. Depending on the rules, it may need to create accounting output to a destination distinct from the client delivery.
The Delivery Manager 134 instantiates any communications drivers that are needed to support the delivery, based on the delivery method specified. There are many possible delivery methods supported by the design. In the most general case, a trio of drivers 140 is used to layer the communications protocol.
A Client Link Driver 142 is responsible for communications transport mechanisms associated with a given communications medium. For example, TCP/IP and direct serial communications would be supported by separate Client Link Drivers. The purpose of the Client Link Driver 142 is to hide the communications medium and present a uniform interface to a Client Connect Driver 144.
A Client Connect Driver 144, in combination with a Client Link Driver 142, presents to the Client Protocol Driver 146 a two-way session with a single client machine. The Client Connect Driver 144 may be absent if the client has a "permanent" connection.
The Client Protocol Driver 146 provides communication with the client 14. It contains specific knowledge about command structure and message contents for a particular communications method. It uses this knowledge to facilitate the delivery of Client Data, passed to it via the Delivery Manager 134, to the client 14.
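The layering might be captured as three small interfaces, each hiding one concern from the layer above; the class names mirror the drivers just described, but the method signatures are assumptions, not the patent's API:

```python
from abc import ABC, abstractmethod

class ClientLinkDriver(ABC):
    """Hides the medium (TCP/IP, direct serial, ...) behind raw transport."""
    @abstractmethod
    def send(self, raw: bytes) -> None: ...
    @abstractmethod
    def receive(self) -> bytes: ...

class ClientConnectDriver(ABC):
    """Builds a two-way session with one client machine on top of a link.

    May be absent entirely when the client connection is "permanent".
    """
    def __init__(self, link: ClientLinkDriver):
        self.link = link
    @abstractmethod
    def open_session(self) -> None: ...
    @abstractmethod
    def close_session(self) -> None: ...

class ClientProtocolDriver(ABC):
    """Knows one client's command structure and message contents."""
    def __init__(self, session: ClientConnectDriver):
        self.session = session
    @abstractmethod
    def deliver(self, client_data: bytes) -> None: ...
```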
The set of delivery communications drivers 140 is optional and may not be needed in all cases. In fact, many of the typical Delivery Methods will not require them. Specifically, in the case of ODBC delivery, the Rules Engine output mechanism delivers the data directly to the client 14 and no further communications are needed. In the case of a flat file delivery via a network file server, either directly to the client or to a location where the client can obtain the data, the communication is handled by the server connection, and no further communications are needed. Communications drivers are needed only when the delivery involves communications via a protocol not supported elsewhere.
Upon receipt of a Deliver Now signal or a Done signal from the Rules Engine 138, the Delivery Manager 134 consults the Delivery Method to determine whether there is a flat file to be transferred. If there is, it makes the transfer to the client, passing the data (or a handle to it) to the Client Protocol Driver 146. When the data has been delivered on a Deliver Now signal, the flat file contents just delivered are deleted and a Done response to the Deliver Now signal is generated. This allows the Rules Engine 138 to proceed to send more data to the flat file. Such a technique supports immediate delivery of data over a continuous connection via a single buffer flat file. If no file transfer is implied by the Delivery Method, the data can be assumed to have been delivered directly from the Rules Engine 138.
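This Deliver Now handling reduces to a short routine; the function and field names are hypothetical, and protocol_driver stands in for the Client Protocol Driver 146 of the sketch above:

```python
def on_deliver_now(delivery_method, protocol_driver, send_done):
    """Handle a Deliver Now (or Rules Engine Done) signal, per the text above."""
    if delivery_method.get("kind") != "flat_file":
        # No file transfer implied: the Rules Engine delivered directly.
        send_done()
        return
    path = delivery_method["file_name"]
    with open(path, "rb") as f:
        protocol_driver.deliver(f.read())   # transfer buffered data to client
    open(path, "wb").close()                # empty the buffer file just delivered
    send_done()                             # Rules Engine may now send more data
```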
Handle Requests
The Handle Requests process 32 (shown in FIGURE 2) authenticates users via a login procedure. It provides connectivity between a machine or operator requester 20, 18 and the processes providing user interface methods, and it provides such processes with an indication of which user is making the requests, so that each process can impose access permission restrictions for the user establishing the connection. The responsibilities of Handle Requests 32 are preferably fulfilled by CORBA without any further software implemented on the system.
By using CORBA to connect directly to a handful of user interface objects comprising the visible part of the system, external requesters receive the connectivity to the user interface methods they require.
CORBA allows a requester to determine which user interface objects are present in the installation, and to connect to the ones that have user interface methods of interest. The CORBA security service is used to authenticate users with a password before they qualify to establish connections with these user interface objects. Once an object to object connection has been established, the CORBA security service provides a means of identifying the user who established the connection so that the request server can impose the appropriate access restrictions.
Requests are directed to a particular instance of a process, preferably identified by a unique name. Instances of the same class of process (e.g., instances of Collect Endpoint Data) are given unique names by which they can be distinguished, so that a request can be directed to a particular instance (e.g., a request for the collection schedules for Collect XYZ Endpoint Data versus the collection schedules for Collect ABC Endpoint Data).
Each request preferably receives a single response. To support requesters who do not know the instance of a process to which to address a request, a means for determining all the instances of a given process is provided by CORBA. For example, a requester might wish to collect immediate data from a particular device ID without knowing which instance of Collect Endpoint Data 24 has the information to dial out to that device. The requester can first get a list of all instances of Collect Endpoint Data 24, then pass its request to each of them one at a time until a positive response is obtained.
As another example, a requester might wish to see the pending pushback data for all instances of Collect Endpoint Data 24. The requester can first get a list of all instances of Collect Endpoint Data 24, then request the pending pushback data from each one, and collect the results into a single presentation.
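Both patterns amount to iterating over the instance list that CORBA supplies; in sketch form (instances and ask stand in for the CORBA lookup and request machinery, and the names are illustrative):

```python
def first_positive_response(instances, ask):
    """Try each named instance until one answers positively (the dial-out case)."""
    for name in instances:
        response = ask(name)
        if response is not None:
            return name, response
    raise LookupError("no instance gave a positive response")

def gather_from_all(instances, ask):
    """Query every instance and merge results into one presentation (pushback case)."""
    return {name: ask(name) for name in instances}
```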
Manage Tasks
The Manage Tasks process 32, shown in greater detail in FIGURE 13, preferably operates on a global, machine-dependent view of the system. To monitor the health of machines and satisfy a hardware watchdog that may be present on each such machine, the process 32 is preferably distributed across all machines in the system. Each machine implements a peer portion of Manage Tasks.
Toward this end, each machine preferably has its own Monitor Local Processes process 148, which is responsible for monitoring the health of the processes on its machine. Since Monitor Local Processes 148 runs on the local machine, it is in a position to satisfy a hardware watchdog (if present). When its machine fails (or it otherwise fails to run), the hardware watchdog will fail to be satisfied and will restart that machine.
Whenever an instance of a process to be monitored is created, it registers with its local Monitor Local Processes 148 and provides a periodic signal of health thereafter. When a monitored process terminates, it provides a termination indication to Monitor Local Processes 148. If any monitored process fails to report, Monitor Local Processes 148 generates an error and preferably initiates a shutdown and restart of the local machine.
The Monitor Local Processes 148 also accepts a software trap indication from any process as an indication that a condition that should not be possible has occurred (usually indicating a coding error). Upon receipt of a software trap, Monitor Local Processes 148 initiates a shutdown/restart sequence immediately.
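This health bookkeeping might be sketched as follows; the timeout, the restart hook, and the method names are assumptions for illustration:

```python
import time

class MonitorLocalProcesses:
    def __init__(self, timeout_s, restart_machine):
        self.timeout_s = timeout_s
        self.last_report = {}            # process name -> last heartbeat time
        self.restart_machine = restart_machine

    def register(self, name):
        self.last_report[name] = time.monotonic()

    def heartbeat(self, name):
        self.last_report[name] = time.monotonic()

    def terminated(self, name):
        self.last_report.pop(name, None)  # orderly termination: stop watching

    def software_trap(self, name):
        # An "impossible" condition was reported: restart immediately.
        self.restart_machine(f"software trap from {name}")

    def check(self):
        now = time.monotonic()
        for name, t in self.last_report.items():
            if now - t > self.timeout_s:
                self.restart_machine(f"{name} failed to report")
```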

The Log Local Messages process 150 accepts messages from processes running on a local machine. Messages are primarily intended to convey error/alarm information, but can be used for debugging or informational purposes as well. A single error event message can completely describe a transient error, such as receipt of an anomalous record. A persistent error, such as a full disk or an unavailable database, is represented by an initial error event message and a return-to-normal error event message. Each machine has an instance of Log Local Messages 150 running on it. All messages are reported to the local machine.
Messages arriving at Log Local Messages 150 preferably include a message text, identification of the process that generated the message, an indication of whether the message requires acknowledgment, and an indication of whether the message is to be reported to the SNMP interface, the Message Log, both, or neither. Log Local Messages 150 provides message reports to the Message Log 152 that preferably include the message text, identification of the process that generated the message, a time stamp, an indication of whether the message requires acknowledgment, and an initial indication that the message has not been acknowledged.
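A record carrying the report fields just listed might look like this dataclass (an illustrative assumption, not the patent's storage format):

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class MessageLogReport:
    text: str                      # the message text
    source_process: str            # identification of the generating process
    requires_ack: bool             # whether acknowledgment is required
    timestamp: datetime = field(default_factory=datetime.utcnow)
    acknowledged: bool = False     # initially not acknowledged
```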
Log Local Messages 150 also takes responsibility for periodically logging a new entry in the Message Log 152 for persistent error conditions. Messages that represent persistent errors are expected to arrive with the rate at which they should be repeatedly logged as part of their message provisioning information.
Each machine preferably has its own SNMP Agent 154, which collects process status and forwards errors to an SNMP Manager 156.
This interface is intended to provide read-only information that has been determined to be of interest to an external SNMP Manager. The provisioning of this interface is not supported by the requester interface.
The Message Log store on any given machine preferably contains only the messages generated on that machine. The request handling interface can create a presentation for message requesters that contains all messages for the entire system. This approach provides robustness in the event of lost communication between machines or of machine failures: the operator can still see the messages for the local machine and for any other machines that are accessible. No single machine failure can make all messages unavailable.
In order to support the archiving of messages, the Message Log stores preferably reside in an ODBC database.
The Handle Global Requests process also preferably appears as an instance running on each machine, so that information from the global stores can be made available to a user even in the absence of perfect connectivity among the machines. The Handle Global Requests process serves requests for Message Log data and for updating the Message Log with operator acknowledgments.
SNMP Agent
FIGURE 14 illustrates the SNMP Agent 154 in greater detail. The SNMP Agent 154 and SNMP Manager 156 share a common Management Information Base (MIB) definition 158. The MIB definition file 158 is loaded into the Agent and into the Manager to establish the structure of the data exchanged.
The SNMP Trap 160 collects SNMP Error Reports for errors that have been earmarked for reporting via SNMP, and generates traps to the SNMP Manager via the snmptrap command. The input to snmptrap is a message text, a numerical object identifier, and an error number; thus the SNMP Error Report must include this information or a means of creating it. It is for this reason that Messages include a message number and the object that generated the message, as well as the text of the message.
SNMP Trap 160 uses the message number and its generating object to create a unique error number, and uses the generating object and the machine on which it is instantiated to create a numerical object identifier.
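One way to derive the snmptrap inputs, assuming small integer codes are assigned to objects and machines at provisioning time; the encoding scheme and the enterprise OID are illustrative assumptions, not specified by the patent:

```python
def make_trap_inputs(message_text, message_number, object_code, machine_code,
                     base_oid="1.3.6.1.4.1.99999"):   # hypothetical enterprise OID
    """Return the (message text, numerical OID, error number) trio for snmptrap."""
    # Unique error number from the generating object and the message number.
    error_number = object_code * 10_000 + message_number
    # Numerical object identifier from the object and its host machine.
    object_identifier = f"{base_oid}.{machine_code}.{object_code}"
    return message_text, object_identifier, error_number
```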
An SNMP Manager 156 can query for process status by using standard SNMP mechanisms. The Process Status Reports 162 of interest are collected into a Process Status File 164 for read-only use by the SNMP mechanisms. A Master SNMP Agent 166 provides the standard SNMP capabilities for an agent. An Extensible SNMP Subagent 168 provides the capability to get information from a file. A Process Status Parse Script 170 process takes the needed information from the Process Status File and puts it in the expected MIB format.
The processes in accordance with the invention preferably comprise software, and thus one of the preferred implementations of the invention is as a set of instructions (program code) in a code module resident in the random access memory of one or more general purpose computers. Until required by a computer, the set of instructions may be stored in another computer memory, e.g., in a hard disk drive or in a removable memory such as an optical disk (for eventual use in a CD-ROM drive) or a floppy disk (for eventual use in a floppy disk drive), or downloaded via the Internet or some other computer network. In addition, although the various methods described are conveniently implemented in a computer selectively activated or reconfigured by software, one of ordinary skill in the art would also recognize that such methods may be carried out in hardware, in firmware, or in more specialized apparatus constructed to perform the required method steps.
A representative computer on which the inventive operation is performed includes a processor (e.g., Intel-, PowerPC- or RISC-based), random access or other volatile memory, disc storage, a display having a suitable display interface, input devices (mouse, keyboard, and the like), and appropriate communications devices for interfacing the computer to a computer network. Random access memory supports a computer program that provides the functionality of the present invention.
Although the invention has been described in the context of utility metering, this is not a limitation. The inventive architecture may be implemented in the telecommunications industry to collect endpoint data from a telephone access switch, such as a Lucent Model 5ESS switch.
The collection system would instantiate a DPI to connect and extract phone usage data or be used to provision new phone service via pushback data. This data in turn would be parsed, stored, then ultimately translated and delivered to a client user of the data. That data could be used for customer service, switch maintenance or monitoring, or for telephone billing data requirements. The inventive architecture can additionally provide integration features to interpret data from one telephone company computer application to another telephone company computer application if they do not currently process or interpret data the same way or in the same format.
In addition, the architecture is useful in an integration role in AMR and in any other industry where systems are disparate and need to communicate and exchange data. The system offers parsing, translation, and delivery capabilities to essentially any industry.
Having thus described our invention, what we claim as new and desire to secure by Letters Patent is set forth in the following claims.


Claims (37)

1. A method of processing service usage and status data collected from diverse sources, each source being associated with a client, comprising:
(a) parsing the data into a common format;
(b) translating data associated with a given client from the common format into a format specified by said client; and
(c) delivering said data to said client in the specified format.
2. The method of Claim 1 further comprising assigning data from each source to a client associated with said source prior to Step (a).
3. The method of Claim 1 wherein said sources comprise endpoint service monitoring devices.
4. The method of Claim 1 wherein said sources comprise real-time data sources.
5. The method of Claim 1 wherein said sources comprise non-live data sources.
6. The method of Claim 1 wherein the format specified in Step (b) is compatible with an existing information system of said client.
7. The method of Claim 1 wherein at least one of said steps is controlled by rules-based logic processing.
8. The method of Claim 7 further comprising modifying rules to determine how, when or what data is delivered to a client.
9. The method of Claim 1 wherein said data contains information on utility outage, consumption and tampering.
10. The method of Claim 1 wherein at least one of said steps is performed on an on-request basis.
11. The method of Claim 1 wherein at least one of said steps is performed on a scheduled basis.
12. The method of Claim 1 further comprising storing the data in a common format database after Step (a) until required for client delivery.
13. The method of Claim 1 further comprising archiving the data.
14. The method of Claim 1 further comprising identifying high priority data collected from said sources and prioritizing the processing of said data.
15. A system for processing service usage and status data collected from diverse sources, each source being associated with a client, comprising:
(a) means for parsing the data into a common format;
(b) means for translating data associated with a given client from the common format into a format specified by said client; and
(c) means for delivering said data to said client in the specified format.
16. The system of Claim 15 wherein said sources comprise endpoint service monitoring devices.
17. The system of Claim 15 wherein said sources comprise real-time data sources.
18. The system of Claim 15 wherein said sources comprise non-live data sources.
19. The system of Claim 15 wherein the format specified is compatible with an existing information system of said client.
20. The system of Claim 15 wherein at least one of said elements (a), (b) and (c) is controlled by rules-based logic processing.
21. The system of Claim 20 further comprising means for modifying rules to determine how, when or what data is delivered to a client.
22. The system of Claim 15 wherein said data contains information on service outage, usage, service tampering or corruption of service.
23. The system of Claim 15 wherein at least one of said elements (a), (b) and (c) is performed on an on-request basis.
24. The system of Claim 15 wherein at least one of elements (a), (b) and (c) is scheduled.
25. The system of Claim 15 further comprising a common format database for storing the data parsed by the means for parsing until needed for client delivery.
26. The system of Claim 15 further comprising means for archiving the data.
27. The system of Claim 15 further comprising means for identifying high priority data collected from said sources and prioritizing the processing of said data.
28. The system of Claim 27 wherein said high priority data comprises outage detection data.
29. The system of Claim 15 wherein each of elements (a), (b) and (c) is implemented on a single machine.
30. The system of Claim 15 wherein at least one of the elements (a), (b) and (c) is implemented on a separate machine from said other elements.
31. The system of Claim 15 further comprising means for gathering data from said sources.
32. A computer program product in a computer-readable medium for use in a computer for processing service usage and status data collected from diverse sources, each source being associated with a client, comprising:
(a) means for parsing the data into a common format;
(b) means for translating data associated with a given client from the common format into a format specified by said client; and
(c) means for delivering said data to said client in the specified format.
33. The computer program product of Claim 32 wherein said sources comprise endpoint service monitoring devices.
34. The computer program product of Claim 32 wherein at least one of said elements (a), (b) and (c) is controlled by rules-based logic processing.
35. The computer program product of Claim 34 further comprising means for modifying rules to determine how, when or what data is delivered to a client.
36. The computer program product of Claim 32 further comprising means for identifying high priority data collected from said sources and prioritizing the processing of said data.
37. The computer program product of Claim 36 wherein said high priority data comprises outage detection data.
CA 2296136 1999-01-15 2000-01-14 System for processing and distributing service and status data from diverse sources Abandoned CA2296136A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US23203199A 1999-01-15 1999-01-15
US09/232,031 1999-01-15

Publications (1)

Publication Number Publication Date
CA2296136A1 true CA2296136A1 (en) 2000-07-15

Family

ID=31887674

Family Applications (1)

Application Number Title Priority Date Filing Date
CA 2296136 Abandoned CA2296136A1 (en) 1999-01-15 2000-01-14 System for processing and distributing service and status data from diverse sources

Country Status (1)

Country Link
CA (1) CA2296136A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111221699A (en) * 2018-11-27 2020-06-02 北京神州泰岳软件股份有限公司 Resource association relationship discovery method and device and electronic equipment
CN111221699B (en) * 2018-11-27 2023-10-03 北京神州泰岳软件股份有限公司 Resource association relation discovery method and device and electronic equipment


Legal Events

Date Code Title Description
EEER Examination request
FZDE Dead