WO2004063866A2 - A system and method for processing hardware or service usage and intelligent data caching - Google Patents


Info

Publication number
WO2004063866A2
WO2004063866A2 (PCT/US2004/000186)
Authority
WO
WIPO (PCT)
Prior art keywords
data
service
instances
transaction
instructions operable
Prior art date
Application number
PCT/US2004/000186
Other languages
French (fr)
Other versions
WO2004063866A3 (en)
Inventor
Anthony L. Sorrentino
Michael S. Fischer
Rachel M. Smith
Original Assignee
Sbc Properties, L.P.
Priority date
Filing date
Publication date
Priority claimed from US10/338,172 external-priority patent/US7080060B2/en
Priority claimed from US10/338,560 external-priority patent/US7827282B2/en
Application filed by Sbc Properties, L.P. filed Critical Sbc Properties, L.P.
Publication of WO2004063866A2 publication Critical patent/WO2004063866A2/en
Publication of WO2004063866A3 publication Critical patent/WO2004063866A3/en


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/56: Provisioning of proxy services
    • H04L 67/568: Storing data temporarily at an intermediate stage, e.g. caching
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/60: Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 1/00: Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F 1/16: Constructional details or arrangements
    • G06F 1/18: Packaging or power distribution
    • G06F 1/183: Internal mounting support structures, e.g. for printed circuit boards, internal connecting means
    • G06F 1/185: Mounting of expansion boards

Definitions

  • The present invention relates generally to computer processing and, more particularly, to a system and method for high-volume data processing and the intelligent management of stored data.
  • Many data processing systems operate according to rules-based management systems.
  • the typical product of rules-based systems is a single data processing model containing comprehensive logic that represents all of the business rules for a particular line of business.
  • one consequence of such a program structure is that as each transaction or data record is received for processing, the comprehensive program will typically need to evaluate each business rule to determine its applicability to the current transaction or data record before performing any processing operations.
  • As a result, processing transaction records becomes very hardware intensive and highly inefficient, especially in those rules-based program structures adapted to accommodate a wide variety of transaction record types.
  • While rules-based management systems do provide for the easy incorporation of additional business logic, they typically do not achieve the levels of transaction or data record throughput desired and demanded by most high-volume data processing operations.
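The inefficiency described above can be illustrated with a small sketch that is not from the patent itself: a monolithic rules-based processor must test every business rule against every record, while a type-dispatched design (the approach the patent's service-based architecture generalizes) runs only the handler dedicated to the record's type. All names and record shapes here are hypothetical.

```python
# Hypothetical contrast between monolithic rule evaluation and type dispatch.

def monolithic_process(record):
    # Every rule is evaluated for every record, regardless of relevance.
    checked = 0
    result = None
    for rule_type, handler in ALL_RULES:
        checked += 1
        if record["type"] == rule_type:
            result = handler(record)
    return result, checked

def dispatched_process(record):
    # Only the handler dedicated to the record's type runs.
    handler = DISPATCH[record["type"]]
    return handler(record), 1

ALL_RULES = [
    ("911", lambda r: "emergency:" + r["caller"]),
    ("directory", lambda r: "assist:" + r["caller"]),
    ("toll", lambda r: "bill:" + r["caller"]),
]
DISPATCH = dict(ALL_RULES)

record = {"type": "toll", "caller": "555-0100"}
print(monolithic_process(record))  # evaluates all 3 rules
print(dispatched_process(record))  # evaluates exactly 1 handler
```

With many rule types, the monolithic path does work proportional to the rule count per record, which is the throughput bottleneck the background describes.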
  • FIGURE 1 is a schematic drawing generally depicting one embodiment of a data processing system incorporating teachings of the present invention and deployed in cooperation with a telecommunications network;
  • FIGURE 2 is a flow diagram depicting one embodiment of a method for operating a service-based architecture in a data processing system according to teachings of the present invention
  • FIGURE 3 is a flow diagram illustrating one embodiment of a method for implementing a service control manager in a service-based architecture according to teachings of the present invention
  • FIGURE 4 is a flow diagram illustrating one embodiment of a method for launching a service control manager according to teachings of the present invention
  • FIGURES 5-8 are flow diagrams illustrating one embodiment of a method for operating a service control manager in a service-based architecture according to teachings of the present invention
  • FIGURE 1a is a perspective view, partially exploded, showing an embodiment of a data processing system incorporating teachings of the present invention
  • FIGURE 2a is a block diagram illustrating one embodiment of a data caching service incorporating teachings of the present invention
  • FIGURE 3a is a block diagram illustrating one embodiment of a common data memory object incorporating teachings of the present invention
  • FIGURE 4a is a flow diagram illustrating one embodiment of a method for implementing a data caching service incorporating teachings of the present invention.
  • FIGURES 5a and 6a are flow diagrams illustrating one embodiment of a method for maintaining a common data memory object incorporating teachings of the present invention.
  • Preferred embodiments of the present invention are best understood by reference to FIGURES 1 through 8 and 1a through 6a of the drawings, like numerals being used for like and corresponding parts of the various drawings.
  • In FIGURE 1, a diagram depicting one environment in which teachings of the present invention may be implemented is shown.
  • the present invention is directed to enhancing the efficiency with which a high volume data processing system can process data, for example, telecommunications hardware or service usage transaction records.
  • teachings of the present invention may be employed in data processing environments other than telecommunications systems including, but not limited to, retail transaction systems, accounting systems, point-of-sale systems, shipping systems, data caching systems, as well as others .
  • The present invention may be employed to process telecommunication hardware or service usage transaction records from such sources as POTS (plain old telephone system) telephone 103, wireless telephone 106, wireless communications enabled PDA 109 and computer system 112.
  • Computer system 112 may communicate via a wireless or wireline Internet 113 as well as via other means.
  • POTS telephone 103, wireless telephone 106 and PDA 109 may also be operable to communicate via Internet 113.
  • Telecommunication switches 121, or a computing component operating therewith in a typical operating scenario, preferably accumulates the telecommunications hardware or service usage transaction records generated in response to the use of one or more of communication devices 103, 106, 109 or 112.
  • telecommunications switch 121 or a computing component cooperating therewith preferably sends the accumulated records via batch transactions transmission 127 to one or more of data processing systems 124.
  • one or more of data processing systems 124 may request or otherwise obtain the batch transaction records 127 from telecommunications switch 121.
  • the transfer of data between telecommunications switch 121 and one or more of data processing systems 124 may be implemented via Internet 113, wireline communications 115, wireless communications 118, or other communication technologies, according to teachings of the present invention.
  • telecommunication switch 121 is responsible for handling large volumes of telecommunication hardware or service usage transaction records.
  • data processing systems 124 are typically responsible for processing large quantities of transaction records from a defined set of transaction record types. According to teachings of the present invention, the efficiency with which data processing systems 124 process such large volumes of telecommunication transaction records may be enhanced by implementing, executing or otherwise maintaining a plurality of processing service instances, where each of the processing service instances is dedicated or specifically adapted to processing selected ones of the defined transaction record type set.
  • data processing systems 124 may assume a variety of forms.
  • data processing systems 124 may include a plurality of rack servers.
  • data processing systems 124 will preferably include one or more processors, memory, at least one communications port, one or more user input devices, one or more displays, as well as other components.
  • Examples of data processing system 124 manufacturers include, but are not limited to, DELL, Hewlett Packard, Sun Microsystems and Apple.
  • data processing systems 124 may enhance the efficiency with which they process transaction records or other data through the implementation of a service-based architecture.
  • a service-based architecture may be defined as a data processing operation implemented such that it deploys a plurality of data processing service instances, where the service instances are dedicated to the processing of one or more selected transaction record types.
  • One example of a service-based architecture implementation incorporating teachings of the present invention is depicted generally at 133 of FIGURE 1.
  • Service-based architecture implementation 133 preferably includes a service control manager 136 generally operable to supervise, manage or otherwise maintain the plurality of dedicated transaction record type processing service instances running or executing on data processing system 124.
  • service control manager (SCM) 136 preferably cooperates with, employs or otherwise implements the preferred functionality of one or more message queue managers (MQM) 139, service queue managers (SQM) 142 and one or more service application management modules (SVC) 145.
  • the present invention preferably implements a service-based architecture in that a plurality of individual services make up the processing architecture.
  • SCM 136, MQM 139, SQM 142 and SVC 145 are each preferably services forming a portion of the service-based architecture.
  • the transaction record processing service instances may be adapted to process a limited plurality of selected transaction record types where the limited plurality of transaction record types require substantially similar processing logic.
  • the service or processing objects of the present invention may be accumulated into like-type groups or groupings.
  • one or more 911 transaction record processing service instances may be organized into a single group. It is these groupings for which the message queue management and service queue management functions discussed herein may optimally be employed.
  • data processing system 124 may more readily adapt to changing workloads by adding or subtracting service instances to or from the group.
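The like-type grouping described above can be sketched as follows. This is an illustrative model only, not the patent's implementation; the class, naming scheme, and instance counts are assumptions.

```python
# Hypothetical sketch of a like-type service group: instances dedicated to one
# transaction record type are collected into a group, and the group grows or
# shrinks as the workload changes.

class ServiceGroup:
    def __init__(self, record_type):
        self.record_type = record_type
        self.instances = []

    def add_instance(self):
        # Instance identifiers here are invented for illustration.
        self.instances.append(f"{self.record_type}-svc-{len(self.instances) + 1}")

    def remove_instance(self):
        if self.instances:
            self.instances.pop()

group = ServiceGroup("911")
for _ in range(3):          # e.g. a minimum of three 911 instances
    group.add_instance()
group.add_instance()        # scale up under increased workload
group.remove_instance()     # scale back down
print(group.instances)      # → ['911-svc-1', '911-svc-2', '911-svc-3']
```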
  • each SVC 145 preferably includes a minimum number of service instances 148 dedicated or configured to process selected transaction record types.
  • For example, SVC 145 may maintain at least three (3) instances of a dedicated transaction record type service adapted to process 911 emergency calls, a minimum number of service instances dedicated or adapted to process transaction records resulting from directory assistance calls, one or more service instances for processing calls placed during peak/off-peak times, as well as service instances dedicated to the processing of other transaction record types.
  • the one or more service instances 148 dedicated or configured to process one or more selected record types may each operate in accordance with an operating schedule. As such, in one aspect, having one or more service instances that remain substantially continuously available suggests that such service instances remain substantially continuously available within their scheduled time slots of operation.
  • service instances operable to process transaction records of type 'E' may be scheduled to execute or run Monday through Friday during the hours of 6:00 AM to 7:00 PM, while service instances operable to process transaction records of type 'F' may be scheduled to run every Thursday during the hours of noon to 6:00 PM. Therefore, during the Monday through Friday, 6:00 AM to 7:00 PM window, the service instances operable to process transaction records of type 'E' are preferably monitored and otherwise maintained such that they remain substantially continuously available for the desired period.
  • selected service instances may be scheduled to run monthly, bimonthly, semi- annually, annually, etc., as well as not to run on selected dates or not to run on selected recurring dates.
  • Each of the successive scheduled service instances is preferably assigned a priority to ensure its preferred processing.
  • the scheduling of the various service instances as well as any associated priority processing is preferably maintained by SCM 136.
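A minimal sketch of the per-type operating schedules described above: an instance counts as "substantially continuously available" only within its scheduled time slots. The schedule table, helper name, and half-open hour convention are assumptions for illustration, using the type 'E' and type 'F' examples from the text.

```python
# Hypothetical schedule check for dedicated service types.
from datetime import datetime

# weekday: Monday=0 ... Sunday=6; hours are half-open [start, end)
SCHEDULES = {
    "E": {"days": range(0, 5), "hours": (6, 19)},   # Mon-Fri, 6:00 AM-7:00 PM
    "F": {"days": [3], "hours": (12, 18)},          # Thursdays, noon-6:00 PM
}

def in_schedule(record_type, when):
    slot = SCHEDULES[record_type]
    start, end = slot["hours"]
    return when.weekday() in slot["days"] and start <= when.hour < end

wed_noon = datetime(2024, 1, 3, 12, 0)   # a Wednesday
print(in_schedule("E", wed_noon))  # True: within the Mon-Fri 6-19 window
print(in_schedule("F", wed_noon))  # False: type 'F' runs Thursdays only
```

The SCM would consult a check like this before deciding whether an absent instance needs recovery or is simply outside its slot.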
  • SVC 145 may be adapted to initiate a transaction record processing service adapted to process the record type.
  • the transaction record processing service may be called upon receipt of such transaction records, according to a defined schedule, in response to accumulation of a number of such transaction records, or otherwise.
  • SQM 142 in one aspect of the present invention, is preferably adapted to adjust or otherwise manage one or more service queues associated with a corresponding service instance 148.
  • Service queues may be defined as queues operable to hold, for processing, transaction records received from a message queue associated with the grouping of like-type transaction record processing service instances.
  • Among the operations preferably effected by SQM 142 is the balancing of queued transaction records across or among the active service instances available for processing the specific transaction record types, e.g., across the grouped instances of 911 transaction record processing services.
  • SQM 142 may be further adapted to perform additional operations.
  • MQM 139 preferably cooperates with SQM 142, in part, to further increase the efficiency with which the various service instances process queued transaction records.
  • MQM 139 preferably prepares, readies or otherwise initiates one or more services adapted to manipulate the message queues associated with the one or more groups or groupings of service instances directed to processing a selected transaction record type.
  • MQM 139 may be adapted to prioritize messages, e.g., transaction records, service instance cancellation messages, etc., within associated service queues. MQM 139 may also be adapted to initiate additional service instances in response to a backlog in one or more message or service queues, as well as to execute or cancel other services according to a service schedule. Further detail regarding the operation and cooperation of SCM 136, MQM 139, SQM 142 and SVC 145 will be discussed below.
  • data processing system 124 may be employed to perform one or more preprocessing operations.
  • the batch transaction records processed by data processing systems 124 may subsequently be passed on to a billing statement process or other subsequent transaction record processing operation 154.
  • implementation of a service-based architecture preferably includes maintaining a plurality of service instances operable to process selected types of transaction records.
  • the plurality of service instances are available on one or more of data processing systems 124, as indicated at 203, on a substantially continuous basis.
  • one or more of data processing systems 124 may substantially continuously execute or run one or more telecommunication service or hardware usage transaction record processing services for each of transaction record types 'A'-'M', for a total of (n) record type services.
  • a service-based architecture incorporating teachings of the present invention preferably enables the queuing of transaction records according to transaction record type to a queue associated with a respective dedicated transaction record type processing service instance or grouping of service instances.
  • each type 'E' transaction record will preferably be queued to a message or service queue associated with a group having one or more service instances directed or dedicated to processing type ' E' transaction records.
  • the present invention preferably enables the queued transaction records to be balanced across or among the plurality of service instances such that more efficient data processing may be achieved.
  • Initial balancing and balancing as-needed generally enhances the efficiency with which data processing system 124 manages the service instances it maintains and completes its designated tasks.
  • the queued transaction records may be processed in accordance with their associated transaction record processing service. Upon completion of the processing operations, the processed transaction records may then pass to one or more subsequent operating centers, such as a billing operations center 154 of FIGURE 1.
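The balancing step described above can be sketched as follows. The patent does not mandate a particular policy; round-robin distribution is one plausible choice, and the function and variable names are assumptions.

```python
# Hypothetical round-robin balancing of queued records across the service
# instances of one record-type grouping.

def balance(records, instance_queues):
    # Distribute records across the group's instance queues in rotation.
    for i, record in enumerate(records):
        instance_queues[i % len(instance_queues)].append(record)
    return instance_queues

# Seven type 'E' records spread across a group of three instances.
queues = balance([f"E-rec-{n}" for n in range(7)], [[], [], []])
print([len(q) for q in queues])  # → [3, 2, 2]
```

Keeping per-instance queue depths roughly equal is what lets the group finish its workload without one instance idling while another backs up.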
  • the present invention may be employed in a variety of data processing systems.
  • In telecommunications system 100, transaction records may be created and received in a variety of formats. One format typically produced in a telecommunications system such as system 100 is the Automated Message Accounting (AMA) format from Telcordia, which is employed by most telecommunications carriers.
  • Other formats such as EMI (Exchange Message Interface) records, may also be employed with the present invention.
  • the present invention may be employed with other transaction record formats, such as a format developed to track downloads or other operations performed by computer 112 via Internet, private network or other network connections 113.
  • In FIGURE 3, a flow diagram depicting one embodiment of a method for implementing SCM 136 is shown.
  • SCM 136 may be launched at 306. Additional details regarding the launch of SCM 136 are discussed with reference to FIGURE 4.
  • method 300 preferably proceeds to 309 where one or more service instances currently or recently operating on data processing system 124 may be recovered, preferably by SCM 136.
  • After completing its recovery operations, SCM 136 preferably enters its normal operating mode at 312.
  • SCM 136 may be adapted to rest or sleep for a predetermined period, after the passing of which it may return to a prior operation, e.g., 309 or 312, in method 300. Additional detail regarding the launch, recovery, and normal operating mode of SCM 136 will be discussed in greater detail below with respect to FIGURES 4-8.
  • FIGURES 4-8 disclose one embodiment of a preferred operating cycle of a data processing system incorporating teachings of the present invention. It should be understood that FIGURES 4-8 primarily discuss one method but that many variations may be made without departing from the spirit and scope of the present invention.
  • SQM 142 preferably includes one or more services which may be called from SCM 136 or from one or more other data processing system 124 services.
  • the services included in SQM 142 are preferably operable to manipulate, maintain or otherwise manage one or more service queues associated with the active or existing transaction record processing service instances.
  • the services of SQM 142 may also be adapted to provide information regarding one or more aspects of the transaction record processing service queues. For example, one or more services of SQM 142 may be operable to determine the number of transaction records remaining in a service queue, determine the throughput of a service queue, as well as perform other operations.
  • MQM 139 may be launched.
  • MQM 139 preferably includes one or more services which may be called from SCM 136 or from one or more transaction record processing service instances.
  • the one or more services included in MQM 139 are preferably adapted to manipulate or provide information regarding the one or more message queues associated with the groupings of or individual service instances running in the service-based architecture of the present invention.
  • MQM 139 is preferably operable to prioritize transaction records queued for processing.
  • MQM 139 may be enabled to reorder a message or service queue to cause priority designated transaction records to be processed first and the remaining transaction records to be processed according to a FIFO (first-in-first-out) or other method.
  • MQM 139 may be adapted to initiate additional service instances if it is determined that the number of transaction records in a service or message queue for a given record type service group exceeds a desired performance parameter. More detail regarding the operation and cooperation of SCM 136, MQM 139, SQM 142 and SVC 145 is described below.
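Two MQM behaviors described above can be sketched together: reordering a queue so priority-designated records are processed first while the remainder keep FIFO order, and deciding when backlog warrants additional instances. The threshold value, capacity figure, and all names here are assumptions, not values from the patent.

```python
# Hypothetical MQM helpers: priority reordering and backlog-driven scaling.

def reorder(queue):
    # Stable sort: priority records move to the front; within each class,
    # original FIFO order is preserved (Python's sort is stable).
    return sorted(queue, key=lambda msg: not msg["priority"])

def instances_needed(queue_depth, per_instance_capacity=100, minimum=1):
    # Scale the group up when the backlog exceeds what the assumed
    # per-instance capacity should absorb (ceiling division).
    return max(minimum, -(-queue_depth // per_instance_capacity))

q = [{"id": 1, "priority": False},
     {"id": 2, "priority": True},
     {"id": 3, "priority": False}]
print([m["id"] for m in reorder(q)])   # → [2, 1, 3]
print(instances_needed(250))           # → 3
```

A cancellation message, as mentioned above, would simply be another queue entry that such a reordering could place appropriately.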
  • SVC 145 may be launched at 412 of FIGURE 4. MQM 139, SQM 142 and SVC 145 may be launched in any order or substantially simultaneously, according to teachings of the present invention.
  • SVC 145 preferably includes one or more services that may be called upon to serve one or more transaction record processing needs of data processing system 124.
  • Each of the services included in SVC 145 preferably includes all of the logic necessary to process the transaction record types associated with a particular service.
  • method 400 preferably proceeds to method 500 of FIGURE 5.
  • SCM 136 preferably initiates and effects the recovery of any stalled service instances.
  • a stalled service instance may include, but is not limited to, those service instances which have ceased processing for reasons other than a command to terminate.
  • Recovery of a stalled service instance may involve restarting the service instance and its associated service queue, removing any unprocessed transaction records from the instance's service queue for processing by another service instance, as well as performing other operations.
  • method 500 preferably proceeds to 506.
  • SCM 136 preferably effects the recovery of any terminated transaction service instances.
  • Terminated service instances which SCM 136 may desire to recover include, for example, any service instances or queues which were terminated before completing processing of one or more queued transaction records. Other definitions of recoverable terminated service instances may be employed in the present invention. In addition, other termination events may be addressed and recovered by SCM 136.
  • Upon completion of terminated service instance recovery at 506, method 500 preferably proceeds to 509.
  • SCM 136 preferably begins a process of verifying all existing or active service instances.
  • a data processing system 124 operating in accordance with teachings of the present invention preferably maintains, in addition to SCM 136, MQM 139, SQM 142 and SVC 145, a minimum number of service instances adapted to the processing of one or more selected transaction record types, running or executing substantially continuously.
  • SCM 136 preferably determines the number of active instances of each transaction record service type currently available on data processing system 124.
  • For each type of transaction record service instance counted at 509, SCM 136 preferably compares the total number of such active service instances to its respective preferred minimum number of service instances selected to remain substantially continuously available on data processing system 124 at 512. If it is determined that the total number of active instances of the particular transaction type service under evaluation matches the preferred minimum number of such service instances selected to remain substantially continuously available at 512, method 500 preferably proceeds to 515. If it is determined at 512 that the number of active instances of the current transaction type service is below the preferred number of service instances for such service type, method 500 preferably proceeds to 518. Finally, if it is determined at 512 that too many active instances of the current service type are available, method 500 preferably proceeds to 521.
  • In response to too few instances of a particular transaction type service, SCM 136 will preferably initiate at 518 an additional number of transaction record service instances to bring the number of active service instances into accordance with the preferred minimum. Following initiation of the additional transaction record service instances at 518, method 500 preferably proceeds to 515. Alternatively, at 521, in response to a determination of too many active service instances of a given type, SCM 136 may initiate the cancellation of one or more of the excess service instances. Once an appropriate number of transaction record service instances have been designated for cancellation at 521, method 500 preferably proceeds to 515. At 515, a determination may be made, preferably by SCM 136, as to whether all types of currently available or active transaction record service instances have been reviewed for their compliance with their respective preferred minimums.
  • method 500 preferably returns to 509 where the next group of instances of a particular transaction record service type may be evaluated for its compliance with a corresponding number of service instances preferably substantially continuously available. Alternatively, if it is determined at 515 that all currently available service types have been reviewed for their compliance with their respective preferred numbers of active service instances, method 500 preferably proceeds to 524.
  • SCM 136 may determine whether each of the transaction record service types selected to remain substantially continuously available on data processing system 124 is active and/or available. If it is determined at 524 that one or more of the dedicated transaction record service types selected to remain substantially continuously available is not currently available, method 500 preferably proceeds to 527 where the preferred minimum number of instances for a currently unavailable transaction record service type may be identified. Proceeding then to 530, the minimum number of service instances for the unavailable service type identified at 527 may be activated, initiated or otherwise made available. Upon initiation of the currently inactive transaction record service instances at 530, or upon the determination at 524 that each of the transaction record service types selected to remain substantially continuously available is active, method 500 preferably proceeds to 533.
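The verification walk described above reduces to a reconciliation loop: for each service type, compare the active count to its preferred minimum and decide whether to initiate or cancel instances. The sketch below is a simplified model under stated assumptions; function names, type names, and counts are invented.

```python
# Hypothetical reconciliation of active instance counts against preferred
# minimums, mirroring the comparisons at 509-521.

def reconcile(active_counts, preferred_minimums):
    actions = {}
    for svc_type, minimum in preferred_minimums.items():
        delta = minimum - active_counts.get(svc_type, 0)
        if delta:
            # delta > 0: initiate that many instances; delta < 0: cancel excess.
            actions[svc_type] = delta
    return actions

plan = reconcile({"911": 2, "dir": 5, "toll": 3},
                 {"911": 3, "dir": 3, "toll": 3})
print(plan)  # → {'911': 1, 'dir': -2}
```

A type absent from the active counts entirely (the check at 524) falls out of the same arithmetic: its delta equals the full preferred minimum.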
  • operation of the queues, message and/or service, associated with each of the groupings of one or more transaction record service instances may be paused or placed on hold. Holding each of the queues associated with the dedicated transaction record service instance groupings may occur substantially simultaneously, substantially sequentially, or otherwise.
  • any queued transaction records may be balanced or distributed across the service instances of each corresponding service type grouping at 536.
  • queue management may be effected by SQM 142 or MQM 139, preferably as instructed or coordinated by SCM 136.
  • processing by the service instance of a current transaction record may also be paused, permitted to complete, or permitted to continue unabated.
  • the service instances designated for cancellation at 521 may be appropriately cancelled at 539.
  • Cancellation of one or more service instances may occur, for example, by SCM 136 inserting a cancellation instruction into the service queue associated with each service instance to be cancelled, whereby the service instance will be cancelled or ended upon completion of its current service queue load and the subsequent processing of the cancellation instruction.
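The queue-based cancellation described above can be sketched with a sentinel entry: the instance drains whatever precedes the cancellation instruction in its queue, then shuts down. This is an illustrative model; the sentinel mechanism and names are assumptions.

```python
# Hypothetical cancellation-instruction sentinel in a service queue.

CANCEL = object()  # stands in for the cancellation instruction

def run_instance(service_queue, process):
    processed = []
    for item in service_queue:
        if item is CANCEL:
            break              # end the instance after draining prior work
        processed.append(process(item))
    return processed

# rec-3 was queued after the cancellation instruction, so it is left for
# another instance of the group to pick up.
queue = ["rec-1", "rec-2", CANCEL, "rec-3"]
print(run_instance(queue, str.upper))  # → ['REC-1', 'REC-2']
```

Placing the instruction in-band gives an orderly shutdown without interrupting a record mid-processing, matching the "completion of its current service queue load" behavior above.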
  • the queues are preferably released such that processing of the transaction records may resume at 542.
  • SCM 136 preferably updates a service registry to indicate each instance, type and other information regarding the services running on data processing system 124 at 545.
  • a service registry operating in accordance with the service-based architecture of the present invention may track the varied types of service instances active on data processing system 124, the number of instances in each service type grouping, a termination status for one or more service instances or groups, when one or more service instances or groups will awake from a sleep state, as well as other information.
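The registry described above tracks, per service type, the active instances, their status, and wake times. A minimal in-memory sketch follows; the field names and API are assumptions for illustration, not the patent's design.

```python
# Hypothetical in-memory service registry keyed by service type.

registry = {}

def register(svc_type, instance_id, status="active", wake_at=None):
    # Record (or update) one instance under its service type grouping.
    entry = registry.setdefault(svc_type, {"instances": {}})
    entry["instances"][instance_id] = {"status": status, "wake_at": wake_at}

def count_active(svc_type):
    # Counts only instances currently marked active, as the SCM's
    # verification pass would.
    entry = registry.get(svc_type, {"instances": {}})
    return sum(1 for i in entry["instances"].values() if i["status"] == "active")

register("911", "911-svc-1")
register("911", "911-svc-2", status="sleeping", wake_at="06:00")
print(count_active("911"))  # → 1
```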
  • After updating the service registry as desired at 545, SCM 136 preferably proceeds to 548 where it may loop, sleep or remain in a wait state until its next processing period, event or occurrence.
  • method 500 preferably proceeds to 551 where it may be determined whether data processing system 124 is in a recovery mode, other than the 'recovery mode' which occurs at initialization, or its normal operating mode. If it is determined that data processing system 124 is in a recovery mode, such as a recovery mode resulting from one or more system malfunctions, software updates, etc., method 500 preferably returns to 503 where SCM 136 may begin its service recovery operations. Alternatively, if it is determined that data processing system 124 is in a normal operating mode, method 500 preferably proceeds to method 600 of FIGURE 6.
  • the 'normal operating mode' of data processing system 124 and SCM 136 repeats or includes many of the operations preferably performed during the 'recovery mode' illustrated generally at 500 in FIGURE 5. It should be understood that operations other than those of the recovery mode 500 may be incorporated into a 'normal operating mode' without departing from the spirit and scope of the present invention.
  • SCM 136 preferably identifies those dedicated transaction record service types selected to remain substantially continuously available on data processing system 124.
  • the services selected to remain substantially continuously available may be maintained by a service registry, by setting a bit on one or more service calls, in a data file, or in another form useful to data processing system 124 and SCM 136.
  • SCM 136 preferably checks or counts the number of active instances for each of the service types selected to remain substantially continuously available.
  • an active service instance may be defined as a service instance currently processing one or more transaction records, a service instance substantially immediately available to process one or more transaction records or a service instance awaiting receipt of one or more transaction records for processing.
  • Other definitions or descriptions of an active service instance may be used without departing from the spirit and scope of the present invention.
  • the preferred number of active instances for each service type selected to remain substantially continuously available may be obtained by SCM 136. Subsequently, SCM 136 may also determine whether the number of active service instances for each selected service type is in accordance with the preferred number of service instances for that particular service type. If it is determined at 609 that the number of active service instances for a particular service type is in accordance with the desired minimum number of service instances, method 600 preferably proceeds to 612.
  • method 600 preferably proceeds to 615 where SCM 136 may interrogate data processing system 124 to ascertain the presence of any stalled instances of the current service type.
  • if SCM 136 identifies any stalled instances of the current service type, method 600 preferably proceeds to 618 where SCM 136 may initiate recovery of such stalled service instances before returning to 609.
  • method 600 may proceed to 621 where SCM 136 may interrogate data processing system 124 to ascertain whether there exist any improperly terminated instances of the current service type.
  • method 600 may proceed to 624 where SCM 136 preferably initiates an appropriate number of instances for the current service type.
  • method 600 preferably proceeds to 627 where SCM 136 may initiate the recovery of the improperly terminated instances.
  • Upon initiation of an appropriate number of current service type instances at 624, or upon completion of the recovery of any improperly terminated instances of the current service type at 627, method 600 preferably returns to 612 where, as mentioned above, SCM 136 may determine whether each of the service types selected to remain substantially continuously available has been evaluated. Once all services selected to remain substantially continuously available have been evaluated in accordance with method 600, method 600 preferably proceeds to method 700 of FIGURE 7.
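The per-service-type availability checks of method 600 (counting active instances, recovering stalled or improperly terminated instances, and initiating new ones up to the preferred minimum) can be sketched as follows. This is a hedged illustration only: the function name, data structure and instance identifiers are assumptions, not part of the disclosed SCM 136.

```python
# Hypothetical sketch of the method-600 availability checks: for each
# service type selected to remain continuously available, recover stalled
# (615/618) or improperly terminated (621/627) instances and initiate new
# ones (624) until the preferred minimum count of active instances is met.

def ensure_availability(service, preferred_min):
    """service: dict with lists of 'active', 'stalled' and 'terminated' ids."""
    recovered = []
    while len(service["active"]) < preferred_min:
        if service["stalled"]:                      # 615/618: recover stalled
            inst = service["stalled"].pop()
        elif service["terminated"]:                 # 621/627: recover terminated
            inst = service["terminated"].pop()
        else:                                       # 624: initiate a new instance
            inst = f"new-{len(service['active'])}"
        service["active"].append(inst)
        recovered.append(inst)
    return recovered

svc = {"active": ["a1"], "stalled": ["s1"], "terminated": []}
ensure_availability(svc, preferred_min=3)
print(svc["active"])   # ['a1', 's1', 'new-2']
```

Recovered instances are preferred over fresh initiations here, mirroring the ordering of steps 615 through 624 above.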
  • Illustrated generally at 700 in FIGURE 7 is a flow diagram depicting one manner in which teachings of the present invention may enhance the efficiency with which data processing system 124 processes transaction records received, for example, from telecommunications switch 121.
  • SCM 136 preferably timely monitors the processing operation of each service type grouping to ensure that each is performing in accordance with one or more system performance parameters.
  • method 700 can be understood to run as sequenced in FIGURES 3 - 8.
  • method 700 can be understood to run appropriately after receipt of batch transaction transmission 127 and upon subsequent distribution or allocation of the transaction records to their respective transaction record service type instances or transaction type groups.
  • After receipt and distribution or allocation of the transaction records received from telecommunications switch 121, or upon completion of method 600, SCM 136 preferably begins method 700 at 703. At 703, SCM 136 may compare the number of transaction records in one or more message or service queues associated with a selected transaction type service group to a preferred system performance parameter, such as a queue throughput or queue volume. As mentioned above, MQM 139 and SQM 142 are preferably adapted to manipulate and provide information on message and service queues, respectively. If at 703 SCM 136 determines that the number of transaction records in a queue associated with a group or individual instance of a particular dedicated transaction type service exceeds a preferred system performance parameter, method 700 may proceed to 706. Alternatively, if SCM 136 determines that the number of queued transaction records associated with a particular transaction type service group or individual service instance matches or is below the preferred system performance parameter, method 700 may proceed to 709.
  • SCM 136 may again interrogate data processing system 124 to determine whether any stalled or improperly terminated service instances or queues associated with the transaction type under evaluation exist. If one or more stalled or improperly terminated service instances, groupings or queues are identified, method 700 preferably proceeds to 712 where the improperly terminated or stalled groups, queues or instances may be recovered before returning to 703. In an alternate embodiment, upon recovery of any improperly terminated or stalled queues, groups or instances, method 700 may instead proceed from 712 to 721 for processing as described below.
  • method 700 may proceed to 715 where the total number of active service instances adapted to process the current transaction type may be compared to a preferred system limit on such service type instances.
  • the present invention may incorporate a limit to the number of instances of any one service type that may be running at one time. If the maximum number of allowed service instances is not exceeded at 715, method 700 preferably proceeds to 718 where one or more additional service instances may be initiated. From 718, method 700 will preferably proceed to 721.
  • each of the queues associated with the current service type being reviewed may be paused or held at 721.
  • the transaction records remaining in the queues to be processed may be substantially equally distributed across each of the active service queues and/or instances for the current transaction type at 724.
  • the queues and service instances may be released for normal processing operation at 727. If all service types have been balanced, as determined at 733, method 700 may proceed to method 800 of FIGURE 8; otherwise, method 700 preferably returns to 703.
  • SCM 136 may compare the number of available service instances for processing the current transaction record type to a minimum number of such service type instances selected to remain substantially continuously available on data processing system 124. If the number of such available service instances is in accordance with the preferred minimum, method 700 may proceed to 733 for performance of the operations described above. However, if it is determined at 709 that the number of such service instances exceeds the minimum number of service type instances selected to remain substantially continuously available, method 700 may proceed from 709 to 730 where one or more of the service instances may be terminated upon completion of its current processing load. In one aspect, eliminating excess service instances may free up system resources such that additional instances of other service types may be initiated or such that data processing system 124 may dynamically adjust its resource consumption. Method 700 may then proceed to 733 for processing as described above.
  • Before method 700 proceeds from 709 to 730, balancing operations similar to those performed at 721 (holding the queues) and 724 (distributing the transaction records across active service instances and queues) may be effected. Further, before method 700 proceeds from 730 to 733, a balancing operation similar to that of 727 (releasing the held queues) may be performed. Adding operations similar to those of 721, 724 and 727 preferably effects the balancing of the transaction record loads in each queue associated with each service type instance or service type grouping. Still other embodiments of method 700 may be incorporated in the present invention without departing from its spirit and scope.
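The hold, redistribute and release steps of 721, 724 and 727 can be sketched as below. The queue representation and the round-robin redistribution are illustrative assumptions; the disclosure requires only that records be "substantially equally distributed".

```python
# Illustrative sketch of the 721/724/727 balancing steps: hold each queue,
# redistribute the pending transaction records substantially equally
# across the active instances' queues, then release for normal processing.

def balance_queues(queues):
    for q in queues:
        q["held"] = True                    # 721: pause/hold each queue
    pending = [r for q in queues for r in q["records"]]
    for q in queues:
        q["records"] = []
    for i, record in enumerate(pending):    # 724: round-robin redistribution
        queues[i % len(queues)]["records"].append(record)
    for q in queues:
        q["held"] = False                   # 727: release the held queues
    return queues

qs = [{"records": [1, 2, 3, 4, 5], "held": False},
      {"records": [], "held": False}]
balance_queues(qs)
print([len(q["records"]) for q in qs])   # [3, 2]
```

A production variant might weight the split by record complexity, as the alternative embodiments above suggest.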
  • Method 800 of FIGURE 8 generally illustrates a scheduling capability preferably implemented by SCM 136 and MQM 139, according to teachings of the present invention.
  • SCM 136 may cooperate with MQM 139 to effect as-needed operation of one or more service types.
  • MQM 139 may perform the bulk of the operations necessary to effect as-needed service calls.
  • As-needed service operation may include, but is not limited to, initiating or canceling service instances according to a service schedule and initiating service instances in response to receipt of one or more transaction types for which no active service instance is available.
  • SCM 136 or MQM 139 may access a system clock or other hardware available on data processing system 124 to determine the current time. With a determination of the current time, SCM 136 may proceed to 806 for the review of a service initiation or cancellation schedule to determine if one or more events are scheduled for execution. Alternatively or in addition, SCM 136 or MQM 139 may consult a queue holding transaction records for which there is no active service instance available.
  • method 800 preferably proceeds to 809 where SCM 136 may take the appropriate initiation or cancellation action.
  • SCM 136 may loop or pause in a wait state.
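The schedule review of method 800 might be sketched as follows; the schedule format, the tuple layout and the times are invented for illustration and are not part of the disclosed system.

```python
# Hypothetical sketch of the method-800 scheduling check: compare the
# current time against a service initiation/cancellation schedule and
# return the actions now due for execution (step 806/809).

def due_actions(schedule, now):
    """schedule: list of (time, action, service) tuples; returns actions due."""
    return [(action, svc) for t, action, svc in schedule if t <= now]

schedule = [(900, "initiate", "billing"),
            (1700, "cancel", "billing"),
            (1200, "initiate", "audit")]
print(due_actions(schedule, now=1200))
# [('initiate', 'billing'), ('initiate', 'audit')]
```

In a full implementation, executed entries would be removed or rescheduled, and the loop would otherwise return to its wait state as described above.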
  • the active service instances may be adapted to sleep upon completion of queue processing
  • queue balancing may be effected in a manner that takes into account the complexity of transaction records waiting in each queue
  • the various benchmarks, metrics, performance parameters, throughput measures, etc. may be preset values or dynamically determined according to assorted processing characteristics over time
  • data, program or other information refreshes may be implemented in conjunction with the varied transaction record processing disclosed herein, and the data processing system of the present invention may be implemented over a number of computer systems spanning a number of processing centers. Still other alterations are possible.
  • data processing system 100a may be implemented as a rack server.
  • tower servers, mainframe servers as well as other configurations may be employed.
  • data processing system 100a may be produced by such computer manufacturers as DELL, Hewlett Packard, Sun Microsystems, International Business Machines, Apple as well as others.
  • In the data processing system 100a illustrated in FIGURE 1a, a number of common computing components may be compiled to create a computing device capable of processing various data types, preferably in large quantities, either on its own or in conjunction with a plurality of other data processing systems 100a.
  • data processing system 100a of FIGURE 1a preferably includes one or more HDD (Hard Disk Drive) devices 103a and may include, but is not limited to, a FDD (Floppy Disk Drive) device 106a and a CD/RW (Compact Disc/Read Write) device 109a.
  • Power supply 112a may provide power to the components of data processing system 100a via power connections 115a.
  • These and other computing components may be integrated into chassis 118a. Other computing components may be added and some of the aforementioned computing components removed without departing from teachings of the present invention.
  • System board 121a typically electrically or communicatively interconnects, often with the aid of cables 124a, the various computing components.
  • processors 130a may include one or more processors from such manufacturers as Intel, Advanced Micro Devices, Sun Microsystems, International Business Machines, Transmeta and others.
  • System board 121a may also include one or more expansion slots 133a adapted to receive one or more riser cards, one or more expansion cards adapted to enable additional computing capability, as well as other components.
  • a plurality of memory slots 136a are also preferably included on system board 121a.
  • Memory slots 136a are preferably operable to receive one or more memory devices 139a operable to maintain program, process or service code, data as well as other items usable by processors 130a and data processing system 100a.
  • Memory devices 139a may include, but are not limited to, SIMMs (Single In-line Memory Modules), DIMMs (Dual In-line Memory Modules), as well as other memory formats.
  • memory devices 139a may alone, or with the aid of HDD device 103a, FDD device 106a, CD/RW device 109a, or other data storage device, implement or otherwise effect one or more data type independent binary data cache containers 142a.
  • data cache containers 142a are preferably operable to intelligently cache data used by a plurality of services, programs or applications running on data processing system 100a.
  • Data cache containers 142a preferably organize common data shared by a number of active processes, programs or services using a hashed data storage vector scheme.
  • data cache containers 142a preferably index a plurality of data storage vectors according to a hash table 145a.
  • Common data cache service 200a preferably includes listener 203a, one or more query threads 206a, and common data memory object 209a, among other components.
  • common data cache service 200a is preferably operable to access, store or otherwise maintain information for a plurality of application programs, processes or services 212a running or executing on data processing system 100a.
  • Application services 212a may include a wide variety of data processing services as well as multiple instances of one or more of such data processing services.
  • application services 212a may include a variety of transaction record service instances adapted to process a selected transaction record type.
  • Common data cache service 200a will preferably entertain all data access requests from the myriad service instances, with each service instance maintaining no more logic than is necessary to know what data is needed to effectively process its designated transaction record type and where to request access to such data.
  • Common data cache service 200a may also, especially in a service-based architecture where common data cache service 200a is one of many service instances, provide segregated or private areas of memory or storage for use by designated ones of application services 212a. Such private memory areas may be effected in a variety of manners.
  • listener 203a is preferably operable to perform a variety of functions in the operation of common data cache service 200a.
  • listener 203a preferably loops or remains in a wait state until it receives a data access request message from an application service 212a.
  • a data access request may involve a request to return the value of a stored constant, variable or other information, a request to change the value of an existing constant, variable or other information, a request to store a new constant, variable or other information, as well as other data access operations.
  • listener 203a may establish a communicative connection with the requesting one of application services 212a. After connecting with the current requesting one of application services 212a, in one embodiment of the present invention, listener 203a preferably endeavors to assign or designate one of query threads 206a for the requesting one of application services 212a. According to one embodiment of the present invention, listener 203a may be adapted or configured to initiate additional query threads 206a to enable the requested data access. Once a query thread 206a has been assigned or designated, listener 203a preferably hands off the current requesting application service to the assigned or designated query thread 206a. Following hand-off, listener 203a may return to its wait or loop state where it may await receipt of additional data access requests from an application service 212a. In an alternate embodiment, listener 203a may be configured to process data access requests from a plurality of application services 212a substantially simultaneously.
  • query threads 206a are preferably operable to provide a query thread, link or channel between an application service 212a and common data memory object 209a.
  • the connection between query thread 206a and the current requesting application service may be severed or otherwise ended.
  • query thread 206a may be reassigned by listener 203a to the next application service 212a requesting data access.
  • query threads 206a may be based on IPC (Interprocess Communication) technology principles. In general, however, query threads 206a may be implemented using a technology generally adapted to permit one computing process or service to communicate with another computing process or service, whether operating on the same or different data processing system.
  • An IPC query thread 206a may be based on named pipe technology, in one embodiment of the invention.
  • named pipe technology is a method for passing information from one computer process or service to other processes or services using a pipe, message holding place or query thread given a specific name. Named pipes, in one aspect, may require fewer hardware resources to implement and use.
  • Other technologies that may be used in the implementation of an IPC query thread 206a include, but are not limited to, TCP (Transmission Control Protocol), sockets, semaphores, and message queuing.
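As a hedged illustration of an IPC channel between a requesting application service and a data cache worker, the following sketch uses Python's multiprocessing.Pipe as a stand-in for the named pipe or message-queuing technologies listed above. The message tuples and the worker's behavior are assumptions, not the disclosed protocol.

```python
# Sketch of an IPC query channel: a worker holds a simple data store and
# answers store/retrieve requests received over a pipe. A thread stands in
# for a separate service process; a named pipe or socket could be
# substituted without changing the message flow.
import threading
from multiprocessing import Pipe

def cache_worker(conn):
    store = {}
    while True:
        msg = conn.recv()                      # block until a request arrives
        if msg[0] == "stop":
            break
        if msg[0] == "store":
            _, key, val = msg
            store[key] = val
            conn.send(("ok", None))
        elif msg[0] == "retrieve":
            conn.send(("ok", store.get(msg[1])))

parent, child = Pipe()
t = threading.Thread(target=cache_worker, args=(child,))
t.start()
parent.send(("store", "rate_plan", 0.07))      # store a shared value
assert parent.recv() == ("ok", None)
parent.send(("retrieve", "rate_plan"))         # read it back over the channel
status, value = parent.recv()
parent.send(("stop",))
t.join()
print(value)   # 0.07
```

The same request/reply shape carries over to named pipes or sockets; only the transport object changes.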
  • common data memory object 209a preferably does not discriminate as to the type of data it stores, accesses or otherwise maintains. Accordingly, simple, complex, binary or virtually any other format of data may be maintained therein.
  • common data memory object 209a is preferably operable to store, access or otherwise intelligently maintain information according to a variety of methodologies. For example, as discussed further below, if the amount of data to be stored in common data memory object 209a is below a first threshold, the efficacy with which common data memory object 209a maintains or stores data may be such that caching will degrade the performance of data processing system 100a.
  • Such a threshold may involve data hit ratios, data seek or read times, as well as others.
  • an application service 212a seeking data may be referred elsewhere in the data processing system 100a, such as an application or service external to common data cache service 200a, for processing its data access request.
  • common data memory object 209a may store, access or otherwise maintain information for use in accordance with a single data storage vector methodology.
  • common data memory object 209a may be adapted or configured to store, access or otherwise maintain information for use in one of a number of data storage vectors, where the various data storage vectors are organized in accordance with a hash table. Further discussion regarding the variety of methods by which common data memory object 209a may access, store or otherwise maintain data or information will be discussed in greater detail below with respect to FIGURES 3a, 4a, 5a and 6a. As mentioned above, a service-based data processing system may benefit from teachings of the present invention.
  • listener 203a and query threads 206a are preferably generated or operated as light-weight processes, as indicated at 215a, at least for purposes of sharing CPU time.
  • CPU time for common data cache service 200a would be distributed among six (6) processes: common data cache service 200a, listener 203a, and the four (4) query threads 206a.
  • the result of such an implementation permits an existing operating system running on data processing system 100a to assume management responsibilities for the slicing of CPU time between each of light-weight processes 215a.
  • an additional application service 212a may be developed and provided to govern or otherwise manage the sharing of CPU or other hardware time among the varied processes, programs or services running on data processing system 100a.
  • Other methods of CPU and hardware management or sharing may be employed without departing from the spirit and scope of the present invention.
  • common data memory object 209a preferably uses specific classes to store and access reference or shared data.
  • sorting data in data storage vectors typically provides faster data access times when a medium to large number of entries reside in a vector.
  • hashing data into buckets, where the buckets comprise data storage vectors, has proven to be effective in breaking up large lists of data into smaller, more manageable and identifiable blocks of data.
  • common data memory object 209a is preferably based on two storage classes or techniques, hashing and data storage vectors.
  • hashing enables the creation of hash table 306a which in turn provides a data container adapted to hold multiple data storage vectors 309a or buckets.
  • Hashing, in general, involves the generation of a key representing a bucket of data in the cache in which a single item may be stored. Duplicate hash key values may be generated to allow for the even distribution of data across a multitude of bucket containers.
  • Hash table 306a preferably enables common data memory object 209a to provide a varying number of entries to aid in locating stored data.
  • Data storage vectors 309a may be used to provide a container-managed method for storing a dynamic amount of data in one or more lists. Data storage vectors 309a are preferably created in such a way that initial and next allocations may be used to minimize the amount of memory allocations and copying when data lists contained therein grow. Increasing the efficiency desired in a high volume data processing system, such as data processing system 100a, may be furthered by sorting the data items in each of data storage vectors 309a. Sorting the data typically permits each data storage vector 309a to be searched using a binary split algorithm, further increasing the efficiency with which common data memory object 209a may serve.
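A minimal sketch of the two storage classes described above: a hash table whose buckets are sorted data storage vectors, searched with a binary split (bisection) algorithm. The class and method names, the bucket count and the tuple layout are invented for illustration and do not reproduce the disclosed implementation.

```python
# Hashed-vector cache sketch: keys hash into buckets; each bucket is a
# sorted vector of (key, value) pairs searched by binary split (bisect).
import bisect

class HashedVectorCache:
    def __init__(self, buckets=4):
        self.table = [[] for _ in range(buckets)]   # hash table of vectors

    def _bucket(self, key):
        return self.table[hash(key) % len(self.table)]

    def store(self, key, value):
        vector = self._bucket(key)
        i = bisect.bisect_left(vector, (key,))      # keep the vector sorted
        if i < len(vector) and vector[i][0] == key:
            vector[i] = (key, value)                # replace existing entry
        else:
            vector.insert(i, (key, value))

    def retrieve(self, key):
        vector = self._bucket(key)
        i = bisect.bisect_left(vector, (key,))      # binary split search
        if i < len(vector) and vector[i][0] == key:
            return vector[i][1]
        return None

cache = HashedVectorCache()
cache.store("npa", "214")
cache.store("nxx", "555")
print(cache.retrieve("npa"))   # 214
```

Because each bucket stays sorted, lookups within a bucket cost O(log n) rather than a full list scan, which is the efficiency gain the passage above attributes to sorting.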
  • one or more routines such as complex data helper routines 321a, may be provided which are adapted to generate or create a unique identifier from such complex data that may be used by hash table 306a.
  • when one of application services 212a seeks to access, store or otherwise maintain complex data, e.g., a data structure as defined in the C/C++ programming language
  • common data memory object 209a may consult one or more of hashing algorithms 312a via hashing interface 315a and/or one or more of complex data helper routines 321a to effect such function or operation.
  • hashing algorithms 312a and hash table 306a may be employed to determine in which data storage vector 309a the simple data should be placed.
  • data access object 303a may be effected or aided by default helper routines 318a or complex data helper routines 321a.
  • searches for data in data storage vectors 309a may be aided, for simple data searches, by default helper methods 318a and, for complex data searches, by complex data helper routines 321a.
  • Complex data helper routines 321a may be implemented as a function class defined in data access object 303a or otherwise.
  • Data access object 303a preferably informs common data memory object 209a of its complex data helper routines 321a when common data memory object 209a is constructed.
  • common data memory object 209a is constructed with complex data helper routines as a parameter
  • the default helper methods 318a inside common data memory object 209a may be replaced.
  • Complex data helper routines 321a may also be provided by data access object 303a to enable the performance of binary split searches as well as list scanning searches for the subject data of a data access request.
  • data access object 303a may provide a pointer to one or more functions, program operations or services operable to perform binary split searches, list scanning, as well as other processes.
  • Preferably included among the class of functions, programs, operations, processes or services implemented in default helper methods 318a or complex helper routines 321a to store, access or otherwise maintain complex data is a 'below vector range' test, among others.
  • Data caching service 200a of the present invention is preferably designed for intelligent caching, accessing and other data manipulation operations.
  • One aspect for which data caching service 200a is designed to operate intelligently is the manner in which data contained in data storage vectors 309a is searched. For example, when one or more of data storage vectors 309a contains a large amount of data, it may be more efficiently searched if the data storage vectors 309a are sorted and searched using a binary split algorithm rather than a list scan methodology.
  • one or more performance parameters may be established which, when encountered, will cause data caching service 200a, common data memory object 209a, or data access object 303a to reformat, reorder or otherwise adjust the manner in which common data memory object 209a stores, searches, accesses or otherwise maintains data.
  • performance parameters include, but are not limited to, search hit ratios, seek times for searched data, the amount of data stored in one or more of common data memory objects 209a or in one or more of data storage vectors 309a, as well as other memory or search performance metrics.
  • an automated method or process to determine or evaluate the current performance of storage, access or maintenance operations as compared to one or more performance parameters or metrics may be designed such that when one or more of the performance parameters is not met, common data memory object 209a may be adjusted, reformatted or otherwise altered.
  • benchmarks for the one or more performance parameters may be established within the system such that a process or routine running on the system may evaluate recent values of system performance parameters, compare them to one or more performance thresholds, benchmarks or metrics and initiate, as may be suggested, a restructuring of one or more common data memory objects 209a associated with a data access object 303a.
  • a data processing system 100a may be designed to dynamically and intelligently adjust the manner in which data is maintained in common data memory object 209a.
  • common data memory object 209a may be implemented such that it caches no data. For example, when it would cost less in system resources to keep the data sought by one or more application services 212a in a simple storage area, common data memory object 209a may cache no data.
  • when a threshold, performance parameter, benchmark or metric for accessing, storing or otherwise maintaining data in common data memory object 209a is surpassed, common data memory object 209a may be reconfigured such that it begins caching data in a single data storage vector.
  • common data memory object 209a may be reconfigured to migrate from single data vector storage to the hashed data storage vector model of FIGURE 3a.
  • Other migrations between data storage methodologies and implementations are considered within the scope of the present invention.
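The threshold-driven migration described above (cache nothing, then a single sorted vector, then hashed vectors) can be sketched as a simple mode selector. The threshold values and mode names below are invented for illustration; the disclosure leaves the actual parameters to hit ratios, seek times and similar metrics.

```python
# Hedged sketch of the caching-mode migration: below a first threshold the
# memory object caches no data; above it, a single data storage vector;
# above a second threshold, a hash table of data storage vectors.

SINGLE_VECTOR_THRESHOLD = 10     # assumed: begin caching above this size
HASHED_VECTOR_THRESHOLD = 1000   # assumed: hash into buckets above this size

def choose_storage_mode(entry_count):
    if entry_count <= SINGLE_VECTOR_THRESHOLD:
        return "no-cache"            # refer requests elsewhere in the system
    if entry_count <= HASHED_VECTOR_THRESHOLD:
        return "single-vector"       # one sorted data storage vector
    return "hashed-vectors"          # hash table of sorted vectors

print([choose_storage_mode(n) for n in (5, 100, 5000)])
# ['no-cache', 'single-vector', 'hashed-vectors']
```

A dynamic system would re-evaluate this choice periodically against the performance parameters discussed above and restructure the memory object when the mode changes.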
  • Illustrated generally at 400a in FIGURE 4a is a flow diagram depicting a method of operation for listener 203a, according to teachings of the present invention. Upon initialization of common data cache service 200a at 403a, method 400a preferably proceeds to 406a.
  • listener 203a preferably remains in a wait or loop state until it receives one or more data access requests from one or more of application services 212a.
  • common data cache service 200a is preferably operable to support a plurality of data access requests substantially simultaneously. Such multithreading capabilities may be supported by enabling one or more listeners 203a to execute in common data cache service 200a and/or through the existence of a plurality of query threads 206a.
  • Upon receipt of a data access request from an application service 212a, method 400a preferably proceeds to 409a.
  • listener 203a preferably communicatively connects to the application service 212a from which the data access request was received, i.e., the current requesting application service.
  • communicatively connecting with the current requesting application service may enable listener 203a to identify one or more characteristics of the data access request submitted by the requesting application service. For example, if query threads 206a are implemented using a named pipe technology, listener 203a may need to identify to which named pipe the current requesting application service should be assigned at 409a. In another example, listener 203a may be adapted to discern the type of data sought to be accessed, stored or otherwise maintained by the current requesting application service at 409a.
  • Listener 203a may proceed to 412a.
  • Listener 203a preferably begins the process of assigning or designating a query thread 206a to the current requesting application service at 412a.
  • listener 203a preferably reviews, analyzes or otherwise evaluates the operating status of one or more existing query threads 206a to determine whether one or more is available for use by the current requesting application service.
  • listener 203a may identify the first or any currently or soon to be available query thread 206a.
  • listener 203a may determine whether an appropriately named pipe is available for the current requesting application service. Listener 203a may also be adapted to identify the least recently used query thread 206a or to keep a use-listing through which listener 203a may cycle to ensure that each query thread 206a is periodically used. If at 412a listener 203a identifies an available existing query thread 206a for use by the current requesting application service, method 400a preferably proceeds to 415a. Alternatively, if listener 203a determines that a query thread 206a is not currently available at 412a, method 400a preferably proceeds to 418a.
  • listener 203a may designate or assign an available query thread 206a to the current requesting application service. Designating or assigning a query thread 206a to a current requesting application service 212a may involve notifying query thread 206a to expect the current requesting application service to soon be connecting. Alternatively, listener 203a may inform the current requesting application service of the specific query thread 206a which listener 203a has designated or assigned for its use. Such notification may result from listener 203a sending the current requesting application service an address or name for the assigned query thread 206a. Further, listener 203a may initiate a connection between the assigned or designated query thread 206a and the current requesting application service. Once an available query thread 206a has been designated or assigned to the current requesting application service, method 400a preferably proceeds to 430a.
  • listener 203a In response to the unavailability of a query thread 206a at 412a, listener 203a will preferably determine whether an additional query thread 206a may be initiated at 418a. In one embodiment of the present invention, the number of query threads 206a may be limited in order to prevent the processing capabilities of system 100a from being depleted. In such an implementation, listener 203a may determine whether the total number of active or existing query threads 206a exceeds or is in accordance with a preferred system limit on query threads 206a at 418a. If listener 203a determines that the number of active or existing query threads 206a is below the preferred system limit, method 400a preferably proceeds to 424a.
  • If listener 203a determines that the number of active or existing query threads 206a meets or exceeds the threshold number of query threads 206a, method 400a preferably proceeds to 421a.
  • In response to having the preferred maximum number of allowed query threads 206a already existing or active, listener 203a may assign or designate a message queue associated with one or more of the active or existing query threads 206a for the current requesting application service. Subsequent to queuing the current data access request, the current requesting application service will preferably have its data access request held in the queue for processing until a query thread 206a becomes available.
  • listener 203a may be further configured to determine which queue will likely be able to process the data access request the soonest, for example. Further, listener 203a may also be configured to evaluate the quantity of data access requests or other operations pending in a queue of one or more existing query threads 206a and assign or designate the queue with the least amount of processing remaining as the queue for the current data access request. Following queuing of the data access request, method 400a preferably proceeds to 430a.
  • listener 203a may initiate one or more additional query threads 206a at 424a.
  • listener 203a may be required to generate an appropriately named pipe or query thread for use by the current requesting application service.
  • listener 203a may need only initiate an additional TCP enabled query thread 206a.
  • Upon designation of a query thread 206a at 427a, method 400a preferably proceeds to 430a.
  • listener 203a preferably passes or hands off the current requesting application service to the assigned or designated query thread 206a. Once the current requesting application service 212a has been handed off, method 400a preferably returns to 406a where listener 203a may loop or wait to receive the next data access request from one or more of application services 212a.
  • Methods 500a and 600a of FIGURES 5a and 6a preferably begin at 503a after hand-off of the current requesting application service and upon connection of the requesting application service to the query thread 206a. Following effective or communicative connection between the current requesting application service and its designated query thread, method 500a preferably proceeds to 506a.
  • the data access request generated by the current requesting application service may be evaluated to determine whether the data access request is seeking to store data to or retrieve data from common data memory object 209a.
  • a variety of interrogation routines may be used to make the determination of whether a store or retrieve operation is sought by the current requesting application service. If it is determined that the data access request seeks to retrieve data from common data memory object 209a, method 500a preferably proceeds to 509a. Alternatively, if it is determined that the data access request seeks to store information in common data memory object 209a, method 500a preferably proceeds to 603a of method 600a in FIGURE 6a.
  • the current structure or caching methodology of common data memory object 209a may be identified or determined at 509a.
  • the current structure or caching methodology of common data memory object 209a may be determined to enable data caching service 200a to initiate or call the routines necessary to process the current data access request. Accordingly, whether or not common data memory object 209a is currently caching data, is caching data in a single vector or is caching data in hashed data vector storage is preferably determined at 509a.
  • method 500a may proceed to 512a.
  • processing of the current data access request may be otherwise effected.
  • the current requesting application service and/or data access request may be referred to an external application for processing.
  • the data access request may be processed from a simple storage area implemented by common data memory object 209a.
  • method 500a preferably proceeds to 515a.
  • the hashing algorithm used to generate hash table 306a may be initiated.
  • the hashing algorithm initiated and used may be selected from default helper methods 318a, hashing algorithms 312a or from an alternate source.
  • the hashing algorithm employed may be determined or dictated by whether the data stored in one or more common data memory objects 209a of a data access object 303a is complex or simple.
  • method 500a preferably proceeds to 518a.
  • the current data access request and selected hashing algorithm may be employed to identify the data storage vector 309a likely to hold the data sought.
  • the subject data of the data access request may be hashed according to the hashing algorithm such that the data storage vector 309a in which the actual data would be stored if written to the current common data memory object 209a may be identified.
  • method 500a preferably proceeds to 521a.
  • a determination may be made as to whether a key assigned to the data sought in the data access request is simple or complex.
  • complex data is assigned a complex key generated by one or more complex data helper routines 321a.
  • Simple data may be assigned a key by one or more of default helper routines 318a. Depending on whether the assigned key is determined to be complex or simple, method 500a preferably proceeds to 524a or 527a, respectively.
  • a complex data key search helper routine may be called or initiated from complex data helper routines 321a.
  • a default search helper routine operable to search simple keys may be called or initiated from default helper routines 318a.
  • method 500a preferably proceeds to 530a.
  • selection of an optimum search methodology may be performed.
  • stored data may be searched via a list scan, binary search algorithm or otherwise. If the amount of data in a data storage vector 309a is below a certain level, a list scan may be the fastest search method available.
  • a binary split search may provide the quickest search results.
  • Alternative search methodologies may be employed and may depend on the system used, the data stored, as well as a number of other factors.
  • method 500a preferably proceeds to 533a where a search in accordance with the preferred search methodology may be performed. Upon performing the search at 533a, a determination regarding whether a match has been located may be performed at 536a.
  • method 500a may proceed from 536a to 512a where the current requesting application service may be otherwise processed, such as referred to an external application for additional processing. Alternatively, if at 536a a match is determined to have been located, method 500a may proceed to 539a where the data is preferably returned or communicated to the requesting application service from common data memory object 209a via the requesting application service's assigned query thread 206a.
  • After returning the requested data to the requesting application service at 539a, method 500a preferably proceeds to 542a where the current requesting application service may be polled or interrogated to determine whether one or more additional data access requests remain for processing. If it is determined that one or more additional data access requests remain to be processed at 542a, method 500a preferably returns to 506a where the next data access request may be processed. Alternatively, if it is determined that the current requesting application service contains no further data access requests at 542a, method 500a preferably proceeds to 545a where the query thread 206a and current requesting application service may be disconnected, freeing query thread 206a for communication with the next requesting application service and returning the current application service to its own processing operations.
  • a current data request may be otherwise processed at 512a.
  • the current requesting application service may be referred to one or more external routines for such processing. For example, if at 509a it is determined that common data memory object 209a is not presently caching data, for fulfillment of a data access request received from the current requesting application service, one or more external routines may be necessary to retrieve, store or otherwise maintain the object of the data access request. Alternatively, if upon completion of the optimum search methodology at 536a, the data sought to be accessed, stored or otherwise maintained by the current requesting application service has not been found, one or more external applications or services may be necessary for the data access request to be processed to completion.
  • method 500a proceeds to 512a
  • method 500a then preferably proceeds to 545a where query thread 206a and the current requesting application service may be disconnected as described above.
  • Referring now to FIGURE 6a, one embodiment of a continuation of method 500a is shown. Illustrated generally at 600a in FIGURE 6a is one embodiment of a method for storing data in accordance with teachings of the present invention.
  • a received data access request may be interrogated or evaluated to determine whether it contains a retrieve or store operation at 506a. If it is determined that the received data access request contains a store operation, method 500a preferably proceeds from 506a to 603a.
  • the current data storage structure or methodology employed by common data memory object 209a may be determined. Similar to the processing of a data retrieval request, the structure or methodology with which common data memory object 209a is currently storing data will generally dictate how data to be added may be stored.
  • method 600a preferably proceeds from 603a to 512a where the current requesting application service may be referred to an external service in accordance with the description above. However, if it is determined at 603a that common data memory object 209a is currently storing data in accordance with a single vector data storage method, method 600a preferably proceeds to 606a.
  • a determination may be made regarding the efficiency with which common data memory object 209a is currently maintaining data. Specifically, a determination is preferably made at 606a regarding whether the addition of the data sought to be stored in the current data access request suggests that a change in the current storage structure employed by common data memory object 209a should be effected. For example, according to teachings of the present invention, when a certain amount of data is to be shared by a plurality of application services 212a or processes, the data may be more efficiently shared by maintaining the data according to the hashed data vector storage method disclosed herein.
  • data processing system 100a, data access object 303a or common data memory object 209a may be adapted to recognize such an event and initiate a cache reformatting in an effort to increase data access efficiency.
  • Other thresholds from which a cache structure change may be suggested or intimated include, but are not limited to, hit ratios, seek return times, read times, as well as others.
  • method 600a preferably proceeds to 609a.
  • a reformatting of the current common data memory object 209a cache structure may be initiated.
  • a routine adapted to reconfigure the format of the current data cache may be initiated. The data will preferably be stored in the reformatted common data memory object 209a before method 600a proceeds to 639a.
  • If at 606a it is determined that the addition of data sought to be stored by the current data access request does not suggest a change in the format of common data memory object 209a, method 600a preferably proceeds to 612a.
  • the data to be stored may be evaluated for a determination regarding whether the data is complex or simple. If the data sought to be stored is complex, method 600a preferably proceeds to 615a where a complex key generation and assignment helper routine may be called or initiated before proceeding to 618a. Alternatively, if the current data access request seeks to store simple data, method 600a preferably proceeds to 618a.
  • a key is preferably generated and assigned for the simple or complex data in accordance with the appropriate key helper routine. For example, if there is complex data to be stored, such as a data structure having fields one (1) through ten (10), the complex key generation and assignment helper routine called at 615a may select data from one or more of the ten (10) fields to generate a key which will be used to store and sort the data as well as for comparisons in data searches.
  • the key may be defined using an offset and a length.
  • a simple key for the data may be defined beginning at character four (4) and carrying forward for ten (10) characters.
  • the simple key is preferably employed to store and sort the data within the data storage vectors.
  • the assigned keys are also preferably employed during data searches for comparison and matching purposes.
  • the data is preferably stored in the single vector maintained by common data memory object 209a. As mentioned above, the data is preferably positioned within the single storage vector according to its assigned key. After inserting the data at its appropriate location in the single vector at 621a, method 600a preferably proceeds to 639a.
  • method 600a preferably proceeds to 624a.
  • the data to be stored may be evaluated for a determination regarding whether the data is complex or simple. If the data sought to be stored is complex, method 600a preferably proceeds to 627a where a complex key generation and assignment helper routine may be called or initiated before proceeding to 630a. Alternatively, if the current data access request seeks to store simple data, method 600a preferably proceeds to 630a where a key is preferably generated and assigned for the simple or complex data in accordance with the appropriate key helper routine. The operations preferably performed at 624a, 627a and 630a may proceed in a manner similar to that described above at 612a, 615a and 618a, respectively.
  • After a key has been generated and assigned at 630a, method 600a preferably proceeds to 633a where the assigned keys may be hashed in accordance with a preferred hashing algorithm employed by data caching service 200a and common data memory object 209a. Employing the hashed key, hash table 306a and one of data storage vectors 309a, at 636a of method 600a, the data may be inserted into its appropriate storage location. Proceeding to 639a from either 609a, 621a or 636a, the current requesting application service may be polled or interrogated to determine whether any data access requests remain to be processed.
  • method 500a preferably proceeds from 639a to 506a where the additional data access requests may be processed in accordance with methods 500a and 600a of FIGURES 5a and 6a, respectively.
  • If it is determined at 639a that the current requesting application service has no additional data access requests for processing, method 600a preferably proceeds from 639a to 545a where the current requesting application service and its assigned or designated query thread 206a may be disconnected from one another.
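The retrieval and storage flow of methods 500a and 600a can be summarized in a brief sketch. The class below is an illustrative assumption, not the patent's actual implementation: the class and method names, the bucket count and the list-scan cutoff are all hypothetical. It hashes an assigned key to select a data storage vector, keeps each vector sorted by key, and chooses between a list scan and a binary split search depending on vector size, as described above.

```python
import bisect

# Hypothetical sketch of hashed data vector storage (hash table 306a over
# data storage vectors 309a); names and thresholds are assumptions.
class CommonDataMemoryObject:
    LIST_SCAN_LIMIT = 16  # assumed cutoff below which a list scan is fastest

    def __init__(self, num_vectors=8):
        # One sorted (key, value) vector per hash bucket.
        self.vectors = [[] for _ in range(num_vectors)]

    def _vector_for(self, key):
        # Hash the assigned key to select the data storage vector.
        return self.vectors[hash(key) % len(self.vectors)]

    def store(self, key, value):
        vec = self._vector_for(key)
        keys = [k for k, _ in vec]
        # Insert at the sorted position so the vector stays ordered by key.
        vec.insert(bisect.bisect_left(keys, key), (key, value))

    def retrieve(self, key):
        vec = self._vector_for(key)
        if len(vec) < self.LIST_SCAN_LIMIT:
            for k, v in vec:          # small vector: plain list scan
                if k == key:
                    return v
            return None
        keys = [k for k, _ in vec]    # larger vector: binary split search
        i = bisect.bisect_left(keys, key)
        if i < len(vec) and vec[i][0] == key:
            return vec[i][1]
        return None
```

A miss in `retrieve` corresponds to the referral at 512a, where an external application may fulfill the request.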

Abstract

A data processing system and method are disclosed. The system maintains a plurality of substantially continuously available service instances and includes a caching service having a listener and a plurality of query threads adapted to enable communications with a data type independent cache container. The service instances may be aggregated into groups of like services. A service control manager, an individual system service, organizes, manages and maintains the various service instances. The service control manager cooperates with a message queue manager, service queue manager and application services manager to distribute/process data and perform other management operations. The listener acknowledges requests for data access from services and connects requesting services to a respective query thread. Application services leverage data manipulation functions associated with the container to store/access data. The cache container is constructed using data storage vectors organized according to a hash table.

Description

A SYSTEM AND METHOD FOR PROCESSING HARDWARE OR SERVICE USAGE AND INTELLIGENT DATA CACHING
TECHNICAL FIELD OF THE INVENTION
The present invention relates generally to computer processing and, more particularly, to a system and method for high-volume data processing and the intelligent management of stored data.
BACKGROUND OF THE INVENTION
Many data processing systems operate according to rules-based management systems. The typical product of rules-based systems is a single data processing model containing comprehensive logic that represents all of the business rules for a particular line of business. Further, one consequence of such a program structure is that as each transaction or data record is received for processing, the comprehensive program will typically need to evaluate each business rule to determine its applicability to the current transaction or data record before performing any processing operations. As a result, processing transaction records becomes very hardware intensive and highly inefficient, especially in those rules-based program structures adapted to accommodate a wide variety of transaction record types. While rules-based management systems do provide for the easy incorporation of additional business logic, they typically do not achieve the levels of transaction or data record throughput desired and demanded by most high-volume data processing operations.
In many high volume data processing systems, a myriad of processes or programs are typically running, sometimes with multiple copies of the same process or program. To make the necessary data available to each of these processes, many system designs call for the data to be replicated for and stored by each process. Other designs simply place a single copy of the frequently used data in a shared memory space. Such data management techniques are generally ineffective and commonly cause significant system performance degradations.
While many efforts have been extended to resolve these data sharing issues, each has either failed or presents its own limitations which make it less than desirable. For example, the most recently used/least recently used methods for managing data in many database applications are too generic for the data lookups typically required in a high volume data processing system. In array storage, another attempted resolution, performance degradation stems from fixed array capacities and data wrapping. In vector classes, a related attempt, it is typically costly for the system to manipulate the vector's contents when such content surpasses a certain volume.
BRIEF DESCRIPTION OF THE DRAWINGS
A more complete understanding of the present embodiments and advantages thereof may be acquired by referring to the following description taken in conjunction with the accompanying drawings, in which like reference numbers indicate like features, and wherein:
FIGURE 1 is a schematic drawing generally depicting one embodiment of a data processing system incorporating teachings of the present invention and deployed in cooperation with a telecommunications network;
FIGURE 2 is a flow diagram depicting one embodiment of a method for operating a service-based architecture in a data processing system according to teachings of the present invention;
FIGURE 3 is a flow diagram illustrating one embodiment of a method for implementing a service control manager in a service-based architecture according to teachings of the present invention;
FIGURE 4 is a flow diagram illustrating one embodiment of a method for launching a service control manager according to teachings of the present invention; FIGURES 5-8 are flow diagrams illustrating one embodiment of a method for operating a service control manager in a service-based architecture according to teachings of the present invention;
FIGURE la is a perspective view, partially exploded, showing an embodiment of a data processing system incorporating teachings of the present invention;
FIGURE 2a is a block diagram illustrating one embodiment of a data caching service incorporating teachings of the present invention; FIGURE 3a is a block diagram illustrating one embodiment of a common data memory object incorporating teachings of the present invention;
FIGURE 4a is a flow diagram illustrating one embodiment of a method for implementing a data caching service incorporating teachings of the present invention; and
FIGURES 5a and 6a are flow diagrams illustrating one embodiment of a method for maintaining a common data memory object incorporating teachings of the present invention. DETAILED DESCRIPTION OF THE INVENTION
Preferred embodiments of the present disclosure and its advantages are best understood by referring to
FIGURES 1 through 8 and la through 6a of the drawings, like numerals being used for like and corresponding parts of the various drawings .
Referring first to FIGURE 1, a diagram depicting one environment in which teachings of the present invention may be implemented is shown. In one aspect, the present invention is directed to enhancing the efficiency with which a high volume data processing system can process data, for example, telecommunications hardware or service usage transaction records. However, teachings of the present invention may be employed in data processing environments other than telecommunications systems including, but not limited to, retail transaction systems, accounting systems, point-of-sale systems, shipping systems, data caching systems, as well as others. In a telecommunication system such as that illustrated generally in FIGURE 1, the present invention may be employed to process telecommunication hardware or service usage transaction records from such sources as POTS (plain old telephone system) telephone 103, wireless telephone 106, wireless communications enabled PDA
(personal digital assistant) 109 as well as computer system 112. Computer system 112 may communicate via a wireless or wireline Internet 113 as well as via other means. POTS telephone 103, wireless telephone 106 and PDA 109 may also be operable to communicate via Internet
113. In typical operation, when POTS telephone 103, wireless telephone 106, PDA 109 or computing system 112 is employed or used in its respective communicative capacity via Internet 113, wireline communications 115, or wireless communications 118, one or more telecommunications hardware or service usage transaction records may be recorded by a telecommunications service provider. Telecommunication switches 121, or a computing component operating therewith, in a typical operating scenario, preferably accumulates the telecommunications hardware or service usage transaction records generated in response to the use of one or more of communication devices 103, 106, 109 or 112. Once a number of transaction records have been accumulated, or a predetermined reporting time has arrived, for example, telecommunications switch 121 or a computing component cooperating therewith preferably sends the accumulated records via batch transactions transmission 127 to one or more of data processing systems 124. Alternatively, instead of awaiting batch transaction transmission 127, one or more of data processing systems 124 may request or otherwise obtain the batch transaction records 127 from telecommunications switch 121. The transfer of data between telecommunications switch 121 and one or more of data processing systems 124 may be implemented via Internet 113, wireline communications 115, wireless communications 118, or other communication technologies, according to teachings of the present invention.
As commonly employed, telecommunication switch 121 is responsible for handling large volumes of telecommunication hardware or service usage transaction records. With respect to the batch transaction record transmission indicated generally at 127, data processing systems 124 are typically responsible for processing large quantities of transaction records from a defined set of transaction record types. According to teachings of the present invention, the efficiency with which data processing systems 124 process such large volumes of telecommunication transaction records may be enhanced by implementing, executing or otherwise maintaining a plurality of processing services where each of the processing service instances is dedicated or specifically adapted to processing selected ones of the defined transaction record type set.
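The dedicated-service approach described above contrasts with the comprehensive rules engine of the background section: each record is routed directly to the service dedicated to its type, so no record is evaluated against unrelated business rules. The sketch below is illustrative only; the record layout and all names are assumptions made for this example, not the patent's actual implementation.

```python
# Hypothetical sketch of routing transaction records to type-dedicated
# processing services; record layout and names are assumed.
def make_dispatcher(services):
    """`services` maps a transaction record type to its dedicated service."""
    def dispatch(record):
        handler = services.get(record["type"])
        if handler is None:
            # Corresponds to initiating a new service for an unknown type,
            # or referring the record elsewhere; simplified here to an error.
            raise LookupError("no service instance for type %r" % record["type"])
        return handler(record)
    return dispatch
```

In use, a dispatcher built from a mapping such as `{"911": process_911, "DA": process_directory_assistance}` (hypothetical handler names) sends each record straight to the logic for its type.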
Although illustrated as tower computers, data processing systems 124 may assume a variety of forms. For example, data processing systems 124 may include a plurality of rack servers. In general, data processing systems 124 will preferably include one or more processors, memory, at least one communications port, one or more user input devices, one or more displays, as well as other components. Examples of data processing system 124 manufacturers include, but are not limited to, DELL, Hewlett Packard, Sun Microsystems and Apple.
In one embodiment of the present invention, as illustrated generally at 130, data processing systems 124 may enhance the efficiency with which they process transaction records or other data through the implementation of a service-based architecture. A service-based architecture may be defined as a data processing operation implemented such that it deploys a plurality of data processing service instances, where the service instances are dedicated to the processing of one or more selected transaction record types. One example of a service-based architecture implementation incorporating teachings of the present invention is depicted generally at 133 of FIGURE 1.
Service-based architecture implementation 133, according to teachings of the present invention, preferably includes a service control manager 136 generally operable to supervise, manage or otherwise maintain the plurality of dedicated transaction record type processing service instances running or executing on data processing system 124. In operation, service control manager (SCM) 136 preferably cooperates with, employs or otherwise implements the preferred functionality of one or more message queue managers (MQM) 139, service queue managers (SQM) 142 and one or more service application management modules (SVC) 145.
As mentioned above, the present invention preferably implements a service-based architecture in that a plurality of individual services make up the processing architecture. Specifically, SCM 136, MQM 139, SQM 142 and SVC 145 are each preferably services forming a portion of the service-based architecture. Further representing the service-based architecture of the present invention is the implementation, in one embodiment, of a plurality of service instances each of which preferably includes all of the logic necessary to process a selected transaction record type, e.g., 911 transaction records, directory assistance transaction records, credit card transaction records, etc. In a further implementation, the transaction record processing service instances may be adapted to process a limited plurality of selected transaction record types where the limited plurality of transaction record types require substantially similar processing logic.
In a further aspect, as discussed herein, the service or processing objects of the present invention may be accumulated into like-type groups or groupings. For example, one or more 911 transaction record processing service instances may be organized into a single group. It is these grouping for which the message queue management and service queue management functions discussed herein may optimally be employed. By organizing groups of one or more service instances according to like-type transaction record processing capabilities, data processing system 124 may more readily adapt to changing workloads by adding or subtracting service instances to or from the group. These and other aspects of the present invention will be discussed in greater detail below.
In a preferred embodiment, each SVC 145 preferably includes a minimum number of service instances 148 dedicated or configured to process selected transaction record types. For example, SVC 145 may desire to maintain at least three (3) instances of a dedicated transaction record type service adapted to process 911 emergency calls, a minimum number of service instances dedicated or adapted to process transaction records resulting from directory assistance calls, one or more service instances for processing calls placed during peak/off-peak times, as well as service instances dedicated to the processing of other transaction record types.
Further, the one or more service instances 148 dedicated or configured to process one or more selected record types may each operate in accordance with an operating schedule. As such, in one aspect, having one or more service instances that remain substantially continuously available suggests that such service instances remain substantially continuously available within their scheduled time slots of operation.
For example, service instances operable to process transaction records of type 'E' may be scheduled to execute or run Monday through Friday during the hours of 6:00 AM to 7:00 PM, while service instances operable to process transaction records of type 'F' may be scheduled to run every Thursday during the hours of noon to 6:00 PM. Therefore, during the Monday through Friday, 6:00 AM to 7:00 PM time slots, the service instances operable to process transaction records of type 'E' are preferably monitored and otherwise maintained such that they remain substantially continuously available for the desired period.
In further embodiments, selected service instances may be scheduled to run monthly, bimonthly, semi-annually, annually, etc., as well as not to run on selected dates or not to run on selected recurring dates. Each of the successive scheduled service instances is preferably assigned a priority to ensure its preferred processing. As will be discussed in greater detail below, the scheduling of the various service instances as well as any associated priority processing is preferably maintained by SCM 136.
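The operating schedules described above can be sketched briefly. The function name and the schedule representation below are illustrative assumptions; weekday numbering follows Python's convention (Monday is 0).

```python
from datetime import datetime

# A minimal sketch, under assumed names, of checking whether a service
# instance's scheduled time slot covers the current moment.
def is_in_schedule(now, weekdays, start_hour, end_hour):
    """True when `now` falls within the scheduled slot of operation."""
    return now.weekday() in weekdays and start_hour <= now.hour < end_hour

# Type 'E' services: Monday through Friday, 6:00 AM to 7:00 PM.
type_e_slot = ({0, 1, 2, 3, 4}, 6, 19)
```

Within its slot, an instance would be monitored and restarted as needed so it remains substantially continuously available; outside the slot, it need not run.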
In the event a transaction record is received for which a service instance 148 is not currently active, available or running, SCM 136, MQM 139, SQM 142 and/or
SVC 145 may be adapted to initiate a transaction record processing service adapted to process the record type. The transaction record processing service may be called upon receipt of such transaction records, according to a defined schedule, in response to accumulation of a number of such transaction records, or otherwise. SQM 142, in one aspect of the present invention, is preferably adapted to adjust or otherwise manage one or more service queues associated with a corresponding service instance 148. Service queues may be defined as queues which are operable to maintain for processing transaction records received from a message queue associated with the grouping of like-type transaction record processing service instances. Among the operations preferably effected by SQM 142 is the balancing of queued transaction records across or among the active service instances available for processing the specific transaction record types, e.g., across the grouped instances of 911 transaction record processing services. SQM 142 may be further adapted to perform additional operations. MQM 139 preferably cooperates with SQM 142, in part, to further increase the efficiency with which the various service instances process queued transaction records. Among other operations, MQM 139 preferably prepares, readies or otherwise initiates one or more services adapted to manipulate the message queues associated with the one or more groups or groupings of service instances directed to processing a selected transaction record type. In one aspect, MQM 139 may be adapted to prioritize messages, e.g., transaction messages, service instance cancellation messages, etc., within associated service queues. MQM 139 may also be adapted to initiate additional service instances in response to a backlog in one or more message or service queues, as well as to execute or cancel other services according to a service schedule.
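The queue balancing attributed to SQM 142 above might be sketched as follows. The least-loaded assignment policy and all names here are illustrative assumptions rather than the patent's actual algorithm.

```python
# Hypothetical sketch of balancing queued transaction records across the
# active service instances of a like-type group; the least-loaded policy
# is an assumption made for illustration.
def balance(records, instance_queues):
    """Append each record to the currently shortest service queue."""
    for record in records:
        min(instance_queues, key=len).append(record)
    return instance_queues
```

Distributing work by current queue depth (rather than a fixed round-robin) keeps a slow instance from accumulating a backlog while its siblings sit idle.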
Further detail regarding the operation and cooperation of SCM 136, MQM 139, SQM 142 and SVC 145 will be discussed below.
In the telecommunication transaction record processing system example of FIGURE 1, data processing system 124 may be employed to perform one or more preprocessing operations. For example, the batch transaction records processed by data processing systems 124 may subsequently be passed on to a billing statement process or other subsequent transaction record processing operation 154.
Referring now to FIGURE 2, a flow diagram illustrating one embodiment of a method for implementing a service-based architecture is indicated generally at 200. According to teachings of the present invention, implementation of a service-based architecture preferably includes maintaining a plurality of service instances operable to process selected types of transaction records. Preferably, the plurality of service instances are available on one or more of data processing systems 124, as indicated at 203, on a substantially continuous basis. For example, one or more of data processing systems 124 may substantially continuously execute or run one or more telecommunication service or hardware usage transaction record processing services for each of transaction record types 'A'-'M', for a total of (n) record type services. At 206, a service-based architecture incorporating teachings of the present invention preferably enables the queuing of transaction records according to transaction record type to a queue associated with a respective dedicated transaction record type processing service instance or grouping of service instances. For example, each type 'E' transaction record will preferably be queued to a message or service queue associated with a group having one or more service instances directed or dedicated to processing type 'E' transaction records.
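The queuing step at 206 can be sketched as follows; the record layout and function name are assumptions made for this example only.

```python
from collections import defaultdict

# Illustrative sketch of queuing transaction records by type, so that each
# record lands in the queue for its type's dedicated service group.
def queue_by_type(records):
    """Place each record in the message queue for its transaction type."""
    queues = defaultdict(list)
    for record in records:
        queues[record["type"]].append(record)
    return queues
```

Each resulting per-type queue would then feed the group of service instances dedicated to that type, where the balancing step at 209 takes over.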
At 209, when a transaction record type processing group or grouping includes a plurality of service instances, the present invention preferably enables the queued transaction records to be balanced across or among the plurality of service instances such that more efficient data processing may be achieved. Initial balancing and balancing as-needed generally enhances the efficiency with which data processing system 124 manages the service instances it maintains and completes its designated tasks.
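The balancing at 209 may, under one simple assumption, amount to draining a grouping's queues and refilling them round-robin. The sketch below is illustrative only; the patent does not prescribe a balancing algorithm, and `balance` is a name introduced here:

```python
from collections import deque
from itertools import cycle

def balance(queues):
    """Redistribute queued records evenly across a grouping's instance queues."""
    pending = [record for q in queues for record in q]   # drain all queues
    for q in queues:
        q.clear()
    for q, record in zip(cycle(queues), pending):        # round-robin refill
        q.append(record)
    return queues

# Three instances of one service type; all pending work sits on instance 0.
instance_queues = [deque([1, 2, 3, 4, 5]), deque(), deque()]
balance(instance_queues)
```

After balancing, no instance queue differs from another by more than one record, so the grouping's instances finish at roughly the same time.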
At 212, the queued transaction records may be processed in accordance with their associated transaction record processing service. Upon completion of the processing operations, the processed transaction records may then pass to one or more subsequent operating centers, such as a billing operations center 154 of FIGURE 1. As mentioned above, the present invention may be employed in a variety of data processing systems. In one possible system, telecommunications system 100, transaction records may be created and received in a variety of formats. One format which is typically produced in a telecommunications system such as system
100 is the Automated Message Accounting (AMA) format from Telcordia, which is employed by most telecommunications carriers. Other formats, such as EMI (Exchange Message Interface) records, may also be employed with the present invention. Further, the present invention may be employed with other transaction record formats, such as a format developed to track downloads or other operations performed by computer 112 via the Internet, private network or other network connections 113.
Referring now to FIGURE 3, a flow diagram depicting one embodiment of a method for implementing SCM 136 is shown. In method 300 of FIGURE 3, after initialization of data processing system 124 at 303, SCM 136 may be launched at 306. Additional details regarding the launch of SCM 136 are discussed with reference to FIGURE 4. Once SCM 136 has been launched, method 300 preferably proceeds to 309 where one or more service instances currently or recently operating on data processing system 124 may be recovered, preferably by SCM 136. After completing its recovery operations, SCM 136 preferably enters its normal operating mode at 312. SCM 136 may be configured such that it returns to
309 or 312 after completing its normal operating procedures at 315. Alternatively, SCM 136 may be adapted to rest or sleep for a predetermined period after the passing of which it may return to a prior operation, e.g., 309 or 312, in method 300. Additional detail regarding the launch, recovery, and normal operating mode of SCM 136 will be discussed in greater detail below with respect to FIGURES 4-8.
In general, FIGURES 4-8 disclose one embodiment of a preferred operating cycle of a data processing system incorporating teachings of the present invention. It should be understood that FIGURES 4-8 primarily discuss one method but that many variations may be made without departing from the spirit and scope of the present invention.
Referring now to FIGURE 4, one embodiment of the operations preferably associated with the launch of SCM 136 is shown generally at 400. As illustrated in FIGURE 4, after the launch of SCM 136 at 403, method 400 may proceed to 406 where SQM 142 is preferably launched. In one embodiment, SQM 142 preferably includes one or more services which may be called from SCM 136 or from one or more other data processing system 124 services. The services included in SQM 142 are preferably operable to manipulate, maintain or otherwise manage one or more service queues associated with the active or existing transaction record processing service instances. In addition, the services of SQM 142 may also be adapted to provide information regarding one or more aspects of the transaction record processing service queues. For example, one or more services of SQM 142 may be operable to determine the number of transaction records remaining in a service queue, determine the throughput of a service queue, as well as perform other operations.
At 409 of method 400, MQM 139 may be launched. According to teachings of the present invention, MQM 139 preferably includes one or more services which may be called from SCM 136 or from one or more transaction record processing service instances. The one or more services included in MQM 139 are preferably adapted to manipulate or provide information regarding the one or more message queues associated with the groupings of or individual service instances running in the service-based architecture of the present invention. In one aspect, MQM 139 is preferably operable to prioritize transaction records queued for processing. For example, MQM 139 may be enabled to reorder a message or service queue to cause priority designated transaction records to be processed first and the remaining transaction records to be processed according to a FIFO (first-in-first-out) or other method. In another aspect, MQM 139 may be adapted to initiate additional service instances if it is determined that the number of transaction records in a service or message queue for a given record type service group exceeds a desired performance parameter. More detail regarding the operation and cooperation of SCM 136, MQM 139, SQM 142 and SVC 145 is described below. SVC 145 may be launched at 412 of FIGURE 4. MQM 139, SQM 142 and SVC 145 may be launched in any order or substantially simultaneously, according to teachings of the present invention. SVC 145 preferably includes one or more services that may be called upon to serve one or more transaction record processing needs of data processing system 124. Each of the services included in SVC 145 preferably includes all of the logic necessary to process the transaction record types associated with a particular service. After the launch of MQM 139, SQM 142 and SVC 145, method 400 preferably proceeds to method 500 of FIGURE 5.
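The queue reordering MQM 139 may perform (priority-designated records first, the remainder in FIFO order) can be sketched as a stable partition. This is an illustrative sketch under assumed message shapes; the `priority` flag and `prioritize` name are introduced here, not taken from the disclosure:

```python
from collections import deque

def prioritize(queue):
    """Stable reorder: priority-flagged messages first, the rest in FIFO order."""
    urgent = [m for m in queue if m.get("priority")]
    normal = [m for m in queue if not m.get("priority")]
    queue.clear()
    queue.extend(urgent + normal)   # relative order within each class preserved
    return queue

q = deque([{"id": 1}, {"id": 2, "priority": True}, {"id": 3}])
prioritize(q)
```

Because the partition is stable, non-priority records still process in arrival order once the urgent records drain.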
Beginning at 503, after launch or initiation as described above, SCM 136 preferably initiates and effects the recovery of any stalled service instances. A stalled service instance may include, but is not limited to, those service instances which have ceased processing for reasons other than a command to terminate. Recovery of a stalled service instance may involve restarting the service instance and its associated service queue, removing any unprocessed transaction records from the instance's service queue for processing by another service instance, as well as performing other operations. Upon completion of stalled service instance recovery at 503, method 500 preferably proceeds to 506.
At 506, SCM 136 preferably effects the recovery of any terminated transaction service instances. Terminated service instances which SCM 136 may desire to recover include, for example, any service instances or queues which were terminated before completing processing of one or more queued transaction records. Other definitions of recovered terminated service instances may be employed in the present invention. In addition, other termination events may be addressed and recovered by SCM 136. Upon completion of terminated service instance recovery at 506, method 500 preferably proceeds to 509.
At 509, SCM 136 preferably begins a process of verifying all existing or active service instances. As mentioned above, there is a preference that a data processing system 124 operating in accordance with teachings of the present invention maintain, in addition to SCM 136, MQM 139, SQM 142 and SVC 145, a minimum number of service instances adapted to the processing of one or more selected transaction record types, running or executing substantially continuously. Accordingly, at 509, SCM 136 preferably determines the number of active instances of each transaction record service type currently available on data processing system 124.
For each type of transaction record service instance counted at 509, SCM 136 preferably compares the total number of such active service instances to its respective preferred minimum number of service instances selected to remain substantially continuously available on data processing system 124 at 512. If it is determined that the total number of active instances of the particular transaction type service under evaluation matches the preferred minimum number of such service instances selected to remain substantially continuously available at 512, method 500 preferably proceeds to 515. If it is determined at 512 that the number of active instances of the current transaction type service is below the preferred number of service instances for such service type, method 500 preferably proceeds to 518. Finally, if it is determined at 512 that too many active instances of the current service type are available, method 500 preferably proceeds to 521.
At 518, in response to too few instances of a particular transaction type service, SCM 136 will preferably initiate an additional number of transaction record service instances to bring the number of active service instances into accordance with the preferred minimum. Following initiation of the additional transaction record service instances at 518, method 500 preferably proceeds to 515. Alternatively, at 521, in response to a determination of too many active service instances of a given type, SCM 136 may initiate the cancellation of one or more of the excess service instances. Once an appropriate number of transaction record service instances have been cancelled at 521, method 500 preferably proceeds to 515. At 515, a determination may be made, preferably by SCM 136, as to whether all types of currently available or active transaction record service instances have been reviewed for their compliance with their respective preferred minimums. In the event all types of currently available transaction record service instances have not been reviewed, method 500 preferably returns to 509 where the next group of instances of a particular transaction record service type may be evaluated for its compliance with a corresponding number of service instances preferably substantially continuously available. Alternatively, if it is determined at 515 that all currently available service types have been reviewed for their compliance with their respective preferred numbers of active service instances, method 500 preferably proceeds to 524.
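The comparison at 512 and the corrective actions at 518/521 reduce to a per-type reconciliation against a preferred minimum. The sketch below is illustrative; the service-type labels and the `reconcile` helper are assumptions, not names from the disclosure:

```python
def reconcile(active, preferred_min):
    """Return how many instances to initiate (+) or cancel (-) per service type."""
    actions = {}
    for stype, minimum in preferred_min.items():
        delta = minimum - active.get(stype, 0)
        if delta != 0:
            actions[stype] = delta   # positive: initiate (518); negative: cancel (521)
    return actions

# Too few 911 instances, too many long-distance ('LD') instances.
actions = reconcile(active={"911": 1, "LD": 5},
                    preferred_min={"911": 3, "LD": 2})
```

A type already at its preferred minimum produces no entry, corresponding to the direct path from 512 to 515.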
At 524, SCM 136 may determine whether each of the transaction record service types selected to remain substantially continuously available on data processing system 124 is active and/or available. If it is determined at 524 that one or more of the dedicated transaction record service types selected to remain substantially continuously available is not currently available, method 500 preferably proceeds to 527 where the preferred minimum number of instances for a currently unavailable transaction record service type may be identified. Proceeding then to 530, the minimum number of service instances for the unavailable service type identified at 527 may be activated, initiated or otherwise made available. Upon initiation of the currently inactive transaction record service instances at 530, or upon a determination at 524 that each of the transaction record service types selected to remain substantially continuously available is active, method 500 preferably proceeds to 533.
At 533, operation of the queues, message and/or service, associated with each of the groupings of one or more transaction record service instances may be paused or placed on hold. Holding each of the queues associated with the dedicated transaction record service instance groupings may occur substantially simultaneously, substantially sequentially, or otherwise. Once each of the queues has been paused or placed on hold, any queued transaction records may be balanced or distributed across the service instances of each corresponding service type grouping at 536. As mentioned above, queue management may be effected by SQM 142 or MQM 139, preferably as instructed or coordinated by SCM 136. Further, in holding or pausing the service or message queues, processing by the service instance of a current transaction record may also be paused, permitted to complete, or allowed to continue unabated.
After balancing or redistribution of the queued transaction records at 536, the service instances designated for cancellation at 518 may be appropriately cancelled at 539. Cancellation of one or more service instances may occur, for example, by SCM 136 inserting in a service queue associated with each service instance to be cancelled a cancellation instruction whereby the service instance will be cancelled or ended upon completion of its current service queue load and the subsequent processing of the cancellation instruction.
Other methods for canceling a service instance may be employed without departing from the spirit and scope of the present invention. Upon redistribution or balancing of the queued transaction records and the insertion of any desired cancellation instructions at 536 and 539, respectively, the queues are preferably released such that processing of the transaction records may resume at 542.
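The cancellation mechanism described at 539, where SCM 136 inserts a cancellation instruction into a service queue so the instance ends after draining its current load, is essentially the familiar sentinel (or "poison pill") pattern. A minimal illustrative sketch, with the sentinel object and `run_instance` helper introduced here as assumptions:

```python
from collections import deque

CANCEL = object()   # sentinel standing in for the cancellation instruction

def run_instance(queue, handler):
    """Process queued records; stop when the cancellation instruction is read."""
    while queue:
        item = queue.popleft()
        if item is CANCEL:
            return "cancelled"   # instance ends after its prior queue load
        handler(item)
    return "drained"

processed = []
q = deque([10, 20])
q.append(CANCEL)     # SCM inserts the cancellation instruction behind current work
status = run_instance(q, processed.append)
```

Because the sentinel sits behind the existing records, no queued transaction record is lost when the instance is cancelled.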
Following release of the queues and the return to processing of the service instances, SCM 136 preferably updates a service registry to indicate each instance, type and other information regarding the services running on data processing system 124 at 545. As suggested, a service registry operating in accordance with the service-based architecture of the present invention may track the varied types of service instances active on data processing system 124, the number of instances in each service type grouping, a termination status for one or more service instances or groups, when one or more service instances or groups will awake from a sleep state, as well as other information. After updating the service registry as desired at 545, SCM 136 preferably proceeds to 548 where it may loop, sleep or remain in a wait state until its next processing period, event or occurrence.
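One plausible shape for the service registry at 545, keyed by service type with per-instance status, is sketched below. The structure and field names are assumptions for illustration; the patent leaves the registry's layout open:

```python
registry = {}

def register(stype, instance_id, status="active"):
    """Record a service instance's type and status in the service registry."""
    registry.setdefault(stype, {})[instance_id] = status

register("911", "911-0")
register("911", "911-1")
register("EMI", "emi-0", status="sleeping")   # e.g., awaiting its next wake time
```

A registry of this shape lets SCM 136 answer both "how many active instances of type X exist?" and "what is instance Y doing?" with one lookup.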
From loop or wait state 548, upon arrival of the next SCM 136 processing period, method 500 preferably proceeds to 551 where it may be determined whether data processing system 124 is in a recovery mode, other than the 'recovery mode' which occurs at initialization, or its normal operating mode. If it is determined that data processing system 124 is in a recovery mode, such as a recovery mode resulting from one or more system malfunctions, software updates, etc., method 500 preferably returns to 503 where SCM 136 may begin its service recovery operations. Alternatively, if it is determined that data processing system 124 is in a normal operating mode, method 500 preferably proceeds to method 600 of FIGURE 6.
In one embodiment of the present invention, as will be discussed below, the 'normal operating mode' of data processing system 124 and SCM 136, as depicted generally in FIGURES 6, 7 and 8, repeats or includes many of the operations preferably performed during the 'recovery mode' illustrated generally at 500 in FIGURE 5. It should be understood that operations other than those of the recovery mode 500 may be incorporated into a 'normal operating mode' without departing from the spirit and scope of the present invention.
At 603 of method 600, SCM 136 preferably identifies those dedicated transaction record service types selected to remain substantially continuously available on data processing system 124. The services selected to remain substantially continuously available may be maintained by a service registry, by setting a bit on one or more service calls, in a data file or in another manner useful to data processing system 124 and SCM 136.
At 606, SCM 136 preferably checks or counts the number of active instances for each of the service types selected to remain substantially continuously available. In one embodiment, an active service instance may be defined as a service instance currently processing one or more transaction records, a service instance substantially immediately available to process one or more transaction records or a service instance awaiting receipt of one or more transaction records for processing. Other definitions or descriptions of an active service instance may be used without departing from the spirit and scope of the present invention.
At 609, after obtaining an active service instance count at 606, the preferred number of active instances for each service type selected to remain substantially continuously available may be obtained by SCM 136. Subsequently, SCM 136 may also determine whether the number of active service instances for each selected service type is in accordance with the preferred number of service instances for that particular service type. If it is determined at 609 that the number of active service instances for a particular service type is in accordance with the desired minimum number of service instances, method 600 preferably proceeds to 612.
However, if at 609 it has been determined that the number of active service instances for a particular service type is not in accordance with the preferred number of such service instances, method 600 preferably proceeds to 615 where SCM 136 may interrogate data processing system 124 to ascertain the presence of any stalled instances of the current service type.
At 612, a determination may be made as to whether all service types selected to remain substantially continuously available have been verified or checked for their compliance with the preferred number of such service instances. If all selected service types have not been checked, method 600 preferably returns to 606 where the next service type identified as a service selected to remain substantially continuously available may be evaluated for compliance with its preferred number of instances. Alternatively, if all selected service types have been evaluated, method 600 may proceed to method 700 of FIGURE 7.
At 615, if SCM 136 identifies any stalled instances of the current service type, method 600 preferably proceeds to 618 where SCM 136 may initiate recovery of such stalled service instances before returning to 609.
Alternatively, if it is determined at 615 that there are no stalled instances of the current service type, method
600 may proceed to 621 where SCM 136 may interrogate data processing system 124 to ascertain whether there exists any improperly terminated instances of the current service type.
If SCM 136 determines at 621 that data processing system 124 contains no improperly terminated instances of the current service type, method 600 may proceed to 624 where SCM 136 preferably initiates an appropriate number of instances for the current service type.
Alternatively, should SCM 136 identify one or more improperly terminated instances of the current service type at 621, method 600 preferably proceeds to 627 where
SCM 136 may initiate the recovery of the improperly terminated instances.
Upon initiation of an appropriate number of current service type instances at 624, or upon completion of the recovery of any improperly terminated instances of the current service type at 627, method 600 preferably returns to 612 where, as mentioned above, SCM 136 may determine whether each of the service types selected to remain substantially continuously available has been evaluated. Once all services selected to remain substantially continuously available have been evaluated in accordance with method 600, method 600 preferably proceeds to method 700 of FIGURE 7.
Illustrated generally at 700 in FIGURE 7 is a flow diagram depicting one manner in which teachings of the present invention may enhance the efficiency with which data processing system 124 processes transaction records received, for example, from telecommunications switch 121. In general, SCM 136 preferably timely monitors the processing operation of each service type grouping to ensure that each is performing in accordance with one or more system performance parameters. Accordingly, method 700 can be understood to run as sequenced in FIGURES 3-8. Alternatively, method 700 can be understood to run appropriately after receipt of batch transaction transmission 127 and upon subsequent distribution or allocation of the transaction records to their respective transaction record service type instances or transaction type groups.
After receipt and distribution or allocation of the transaction records received from telecommunications switch 121, or upon completion of method 600, SCM 136 preferably begins method 700 at 703. At 703, SCM 136 may compare the number of transaction records in one or more message or service queues associated with a selected transaction type service group to a preferred system performance parameter, such as a queue throughput or queue volume. As mentioned above, MQM 139 and SQM 142 are preferably adapted to manipulate and provide information on message and service queues, respectively. If at 703 SCM 136 determines that the number of transaction records in a queue associated with a group or individual instance of a particular dedicated transaction type service exceeds a preferred system performance parameter, method 700 may proceed to 706. Alternatively, if SCM 136 determines that the number of queued transaction records associated with a particular transaction type service group or individual service instance matches or is below the preferred system performance parameter, method 700 may proceed to 709.
At 706, SCM 136 may again interrogate data processing system 124 to determine whether any stalled or improperly terminated service instances or queues associated with the transaction type under evaluation exist. If one or more stalled or improperly terminated service instances, groupings or queues are identified, method 700 preferably proceeds to 712 where the improperly terminated or stalled groups, queues or instances may be recovered before returning to 703. In an alternate embodiment, upon recovery of any improperly terminated or stalled queues, groups or instance, method 700 may instead proceed from 712 to 721 for processing as described below.
If it is determined at 706 that there exist no stalled or improperly terminated service queues or instances, method 700 may proceed to 715 where the total number of active service instances adapted to process the current transaction type may be compared to a preferred system limit on such service type instances. To avoid strangling data processing system 124 when an unexpectedly large number of transaction records of a particular type are received, the present invention may incorporate a limit to the number of instances of any one service type that may be running at one time. If the maximum number of allowed service instances is not exceeded at 715, method 700 preferably proceeds to 718 where one or more additional service instances may be initiated. From 718, method 700 will preferably proceed to 721. At 721, 724 and 727, operations similar to the balancing or redistribution operations discussed above with reference to FIGURE 5 may be effected. Specifically, each of the queues associated with the current service type being reviewed may be paused or held at 721. Once the selected queues have been paused or held, the transaction records remaining in the queues to be processed may be substantially equally distributed across each of the active service queues and/or instances for the current transaction type at 724. After distribution or balancing, the queues and service instances may be released for normal processing operation at 727. If all service types have been balanced, as determined at 733, method 700 may proceed to method 800 of FIGURE 8; otherwise, method 700 preferably returns to 703.
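The scale-up decision at 715/718, growing a grouping toward a backlog-clearing size but never past the system-wide cap, can be sketched as a capped ceiling computation. An illustrative sketch only; the per-instance throughput figure and the helper name are assumptions:

```python
def instances_to_add(queue_depth, per_instance_limit, active, max_instances):
    """Instances needed to clear a backlog, capped by the per-type system limit."""
    needed = -(-queue_depth // per_instance_limit)   # ceiling division
    return max(0, min(needed, max_instances) - active)

# 950 queued records, each instance comfortably handles ~100, 4 already
# active, and the system caps this service type at 8 instances.
extra = instances_to_add(queue_depth=950, per_instance_limit=100,
                         active=4, max_instances=8)
```

The `min(..., max_instances)` term is what keeps an unexpectedly large batch from strangling the system, per the limit described above.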
At 709, in response to a queue being in accordance with one or more system parameters, SCM 136 may compare the number of available service instances for processing the current transaction record type to a minimum number of such service type instances selected to remain substantially continuously available on data processing system 124. If the number of such available service instances is in accordance with the preferred minimum, method 700 may proceed to 733 for performance of the operations described above. However, if it is determined at 709 that the number of such service instances exceeds the minimum number of service type instances selected to remain substantially continuously available, method 700 may proceed from 709 to 730 where one or more of the service instances may be terminated upon completion of its current processing load. In one aspect, eliminating excess service instances may free up system resources such that additional instances of other service types may be initiated or such that data processing system 124 may dynamically adjust its resource consumption. Method 700 may then proceed to 733 for processing as described above.
In an alternate embodiment of the present invention, before method 700 proceeds from 709 to 730, balancing operations similar to those performed at 721, holding the queues, and 724, distributing the transaction records across active service instances and queues, may be effected. Further, before method 700 proceeds from 730 to 733, a balancing operation similar to that of 727, releasing the held queues, may be performed. Adding operations similar to those of 721, 724 and 727 preferably effects the balancing of the transaction record loads in each queue associated with each service type instance or service type grouping. Still other embodiments of method 700 may be incorporated in the present invention without departing from its spirit and scope.
Method 800 of FIGURE 8 generally illustrates a scheduling capability preferably implemented by SCM 136 and MQM 139, according to teachings of the present invention. In one embodiment of the present invention, SCM 136 may cooperate with MQM 139 to effect as-needed operation of one or more service types. In an alternate embodiment, MQM 139 may perform the bulk of the operations necessary to effect as-needed service calls. As-needed service operation may include, but is not limited to, initiating or canceling service instances according to a service schedule and initiating service instances in response to receipt of one or more transaction types for which a service instance is selected to remain substantially continuously available.
At 803, SCM 136 or MQM 139 may access a system clock or other hardware available on data processing system 124 to determine the current time. With a determination of the current time, SCM 136 may proceed to 806 for the review of a service initiation or cancellation schedule to determine if one or more events are scheduled for execution. Alternatively or in addition, SCM 136 or MQM 139 may consult a queue holding transaction records for which there is no active service instance available.
If the time has arrived for one or more services to be initiated or cancelled, or a number of transaction records without an active service instance available to process them have accumulated, method 800 preferably proceeds to 809 where SCM 136 may take the appropriate initiation or cancellation action. Upon effecting the scheduled events identified at 806 and initiated at 809, or in response to an absence of scheduled events, method 800 preferably returns to 548 of FIGURE 5 where SCM 136 may loop or pause in a wait state.
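The schedule review at 806 amounts to comparing the current time against each scheduled initiation or cancellation event. A minimal illustrative sketch, with the event layout and `due_events` name assumed for the example:

```python
def due_events(schedule, now):
    """Return scheduled initiation/cancellation events whose time has arrived."""
    return [event for event in schedule if event["at"] <= now]

# Times here are arbitrary ticks from the system clock read at 803.
schedule = [
    {"at": 100, "action": "initiate", "service": "911"},
    {"at": 200, "action": "cancel", "service": "EMI"},
]
ready = due_events(schedule, now=150)   # only the 911 initiation is due
```

Each event returned would be acted on at 809; the rest remain pending for the next pass through method 800.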
Although the present invention and its advantages have been described in detail, it should be understood that various changes, substitutions, and alterations may be made hereto without departing from the spirit and scope of the invention as defined by the following claims. For example, the active service instances may be adapted to sleep upon completion of queue processing; queue balancing may be effected in a manner that takes into account the complexity of transaction records waiting in each queue; the various benchmarks, metrics, performance parameters, throughput measures, etc., may be preset values or dynamically determined according to assorted processing characteristics over time; data, program or other information refreshes may be implemented in conjunction with the varied transaction record processing disclosed herein; and the data processing system of the present invention may be implemented over a number of computer systems spanning a number of processing centers. Still other alterations are possible.

Referring now to FIGURE 1a, one embodiment of a data processing system 100a operable to implement teachings of the present invention is shown. As illustrated, data processing system 100a may be implemented as a rack server. However, tower servers, mainframe servers as well as other configurations may be employed.
Accordingly, data processing system 100a may be produced by such computer manufacturers as DELL, Hewlett Packard, Sun Microsystems, International Business Machines, Apple as well as others. In the embodiment of data processing system 100a illustrated in FIGURE 1a, a number of common computing components may be compiled to create a computing device capable of processing various data types, preferably in large quantities, either on its own or in conjunction with a plurality of other data processing systems 100a. As illustrated, data processing system 100a of FIGURE 1a preferably includes one or more HDD (Hard Disk Drive) devices 103a and may include, but is not limited to, a FDD (Floppy Disk Drive) device 106a and a CD/RW (Compact Disc/Read Write) device 109a.
Also preferably included in data processing system 100a is power supply 112a. Power supply 112a may provide power to the components of data processing system 100a via power connections 115a. These and other computing components may be integrated into chassis 118a. Other computing components may be added and some of the aforementioned computing components removed without departing from teachings of the present invention.
At the heart of data processing system 100a, as in most computing devices, is motherboard or system board 121a. System board 121a typically electrically or communicatively interconnects, often with the aid of cables 124a, the various computing components. Preferably included among a variety of ASICs (Application Specific Integrated Circuit) on system board 121a, such as ASICs (not expressly shown) operable to enable and control communication or I/O (Input/Output) ports 127a, are one or more central processing units (CPU) or processors 130a. Processors 130a may include one or more processors from such manufacturers as Intel, Advanced Micro Devices, Sun Microsystems, International Business Machines, Transmeta and others. System board 121a may also include one or more expansion slots 133a adapted to receive one or more riser cards, one or more expansion cards adapted to enable additional computing capability as well as other components. A plurality of memory slots 136a are also preferably included on system board 121a. Memory slots 136a are preferably operable to receive one or more memory devices 139a operable to maintain program, process or service code, data as well as other items usable by processors 130a and data processing system 100a. Memory devices 139a may include, but are not limited to, SIMMs (Single In-line Memory Module), DIMMs (Dual In-line Memory
Module), RDRAM (Rambus Dynamic Random Access Memory), as well as other memory structures. According to teachings of the present invention, memory devices 139a may alone, or with the aid of HDD device 103a, FDD device 106a, CD/RW device 109a, or other data storage device, implement or otherwise effect one or more data type independent binary data cache containers 142a.
As discussed in greater detail below, data cache containers 142a are preferably operable to intelligently cache data used by a plurality of services, programs or applications running on data processing system 100a. Data cache containers 142a preferably organize common data shared by a number of active processes, programs or services using a hashed data storage vector scheme. In general, data cache containers 142a preferably index a plurality of data storage vectors according to a hash table 145a.
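The hashed data storage vector scheme can be sketched as a fixed-size hash table whose buckets are vectors of key/value entries. This is a minimal illustrative sketch of the general technique; the class name, bucket count and key names are assumptions, and the disclosed containers are data type independent in ways this sketch does not attempt to capture:

```python
class CacheContainer:
    """Sketch: index data storage vectors through a fixed-size hash table."""

    def __init__(self, buckets=64):
        self.table = [[] for _ in range(buckets)]   # hash table of storage vectors

    def put(self, key, value):
        vector = self.table[hash(key) % len(self.table)]
        for entry in vector:
            if entry[0] == key:
                entry[1] = value          # update the existing entry in place
                return
        vector.append([key, value])       # otherwise extend the storage vector

    def get(self, key):
        for k, v in self.table[hash(key) % len(self.table)]:
            if k == key:
                return v
        return None                       # cache miss

cache = CacheContainer()
cache.put("rate.911", 0.25)
cache.put("rate.911", 0.30)   # second put updates rather than duplicates
```

Hashing the key selects the storage vector directly, so lookups scan only one short vector rather than the whole cache.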
Referring now to FIGURE 2a, a block diagram depicting a common data caching service incorporating teachings of the present invention is shown generally at 200a. Common data cache service 200a preferably includes listener 203a, one or more query threads 206a, and common data memory object 209a, among other components.
In a preferred embodiment, common data cache service 200a is preferably operable to access, store or otherwise maintain information for a plurality of application programs, processes or services 212a running or executing on data processing system 100a. Application services 212a may include a wide variety of data processing services as well as multiple instances of one or more of such data processing services. For example, in a telecommunications hardware and service transaction record processing environment, application services 212a may include a variety of transaction record service instances adapted to process a selected transaction record type. In this environment, which may employ a service-based architecture, common data cache service
200a will preferably entertain all data access requests from the myriad service instances, with each service instance maintaining only the logic necessary to know what data is needed to effectively process its designated transaction record type and where to request access to such data. Common data cache service 200a may also, especially in a service-based architecture where common data cache service 200a is one of many service instances, provide segregated or private areas of memory or storage for use by designated ones of application services 212a. Such private memory areas may be effected in a variety of manners.
As will be discussed in greater detail below with respect to FIGURES 4a, 5a and 6a, listener 203a is preferably operable to perform a variety of functions in the operation of common data cache service 200a. In general, listener 203a preferably loops or remains in a wait state until it receives a data access request message from an application service 212a. A data access request may involve a request to return the value of a stored constant, variable or other information, a request to change the value of an existing constant, variable or other information, a request to store a new constant, variable or other information, as well as other data access operations.
Upon receipt of a data access request from an application service 212a, listener 203a may establish a communicative connection with the requesting one of application services 212a. After connecting with the current requesting one of application services 212a, in one embodiment of the present invention, listener 203a preferably endeavors to assign or designate one of query threads 206a for the requesting one of application services 212a. According to one embodiment of the present invention, listener 203a may be adapted or configured to initiate additional query threads 206a to enable the requested data access. Once a query thread
206a has been initiated, assigned, or designated for the current requesting application service, listener 203a preferably hands off the current requesting application service to the assigned or designated query thread 206a. Following hand-off, listener 203a may return to its wait or loop state where it may await receipt of additional data access requests from an application service 212a. In an alternate embodiment, listener 203a may be configured to process data access requests from a plurality of application services 212a substantially simultaneously.
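The dispatch behavior of listener 203a described above might be modeled, purely for illustration, as follows. Thread mechanics are deliberately omitted so that the assignment policy itself is visible; all identifiers are illustrative and do not appear in the present disclosure.

```cpp
#include <cstddef>
#include <vector>

// Simplified model of the listener's loop: each incoming request is
// given an idle query-thread slot (or a newly initiated one, up to a
// system limit) and handed off, after which the listener returns to
// waiting for the next request.
class ListenerModel {
public:
    explicit ListenerModel(std::size_t maxThreads) : limit_(maxThreads) {}

    // Returns the slot index the request is handed off to,
    // or -1 when every slot is busy and no more may be started.
    int onRequest() {
        for (std::size_t i = 0; i < slots_.size(); ++i)
            if (!slots_[i]) {             // reuse an idle query thread
                slots_[i] = true;
                return static_cast<int>(i);
            }
        if (slots_.size() < limit_) {     // initiate an additional query thread
            slots_.push_back(true);
            return static_cast<int>(slots_.size() - 1);
        }
        return -1;                        // caller must queue the request
    }

    // A query thread signals completion; its slot becomes idle again.
    void onDone(std::size_t slot) { slots_[slot] = false; }

private:
    std::size_t limit_;
    std::vector<bool> slots_;             // true = busy serving a request
};
```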
According to teachings of the present invention, query threads 206a are preferably operable to provide a query thread, link or channel between an application service 212a and common data memory object 209a. Once communications have been established or otherwise enabled between an application service 212a requesting data and common data memory object 209a via a designated one of query threads 206a, the transfer of data between common data memory object 209a and the current requesting application service may be effected or permitted. Upon completion of the data access, management or other maintenance activities desired by the current requesting application service, the connection between query thread 206a and the current requesting application service may be severed or otherwise ended. Following severance, query thread 206a may be reassigned by listener 203a to the next application service 212a requesting data access.
In one embodiment of the present invention, query threads 206a may be based on IPC (Interprocess Communication) technology principles. In general, however, query threads 206a may be implemented using a technology generally adapted to permit one computing process or service to communicate with another computing process or service, whether operating on the same or different data processing system. An IPC query thread 206a may be based on named pipe technology, in one embodiment of the invention. In general, named pipe technology is a method for passing information from one computer process or service to other processes or services using a pipe, message holding place or query thread given a specific name. Named pipes, in one aspect, may require fewer hardware resources to implement and use. Other technologies that may be used in the implementation of an IPC query thread 206a include, but are not limited to, TCP (Transmission Control Protocol), sockets, semaphores, and message queuing.
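A named-pipe exchange of the kind described above might be illustrated on a POSIX system (Linux/Unix only) as follows. The pipe path, message, and function name are arbitrary illustrations; one process plays the requesting service writing into the pipe while the other plays the query thread reading from it.

```cpp
#include <cstring>
#include <string>
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <unistd.h>

// Creates a named pipe, forks a reader ("query thread" side), writes
// the message from the parent ("requesting service" side), and reports
// the message back only if it arrived intact.
std::string namedPipeRoundTrip(const std::string& message) {
    const char* path = "/tmp/demo_query_thread_pipe";
    unlink(path);                          // remove any stale pipe
    if (mkfifo(path, 0600) != 0) return "";

    pid_t pid = fork();
    if (pid == 0) {                        // child: the reader
        int fd = open(path, O_RDONLY);     // blocks until a writer opens
        char buf[256];
        ssize_t n = read(fd, buf, sizeof(buf));
        close(fd);
        // Exit code signals whether the message arrived intact.
        _exit(n == static_cast<ssize_t>(message.size()) &&
              std::memcmp(buf, message.data(), message.size()) == 0 ? 0 : 1);
    }

    int fd = open(path, O_WRONLY);         // parent: blocks until reader opens
    write(fd, message.data(), message.size());
    close(fd);

    int status = 0;
    waitpid(pid, &status, 0);
    unlink(path);
    return (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? message : "";
}
```

Note how each `open` blocks until the other end of the pipe is attached; this rendezvous behavior is what lets a listener hand a requesting service to a pipe by name alone.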
According to teachings of the present invention, common data memory object 209a preferably does not discriminate as to the type of data it stores, accesses or otherwise maintains. Accordingly, simple, complex, binary or virtually any other format of data may be maintained therein. In an effort to maximize the efficiency with which common data cache service 200a operates, common data memory object 209a is preferably operable to store, access or otherwise intelligently maintain information according to a variety of methodologies. For example, as discussed further below, if the amount of data to be stored in common data memory object 209a is below a first threshold, the efficacy with which common data memory object 209a maintains or stores data may be such that caching will degrade the performance of data processing system 100a. Such a threshold may involve data hit ratios, data seek or read times, as well as others. When common memory object 209a is in such a state, an application service 212a seeking data may be referred elsewhere in the data processing system 100a, such as an application or service external to common data cache service 200a, for processing its data access request.
As a next phase, continuing the example, common data memory object 209a may store, access or otherwise maintain information for use in accordance with a single data storage vector methodology. In a still further phase, according to teachings of the present invention, common data memory object 209a may be adapted or configured to store, access or otherwise maintain information for use in one of a number of data storage vectors, where the various data storage vectors are organized in accordance with a hash table. The variety of methods by which common data memory object 209a may access, store or otherwise maintain data or information will be discussed in greater detail below with respect to FIGURES 3a, 4a, 5a and 6a. As mentioned above, a service-based data processing system may benefit from teachings of the present invention. In a preferred embodiment, listener 203a and query threads 206a are preferably generated or operated as light-weight processes, as indicated at 215a, at least for purposes of sharing CPU time. For example, in an embodiment of the present invention comprising one (1) listener 203a and four (4) active query threads 206a, CPU time for common data cache service 200a would be distributed among six (6) processes: common data cache service 200a, listener 203a, and the four (4) query threads 206a. Such an implementation permits an existing operating system running on data processing system 100a to assume management responsibilities for the slicing of CPU time between each of light-weight processes 215a. In an alternate embodiment, an additional application service 212a may be developed and provided to govern or otherwise manage the sharing of CPU or other hardware time among the varied processes, programs or services running on data processing system 100a. Other methods of CPU and hardware management or sharing may be employed without departing from the spirit and scope of the present invention.
According to teachings of the present invention, common data memory object 209a preferably uses specific classes to store and access reference or shared data. In addition, sorting data in data storage vectors typically provides faster data access times when a medium to large number of entries reside in a vector. Further, hashing data into buckets, where the buckets comprise data storage vectors, has proven to be effective in breaking up large lists of data into smaller, more manageable and identifiable blocks of data.
As illustrated in FIGURE 3a, merging these methods enables more intelligent data management routines to be implemented in common data memory object 209a and further allows data to be efficiently stored, accessed or otherwise maintained in data access object 303a and common data memory object 209a. In general, according to teachings of the present invention, common data memory object 209a is preferably based on two storage classes or techniques, hashing and data storage vectors.
Conceptually, hashing enables the creation of hash table 306a which in turn provides a data container adapted to hold multiple data storage vectors 309a or buckets.
Hashing, in general, involves the generation of a key representing a bucket of data in the cache in which a single item may be stored. Duplicate hash key values may be generated to allow for the even distribution of data across a multitude of bucket containers. Hash table 306a preferably enables common data memory object 209a to provide a varying number of entries to aid in locating stored data.
Data storage vectors 309a may be used to provide a container-managed method for storing a dynamic amount of data in one or more lists. Data storage vectors 309a are preferably created in such a way that initial and next allocations may be used to minimize the amount of memory allocations and copying when data lists contained therein grow. The efficiency desired in a high volume data processing system, such as data processing system 100a, may be furthered by sorting the data items in each of data storage vectors 309a. Sorting the data typically permits each data storage vector 309a to be searched using a binary split algorithm, further increasing the efficiency with which common data memory object 209a may serve data access requests.
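As one illustration only, a sorted data storage vector of the kind described above might be sketched as follows; the class name and the initial reservation size are illustrative assumptions.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch of a sorted data storage vector: an initial allocation is
// reserved up front to limit reallocations as the list grows, and
// insertions keep the list ordered so lookups can use a binary split
// rather than a full list scan.
class SortedVector {
public:
    SortedVector() { data_.reserve(64); }  // illustrative initial allocation

    void insert(int value) {
        auto it = std::lower_bound(data_.begin(), data_.end(), value);
        data_.insert(it, value);           // insertion point keeps order
    }

    // Binary split search: O(log n) instead of the O(n) list scan.
    bool contains(int value) const {
        return std::binary_search(data_.begin(), data_.end(), value);
    }

    std::size_t size() const { return data_.size(); }

private:
    std::vector<int> data_;
};
```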
Some data, such as complex data structures, does not lend itself to the use of hashing or hash table 306a.
However, according to teachings of the present invention, one or more routines, such as complex data helper routines 321a, may be provided which are adapted to generate or create a unique identifier from such complex data that may be used by hash table 306a. In the event one or more of application services 212a seeks to access, store or otherwise maintain complex data, e.g., a data structure as defined in the C/C++ programming language, common data memory object 209a may consult one or more of hashing algorithms 312a via hashing interface 315a and/or one or more of complex data helper routines 321a to effect such function or operation. Similarly, when a requesting application service seeks to access, store or otherwise maintain simple data, such as an integer or character string, hashing algorithms 312a and hash table 306a may be employed to determine in which data storage vector 309a the simple data should be placed.
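A complex data helper of the kind just described might, purely as a sketch, derive a unique key from a structure's fields so the hash table can bucket it like any simple key. The structure, its fields, and all function names here are hypothetical and do not appear in the present disclosure.

```cpp
#include <cstddef>
#include <string>

// Hypothetical complex record of the kind a transaction processing
// service might cache; a raw struct cannot be hashed directly.
struct ServiceRecord {
    int customerId;
    int circuitId;
    std::string serviceType;
};

// Builds a composite key from the fields; a separator prevents distinct
// records from colliding by concatenation (e.g. 1|23 vs 12|3).
std::string makeComplexKey(const ServiceRecord& r) {
    return std::to_string(r.customerId) + "|" +
           std::to_string(r.circuitId) + "|" + r.serviceType;
}

// A simple bucket selector applied to the derived key, standing in for
// whichever hashing algorithm 312a an implementation selects.
std::size_t bucketFor(const std::string& key, std::size_t bucketCount) {
    std::size_t h = 5381;
    for (char c : key) h = h * 33 + static_cast<unsigned char>(c);
    return h % bucketCount;
}
```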
As suggested above, the various functions with which data access object 303a operates may be effected or aided by default helper routines 318a or complex data helper routines 321a. For example, searches for data in data storage vectors 309a may be aided, for simple data searches, by default helper methods 318a and, for complex data searches, by complex data helper routines 321a. Complex data helper routines 321a may be implemented as a function class defined in data access object 303a or otherwise. Data access object 303a preferably informs common data memory object 209a of its complex data helper routines 321a when common data memory object 209a is constructed. In one aspect, once common data memory object 209a is constructed with complex data helper routines as a parameter, the default helper methods 318a inside common memory object 209a may be replaced. Complex data helper routines 321a may also be provided by data access object 303a to enable the performance of binary split searches as well as list scanning searches for the subject data of a data access request.
In one implementation, once complex data is stored in one of common data memory objects 209a, data access object 303a may provide a pointer to one or more functions, program operations or services operable to perform binary split searches, list scanning, as well as other processes. Preferably included among the class of functions, programs, operations, processes or services implemented in default helper methods 318a or complex helper routines 321a to store, access or otherwise maintain complex data are a 'below vector range' test, an
'above vector range' test, a 'first item' vector comparison test, a 'last item' vector comparison test, an 'equality to' vector item test, and a 'less than' vector item test. Additional and/or alternative tests or methods may be employed to manipulate data contained in common data memory object 209a without departing from the spirit and scope of the present invention. Data caching service 200a of the present invention is preferably designed for intelligent caching, accessing and other data manipulation operations. One aspect for which data caching service 200a is designed to operate intelligently is the manner in which data contained in data storage vectors 309a is searched. For example, when one or more of data storage vectors 309a contains a large amount of data, it may be more efficiently searched if the data storage vectors 309a are sorted and searched using a binary split algorithm rather than a list scan methodology.
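The family of comparison tests enumerated above might, as a sketch only, be exposed by a helper class over one sorted data storage vector; with these tests a caller can drive a binary split search without knowing the element type. The struct and method names are illustrative.

```cpp
#include <cstddef>
#include <vector>

// Illustrative helper exposing the vector comparison tests named above
// for a sorted vector of integers; a complex-data helper would supply
// the same tests for its own element type.
struct VectorTests {
    static bool belowRange(const std::vector<int>& v, int key) {
        return !v.empty() && key < v.front();   // 'below vector range'
    }
    static bool aboveRange(const std::vector<int>& v, int key) {
        return !v.empty() && key > v.back();    // 'above vector range'
    }
    static bool isFirst(const std::vector<int>& v, int key) {
        return !v.empty() && v.front() == key;  // 'first item' comparison
    }
    static bool isLast(const std::vector<int>& v, int key) {
        return !v.empty() && v.back() == key;   // 'last item' comparison
    }
    static bool equalAt(const std::vector<int>& v, std::size_t i, int key) {
        return v[i] == key;                     // 'equality to' item test
    }
    static bool lessAt(const std::vector<int>& v, std::size_t i, int key) {
        return v[i] < key;                      // 'less than' item test
    }
};
```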
To perform intelligently, one or more performance parameters may be established which, when encountered, will cause data caching service 200a, common data memory object 209a, or data access object 303a to reformat, reorder or otherwise adjust the manner in which common memory object 209a stores, searches, accesses or otherwise maintains data. Examples of such performance parameters include, but are not limited to, search hit ratios, seek times for searched data, the amount of data stored in one or more of common data memory objects 209a or in one or more of data storage vectors 309a, as well as other memory or search performance metrics. In pursuit of dynamic and intelligent performance, an automated method or process to determine or evaluate the current performance of storage, access or maintenance operations as compared to one or more performance parameters or metrics may be designed such that when one or more of the performance parameters is not met, common data memory object 209a may be adjusted, reformatted or otherwise altered. Alternatively, benchmarks for the one or more performance parameters may be established within the system such that a process or routine running on the system may evaluate recent values of system performance parameters, compare them to one or more performance thresholds, benchmarks or metrics and initiate, as may be suggested, a restructuring of one or more common data memory objects 209a associated with a data access object 303a.
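A threshold-driven structure choice of this kind might be sketched, for illustration only, as a policy that maps the current data volume to a caching mode (no caching, a single vector, or hashed vectors, as described elsewhere herein). The threshold values and all names are hypothetical.

```cpp
#include <cstddef>

// The three storage structures discussed in this disclosure.
enum class CacheMode { None, SingleVector, HashedVectors };

// Sketch of a re-evaluation policy: below a first threshold nothing is
// cached; past it a single data storage vector is used; past a second
// threshold the cache reorganizes into hashed data storage vectors.
class CachePolicy {
public:
    CachePolicy(std::size_t t1, std::size_t t2) : t1_(t1), t2_(t2) {}

    // Re-evaluate the appropriate structure for the current data volume;
    // a real implementation might also weigh hit ratios and seek times.
    CacheMode modeFor(std::size_t itemCount) const {
        if (itemCount < t1_) return CacheMode::None;
        if (itemCount < t2_) return CacheMode::SingleVector;
        return CacheMode::HashedVectors;
    }

private:
    std::size_t t1_, t2_;               // hypothetical threshold values
};
```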
In another aspect, a data processing system 100a incorporating teachings of the present invention may be designed to dynamically and intelligently adjust the manner in which data is maintained in common data memory object 209a. At first, common data memory object 209a may be implemented such that it caches no data. For example, when it would cost less in system resources to keep the data sought by one or more application services 212a in a simple storage area, common data memory object 209a may cache no data. Once a threshold, performance parameter, benchmark or metric for accessing, storing or otherwise maintaining data in common data memory object 209a is surpassed, common data memory object 209a may be reconfigured such that it begins caching data in a single data storage vector. Further, in response to achieving a second threshold, performance parameter, benchmark or metric, common data memory object 209a may be reconfigured to migrate from single data vector storage to the hashed data storage vector model of FIGURE 3a. Other migrations between data storage methodologies and implementations are considered within the scope of the present invention. Referring now to FIGURE 4a, a flow diagram generally illustrating a method of operation for listener 203a is shown at 400a, according to teachings of the present invention. Upon initialization of common data service 200a at 403a, method 400a preferably proceeds to 406a.
At 406a, listener 203a preferably remains in a wait or loop state until it receives one or more data access requests from one or more of application services 212a. As suggested generally in FIGURE 2a above, common data cache service 200a is preferably operable to support a plurality of data access requests substantially simultaneously. Such multithreading capabilities may be supported by enabling one or more listeners 203a to execute in common data cache service 200a and/or through the existence of a plurality of query threads 206a. Upon receipt of a data access request from an application service 212a, method 400a preferably proceeds to 409a. At 409a, listener 203a preferably communicatively connects to the application service 212a from which the data access request was received, i.e., the current requesting application service. In one aspect, communicatively connecting with the current requesting application service may enable listener 203a to identify one or more characteristics of the data access request submitted by the requesting application service. For example, if query threads 206a are implemented using a named pipe technology, listener 203a may need to identify to which named pipe the current requesting application service should be assigned at 409a. In another example, listener 203a may be adapted to discern the type of data sought to be accessed, stored or otherwise maintained by the current requesting application service at 409a. Once listener 203a has effectively coupled with the current requesting application service at 409a, method 400a may proceed to 412a. Listener 203a preferably begins the process of assigning or designating a query thread 206a to the current requesting application service at 412a. To begin, listener 203a preferably reviews, analyzes or otherwise evaluates the operating status of one or more existing query threads 206a to determine whether one or more is available for use by the current requesting application service. 
In one embodiment of the present invention where query threads 206a utilize TCP technology, listener 203a may identify the first or any currently or soon to be available query thread 206a. Alternatively, in an embodiment of the present invention where query threads 206a are based on named pipe technology, listener 203a may determine whether an appropriately named pipe is available for the current requesting application service. Listener 203a may also be adapted to identify the least recently used query thread 206a or to keep a use-listing through which listener 203a may cycle to ensure that each query thread 206a is periodically used. If at 412a listener 203a identifies an available existing query thread 206a for use by the current requesting application service, method 400a preferably proceeds to 415a. Alternatively, if listener 203a determines that a query thread 206a is not currently available at 412a, method 400a preferably proceeds to 418a.
At 415a, listener 203a may designate or assign an available query thread 206a to the current requesting application service. Designating or assigning a query thread 206a to a current requesting application service 212a may involve notifying query thread 206a to expect the current requesting application service to soon be connecting. Alternatively, listener 203a may inform the current requesting application service of the specific query thread 206a which listener 203a has designated or assigned for its use. Such notification may result from listener 203a sending the current requesting application service an address or name for the assigned query thread 206a. Further, listener 203a may initiate a connection between the assigned or designated query thread 206a and the current requesting application service. Once an available query thread 206a has been designated or assigned to the current requesting application service, method 400a preferably proceeds to 430a.
In response to the unavailability of a query thread 206a at 412a, listener 203a will preferably determine whether an additional query thread 206a may be initiated at 418a. In one embodiment of the present invention, the number of query threads 206a may be limited in order to prevent the processing capabilities of system 100a from being depleted. In such an implementation, listener 203a may determine whether the total number of active or existing query threads 206a exceeds or is in accordance with a preferred system limit on query threads 206a at 418a. If listener 203a determines that the number of active or existing query threads 206a is below the preferred system limit, method 400a preferably proceeds to 424a. Alternatively, if listener 203a determines that the number of active or existing query threads 206a meets or exceeds the threshold number of query threads 206a, method 400a preferably proceeds to 421a. At 421a, in response to having the preferred maximum number of allowed query threads 206a already existing or active, listener 203a may assign or designate a message queue associated with one or more of the active or existing query threads 206a for the current requesting application service. Subsequent to queuing the current data access request, the current requesting application service will preferably have its data access request held in the queue for processing until a query thread 206a becomes available. In designating a queue to hold a data access request for processing, listener 203a may be further configured to determine which queue will likely be able to process the data access request the soonest, for example. Further, listener 203a may also be configured to evaluate the quantity of data access requests or other operations pending in a queue of one or more existing query threads 206a and assign or designate the queue with the least amount of processing remaining as the queue for the current data access request.
Following queuing of the data access request, method 400a preferably proceeds to 430a.
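The queue-designation policy described at 421a, choosing the per-thread message queue with the least work remaining, might be sketched as follows; the function name is illustrative.

```cpp
#include <cstddef>
#include <vector>

// Given the pending-request count of each query thread's message queue,
// return the index of the queue with the fewest pending requests; that
// queue is designated for the current data access request.
std::size_t leastLoadedQueue(const std::vector<std::size_t>& pendingCounts) {
    std::size_t best = 0;
    for (std::size_t i = 1; i < pendingCounts.size(); ++i)
        if (pendingCounts[i] < pendingCounts[best])
            best = i;                     // fewer pending requests wins
    return best;
}
```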
In response to a determination that the number of active or existing query threads 206a does not exceed a system limit, listener 203a may initiate one or more additional query threads 206a at 424a. Depending on the implementation of query threads 206a, listener 203a may be required to generate an appropriately named pipe or query thread for use by the current requesting application service. Alternatively, in a TCP-based query thread 206a embodiment of the present invention, listener 203a may need only initiate an additional TCP enabled query thread 206a. Once an additional query thread 206a has been initiated in accordance with the appropriate query thread 206a technology parameters, method 400a preferably proceeds to 427a where the newly initiated query thread 206a may be assigned or designated for use by the current requesting application service. Upon designation of a query thread 206a at 427a, method 400a preferably proceeds to 430a. At 430a, listener 203a preferably passes or hands off the current requesting application service to the assigned or designated query thread 206a. Once the current requesting application service 212a has been handed-off, method 400a preferably returns to 406a where listener 203a may loop or wait to receive the next data access request from one or more of application services 212a.
Referring now to FIGURES 5a and 6a, flow diagrams depicting one embodiment of a method for implementing a common data memory object are shown. Methods 500a and 600a of FIGURES 5a and 6a, respectively, preferably begin at 503a after hand-off of the current requesting application service and upon connection of the requesting application service to the query thread 206a. Following effective or communicative connection between the current requesting application service and its designated query thread, method 500a preferably proceeds to 506a.
At 506a, the data access request generated by the current requesting application service may be evaluated to determine whether the data access request is seeking to store data to or retrieve data from common data memory object 209a. Depending on the format of the data access request, a variety of interrogation routines may be used to make the determination of whether a store or retrieve operation is sought by the current requesting application service. If it is determined that the data access request seeks to retrieve data from common data memory object 209a, method 500a preferably proceeds to 509a. Alternatively, if it is determined that the data access request seeks to store information in common data memory object 209a, method 500a preferably proceeds to 603a of method 600a in FIGURE 6a.
To begin processing of a retrieve data access request, the current structure or caching methodology of common data memory object 209a may be identified or determined at 509a. The current structure or caching methodology of common data memory object 209a may be determined to enable data caching service 200a to initiate or call the routines necessary to process the current data access request. Accordingly, whether or not common data memory object 209a is currently caching data, is caching data in a single vector or is caching data in hashed data vector storage is preferably determined at 509a.
If at 509a it is determined that common data memory object 209a is not currently caching data, e.g., because there is too little data to make caching efficient, method 500a may proceed to 512a. At 512a, processing of the current data access request may be otherwise effected. In one embodiment, the current requesting application service and/or data access request may be referred to an external application for processing.
Alternatively, the data access request may be processed from a simple storage area implemented by common data memory object 209a.
However, if it is determined at 509a that common memory object 209a is currently storing data in accordance with the hashed vector caching method of the present invention, method 500a preferably proceeds to 515a. At 515a, the hashing algorithm used to generate hash table 306a may be initiated. The hashing algorithm initiated and used may be selected from default helper methods 318a, hashing algorithms 312a or from an alternate source. As mentioned above, the hashing algorithm employed may be determined or dictated by whether the data stored in one or more common data memory objects 209a of a data access object 303a is complex or simple. Following initiation of the appropriate hashing algorithm at 515a, method 500a preferably proceeds to 518a. At 518a, the current data access request and selected hashing algorithm may be employed to identify the data storage vector 309a likely to hold the data sought. For example, the subject data of the data access request may be hashed according to the hashing algorithm such that the data storage vector 309a in which the actual data would be stored if written to the current common data memory object 209a may be identified. Upon identification at 518a of the appropriate data storage vector 309a, method 500a preferably proceeds to 521a.
At 521a, in response to a determination at 509a that common memory object 209a is employing a single vector storage structure or following identification of a likely data storage vector at 518a, a determination may be made as to whether a key assigned to the data sought in the data access request is simple or complex. According to teachings of the present invention, complex data is assigned a complex key generated by one or more complex data helper routines 321a. Simple data, on the other hand, may be assigned a key by one or more of default helper routines 318a. If it is determined that the assigned key is complex, method 500a preferably proceeds to 524a; if simple, method 500a preferably proceeds to 527a.
At 524a, a complex data key search helper routine may be called or initiated from complex data helper routines 321a. At 527a, a default search helper routine operable to search simple keys may be called or initiated from default helper routines 318a. Upon initiation of the appropriate key search helper routine, method 500a preferably proceeds to 530a. At 530a, selection of an optimum search methodology may be performed. According to teachings of the present invention, stored data may be searched via a list scan, binary search algorithm or otherwise. If the amount of data in a data storage vector 309a is below a certain level, a list scan may be the fastest search method available. Alternatively, if the amount of data in a data storage vector 309a is above a certain threshold, a binary split search may provide the quickest search results. Alternative search methodologies may be employed and may depend on the system used, the data stored, as well as a number of other factors.
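The strategy selection at 530a might be sketched as follows; the crossover value is a hypothetical threshold, and a tuned implementation would derive it from the performance parameters discussed elsewhere herein.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical crossover: vectors at or below this size are scanned.
const std::size_t kScanLimit = 32;

// Sketch of the optimum-search selection: small vectors get a list
// scan, larger sorted vectors get a binary split search.
bool searchVector(const std::vector<int>& sortedData, int key) {
    if (sortedData.size() <= kScanLimit) {
        // List scan: cheap for short lists, no ordering assumptions used.
        for (int v : sortedData)
            if (v == key) return true;
        return false;
    }
    // Binary split: O(log n) on the sorted vector.
    return std::binary_search(sortedData.begin(), sortedData.end(), key);
}
```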
Using the preferred search methodology identified at 530a and the appropriate data key search helper routine, method 500a preferably proceeds to 533a where a search in accordance with the preferred search methodology may be performed. Upon performing the search at 533a, a determination regarding whether a match has been located may be performed at 536a.
If after exhausting the data contained in the common data memory object 209a a match has not been found, method 500a may proceed from 536a to 512a where the current requesting application service may be otherwise processed, such as referred to an external application for additional processing. Alternatively, if at 536a a match is determined to have been located, method 500a may proceed to 539a where the data is preferably returned or communicated to the requesting application service from common data memory object 209a via the requesting application service's assigned query thread 206a.
After returning the requested data to the requesting application service at 539a, method 500a preferably proceeds to 542a where the current requesting application service may be polled or interrogated to determine whether one or more additional data access requests remain for processing. If it is determined that one or more additional data access requests remain to be processed at 542a, method 500a preferably returns to 506a where the next data access request may be processed. Alternatively, if it is determined that the current requesting application service contains no further data access requests at 542a, method 500a preferably proceeds to 545a where the query thread 206a and current requesting application service may be disconnected, freeing query thread 206a for communication with the next requesting application service and returning the current requesting application service to its own processing operations. As mentioned above, a current data request may be otherwise processed at 512a. Also as mentioned above, the current requesting application service may be referred to one or more external routines for such processing. For example, if at 509a it is determined that common data memory object 209a is not presently caching data, for fulfillment of a data access request received from the current requesting application service, one or more external routines may be necessary to retrieve, store or otherwise maintain the object of the data access request. Alternatively, if upon completion of the optimum search methodology at 536a, the data sought to be accessed, stored or otherwise maintained by the current requesting application service has not been found, one or more external applications or services may be necessary for the data access request to be processed to completion. Alternative measures for solving the issues which may occur at 509a and 536a of method 500a may also be implemented without departing from the spirit and scope of the present invention.
In the event method 500a proceeds to 512a, method 500a then preferably proceeds to 545a where query thread 206a and the current requesting application service may be disconnected as described above.
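The listen-connect-serve-disconnect cycle described above (accept a request at 506a, answer from the cache at 539a or refer the service externally at 512a, disconnect at 545a) can be sketched in Python as follows. The function name, the use of Python queues as the communication channel, and the `None` sentinel marking "no further requests" are illustrative assumptions, not elements of the disclosed system.

```python
import queue
import threading

def serve_requests(request_q: "queue.Queue", cache: dict) -> None:
    """Body of one query thread 206a: stay connected to a requesting
    application service, answer its data access requests, and free the
    thread once the service reports no further requests."""
    while True:
        req = request_q.get()            # next data access request (506a)
        if req is None:                  # no further requests -> disconnect (545a)
            break
        op, key, reply_q = req
        if op == "retrieve":
            # Return cached data (539a); None models referral to an
            # external routine (512a) when the cache cannot satisfy it.
            reply_q.put(cache.get(key))
```

A requesting service would then put `("retrieve", key, reply_queue)` tuples on the thread's queue and finish with `None`, after which the thread becomes available for the next service.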
Referring now to FIGURE 6a, one embodiment of a continuation of method 500a is shown. Illustrated generally at 600a in FIGURE 6a is one embodiment of a method for storing data in accordance with teachings of the present invention. As mentioned above, a received data access request may be interrogated or evaluated to determine whether it contains a retrieve or store operation at 506a. If it is determined that the received data access request contains a store operation, method 500a preferably proceeds from 506a to 603a.
At 603a, similar to 509a, the current data storage structure or methodology employed by common data memory object 209a may be determined. Similar to the processing of a data retrieval request, the structure or methodology with which common data memory object 209a is currently storing data will generally dictate how data to be added may be stored.
If it is determined at 603a that common data memory object 209a is not currently caching data, method 600a preferably proceeds from 603a to 512a where the current requesting application service may be referred to an external service in accordance with the description above. However, if it is determined at 603a that common data memory object 209a is currently storing data in accordance with a single vector data storage method, method 600a preferably proceeds to 606a.
At 606a, a determination may be made regarding the efficiency with which common data memory object 209a is currently maintaining data. Specifically, a determination is preferably made at 606a regarding whether the addition of the data sought to be stored in the current data access request suggests that a change in the current storage structure employed by common data memory object 209a should be effected. For example, according to teachings of the present invention, when more than a threshold amount of data is to be shared by a plurality of application services 212a or processes, the data may be more efficiently shared by maintaining the data according to the hashed data vector storage method disclosed herein. Therefore, should the addition of the current data to an existing single data storage vector push the amount of stored data over the threshold amount, data processing system 100a, data access object 303a or common data memory object 209a may be adapted to recognize such an event and initiate a cache reformatting in an effort to increase data access efficiency. Other thresholds from which a cache structure change may be suggested include, but are not limited to, hit ratios, seek return times and read times, as well as others.
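The structure-change decision made at 606a can be sketched as a simple threshold test. The function name, the specific threshold value, and the returned structure labels are illustrative assumptions; the patent leaves the threshold metric (entry count, hit ratio, seek or read times) open.

```python
def choose_cache_structure(entry_count: int, pending_inserts: int = 1,
                           vector_threshold: int = 10_000) -> str:
    """Pick a storage structure for the common data memory object.

    Below the threshold a single sorted vector suffices; once pending
    additions would push the cache past the threshold, the hashed-vector
    layout becomes the more efficient choice and a reformat is initiated.
    """
    if entry_count == 0 and pending_inserts == 0:
        return "not_caching"
    if entry_count + pending_inserts <= vector_threshold:
        return "single_vector"
    return "hashed_vectors"
```

A real implementation might combine several such metrics before triggering the cache reformatting described at 609a.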
Accordingly, if at 606a it is determined that the addition of the current data suggests an alternate method of storing data would be more efficient, method 600a preferably proceeds to 609a. At 609a, a reformatting of the current common data memory object 209a cache structure may be initiated. In one embodiment, a routine adapted to reconfigure the format of the current data cache may be initiated. The data will preferably be stored in the reformatted common data memory object 209a before method 600a proceeds to 639a.
However, if at 606a it is determined that the addition of data sought to be stored by the current data access request does not suggest a change in the format of common data memory object 209a, method 600a preferably proceeds to 612a. At 612a, the data to be stored may be evaluated for a determination regarding whether the data is complex or simple. If the data sought to be stored is complex, method 600a preferably proceeds to 615a where a complex key generation and assignment helper routine may be called or initiated before proceeding to 618a. Alternatively, if the current data access request seeks to store simple data, method 600a preferably proceeds to 618a.
At 618a, a key is preferably generated and assigned for the simple or complex data in accordance with the appropriate key helper routine. For example, if there is complex data to be stored, such as a data structure having fields one (1) through ten (10), the complex key generation and assignment helper routine called at 615a may select data from one or more of the ten (10) fields to generate a key which will be used to store and sort the data as well as for comparisons in data searches. In the case of a simple key, such as when the data to be stored or searched consists of a string of characters, the key may be defined using an offset and a length. For example, if the simple data consists of a string thirty (30) characters long, a simple key for the data may be defined beginning at character four (4) and carrying forward for ten (10) characters. As with the complex key, the simple key is preferably employed to store and sort the data within the data storage vectors. In addition, the assigned keys are also preferably employed during data searches for comparison and matching purposes.
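The two key styles above can be sketched as follows. Here a simple key is a character run located by an offset and a length (taking "character four" as zero-based index 4), and a complex key is built from selected fields of a structured record; the function names and the choice of fields are illustrative assumptions, standing in for the key helper routines of 615a and 618a.

```python
def simple_key(data: str, offset: int, length: int) -> str:
    """Define a simple key as the run of characters starting at `offset`
    and carrying forward for `length` characters."""
    return data[offset:offset + length]

def complex_key(record: dict, key_fields: tuple) -> tuple:
    """Build a complex key from selected fields of a structured record,
    as a complex key generation helper routine might."""
    return tuple(record[f] for f in key_fields)
```

Either key can then serve double duty, positioning the data within a sorted storage vector and acting as the comparison value during searches.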
At 621a, after a complex or simple key has been generated and assigned in accordance with the appropriate key generation and assignment helper routine at 615a, the data is preferably stored in the single vector maintained by common data memory object 209a. As mentioned above, the data is preferably positioned within the single storage vector according to its assigned key. After inserting the data at its appropriate location in the single vector at 621a, method 600a preferably proceeds to 639a.
If the current cache implementation identified at 603a indicates that common data memory object 209a is currently storing information using the hashed vector caching methodology according to teachings of the present invention, method 600a preferably proceeds to 624a. At 624a, the data to be stored may be evaluated for a determination regarding whether the data is complex or simple. If the data sought to be stored is complex, method 600a preferably proceeds to 627a where a complex key generation and assignment helper routine may be called or initiated before proceeding to 630a. Alternatively, if the current data access request seeks to store simple data, method 600a preferably proceeds to 630a where a key is preferably generated and assigned for the simple or complex data in accordance with the appropriate key helper routine. The operations preferably performed at 624a, 627a and 630a may proceed in a manner similar to that described above at 612a, 615a and 618a, respectively.
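The hashed vector caching methodology named above — hashing an assigned key to select one of several sorted storage vectors, keeping each vector ordered so lookups can use a binary split — might be sketched as below. The class name, the bucket count, and the use of Python's built-in `hash` and `bisect` are illustrative assumptions rather than the disclosed hashing algorithm.

```python
import bisect

class HashedVectorCache:
    """Keys are hashed to select one of several sorted storage vectors
    (modeling hash table 306a and data storage vectors 309a); within a
    vector, entries stay ordered by key for binary-split searching."""

    def __init__(self, buckets: int = 8):
        self._vectors = [[] for _ in range(buckets)]

    def _vector_for(self, key):
        return self._vectors[hash(key) % len(self._vectors)]

    def store(self, key, value) -> None:
        # Insert at the key-sorted position within the selected vector.
        bisect.insort(self._vector_for(key), (key, value))

    def retrieve(self, key):
        vec = self._vector_for(key)
        i = bisect.bisect_left(vec, (key,))   # binary split on the key
        if i < len(vec) and vec[i][0] == key:
            return vec[i][1]
        return None    # not found: caller may refer the request externally
```

Hashing narrows the search to one vector, and the sorted vector bounds the remaining comparisons to a binary search, which is the efficiency argument behind switching structures once the cache grows large.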
After a key has been generated and assigned at 630a, method 600a preferably proceeds to 633a where the assigned keys may be hashed in accordance with a preferred hashing algorithm employed by caching data service 200a and common data memory object 209a. Employing the hashed key, hash table 306a and one of data storage vectors 309a, at 636a of method 600a, the data may be inserted into its appropriate storage location. Proceeding to 639a from either 609a, 621a or 636a, the current requesting application service may be polled or interrogated to determine whether any data access requests remain to be processed. If one or more data access requests from the current data requesting application service remain to be processed, method 500a preferably proceeds from 639a to 506a where the additional data access requests may be processed in accordance with methods 500a and 600a of FIGURES 5a and 6a, respectively. Alternatively, if it is determined at 639a that the current requesting application service has no additional data access requests for processing, method 600a preferably proceeds from 639a to 545a where the current requesting application service and its assigned or designated query thread 206a may be disconnected from one another.

Although the present invention and its advantages have been described in detail, it should be understood that various changes, substitutions, and alterations may be made hereto without departing from the spirit and scope of the invention as defined by the following claims.

Claims

WHAT IS CLAIMED IS:
1. A method for processing telecommunications hardware and service usage transaction records, comprising:
maintaining substantially continuous availability of a minimum number of each of a plurality of service instances, one or more of the plurality of service instances operable to process one of a selected plurality of telecommunications transaction types;
organizing the plurality of service instances according to transaction type into service groupings of one or more service instances;
providing a message queue for each grouping, the message queues operable to maintain and prioritize received transaction records;
providing a service queue for each service instance, the service queues operable to maintain transaction records for processing in accordance with the associated service instance;
receiving batch telecommunications transaction records;
distributing the transaction records according to transaction type among the message queues of the corresponding service groupings;
prioritizing, in the message queues, the transaction records for processing according to a transaction record priority designation;
balancing the transaction records from the message queues across the service queues associated with each service instance in the service grouping receiving transaction records;
monitoring service grouping performance to determine whether the monitored service grouping is performing in accordance with a system performance parameter;
adjusting the service instances of a service grouping failing to perform in accordance with the system performance parameter;
initiating one or more selected transaction record type service instances according to a service schedule;
queuing transaction records for which a corresponding transaction type service instance is unavailable; and
initiating at least one service instance corresponding to the queued transactions in response to a queued transaction accumulation.
2. A computer readable medium embodying a program of instructions, the program of instructions operable to:
maintain a plurality of service groups, each service group associated with a transaction type and including one or more service instances adapted to process transaction records of the transaction type;
queue a batch of transaction records according to transaction type to a corresponding service group;
balance the transaction records across the service instances of a service group; and
process the queued transaction records in accordance with each associated transaction type service instance.
3. The computer readable medium of Claim 2, further comprising the program of instructions operable to monitor one or more queues associated with each service group to determine whether the service group is performing in accordance with a system performance parameter.
4. The computer readable medium of Claim 3, further comprising the program of instructions operable to adjust the service instances of a group in response to a failure of the group to perform in accordance with the system performance parameter.
5. The computer readable medium of Claim 4, further comprising the program of instructions operable to re-balance the queued transaction records across the service instances of an adjusted service group.
6. The computer readable medium of Claim 5, further comprising the program of instructions operable to repeat the monitor, adjust and re-balance operations until each monitored group performs in accordance with the system performance parameter.
7. The computer readable medium of Claim 2, further comprising the program of instructions operable to monitor one or more queues associated with a service instance to determine whether the service instance is performing in accordance with a transaction record processing performance parameter.
8. The computer readable medium of Claim 7, further comprising the program of instructions operable to initiate additional service instances in response to a failure by the service instance to perform in accordance with the transaction record processing performance parameter.
9. The computer readable medium of Claim 2, further comprising the program of instructions operable to maintain substantially continuous availability of a minimum number of selected transaction service instances.
10. The computer readable medium of Claim 2, further comprising the program of instructions operable to: receive a change to one or more system performance parameters; and implement each change in real-time such that the changed system performance parameter may be used in subsequent monitoring of performance.
11. The computer readable medium of Claim 2, further comprising each service group and associated service instances adapted to perform substantially all necessary processing of a selected transaction type.
12. The computer readable medium of Claim 2, further comprising the program of instructions operable to disable a service instance of a service group in response to identification of excess processing capacity in the service group.
13. The computer readable medium of Claim 2, further comprising the program of instructions operable to restore stalled and improperly terminated service instances.
14. The computer readable medium of Claim 2, further comprising the program of instructions operable to execute selected service instances in accordance with a service schedule.
15. A data processing system, comprising:
at least one processor;
at least one memory operably coupled to the processor; and
a program of instructions storable in the memory and executable in the processor, the program of instructions operable to maintain substantially continuous availability of a plurality of service instances, one or more of the plurality of service instances adapted to process at least one of a selected plurality of data types, distribute received data to the service instances according to data type, adjust the service instances adapted to process a selected data type in response to failure of the associated service instances to perform in accordance with a system performance parameter, and redistribute data across the adjusted service instances.
16. The data processing system of Claim 15, further comprising the program of instructions operable to balance the data substantially equally across a plurality of associated data type service instances.
17. The data processing system of Claim 15, further comprising the program of instructions operable to: organize the service instances according to data type into groups of one or more; and provide a message queue for each group operable to receive the data upon distribution.
18. The data processing system of Claim 17, further comprising the message queue operable to prioritize the data for processing.
19. The data processing system of Claim 15, further comprising the program of instructions operable to provide a service queue operable to receive and maintain the distributed data for each service instance.
20. The system of Claim 15, further comprising the program of instructions operable to: monitor service instance performance; and initiate at least one additional service instance in response to failure of a service instance to perform in accordance with a performance parameter.
21. The system of Claim 15, further comprising the program of instructions operable to initiate selected data type service instances in accordance with a service schedule.
22. The data processing system of Claim 15, further comprising the program of instructions operable to maintain substantially continuous availability of a minimum number of selected data type service instances.
23. A method for managing memory in a data processing system, comprising:
maintaining a common memory component, the common memory component including a plurality of storage vectors indexed by a hash table;
receiving at least one data access request from at least one application service;
connecting to the requesting application service;
assigning a common memory query thread to the requesting application service;
connecting the common memory query thread and the requesting application service;
communicating between the common memory component and the requesting application service;
accessing, by the common memory component, data in accordance with the data access request received from the requesting application service;
disconnecting the query thread from the requesting application service upon completion of the data access request; and
providing at least one helper service, the helper service operable to access complex data stored in the common memory component.
24. A system for maintaining data, comprising:
at least one memory;
a processor operably coupled to the at least one memory;
a first sequence of instructions storable in the memory and executable in the processor, the first sequence of instructions operable to receive a data access request from a first application service, designate a query thread for use by the first application service, hand-off the first application service to the query thread whereby the first application service may interact with a common data memory object, and await receipt of an additional data access request whereupon the operations of receive, designate, hand-off and await may be repeated; and
a second sequence of instructions cooperating with the first sequence of instructions, the second sequence of instructions operable to store data common to a plurality of application services in the common data memory object and search the common data in the common data memory object upon request by an application service.
25. The system of Claim 24, further comprising the first sequence of instructions operable to initiate one or more additional query threads adapted to permit a requesting application service to communicate with the common data memory object .
26. The system of Claim 24, further comprising the second sequence of instructions operable to store data in the common data memory object using a plurality of data storage vectors.
27. The system of Claim 26, further comprising the second sequence of instructions operable to assign a key to data stored in the plurality of data storage vectors.
28. The system of Claim 27, further comprising the second sequence of instructions operable to store the data in the data storage vectors according to the assigned key and search for data by matching the assigned key with a key derived from the data access request.
29. The system of Claim 27, further comprising the second sequence of instructions operable to assign a key to complex data generated by a complex data key helper routine.
30. The system of Claim 26, further comprising the second sequence of instructions operable to store data in the plurality of data storage vectors in accordance with a hash table.
31. The system of Claim 24, further comprising the second sequence of instructions operable to select a method for storing data according to one or more common data memory object performance parameters.
32. The system of Claim 24, further comprising the second sequence of instructions operable to select a method for searching the data according to one or more common memory object performance parameters.
33. The system of Claim 24, further comprising the second sequence of instructions operable to allocate sections of the common memory object for use by individual application services.
34. A method for storing and accessing data, comprising:
receiving data to be stored;
determining a performance parameter for existing data in storage;
if the performance parameter is below a first threshold, storing the data in accordance with a first storage method;
if the performance parameter is between the first threshold and a second threshold, assigning a key to the data and storing the data in a data storage vector; and
if the performance parameter is above the second threshold, assigning a key to the data and storing the data in one of a plurality of data storage vectors and in accordance with a hash table associated with the plurality of data storage vectors.
35. The method of Claim 34, further comprising:
determining whether the data to be stored is simple data or complex;
if the data is simple, defining the key as an array of characters located by an offset and length within the simple data; and
if the data is complex, calling a helper routine adapted to define the key.
36. The method of Claim 35, further comprising calling a helper routine operable to search the assigned complex keys.
37. The method of Claim 34, further comprising maintaining a listener operable to receive data access requests from a plurality of application services and connect the requesting application services to respective data container query threads.
38. The method of Claim 37, further comprising initiating, by the listener, additional data container query threads.
39. The method of Claim 34, further comprising sorting the data in the data storage vectors.
40. The method of Claim 39, further comprising searching the sorted data storage vectors using a binary split algorithm.
41. The method of Claim 34, further comprising searching the data in the data storage vectors according to assigned key.
42. The method of Claim 34, further comprising restructuring data remaining in the data storage vectors according to the first storage method in response to the performance parameter falling below the first threshold.
PCT/US2004/000186 2003-01-08 2004-01-07 A system and method for processing hardware or service usage and intelligent data caching WO2004063866A2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US10/338,560 2003-01-08
US10/338,172 US7080060B2 (en) 2003-01-08 2003-01-08 System and method for intelligent data caching
US10/338,560 US7827282B2 (en) 2003-01-08 2003-01-08 System and method for processing hardware or service usage data
US10/338,172 2003-01-08

Publications (2)

Publication Number Publication Date
WO2004063866A2 true WO2004063866A2 (en) 2004-07-29
WO2004063866A3 WO2004063866A3 (en) 2004-09-30

Family

ID=32716887

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2004/000186 WO2004063866A2 (en) 2003-01-08 2004-01-07 A system and method for processing hardware or service usage and intelligent data caching

Country Status (1)

Country Link
WO (1) WO2004063866A2 (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5913164A (en) * 1995-11-30 1999-06-15 Amsc Subsidiary Corporation Conversion system used in billing system for mobile satellite system
US6178331B1 (en) * 1997-06-17 2001-01-23 Bulletin.Net, Inc. System and process for allowing wireless messaging


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11922026B2 (en) 2022-02-16 2024-03-05 T-Mobile Usa, Inc. Preventing data loss in a filesystem by creating duplicates of data in parallel, such as charging data in a wireless telecommunications network
CN117234759A (en) * 2023-11-13 2023-12-15 长沙时代跳动科技有限公司 Data processing method and system of APP service platform
CN117234759B (en) * 2023-11-13 2024-02-23 长沙时代跳动科技有限公司 Data processing method and system of APP service platform

Also Published As

Publication number Publication date
WO2004063866A3 (en) 2004-09-30

Similar Documents

Publication Publication Date Title
US20060259485A1 (en) System and method for intelligent data caching
US11366797B2 (en) System and method for large-scale data processing using an application-independent framework
US11593403B2 (en) Multi-cluster warehouse
CN112162865B (en) Scheduling method and device of server and server
US5924097A (en) Balanced input/output task management for use in multiprocessor transaction processing system
JP3944154B2 (en) Method and system for dynamically adjusting a thread pool in a multi-threaded server
US8627322B2 (en) System and method of active risk management to reduce job de-scheduling probability in computer clusters
JP4294879B2 (en) Transaction processing system having service level control mechanism and program therefor
US9390130B2 (en) Workload management in a parallel database system
US20060069761A1 (en) System and method for load balancing virtual machines in a computer network
US8024744B2 (en) Method and system for off-loading user queries to a task manager
JP2004213625A (en) Response-time basis workload distribution technique based on program
EP2689329A1 (en) Data backup prioritization
CN104834558A (en) Method and system for processing data
US7827282B2 (en) System and method for processing hardware or service usage data
Gill et al. Dynamic cost-aware re-replication and rebalancing strategy in cloud system
US7890758B2 (en) Apparatus and method for generating keys in a network computing environment
CN116501783A (en) Distributed database data importing method and system
CN110209693A (en) High concurrent data query method, apparatus, system, equipment and readable storage medium storing program for executing
EP3084603B1 (en) System and method for supporting adaptive busy wait in a computing environment
CN112363812B (en) Database connection queue management method based on task classification and storage medium
WO2004063866A2 (en) A system and method for processing hardware or service usage and intelligent data caching
US9110823B2 (en) Adaptive and prioritized replication scheduling in storage clusters
Fazul et al. Automation and prioritization of replica balancing in hdfs
Huang et al. Qos-based resource discovery in intermittently available environments

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase