AU2010219408A1 - Method of managing and summarising events in an event processing network - Google Patents

Method of managing and summarising events in an event processing network Download PDF

Info

Publication number
AU2010219408A1
Authority
AU
Australia
Prior art keywords
events
key
summaries
summarized
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
AU2010219408A
Other versions
AU2010219408B2 (en)
Inventor
Matthew Cooper
Baden Hughes
David Tucker
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Event Zero Pty Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2009904480A external-priority patent/AU2009904480A0/en
Application filed by Event Zero Pty Ltd filed Critical Event Zero Pty Ltd
Priority to AU2010219408A priority Critical patent/AU2010219408B2/en
Publication of AU2010219408A1 publication Critical patent/AU2010219408A1/en
Assigned to MICROSOFT CORPORATION reassignment MICROSOFT CORPORATION Request for Assignment Assignors: EVENT ZERO PTY LTD
Application granted granted Critical
Publication of AU2010219408B2 publication Critical patent/AU2010219408B2/en
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC reassignment MICROSOFT TECHNOLOGY LICENSING, LLC Request for Assignment Assignors: MICROSOFT CORPORATION
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/06Management of faults, events, alarms or notifications
    • H04L41/0604Management of faults, events, alarms or notifications using filtering, e.g. reduction of information by using priority, element types, position or time
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/535Tracking the activity of the user
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • H04L67/566Grouping or aggregating service requests, e.g. for unified processing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/06Management of faults, events, alarms or notifications
    • H04L41/0631Management of faults, events, alarms or notifications using root cause analysis; using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer And Data Communications (AREA)

Description

METHOD OF MANAGING AND SUMMARISING EVENTS IN AN EVENT PROCESSING NETWORK

FIELD OF INVENTION

The present invention relates to event processing computer networks and the management and storage of event data.

BACKGROUND OF THE INVENTION

Event processing networks can currently process millions of events per second. However, transferring ever larger volumes of events across growing network infrastructure is causing problems. Events collected at the edge of an event processing network need to be transferred to other locations for collation and processing, and when the number and size of those events exceed the limit of the available bandwidth between locations, events can be significantly delayed or even lost due to processing constraints. Event processing networks take this into account by managing a local spool of events that is written to local storage (RAM, disk or other). However, even local storage can become filled with events while those events are waiting to be transferred to another location over limited bandwidth. Alternately, only a sample of events has been taken and considered, but the use of sampling raises the question of whether the sample is a true representation of the events.
OBJECT OF THE INVENTION

It is an object of the present invention to provide an alternate method of dealing with and processing large volumes of events that overcomes, at least in part, one or more of the above mentioned problems.

SUMMARY OF THE INVENTION

In one aspect the present invention broadly resides in a method of managing event data for an event processing network including detecting events with data collectors; processing the events by filtering the events with a key to identify events that satisfy the requirements of the key; summarizing and batching of identified events that satisfy the requirements of the key; said summarizing and batching is performed in accordance with predetermined rules associated with the key to form summarized and batched data.

Preferably the summarized and batched data from the identified events is transferred or shipped to a processor for programmed analysis and presentation. The summarized and batched data can be further summarized and a second level of summarized data can be generated. In this way a hierarchy of summarized data can be formed.

The key is preferably a summarizing algorithm with a plurality of field values. Data collectors are preferably edge collectors at the edge of the event processing network. Edge collectors include sensors, scanners, card readers, printers, phones and any other device that generates events. As events are received by the edge collectors, they are preferably summarized according to rules configured by the event processing network administrator. Events are preferably processed in batches to improve throughput, and events are filtered out so that only targeted events are summarized.

In a preferred embodiment, values on events are summarized per key. The key can be a composite key that is made up of many event field values.
Summaries of scalar values are preferably calculated for each key as each batch is processed, and events are generated using these summarized values and passed into the rest of the event processing network. The local summaries are then flushed from the system. This often results in summary events being emitted for the same keys. The summary events are adjustments to the initial event for that key. Thus a user interface (UI) can display summary values by key and have those values updated with adjustment summary events.

Additionally, keys can include time, so that summaries may be generated for time slices. For example, we can write a rule to emit summaries for every 1 minute slice of time counting each user's HTTP requests (the composite key here is the 1 minute time slice and the user id). As each HTTP event is received, the summary engine calculates the composite key by turning the event timestamp into a 1 minute slice index and combining that with the user id. It examines its cached summaries and either creates a new summary or updates an old one. At the end of the flush period all the summaries are emitted as events and the cache is cleared. As events can arrive in any order, events can effectively be late, and so the same time slice / user id key summaries may need to be updated after the first flush. These will be values that are deltas to those in the first set of emitted events. At the end of this second period, the deltas are emitted along with any new summaries (as the engine sees no difference between them). A recipient of the stream of summaries may aggregate the summaries to give a current set of summaries accurate to approximately within the flush time. The recipient may be a summary engine too. By having a hierarchy of summary engines, events can be received across an array of edge collectors which produce local summaries.
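The time-sliced, composite-key behaviour described above can be sketched in Python. This is a minimal illustration assuming a simple count-per-key summary; the class, field and event names are illustrative, not taken from the specification.

```python
from collections import defaultdict

class SummaryEngine:
    """Sketch of a summary engine counting events per
    (time slice, user id) composite key."""

    def __init__(self, slice_seconds=60):
        self.slice_seconds = slice_seconds
        self.cache = defaultdict(int)  # composite key -> count

    def receive(self, event):
        # Turn the event timestamp into a time slice index and
        # combine it with the user id to form the composite key.
        slice_index = event["timestamp"] // self.slice_seconds
        self.cache[(slice_index, event["user_id"])] += 1

    def flush(self):
        # Emit all cached summaries as events and clear the cache.
        # A key seen again after this flush (a late event) is simply
        # emitted next period as a delta to the earlier summary.
        summaries = [{"slice": s, "user_id": u, "count": n}
                     for (s, u), n in sorted(self.cache.items())]
        self.cache.clear()
        return summaries

engine = SummaryEngine()
for ts, user in [(3, "alice"), (30, "alice"), (61, "alice"), (70, "bob")]:
    engine.receive({"timestamp": ts, "user_id": user})
first = engine.flush()

engine.receive({"timestamp": 45, "user_id": "alice"})  # late event for slice 0
second = engine.flush()  # emitted as a delta to the first summary for that key
```

Note that the engine does not distinguish an initial summary from a delta; the distinction exists only for the recipient, which adds both into its running aggregate.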
These are then preferably summarized by a common parent to give a consolidated (or rolled up) set of summaries across all the edge collectors (important for load balancing and failover in event collection). Further aggregation can occur, either in one summary engine, or in a common parent, across time slices. For example, the edge collectors may summarize by the minute, and a parent may summarize that up to hours and days. All granularities of time summaries may be used, or finer levels of detail dropped. Quality of service levels may be set to control the reliability and currency of the summaries.

Summaries may be committed to persistent storage in between flush periods to ensure that no data is lost if the service fails, e.g. due to a power failure. Upon restarting, the engine re-reads its state from persistent storage and carries on where it left off. The end user would see a lack of summary update events followed by a sudden set of updates with large deltas, as the summary engine will accumulate a whole stream of changes within the flush period (assuming events typically come in slower than the engine can process them, so that when it gets the backed up events it will effectively process a whole period of events in one or two flushes).

Optimizations are possible. For example, if the engine is summarizing the same keys but at different levels of time granularity, e.g. 1 minute, 1 hour and 1 day summaries for the user's HTTP requests, then when it restarts it can deduce the higher values, e.g. hours from minutes, from the lower values, and so does not need to persist the higher values.
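The minute-to-hour roll-up performed by a common parent, as described above, might look like the following minimal sketch. The function and field names are illustrative assumptions, not from the specification.

```python
from collections import defaultdict

def roll_up(minute_summaries, slices_per_hour=60):
    """Consolidate minute-level (slice, user id) counts arriving
    from many edge collectors into hour-level summaries."""
    hourly = defaultdict(int)
    for s in minute_summaries:
        # Map 60 one-minute slices onto a single one-hour slice.
        hourly[(s["slice"] // slices_per_hour, s["user_id"])] += s["count"]
    return [{"slice": h, "user_id": u, "count": n}
            for (h, u), n in sorted(hourly.items())]

# Minute summaries from two edge collectors; the first two share a
# key, so the parent rolls them into one consolidated summary.
children = [
    {"slice": 0,  "user_id": "alice", "count": 5},   # collector A
    {"slice": 0,  "user_id": "alice", "count": 3},   # collector B
    {"slice": 61, "user_id": "alice", "count": 2},   # second hour
]
hourly = roll_up(children)
```

Because the same function consolidates duplicate keys from different collectors and coarsens the time granularity, the same code could serve at any tier of a summary-engine hierarchy.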
For historical access, the summary events can preferably be stored. However, to obtain a historical aggregated view across time, for example all summary events for a particular hour, they would need to be summed up from the database. For improved performance a summary engine can also determine the aggregated values. A common parent, for example, can receive the summaries and determine an aggregated equivalent of the summary for each key, thereby effectively keeping an up to date figure from all the delta summary events. If someone wishes to view historical summaries, they can request, say, 1 minute summaries for the last hour. Since events can arrive late, and deltas can arrive late, the historical view may need constant updating. So the parent engine must send the historical data to the client but know which deltas have been applied, so that it can stream accurate deltas for the historical data, to ensure a consistent view for the client by avoiding doubling up or missing delta summaries.

In another aspect the invention broadly resides in a method of managing event data for an event processing network including detecting events with data collectors including sensors, scanners, card readers, printers, phones and any other device that generates events; processing by a processor the detected events by filtering the events with a key to identify events that satisfy the requirements of the key, said identified events that satisfy the requirements of the key are summarized and batched in accordance with predetermined rules associated with the key to form summarized and batched data, wherein said processing provides a consolidated set of summaries of identified events across all the data collectors using the common key to provide perspective and relativity to the detected events from the data collectors.
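Keeping an up-to-date figure from the delta summary stream, as described earlier, reduces to adding each arriving summary event, whether initial or delta, into a per-key aggregate. A minimal sketch with illustrative field names:

```python
def apply_summaries(aggregate, summary_events):
    """Add each arriving summary event (initial or delta) into a
    running per-key aggregate kept by the recipient."""
    for ev in summary_events:
        key = (ev["slice"], ev["user_id"])
        aggregate[key] = aggregate.get(key, 0) + ev["count"]
    return aggregate

totals = {}
apply_summaries(totals, [{"slice": 0, "user_id": "alice", "count": 2}])  # first flush
apply_summaries(totals, [{"slice": 0, "user_id": "alice", "count": 1}])  # late delta
```

The recipient never needs to know whether an event was a first summary or an adjustment, which is why the engine can emit both identically.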
The summarized and batched data can preferably be further processed in accordance with other keys. The features and options discussed with other aspects apply to this aspect of the invention.

Techniques can be used to deploy the summarizing processes throughout the event processing network to maximize event throughput and minimize network bandwidth. What gets summarized, and how, can be selected by the user, or an algorithm may be used to calculate the best way to summarize the data given the network topology and summary requirements. For example, the user need only specify what time and field summaries are required at each point in the network, and the algorithm determines the most efficient place to perform which calculations.

BRIEF DESCRIPTION OF THE DRAWINGS

In order that the present invention can be more readily understood, reference will now be made to the accompanying drawings which illustrate a preferred embodiment of the invention and wherein:

Figure 1 shows a typical event processing network deployed in a commercial operation; and

Figure 2 shows a preferred embodiment of the invention of an event processing network with a summary engine.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

Using a summary algorithm at the point at which events are collected can significantly reduce the number of events that need to be processed, therefore reducing the amount of CPU time, memory, storage and bandwidth that is required.
Figure 1 shows a typical event processing network that would be deployed in a commercial installation. Events generated outside the event processing network are fed into the event processing network via edge collectors that are located at the edge of the event processing network. Events are collected and forwarded to a central location for processing.

The more individual events that are collected at the event processing network edge, the more bandwidth is required to ship those events to a central location for processing. For example, if a network router device was sending events that represented the number of bytes passing through its interfaces every second, the size of an event was 1 kB, and you were receiving 60 events per minute, 60 kB of bandwidth would be required to send the events between two sites.

Figure 2 shows the same implementation with a summary engine located on the collection servers at the outer edge of the event processing network. When events are collected from the event source by the edge collectors, the collectors examine each event to see whether it, or the particular stream that the event is part of, has some type of relationship that would suit being summarized before being sent to the central location for processing.

Using the same example above, if a router was generating the same 1 kB event every second and sending 60 per minute, the summary engine could deduce that it was really only necessary to send one event per minute. By collecting all the events over a 1 minute period and adding up the number of bytes, the summary engine would then send just a single event every minute, for a total bandwidth requirement of just 1 kB and a saving of 59 kB.

In another example, a stream of 1,000,000 similar events can be summarized into just 10 summary events that represent the same net result of information.
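The bandwidth arithmetic in the router example works out as follows (a sketch assuming the 1 kB event size and one-minute summary period given above):

```python
event_size_kb = 1        # each raw event is 1 kB
events_per_minute = 60   # one event per second

raw_kb_per_minute = event_size_kb * events_per_minute  # ship every event
summary_kb_per_minute = event_size_kb * 1              # one summed event per minute
saving_kb_per_minute = raw_kb_per_minute - summary_kb_per_minute
```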
Any further processing of that information on those 10 summary events requires far less resources in general than it would take to process the 1,000,000 events.

Example 1: Counting and Processing Data About the Number of Cars Travelling on a Freeway Over a Defined Period of Time

To count the total number of times cars travel on a four lane freeway, a sensor was embedded in the roadway so that each and every car that passes over the sensor is recorded and the total number of cars that pass over the sensor each day can be determined. The passage of a car over the sensor generates an event. Each event is collected and sent to a central location for traffic flow analysis in a data warehouse system, with the ultimate goal of comparing all major freeways in a capital city to find the busiest freeway.

For each car that passes over the sensor in each lane of the freeway, an event is generated and captured into an event processing network by an edge collector. By way of example, if the cars pass over the sensors at a steady rate of 1 every 4 seconds in each lane, a total of 86,400 events (that is, 4 lanes of 15 x 60 x 24 events per 24 hour day) are sent to the central system to be summed and compared with other freeways.

If, on the other hand, the edge collector applied a summary algorithm or key to the events when they arrived at the edge collector, where the algorithm or key coded for the counting of the total number of events in an hour, and then sent the summed total as an event to the central system, then only 24 events would be sent in a day as compared to the previous total of 86,400 events.
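The event counts in Example 1 can be checked directly:

```python
lanes = 4
cars_per_lane_per_minute = 60 // 4   # one car every 4 seconds per lane
minutes_per_day = 60 * 24

raw_events_per_day = lanes * cars_per_lane_per_minute * minutes_per_day
hourly_summary_events_per_day = 24   # one summed event per hour
```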
By applying different algorithms to the summary, that number of events could be decreased further, or the resolution of the information increased by creating a greater number of summary events.

Example 2: Processing Data About the Utilisation of Computers in an Organisation Over a Defined Period of Time

A software agent is installed on a PC to record the amount of time that the PC is left on but unused. On a regular interval (every 5 minutes for example) the agent generates an event with the amount of time the PC has been idle since the last report. These events are collected and analysed to identify long periods where the PC is idle (such as overnight and on weekends) and could have been powered down. This information can then be compared with historical usage for the same PC and with other PCs made available to users to promote environmental awareness and behavioural change and to reduce carbon footprint across the organisation.

To provide this information, every event generated by the agent is sent to a central location for analysis and storage. If each PC in the organisation is reporting every 5 minutes, this amounts to (1,440 minutes per day / 5 minutes per report) equalling 288 events per PC per day, all of which must be transported to the central system. For an organisation with 1,000 PCs distributed across 40 offices of 25 PCs each, the total number of events per day is (288 events per day x 1,000 PCs) equalling 288,000.

If, however, for the same organisation, a summary aggregation was instead applied at an edge collector local to each office, with the total amount of idle time per PC totalled by hour, then each edge collector would only need to accept (25 PCs x 288 events) equalling 7,200 events per day, and the central system would only need to accept and analyse (25 PCs x 24 hourly summary events x 40 offices) equalling 24,000 events per day, compared to the previous total of 288,000.
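The counts in Example 2 work out as follows:

```python
pcs_per_office = 25
offices = 40
reports_per_pc_per_day = (24 * 60) // 5  # one idle-time report every 5 minutes

central_raw = pcs_per_office * offices * reports_per_pc_per_day  # every report shipped
per_collector_raw = pcs_per_office * reports_per_pc_per_day      # arriving at each office collector
central_summarised = pcs_per_office * 24 * offices               # hourly summaries only
```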
As in the previous example, the number of events transported from the edge collectors to the centre could be further decreased by using a coarser granularity of edge collector summaries. Alternately, the resolution of the information could be improved by using summaries with a finer granularity.

This example demonstrates how multiple tiers of summarisation can be layered on top of each other: the initial events generated by the agent are, in effect, 5-minute summary events, which are further summarised by an edge collector, and could then also be further summarised by the central system, efficiently suppressing details in excess of requirements.

ADVANTAGES

The advantages of the preferred embodiment of the present invention include applying a summary algorithm or key to event collection on the outer edge of an event processing network to increase the efficiency of event processing by reducing the total number of events that need to be processed; reducing the CPU cycles required to do the same processing; reducing the memory requirements by using less memory to store events; reducing the amount of bandwidth required to transfer events between locations; and enabling faster feedback to graphical user interface applications.

VARIATIONS

It will of course be realised that while the foregoing has been given by way of illustrative example of this invention, all such and other modifications and variations thereto as would be apparent to persons skilled in the art are deemed to fall within the broad scope and ambit of this invention as is herein set forth.

Throughout the description and claims of this specification, the word "comprise" and variations of that word, such as "comprises" and "comprising", are not intended to exclude other additives, components, integers or steps.

Claims (31)

1. A method of managing event data for an event processing network including detecting events with data collectors; processing the events by filtering the events with a key to identify events that satisfy the requirements of the key; summarizing and batching of identified events that satisfy the requirements of the key; said summarizing and batching is performed in accordance with predetermined rules associated with the key to form summarized and batched data.
2. A method of managing event data for an event processing network including detecting events with data collectors including sensors, scanners, card readers, printers, phones and any other device that generates events; processing by a processor the detected events by filtering the events with a key to identify events that satisfy the requirements of the key, said identified events that satisfy the requirements of the key are summarized and batched in accordance with predetermined rules associated with the key to form summarized and batched data, wherein said processing provides a consolidated set of summaries of identified events across all the data collectors using the common key to provide perspective and relativity to the detected events from the data collectors.
3. A method as claimed in claim 1 or 2, wherein the summarized and batched data is further processed in accordance with other keys.
4. A method as claimed in claim 1 or 2, wherein the summarized and batched data from the identified events is transferred or shipped to a processor for programmed analysis and presentation.
5. A method as claimed in claim 1 or 2, wherein the summarized and batched data can be further summarized and a second level of summarized data can be generated, thereby forming a hierarchy of summarized and batched data.
6. A method as claimed in any one of the abovementioned claims, wherein the key is a summarizing algorithm with a plurality of field values.
7. A method as claimed in claim 1, in which data collectors are edge collectors at the edge of the event processing network and the edge collectors include sensors, scanners, card readers, printers, phones and any other device that generates events.
8. A method as claimed in claim 1 or 2, in which events are processed in batches to improve throughput and events are filtered out so that only targeted events are summarized.
9. A method as claimed in claim 8, in which values on events are summarized per key.
10. A method as claimed in claim 9, in which the key is a composite key that is made up of many event field values.
11. A method as claimed in claim 10, in which summaries of scalar values are calculated for each key as each batch is processed and events are generated using these summarized values and passed into the rest of the event processing network.
12. A method as claimed in claim 11, in which the local summaries are then flushed from the system.
13. A method as claimed in claim 12, in which the summary events are adjustments to the initial event for that key.
14. A method as claimed in claim 13, in which a user interface (UI) can display summary values by key and have those values updated with adjustment summary events.
15. A method as claimed in claim 14, in which the keys include time, so that summaries may be generated for time slices.
16. A method as claimed in claim 15, which includes a rule to emit summaries for every 1 minute slice of time counting each user's HTTP requests, where the composite key is the 1 minute time slice and user id, and as each HTTP event is received, the summary engine calculates the composite key by turning the event timestamp into a 1 minute slice index and combining that with the user id.
17. A method as claimed in claim 16, in which the summary engine examines its cached summaries and either creates a new summary or updates an old one.
18. A method as claimed in claim 17, in which at the end of the flush period all the summaries are emitted as events and the cache cleared.
19. A method as claimed in claim 18, in which events arrive in any order, and events can effectively be late, and so the same time slice / user id key summaries are updated after the first flush.
20. A method as claimed in claim 19, in which the updated values are deltas to those in the first set of emitted events, and at the end of this second period the deltas are emitted along with any new summaries, as the engine sees no difference between them.
21. A method as claimed in claim 20, in which a recipient of the stream of summaries aggregates the summaries to give a current set of summaries accurate to approximately within the flush time.
22. A method as claimed in claim 21, in which the recipient is a summary engine too.
23. A method as claimed in claim 22, in which, by having a hierarchy of summary engines, events can be received across an array of data collectors which produce local summaries, and in which these are then summarized by a common parent to give a consolidated or rolled up set of summaries across all the edge collectors, which is important for load balancing and failover in event collection.
24. A method as claimed in claim 23, in which further aggregation can occur, either in one summary engine, or in a common parent, across time slices.
25. A method as claimed in claim 24, in which the data collectors summarize by the minute, and a parent summarizes that up to hours and days.
26. A method as claimed in claim 25, in which all granularities of time summaries are used, or finer levels of detail dropped.
27. A method as claimed in claim 26, in which quality of service levels are set to control the reliability and currency of the summaries.
28. A method as claimed in claim 27, in which summaries are committed to persistent storage in between flush periods to ensure that no data is lost if the service fails, and upon restarting, the engine re-reads its state from persistent storage and carries on where it left off.
29. A method as claimed in claim 28, in which the end user would see a lack of summary update events followed by a sudden set of updates with large deltas, as the summary engine will accumulate a whole stream of changes within the flush period, in the case that events typically come in slower than the engine can process them, so that when it gets the backed up events it will effectively process a whole period of events in one or two flushes.
30. A method as claimed in claim 28, in which for historical access, the summary events are stored.
31. A method as substantially described herein with reference to and as illustrated by the accompanying drawings.
AU2010219408A 2009-09-16 2010-09-13 Method of managing and summarising events in an event processing network Active AU2010219408B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2010219408A AU2010219408B2 (en) 2009-09-16 2010-09-13 Method of managing and summarising events in an event processing network

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
AU2009904480A AU2009904480A0 (en) 2009-09-16 Method of managing and summarising events in an event processing network
AU2009904480 2009-09-16
AU2010219408A AU2010219408B2 (en) 2009-09-16 2010-09-13 Method of managing and summarising events in an event processing network

Publications (2)

Publication Number Publication Date
AU2010219408A1 true AU2010219408A1 (en) 2011-03-31
AU2010219408B2 AU2010219408B2 (en) 2016-05-12

Family

ID=43806596

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2010219408A Active AU2010219408B2 (en) 2009-09-16 2010-09-13 Method of managing and summarising events in an event processing network

Country Status (1)

Country Link
AU (1) AU2010219408B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113449161A (en) * 2020-03-26 2021-09-28 北京沃东天骏信息技术有限公司 Data collection method, device, system and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2001257400A1 (en) * 2000-04-28 2001-11-12 Internet Security Systems, Inc. System and method for managing security events on a network
US8504397B2 (en) * 2005-02-21 2013-08-06 Infosys Technologies Limited Real time business event monitoring, tracking, and execution architecture
US7953713B2 (en) * 2006-09-14 2011-05-31 International Business Machines Corporation System and method for representing and using tagged data in a management system

Also Published As

Publication number Publication date
AU2010219408B2 (en) 2016-05-12

Similar Documents

Publication Publication Date Title
US11169967B2 (en) Selective deduplication
US7958095B1 (en) Methods and apparatus for collecting and processing file system data
US11558270B2 (en) Monitoring a stale data queue for deletion events
EP2904495B1 (en) Locality aware, two-level fingerprint caching
US20150106578A1 (en) Systems, methods and devices for implementing data management in a distributed data storage system
US20170060769A1 (en) Systems, devices and methods for generating locality-indicative data representations of data streams, and compressions thereof
US8140791B1 (en) Techniques for backing up distributed data
US20060112155A1 (en) System and method for managing quality of service for a storage system
CN107193909A (en) Data processing method and system
CN102246156A (en) Managing event traffic in a network system
US10148531B1 (en) Partitioned performance: adaptive predicted impact
US20200125473A1 (en) Hybrid log viewer with thin memory usage
US10142195B1 (en) Partitioned performance tracking core resource consumption independently
US20160034504A1 (en) Efficient aggregation, storage and querying of large volume metrics
US20190028501A1 (en) Anomaly detection on live data streams with extremely low latencies
JP2007510231A (en) Tracking space usage in the database
US10033620B1 (en) Partitioned performance adaptive policies and leases
AU2010219408B2 (en) Method of managing and summarising events in an event processing network
CN112506926A (en) Monitoring data storage and query method and corresponding device, equipment and medium
JP2005063363A (en) Data backup device, data backup method and data backup program
CN114625805B (en) Return test configuration method, device, equipment and medium
CN113835613B (en) File reading method and device, electronic equipment and storage medium
Yan et al. Busy bee: how to use traffic information for better scheduling of background tasks
US8564820B2 (en) Information processing apparatus, image forming device, and system and method thereof
US11880577B2 (en) Time-series data deduplication (dedupe) caching

Legal Events

Date Code Title Description
PC1 Assignment before grant (sect. 113)

Owner name: MICROSOFT CORPORATION

Free format text: FORMER APPLICANT(S): EVENT ZERO PTY LTD

PC1 Assignment before grant (sect. 113)

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC

Free format text: FORMER APPLICANT(S): MICROSOFT CORPORATION

FGA Letters patent sealed or granted (standard patent)