US9658801B2 - System having in-memory buffer service, temporary events file storage system and backup events file uploader service - Google Patents

System having in-memory buffer service, temporary events file storage system and backup events file uploader service Download PDF

Info

Publication number
US9658801B2
US9658801B2 (application US14/976,125, US201514976125A)
Authority
US
United States
Prior art keywords: events, uploader, file, application server, job
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US14/976,125
Other versions
US20170075627A1 (en)
Inventor
Aakash Pradeep
Adam Torman
Alex Warshavsky
Samarpan Jain
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Salesforce Inc
Original Assignee
Salesforce.com, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Salesforce.com, Inc.
Priority to US14/976,125
Assigned to SALESFORCE.COM, INC. Assignment of assignors' interest (see document for details). Assignors: PRADEEP, AAKASH; JAIN, SAMARPAN; TORMAN, ADAM; WARSHAVSKY, ALEX
Publication of US20170075627A1
Application granted
Publication of US9658801B2
Legal status: Active

Classifications

    • G06F 9/542: Event management; Broadcasting; Multicasting; Notifications
    • G06F 16/13: File access structures, e.g. distributed indices
    • G06F 16/16: File or folder operations, e.g. details of user interfaces specifically adapted to file systems
    • G06F 16/1824: Distributed file systems implemented using Network-attached Storage [NAS] architecture
    • G06F 16/183: Provision of network file services by network file servers, e.g. by using NFS, CIFS
    • G06F 16/955: Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
    • G06F 3/061: Improving I/O performance
    • G06F 3/0619: Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
    • G06F 3/0643: Management of files
    • G06F 3/0656: Data buffering arrangements
    • G06F 3/067: Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • H04L 47/50: Queue scheduling
    • H04L 67/01: Protocols
    • H04L 67/02: Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H04L 67/06: Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • H04L 67/1004: Server selection for load balancing
    • H04L 67/2842
    • H04L 67/568: Storing data temporarily at an intermediate stage, e.g. caching

Definitions

  • This disclosure relates to the field of multi-tenant server operation, and in particular to an in-memory buffer service for a server that provides improved reliability.
  • A data buffer is a region of physical memory storage used to temporarily store data while it is being moved from one place to another.
  • Data objects may be stored in a buffer as they are retrieved from a client device or application before they are processed or sent elsewhere for more permanent storage.
  • Buffers can be implemented in a fixed memory location in hardware, or by using a virtual data buffer in software that points at a location in the physical memory. Buffers are typically used when there is a difference between the rate at which data is received and the rate at which it can be processed, or when these rates are variable.
  • A buffer can be used to adjust timing by implementing a queue algorithm in memory, simultaneously writing data into the queue at one rate and reading it at another rate.
  • High-throughput, scalable, continuous event streams require transport and queuing to a large data store, such as a non-relational, distributed database like HBase.
  • One such mechanism is an In-Memory Buffer Service (IMBS), described in U.S. patent application Ser. No. 14/705,750, filed May 6, 2015, entitled "In-Memory Buffer Service," assigned to the assignee of the present invention, and incorporated herein by reference in its entirety.
  • The IMBS is a store-and-forward service for objects.
  • The IMBS is designed to provide a very low latency API for writing objects to data store 130 (e.g., currently it takes 0.08 microseconds to write an object).
  • The IMBS keeps events in memory until the number of objects reaches a configured limit or until a configured time elapses (e.g., 10 seconds).
  • The IMBS can include a capture service running on an application server that receives events from a client application running on the application server that are to be stored in a data store.
  • The IMBS stores the events in an in-memory bounded buffer on the application server.
  • The in-memory bounded buffer supports a concurrently writable API, which stores events in memory, and includes a buffer flush regulator that manages flushing of the buffer.
  • The in-memory bounded buffer includes a plurality of single-threaded segments, and the capture service can write events to each segment in parallel.
  • When the number of events stored in the in-memory bounded buffer reaches a predefined limit, the buffer provides a notification to the buffer flush regulator.
  • The in-memory bounded buffer receives a request from a consumer executor service to flush the events it holds.
  • The consumer executor service consumes the events in the in-memory bounded buffer using a dynamically sized thread pool of consumer threads that read the segments of the bounded buffer in parallel, allowing the events to be written directly to the data store. Events can be consumed in parallel to allow for high throughput. (A minimal sketch of this arrangement is shown below.)
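  • By way of illustration only (this sketch is not part of the patent text), the segmented bounded buffer and its parallel consumption might be outlined in Java roughly as follows; the names BoundedBuffer, Event, append, and drainSegment are assumptions made for this sketch:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical sketch of a segmented, concurrently writable bounded buffer.
// Each segment can be written and later drained independently, so capture
// threads write in parallel and consumer threads read in parallel.
class Event {
    final long timestamp = System.currentTimeMillis();
    final String payload;
    Event(String payload) { this.payload = payload; }
}

class BoundedBuffer {
    private final List<ConcurrentLinkedQueue<Event>> segments = new ArrayList<>();
    private final int capacityPerSegment;

    BoundedBuffer(int numSegments, int capacityPerSegment) {
        this.capacityPerSegment = capacityPerSegment;
        for (int i = 0; i < numSegments; i++) {
            segments.add(new ConcurrentLinkedQueue<>());
        }
    }

    // Called by the capture service: pick a segment and append the event.
    void append(Event e) {
        ConcurrentLinkedQueue<Event> seg =
                segments.get(ThreadLocalRandom.current().nextInt(segments.size()));
        if (seg.size() < capacityPerSegment) {
            seg.add(e);
        }
        // else: the buffer is full; an overflow policy applies (see below).
    }

    // Called by one consumer thread per segment during a flush.
    List<Event> drainSegment(int idx) {
        List<Event> out = new ArrayList<>();
        Event e;
        while ((e = segments.get(idx).poll()) != null) {
            out.add(e);
        }
        return out;
    }

    int segmentCount() { return segments.size(); }
}
```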
  • FIG. 1 is a block diagram illustrating a computing environment with an in-memory buffer service, according to an embodiment.
  • FIG. 2 is a block diagram illustrating an application server with an in-memory buffer service, according to an embodiment.
  • FIG. 3 is a block diagram illustrating an application server with an in-memory buffer service, according to an embodiment.
  • FIG. 4 is a block diagram illustrating an application server with dual in-memory bounded buffers, according to an embodiment.
  • FIG. 5 is a flow diagram illustrating a method for in-memory buffering of event data, according to an embodiment.
  • FIG. 6 is a block diagram illustrating an application server with an in-memory buffer service according to another embodiment.
  • FIG. 7A is a block diagram illustrating one instance of a consumer thread and an event capture thread of FIG. 6 according to an embodiment.
  • FIG. 7B is a block diagram illustrating an events file uploader service of FIG. 6 according to an embodiment.
  • FIG. 8 is a block diagram illustrating an exemplary implementation of the temporary events file storage system (TEFSS) of FIG. 6 according to an embodiment.
  • FIG. 9 is a block diagram illustrating an exemplary implementation of the data store of FIG. 6 according to an embodiment.
  • FIG. 10 is a data structure diagram illustrating a schema used for each uploader job record that is stored at the events uploader job detail table according to an embodiment.
  • FIG. 11A is a flow diagram illustrating a method for writing events files to a temporary events file storage system (TEFSS) according to another embodiment.
  • FIG. 11B is a flow diagram illustrating a method performed by an events file uploader service for transferring events files from the temporary events file storage system (TEFSS) to a data store according to another embodiment.
  • FIG. 12A is a flow diagram illustrating a method performed by the events file uploader service when operating in regular mode for transferring events files from the temporary events file storage system (TEFSS) to the data store according to another embodiment.
  • FIG. 12B is a flow diagram illustrating a method performed by the events file uploader service when operating in regular mode according to another embodiment.
  • FIG. 12C is a flow diagram illustrating a method performed by the events file uploader service when operating in backup mode according to another embodiment.
  • FIG. 13 shows a block diagram of an example environment in which an on-demand database service can be used in accordance with some embodiments.
  • FIG. 14 shows a block diagram of example implementations of elements of FIG. 13 and example interconnections between these elements according to some embodiments.
  • FIG. 15A shows a system diagram of example architectural components of an on-demand database service environment according to some embodiments.
  • FIG. 15B shows a system diagram further illustrating example architectural components of an on-demand database service environment according to some embodiments.
  • FIG. 16 is a block diagram illustrating an exemplary computer system, according to an embodiment.
  • An event refers to a tuple (or list of values or elements) that represents something that happened.
  • An event is immutable (or irreversible) and timestamped.
  • The "something" that happened can be anything.
  • For example, an event can be a tuple that characterizes an action taken by a user with respect to data, such as an action taken on data that describes an object.
  • An event can also refer to data that characterizes an action performed by interaction with a client application being executed at the application server.
  • Events can be monitored and recorded (or logged) to capture user actions such as the use of applications, windows opened, system commands executed, check boxes clicked, text entered/edited, URLs visited, and nearly every other on-screen event.
  • Events can include standard events and custom events.
  • Processing of events may include ingesting, processing, and persisting potentially millions of events over the span of an hour or billions of events over the span of a week.
  • A certain system may store up to a week's worth of details from sets of operations that are executed as a single unit (e.g., "Apex transactions") in an open source, non-relational, distributed database such as Apache HBase™, or another data store. This may be accomplished by pointing a debug log event handler that supplies limit-usage information, or any other program or component that generates a high volume of events or other data elements, at the data store.
  • In other embodiments, some other data store may be used, such as a NoSQL database, a non-open-source database, a relational database, a non-distributed database, or another type of data store.
  • An in-memory buffer service running on a multi-tenant application server provides a low latency application program interface (API) that writes objects, such as event records, to the data store.
  • In some cases, the application server may not serve multiple clients, but rather is utilized by a single client that generates a significant volume of event data.
  • The in-memory buffer service provides a "store and forward" based service for data objects, where those objects (e.g., events) are temporarily stored in an in-memory bounded buffer before being forwarded to a data store for longer-term storage.
  • A capture service can store events in a bounded buffer, where they may be kept in memory until the number of objects reaches a predefined limit, or until the events have been in memory for a predetermined period of time (e.g., 5 seconds, 10 seconds, 30 seconds, 1 minute, etc.).
  • The in-memory buffer provides a very low latency API for writing objects, so writes complete very quickly. For instance, in some examples, writing to the bounded buffer can be done in as little as 0.08 microseconds.
  • The in-memory buffer may be responsible for supporting a concurrently writable API, which enables the storing of events in memory and manages flushing of the buffer.
  • The in-memory buffer may include a plurality of buffer segments, where each segment is single-threaded. Each segment can support high read consistency by waiting until all (or most) write threads are complete before read threads are initiated. Thus, each segment may only be read once.
  • The concurrent bounded buffer, composed of these segments, may be used to store events concurrently from various threads.
  • The bounded buffer may also maintain a notion of load factor and may support two types of overflow policies.
  • In one embodiment, the bounded buffer keeps the latest object by replacing the oldest object with the most recent one (i.e., first in, first out). In another embodiment, the bounded buffer drops the latest object if the buffer is full. (Both policies are sketched below.)
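  • As an illustrative (non-patent) sketch of the two overflow policies, a segment might accept, evict-then-accept, or reject an incoming event as follows; OverflowPolicy and OverflowingSegment are names assumed for this example:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch: "keep latest" evicts the oldest object to make room
// for the new one (FIFO replacement); "drop latest" rejects the new object.
enum OverflowPolicy { KEEP_LATEST, DROP_LATEST }

class OverflowingSegment<T> {
    private final Deque<T> events = new ArrayDeque<>();
    private final int capacity;
    private final OverflowPolicy policy;

    OverflowingSegment(int capacity, OverflowPolicy policy) {
        this.capacity = capacity;
        this.policy = policy;
    }

    synchronized boolean offer(T event) {
        if (events.size() < capacity) {
            events.addLast(event);
            return true;
        }
        if (policy == OverflowPolicy.KEEP_LATEST) {
            events.pollFirst();      // replace the oldest object
            events.addLast(event);   // keep the newly received object
            return true;
        }
        return false;                // DROP_LATEST: buffer full, drop the new object
    }
}
```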
  • A buffer flush regulator may further be used to regulate the flushing of the buffer, based on size and time, which queues up the events for consumption and writing to the data store. For example, a buffer flush may be initiated when the buffer load reaches some predefined limit (e.g., 80% capacity). In one embodiment, the concurrent bounded buffer provides a notification when the number of objects reaches a predefined limit, which triggers buffer flushing.
  • A consumer executor service is responsible for consuming the in-memory buffer and uses a dynamically sized thread pool to consume (i.e., process) the objects in parallel in order to maximize throughput.
  • The consumer executor service may include a service thread that initiates automatic restarts if a main thread is interrupted.
  • The service may also include an asynchronous API for starting and stopping a thread.
  • The service thread further may use, in one implementation, Java's ThreadPool to get worker threads to run the consumer tasks concurrently and in a reliable manner.
  • The service thread may also iteratively call the buffer's blockedGetAndReset() API and assign a set of data to a consumer task.
  • The consumer task is eventually run by the ThreadPool's worker thread.
  • A consumer factory of the consumer executor service allows a user to customize a consumer task.
  • A consumer task may first try to write to the data store (e.g., HBase™, Bigtable™, MongoDB™, etc.) directly within a given time in order to reduce the load on an asynchronous message delivery mechanism (e.g., a message queue (MQ)) and to make data available instantaneously. If writing to the data store fails, however, the consumer task may enqueue objects in the MQ, which eventually writes the objects to the data store via an MQ handler. (A sketch of this fallback pattern follows.)
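  • A consumer task following this "direct write first, enqueue on failure" pattern might look like the hedged Java sketch below; DataStoreClient, MessageQueue, and the 5-second timeout are assumptions for illustration, not APIs named by the patent:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: attempt a bounded-time direct write to the data
// store; on timeout or failure, enqueue the batch on the MQ for later
// delivery by an MQ handler.
interface DataStoreClient<E> { void write(List<E> batch) throws Exception; }
interface MessageQueue<E>    { void enqueue(List<E> batch); }

class ConsumerTask<E> implements Runnable {
    private final List<E> batch;
    private final DataStoreClient<E> dataStore;
    private final MessageQueue<E> mq;
    private final ExecutorService io = Executors.newSingleThreadExecutor();

    ConsumerTask(List<E> batch, DataStoreClient<E> dataStore, MessageQueue<E> mq) {
        this.batch = batch;
        this.dataStore = dataStore;
        this.mq = mq;
    }

    @Override
    public void run() {
        Future<?> write = io.submit(() -> { dataStore.write(batch); return null; });
        try {
            write.get(5, TimeUnit.SECONDS);      // the "given time" is configurable
        } catch (Exception directWriteFailed) {  // timeout, unavailability, etc.
            write.cancel(true);
            mq.enqueue(batch);                   // the MQ handler writes it later
        } finally {
            io.shutdown();
        }
    }
}
```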
  • A shutdown hook is used to close the service properly when a shutdown of the application server is requested.
  • A log of statistics may also be kept, and the consumer executor service may be restarted if it was terminated for unknown reasons.
  • FIG. 1 is a block diagram illustrating a computing environment with an in-memory buffer service, according to an embodiment.
  • Network environment 100 includes application server 110, client devices 120, 122, 124, and data store 130.
  • Application server 110 and client devices 120, 122, 124 may be connected through a series of one or more networks 140, which may be, for example, a local area network (LAN), a wide area network (WAN), a global area network (GAN) such as the Internet, or a combination of such networks.
  • Alternatively, application server 110 and client devices 120, 122, 124 may have a direct connection to each of the other devices in the network.
  • The illustrated embodiment shows one application server 110 and three client devices 120, 122, 124; however, in other embodiments, there may be any number of servers or client devices, and environment 100 may include additional and/or different components.
  • Each of client devices 120 , 122 , 124 may be, for example, a personal computer (PC), workstation, laptop computer, tablet computer, mobile phone, smartphone, personal digital assistant (PDA) or the like.
  • Client devices 120, 122, 124 may communicate with application server 110 to access resources on the application server, such as client application 112. For example, a user may access client application 112 through a web browser or other HTTP client application on the client device.
  • Application server 110 may be any computing device, such as computing system 800, described below with respect to FIG. 16.
  • In one embodiment, application server 110 may be a multi-tenant application server designed to provide access to a number of client applications, such as client application 112, to one or more client devices, such as client devices 120, 122, 124.
  • In another embodiment, application server 110 may be a single-tenant application server designed to service a single client.
  • Client application 112 and other resources provided by application server 110, such as processing resources, storage resources, etc., may be maintained by application server 110 and made available to the users of client devices 120, 122, 124 as needed (i.e., "on-demand").
  • Application server 110 can include various elements of hardware and software of a database system that may be shared by one or more customers or tenants. For example, application server 110 may simultaneously process requests for a great number of customers.
  • Application server 110 may include an application platform including a framework that allows the applications to execute, such as the hardware or software infrastructure of the system.
  • The application platform enables the creation, management, and execution of one or more applications, such as client application 112, developed by the provider of the application server, customers accessing the application server 110 via client devices 120, 122, 124, or third-party application developers.
  • Application server 110 includes in-memory buffer service 114.
  • In-memory buffer service 114 can ingest and process events generated by client application 112, buffer those events, and eventually store the events in data store 130.
  • Data store 130 provides an application programming interface (API) that can be called by the in-memory buffer service 114 in order to store the events in data store 130.
  • Data store 130 may be an open source, non-relational, distributed database such as Apache HBase™, Bigtable™, MongoDB™, or another data store. Examples of events generated by client application 112 may include errors, exceptions, faults, failures, crashes, incidents, or other occurrences.
  • Client application 112 may include a user interface layer that presents a user interface visible on one of client devices 120, 122, 124.
  • The user may initiate some processing operation in a logical layer of the client application that hits some hard limit defined by the application server 110 (e.g., number of processing cycles consumed per day, amount of storage resources consumed), and page rendering is stopped.
  • Reaching this hard limit may trigger the creation of an event by client application 112, which is recorded for possible future review.
  • Given the volume at which such events are potentially generated, conventional means for recording and storing these events may result in an unacceptable level of latency.
  • Instead, in-memory buffer service 114 can ingest and process the events, buffer the events, and eventually store the events in data store 130.
  • Buffering the events in memory before storage in data store 130 can allow a high volume of events to be processed in near real-time with minimal latency and without adversely affecting performance of the application server 110 or client application 112. Additional details of the in-memory buffer service 114 are provided below.
  • FIG. 2 is a block diagram illustrating an application server with an in-memory buffer service, according to an embodiment.
  • Application server 110 may include client application 112, in-memory buffer service 114, and data store 130.
  • Data store 130 may be internal to application server 110.
  • Alternatively, data store 130 may be external to application server 110 and may be connected to application server 110 over a network or other connection.
  • Application server 110 may include different and/or additional components, which are not shown in order to simplify the description.
  • Data store 130 may be embodied by one or more mass storage devices, which can include, for example, flash memory, magnetic or optical disks, or tape drives; read-only memory (ROM); random-access memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM); or any other type of storage medium.
  • Client application 112 may be any type of computer application program that generates events.
  • For example, client application 112 may be an entertainment application, productivity application, business application, social networking application, or another type of application.
  • In one embodiment, in-memory buffer service 114 processes events for storage in data store 130.
  • In other embodiments, in-memory buffer service 114 may process any other type of data object for storage in data store 130 or elsewhere.
  • In one embodiment, a capture service 202 running on application server 110 receives events from client application 112 that are to be stored in data store 130.
  • Capture service 202 temporarily stores the received events in bounded buffer 204.
  • The bounded buffer 204 may include a plurality of single-threaded segments, to which the capture service 202 can write the events in parallel.
  • For example, bounded buffer 204 may include 16 single-threaded segments, each of which can be written in parallel with a different event generated by client application 112.
  • The size of bounded buffer 204 is configurable according to the particular implementation. In one embodiment, the buffer size may be approximately 10 megabytes (MB) to 20 MB. In other embodiments, the buffer may have a different size, such as 1 MB, 50-100 MB, 1 terabyte (TB), etc.
  • In-memory buffer service 114 further includes a buffer flush regulator 206.
  • Buffer flush regulator 206 controls when bounded buffer 204 is emptied (i.e., "flushed") for consumption by consumer executor service 208 and storage in data store 130.
  • Logic associated with bounded buffer 204 monitors the load on bounded buffer 204 and provides a notification to the buffer flush regulator 206 when the number of events stored in the bounded buffer 204 reaches a predefined limit (e.g., 80% full) or when a predefined amount of time has passed since the contents of the bounded buffer 204 were written to data store 130 (e.g., 10 seconds).
  • Consumer executor service 208 periodically sends a request for buffer flushing to buffer flush regulator 206.
  • Buffer flush regulator 206 determines whether a notification has been received from bounded buffer 204 indicating that either the predefined size limit or the predefined time limit has been reached. If not, buffer flush regulator 206 delays the request. This way, if buffer flush regulator 206 has not received any notification from the single-threaded segments 302, it makes the flush request from the consumer executor service 208 wait until either a notification arrives or the time delay is over.
  • Once the notification has been received, buffer flush regulator 206 grants the request, and consumer executor service 208 may consume the events in the bounded buffer 204 using a dynamically sized thread pool of consumer threads to read the segments of the bounded buffer 204 in parallel. By consuming the events, consumer executor service 208 reads the events from bounded buffer 204 and writes the events to data store 130. (A sketch of this handshake is shown below.)
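  • This notify-and-wait handshake maps naturally onto a condition variable. The following is a minimal, hypothetical Java sketch (the class and method names are assumptions, not the patent's API):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of a buffer flush regulator: buffer-side logic signals
// readiness when the size or time limit is hit; a consumer's flush request
// waits until signaled or until a bounded delay elapses (and then retries).
class BufferFlushRegulator {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition ready = lock.newCondition();
    private boolean flushReady = false;

    // Called by buffer-side logic, e.g., at 80% capacity or after 10 seconds.
    void notifyReady() {
        lock.lock();
        try {
            flushReady = true;
            ready.signalAll();
        } finally {
            lock.unlock();
        }
    }

    // Called by the consumer executor service; returns true if the flush is granted.
    boolean requestFlush(long maxWaitMillis) throws InterruptedException {
        lock.lock();
        try {
            long nanos = TimeUnit.MILLISECONDS.toNanos(maxWaitMillis);
            while (!flushReady && nanos > 0) {
                nanos = ready.awaitNanos(nanos);
            }
            if (!flushReady) {
                return false;        // delay is over; the caller retries later
            }
            flushReady = false;      // grant the request and reset for the next cycle
            return true;
        } finally {
            lock.unlock();
        }
    }
}
```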
  • If an event arrives when bounded buffer 204 is already full, bounded buffer 204 may handle the situation in a number of different ways according to a defined overflow policy.
  • In one embodiment, bounded buffer 204 may implement a "keep latest" overflow policy, where the oldest event in bounded buffer 204 is overwritten with the newly received event from client application 112.
  • In another embodiment, bounded buffer 204 may implement a "drop latest" overflow policy, where the newly received event is prevented from being stored in bounded buffer 204.
  • FIG. 3 is a block diagram illustrating an application server with an in-memory buffer service, according to an embodiment.
  • Application server 110 includes in-memory buffer service 114 and consumer executor service 208.
  • In one embodiment, consumer executor service 208 may be part of in-memory buffer service 114; however, in another embodiment, it may be a separate, standalone service.
  • An in-memory bounded buffer utilized by the in-memory buffer service 114 includes a plurality of single-threaded segments 302, to which capture service 202 can write the events in parallel.
  • For example, the bounded buffer may include 16 single-threaded segments 302, each of which can be written in parallel with a different event or other data object. In other embodiments, the bounded buffer may have some other number of segments, including more or fewer than 16 segments.
  • In-memory buffer service 114 further includes buffer flush regulator 206.
  • Buffer flush regulator 206 controls when bounded buffer 204 is emptied (i.e., “flushed”) for consumption by consumer executor service 208 and storage in data store 130 .
  • In-memory buffer service 114 monitors the load on the buffer segments 302 and provides a notification to the buffer flush regulator 206 when a certain portion or percentage of the buffer segments 302 are full (e.g., 80% full) or when a predefined amount of time has passed since the contents of the buffer segments 302 were flushed (e.g., 10 seconds).
  • A main service thread 304 in consumer executor service 208 may periodically send a request for buffer flushing to buffer flush regulator 206.
  • Buffer flush regulator 206 may determine whether a notification has been received from bounded buffer 204 indicating that either the predefined size limit or the predefined time limit has been reached. If not, buffer flush regulator 206 delays the request. If the notification has been received, buffer flush regulator 206 grants the request, and consumer executor service 208 may consume the events in the bounded buffer 204 using a dynamically sized thread pool 306 of consumer threads 308 to read the segments of the bounded buffer 204 in parallel.
  • Dynamically sized thread pool 306 can add or remove consumer threads 308 so that the number of consumer threads 308 matches the number of buffer segments 302 being consumed. For example, if all 16 segments 302 are being read, dynamically sized thread pool 306 can have 16 consumer threads 308. If, however, only 8 buffer segments 302 contain events, then thread pool 306 need only include 8 threads 308. The consumer threads 308 read the events from buffer segments 302 in parallel and write the events to data store 130. (A sketch of this sizing behavior is shown below.)
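  • Reusing the BoundedBuffer, Event, and DataStoreClient types from the sketches above, the "one consumer thread per segment" behavior might be sketched as follows (again, an illustration rather than the patent's implementation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: size the consumer pool to the number of segments,
// drain all segments in parallel, and write each segment's events to the
// data store.
class ConsumerPoolSketch {
    void consume(BoundedBuffer buffer, DataStoreClient<Event> dataStore)
            throws InterruptedException {
        int n = buffer.segmentCount();           // e.g., 16 threads for 16 segments
        ExecutorService pool = Executors.newFixedThreadPool(n);
        List<Callable<Void>> reads = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            final int segment = i;
            reads.add(() -> {
                List<Event> events = buffer.drainSegment(segment);
                if (!events.isEmpty()) {
                    dataStore.write(events);     // empty segments need no thread work
                }
                return null;
            });
        }
        pool.invokeAll(reads);                   // blocks until every segment is read
        pool.shutdown();
    }
}
```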
  • FIG. 4 is a block diagram illustrating an application server with dual in-memory bounded buffers, according to an embodiment.
  • Application server 110 includes in-memory buffer service 114 and consumer executor service 208.
  • In this embodiment, in-memory buffer service 114 utilizes two or more bounded buffers 204, 404 to buffer events received from a client application.
  • Each bounded buffer 204, 404 may include a plurality of single-threaded segments, each of which can be written in parallel with a different event or other data object.
  • The dual bounded buffers 204, 404 can be used alternately to prevent a service interruption.
  • Bounded buffer 404 can function as the current buffer, storing new events until it is ready to be flushed.
  • Buffer flush regulator 206 can initiate a flush of bounded buffer 404 using the same conditions described above with respect to bounded buffer 204.
  • The process can repeat, alternating between the available buffers to prevent service interruption and the loss of any event data.
  • In other embodiments, there may be more than two bounded buffers, which are cycled through in a similar fashion.
  • For example, a given server may have three, four, or more buffers available.
  • A server may initially have some number of buffers (e.g., two), but additional buffers may be allocated as the need arises.
  • When event volume rises, the in-memory buffer service or other logic may allocate additional buffers in order to accommodate the higher volume of events. (A buffer-rotation sketch is shown below.)
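  • One way to realize this alternation is a simple ring of buffers: new events always go to the "current" buffer, and a granted flush atomically rotates to the next one. A hedged sketch, reusing the BoundedBuffer type from above (BufferRing and rotate are illustrative names):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: rotate among N bounded buffers so capture continues
// into the next buffer while the previous one is drained to the data store.
class BufferRing {
    private final List<BoundedBuffer> buffers = new ArrayList<>();
    private final AtomicInteger current = new AtomicInteger(0);

    BufferRing(int numBuffers, int segments, int capacityPerSegment) {
        for (int i = 0; i < numBuffers; i++) {
            buffers.add(new BoundedBuffer(segments, capacityPerSegment));
        }
    }

    // Writers always append to the current buffer.
    BoundedBuffer current() {
        return buffers.get(current.get());
    }

    // Called when a flush is granted: writers move on to the next buffer,
    // and the buffer that was current is returned for draining.
    BoundedBuffer rotate() {
        int old = current.getAndUpdate(i -> (i + 1) % buffers.size());
        return buffers.get(old);
    }
}
```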
  • The threads of the dynamically sized thread pool 306 in consumer executor service 208 may write the events read from either bounded buffer 204 or bounded buffer 404 directly to data store 130.
  • In some situations, however, the data store 130 may be a distributed system, and writes may take a significant period of time. In such a case, or if all or a portion of the data store 130 is temporarily unavailable, consumer executor service 208 may enqueue the events from the bounded buffer 204 or 404 in a message queue 406 for subsequent writing to data store 130 after data store 130 becomes available.
  • Consumer executor service 208 may first try to write directly to data store 130, but if data store 130 does not respond within some period of time, message queue 406 may be utilized.
  • Message queue 406 may be any available in-memory or out-of-memory data structure that can temporarily accommodate the events before they are stored in data store 130.
  • In one embodiment, message queue 406 may be large enough to store the contents of one bounded buffer 204 at a time. In another embodiment, message queue 406 may be large enough to store the contents of multiple buffers concurrently.
  • FIG. 5 is a flow diagram illustrating a method for in-memory buffering of event data, according to an embodiment.
  • the method 500 may be performed by processing logic that comprises hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processor to perform hardware simulation), firmware, or a combination thereof.
  • The processing logic is configured to receive event data for storage in a data store and buffer the event data in an in-memory bounded buffer prior to storage in the data store.
  • In one embodiment, method 500 may be performed by in-memory buffer service 114, as shown in FIGS. 1-4.
  • Method 500 begins by receiving events from client application 112.
  • In one embodiment, a capture service 202 running on application server 110 receives events from client application 112 that are to be stored in data store 130.
  • In other embodiments, capture service 202 may receive generic data objects from some other source for in-memory buffering.
  • Method 500 then determines whether in-memory bounded buffer 204 is full.
  • As described above, bounded buffer 204 may include 16 single-threaded segments, each of which can be written in parallel with a different event generated by client application 112.
  • In one embodiment, in-memory buffer service 114 determines whether each of the buffer segments has been written with an event. If each segment contains an event, then in-memory buffer service 114 determines that the bounded buffer 204 is full. If there are one or more segments of the bounded buffer 204 that do not contain events, then in-memory buffer service 114 determines that the bounded buffer 204 is not full.
  • If the buffer is not full, method 500 stores the received events in the in-memory bounded buffer 204.
  • In one embodiment, capture service 202 writes the received events to one or more of the segments 302 of in-memory bounded buffer 204 in parallel.
  • If the buffer is full, method 500 applies a buffer overflow policy prior to storing the events.
  • In one embodiment, bounded buffer 204 may implement a "keep latest" overflow policy, where the oldest event in bounded buffer 204 is overwritten with the newly received event from client application 112.
  • In another embodiment, bounded buffer 204 may implement a "drop latest" overflow policy, where the newly received event is prevented from being stored in bounded buffer 204.
  • Method 500 then determines if a predefined limit has been reached.
  • In one embodiment, in-memory buffer service 114 monitors the load on bounded buffer 204 and determines when the number of events stored in the bounded buffer 204 reaches a predefined limit (e.g., 80% full) or when a predefined amount of time has passed since the contents of the bounded buffer 204 were written to data store 130 (e.g., 10 seconds). If the predefined limit has been reached, at block 530, method 500 provides a notification to buffer flush regulator 206. In one embodiment, in-memory buffer service 114 provides the notification to buffer flush regulator 206.
  • Method 500 then receives a buffer flush request from consumer executor service 208.
  • In one embodiment, consumer executor service 208 may periodically send a request for buffer flushing to buffer flush regulator 206.
  • The period at which the request is sent is configurable depending on the particular implementation.
  • Method 500 then determines whether the in-memory bounded buffer 204 is ready to be flushed.
  • Buffer flush regulator 206 may determine whether a notification has been received from bounded buffer 204, at block 530, indicating that either the predefined size limit or the predefined time limit has been reached. If not, buffer flush regulator 206 denies the request.
  • Method 500 then determines whether the data store 130 is available.
  • As noted above, the data store 130 may be a distributed system, and writes may take a significant period of time. In such a case, all or a portion of the data store 130 may be temporarily unavailable.
  • If the data store 130 is available, method 500 consumes the events from the in-memory bounded buffer 204 by writing the events directly to the data store 130.
  • In one embodiment, consumer executor service 208 may consume the events in the bounded buffer 204 using a dynamically sized thread pool of consumer threads to read the segments of the bounded buffer 204 in parallel. By consuming the events, consumer executor service 208 reads the events from bounded buffer 204 and writes the events to data store 130.
  • If the data store 130 is not available, method 500 enqueues the events from in-memory bounded buffer 204 in a message queue 406 for subsequent writing to the data store 130 after the data store 130 becomes available.
  • Message queue 406 may be any available in-memory or out-of-memory data structure that can temporarily accommodate the events before they are stored in data store 130 .
  • In one embodiment, message queue 406 may be large enough to store the contents of one bounded buffer 204 at a time. In another embodiment, message queue 406 may be large enough to store the contents of multiple buffers concurrently.
  • Because the in-memory buffer service is implemented in memory, it is inherently lossy. For example, if the application server 110 crashes, then objects/events that the server 110 is holding in memory (and that have not been written to the data store 130) can be lost. To help address this issue, the IMBS can implement the message queue 310 to improve reliability, as described above.
  • The MQ 310 operates as a message queue backed by a large relational/transactional data store 130 with guaranteed message delivery. Enqueuing events in the MQ 310 can have relatively high latency, which can be problematic when event volume is high (e.g., on the order of 200+ million events per day). Because of this high latency during periods of high load, the buffer can start to overflow while the worker threads of the consumer executor service are busy enqueuing events; for instance, the consumer 208 may not finish in a given time, and the buffer 114 starts to overflow.
  • In addition, the data store 130 may not always be available.
  • For example, the data store 130 might undergo upgrades or other maintenance tasks that make it unavailable such that events cannot be written, which reduces the reliability of the system.
  • In such scenarios, the IMBS could potentially drop up to 80% of events.
  • FIG. 6 is a block diagram illustrating an application server 110 with an in-memory buffer service according to another embodiment.
  • FIG. 6 also illustrates a data store 130 that is configured to store events in an event table 920, and a temporary events file storage system (TEFSS) 620, both of which are external to the application server 110.
  • FIG. 6 includes three additional elements, namely an indirect events writer 610, an events file uploader service 720, and the TEFSS 620, that can help improve reliability of the IMBS.
  • The TEFSS 620 can temporarily store groups of events (generated by a client application 120 that runs at the application server 110) as events files for subsequent writing of the events to the data store 130.
  • Minor modifications are also made at the data store 130, as will be described below.
  • When events cannot be written directly to the data store 130, the indirect events writer 610 can be invoked.
  • The indirect events writer 610 includes a dynamically sized thread pool of event capture threads 615.
  • Each event capture thread 615 can create an events file that includes events from a corresponding consumer thread 308.
  • Each event capture thread 615 can temporarily write the events, in batches, as events files, to the TEFSS 620.
  • In other words, each event capture thread 615 can serialize multiple events into events files that can then be temporarily stored at the TEFSS 620.
  • The events file uploader service 720 is a separate daemon thread running in the application server 110.
  • The events file uploader service 720 can retrieve the events files from the TEFSS 620, based on job detail information metadata stored in an events uploader job detail table (not illustrated in FIG. 6) at the data store 130, and then write/upload the events from each of those retrieved events files to an event table (not illustrated in FIG. 6) at the data store 130 at a later time.
  • By adding a secondary storage option, the TEFSS 620, the event capture capability of the IMBS can be improved by significantly reducing loss of events without creating additional latency. For instance, in one implementation, it has been observed that the improved IMBS can reduce the 80% loss (described above) down to a 0.02% loss without introducing additional latency.
  • The temporary events file storage system (TEFSS) 620 can help solve the scalability problem with MQ 310 because the TEFSS 620 is not backed by a large relational/transactional data store 130 and is not transactional in nature. Rather, the TEFSS 620 operates as a separate file storage system that provides much higher throughput and lower latency.
  • The TEFSS 620 is a data store that can store unstructured data as files (e.g., files that contain arbitrary and possibly unrelated data, with no ordering or organizing structure required). In contrast to the MQ 310, the TEFSS 620 provides much higher throughput and has much lower latency. Moreover, the TEFSS 620 is available for writes regardless of whether the tenant database is in read-only mode, which increases the reliability of the system. To explain further, the MQ relies on the tenant database to persist its metadata and data. If the tenant database is in read-only mode (e.g., during an upgrade), events cannot be enqueued. By contrast, the TEFSS 620 does not go into a read-only mode, so events can be enqueued at any time.
  • FIG. 7A is a block diagram illustrating one instance of a consumer thread 308 and an event capture thread 615 of FIG. 6 according to an embodiment. Due to space constraints in FIG. 7A, only one instance of the consumer thread 308 and the event capture thread 615 is illustrated; however, it is to be appreciated from FIG. 6 that there can be multiple instances of each that execute in parallel at any particular time. FIG. 7A will be described with reference to FIGS. 8-12C. As illustrated in FIG. 7A, the consumer thread 308 includes an events reader 704 and a direct events writer 708. The event capture thread 615 includes an invoker 709 and a TEFSS events file writer 710.
  • The events reader 704, the direct events writer 708, and the invoker 709 are consumers in a "consumer framework" of the consumer executor service 208.
  • The consumer executor service 208 is an execution engine that is responsible for reading events from in-memory buffer 114 and managing how many worker threads will run.
  • The consumer framework provides the logic to determine what to do with events read from in-memory buffer 114.
  • The consumer executor service 208 passes the events read from in-memory buffer 114, in batches, to the consumer framework, and worker threads provide an execution engine to the consumer framework.
  • The consumer framework is a management system for the consumers and specifies the order in which each of the consumers will be executed to process the events.
  • The capture service 202 receives events from the client application 120 running on the application server and provides the events to the in-memory buffer service 114.
  • The in-memory buffer service 114 temporarily holds the events in a bounded buffer 204 that is implemented in physical memory at the application server.
  • The bounded buffer includes a plurality of single-threaded buffer segments 302 that are each configured to temporarily hold some of the events received from the capture service 202 until a flush operation is performed.
  • The buffer flush regulator 206 performs a flush operation at regular intervals, for example, once every 10 seconds, to flush the in-memory buffer 114.
  • The consumer executor service 208 includes a dynamically sized thread pool of consumer threads that execute in parallel. Each of the consumer threads 308 corresponds to a particular single-threaded buffer segment 302. When the flush operation occurs, events stored in one of the buffer segments 302 are passed to a corresponding consumer thread 308.
  • Each consumer thread 308 can include an instance of an events reader 704 that reads events flushed from a corresponding one of the single-threaded buffer segments 302 (FIG. 6), assigns a unique identifier to each event that will be used at the data store 130, adds that unique identifier to each event, and then passes the result to the direct events writer 708.
  • Each consumer thread 308 can also include an instance of a direct events writer 708 that is designed to write the events directly to the data store 130. In most scenarios, the direct events writer 708 can successfully write the events received from the events reader 704 directly to the data store 130. However, in some cases, the direct events writer 708 cannot write the events directly to the data store 130.
  • For example, the data store 130 can have periods of lower responsiveness. For instance, in some cases, as data gets written to the data store 130, there are maintenance tasks (e.g., splitting storage units (regions) as they grow too large). Another example is the periodic re-organization (or compaction) of data, which takes care of things like deleted records.
  • The indirect events writer 610 provides an event capture thread 615 corresponding to each of the consumer threads 308.
  • Each event capture thread 615 includes an instance of an invoker 709 and an instance of an events file writer 710.
  • Each invoker 709 can determine whether a corresponding instance of the direct events writer 708 was able to successfully write events (that it read from a particular single-threaded buffer segment 302) to an event table at the data store 130.
  • Each invoker 709 also receives the events from a particular single-threaded buffer segment 302 via the direct events writer 708.
  • When the invoker 709 determines that the corresponding instance of the direct events writer 708 was unable to successfully write events directly to the data store 130 (e.g., the invoker 709 receives an indication that the direct events writer 708 is unable or has failed to write events to the data store 130), the invoker 709 invokes an instance of the TEFSS events file writer 710 at that event capture thread 615.
  • The invoker 709 will then communicate the events to the TEFSS events file writer 710, which serializes events from the particular single-threaded buffer segment 302 into a single events file, and then writes the events file to the TEFSS 620 (a serialization sketch is shown below). Examples of events files stored at the TEFSS 620 are illustrated in FIG. 8.
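  • A TEFSS events file writer along these lines might serialize a segment's events into a single compressed file in one shot. The hedged Java sketch below models the TEFSS as a directory tree and uses JSON-lines-in-gzip serialization (one of the formats mentioned with FIG. 8); the real TEFSS API is not specified in this text, and the Event type is reused from the earlier sketches:

```java
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.zip.GZIPOutputStream;

// Hypothetical sketch: serialize a batch of events into one gzip'd events
// file under the group's directory. Real implementations might serialize
// with Avro or protocol buffers instead, and would escape JSON properly.
class TefssEventsFileWriter {
    void writeEventsFile(Path tefssRoot, String groupId, String fileName,
                         List<Event> events) throws IOException {
        Path dir = tefssRoot.resolve(groupId);
        Files.createDirectories(dir);
        try (Writer out = new OutputStreamWriter(
                new GZIPOutputStream(Files.newOutputStream(dir.resolve(fileName))),
                StandardCharsets.UTF_8)) {
            for (Event e : events) {
                // One JSON object per line (payload assumed JSON-safe here).
                out.write("{\"timestamp\":" + e.timestamp
                        + ",\"payload\":\"" + e.payload + "\"}\n");
            }
        }
    }
}
```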
  • FIG. 8 is a block diagram illustrating an exemplary implementation of the TEFSS 620 of FIG. 6 according to an embodiment. As illustrated in FIG. 8, the TEFSS 620 can store a plurality of events files 810-1 . . . 810-3.
  • Each events file 810 written to the TEFSS 620 by the TEFSS events file writer 710 can include a group identifier (/group ID/) that uniquely identifies a group of events files that were generated during a particular time window, a file name (not explicitly labeled in FIG. 8), and file information (not illustrated in FIG. 8).
  • The file name can include an application server name (<app_serverX_name>) of the application server that generated the events in the events file, a creation date (<create_date>) for each of the events in the events file, where the creation date is specified as MMddyyyy-HHmm;SSS (where MM is the month, dd is the day, yyyy is the year, HH is the hour, mm is the minutes, and SSS is the second when the events file was created), and a thread identifier (thread_ID) that specifies the name of the event capture thread writing the events file.
  • The group ID identifies the group that the events file uploader service 720 inspects to identify an events file to process.
  • A new group identifier (/group ID/) can be generated at regular intervals, for example, once every 10 minutes in an implementation where separate events files 810 are generated once every ten minutes.
  • The application server name (<app_serverX_name>), the creation date (<create_date>), and the thread identifier (thread_ID) make up the file name for the particular events file. (A naming sketch is shown below.)
  • The file information in each events file 810 can include a file type (e.g., gzip file) and events that are serialized using standard approaches for serializing objects or collections of objects, such as JSON, Avro, or protocol buffers, in accordance with some exemplary implementations.
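  • Putting the naming convention together, a hypothetical helper might compose the file name and a time-window group ID as follows; the underscore separators and the exact date pattern are assumptions loosely mirroring the description above:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

// Hypothetical sketch: build an events file name from the application server
// name, the creation date, and the writing thread's identifier, and derive a
// group ID from a fixed time window (e.g., 10 minutes).
class EventsFileNamer {
    static String fileName(String appServerName, Date createDate, String threadId) {
        // Pattern loosely mirrors "MMddyyyy-HHmm;SSS" from the description;
        // "ss" (seconds) is used here since SSS is described as the second.
        String date = new SimpleDateFormat("MMddyyyy-HHmm;ss").format(createDate);
        return appServerName + "_" + date + "_" + threadId;
    }

    static String groupId(Date now, long windowMillis) {
        long bucket = now.getTime() / windowMillis;   // e.g., 10-minute buckets
        return Long.toString(bucket);
    }
}
```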
  • FIG. 7B is a block diagram illustrating an events file uploader service 720 of FIG. 6 according to an embodiment.
  • The events file uploader 720 is responsible for retrieving the events files from the TEFSS 620 and uploading the events to the data store 130 in a reliable manner.
  • The events file uploader 720 includes an events uploader manager 722 and a plurality of worker threads 724. Although only two instances of worker threads 724 are illustrated in FIG. 7 due to space constraints, it should be appreciated that in a practical implementation the events file uploader 720 can be multi-threaded, and the events uploader manager 722 can instantiate any number of worker thread 724 instances needed to process any number of events files 810.
  • The events uploader manager 722 creates and maintains uploader job records that are stored in the events uploader job detail table at the data store 130. Each uploader job record points to a particular events file 810 stored at the TEFSS 620. As will be described in greater detail below, the events uploader manager 722 instantiates one instance of a worker thread 724 for each events file name that the events uploader manager 722 retrieves from the TEFSS 620.
  • FIG. 9 is a block diagram illustrating an exemplary implementation of the data store 130 of FIG. 6 according to an embodiment.
  • The data store 130 includes an events uploader job detail table 910 and an event table 920.
  • The events uploader job detail table 910 includes a number of rows (only two rows are shown in FIG. 9 for simplicity of illustration). Each row of the events uploader job detail table 910 is used to store an uploader job record that includes job detail information that points to a particular events file 810.
  • The events uploader manager 722 creates and maintains each of the uploader job records that are stored in the uploader job detail table 910.
  • The events file uploader service 720 can use job detail information from the uploader job records to retrieve the particular events files associated with those uploader job records.
  • The event table 920 is the destination where events are eventually stored after they are retrieved by the events file uploader service 720 from the TEFSS 620 and uploaded by the events file uploader service 720 to the event table 920 of the data store 130.
  • An example of the events uploader job detail table 910 is illustrated in FIG. 10 .
  • FIG. 10 is a data structure diagram illustrating a schema used for each uploader job record that is stored at the events uploader job detail table 910 according to an embodiment.
  • information in each row of the events uploader job detail table 910 ( FIG. 9 ) is referred to herein as an “uploader job record.”
  • a single row 1010 is illustrated in FIG. 10 for simplicity.
  • Each uploader job record 1010 stores job detail information related to a particular events file that the TEFSS events file writer 710 has written to the TEFSS 620 .
  • an uploader job record 1010 for an events file includes various columns of metadata that include, but are not limited to: a cluster identifier (<Cluster ID>) that uniquely identifies a collection of instances that are grouped as a cluster of application servers, an instance identifier (<instance_id>) that uniquely identifies a collection of application servers grouped as an instance, an application server name (<app_server_name>) of a particular application server, a group identifier (<group ID>) that uniquely identifies a group of events files that were generated during a particular time window, a file name (<file_name>) of the particular events file, a job started date (<job_started_date>) that specifies when the particular events file was created, a number of events (<Num_Events>) that specifies a number of events for this particular events file, a last updated date that specifies when the uploader job record was last changed, a number of retry attempts, and a status that indicates whether the job is running or completed.
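  • As a rough illustration of the metadata just listed, the sketch below models an uploader job record as a plain Java class; every field name and type here is an assumption, not the actual table schema.

```java
import java.time.Instant;

// Illustrative shape of an uploader job record mirroring the metadata
// columns described above; field names and types are assumptions.
public class UploaderJobRecord {
    String clusterId;        // <Cluster ID>: cluster of application servers
    String instanceId;       // <instance_id>: app servers grouped as an instance
    String appServerName;    // <app_server_name>
    String groupId;          // <group ID>: time window of the events file
    String fileName;         // <file_name> of the events file in the TEFSS
    Instant jobStartedDate;  // <job_started_date>: when the events file was created
    long numEvents;          // <Num_Events> in the events file
    Instant lastUpdated;     // when this record was last updated
    int retries;             // retry attempts recorded by the event loader
    String status;           // "running" or "completed"
}
```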
  • An events file uploader service 720 runs on each application server of a cloud computing service provider.
  • a particular instance of the events file uploader service 720 that runs on a particular application server of that cloud computing service provider can be identified by the combination of the cluster identifier, the instance identifier, and the application server name.
  • Each row of the events uploader job detail table 910 includes a row key that is used by an events uploader manager 722 to look up job detail information for a particular events file.
  • a row key for a row of the events uploader job detail table 910 is defined by the combination of the instance identifier (<instance_id>), the application server name (<app_server>), the group identifier (<group ID>), the file name (<file_name>), and the job started date (<job_started_date>).
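  • Continuing the illustrative UploaderJobRecord sketch above, a composite row key could be derived along these lines (the "/" delimiter and field ordering are assumptions):

```java
// Continuing the UploaderJobRecord sketch above: a composite row key built
// from the five fields listed; the "/" delimiter is assumed for illustration.
static String rowKey(UploaderJobRecord r) {
    return String.join("/",
            r.instanceId,
            r.appServerName,
            r.groupId,
            r.fileName,
            String.valueOf(r.jobStartedDate));
}
```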
  • the events uploader manager 722 receives or retrieves at least some of the job detail information from the events uploader job detail table 910 that is stored at the data store 130 .
  • the events uploader manager 722 is invoked with the application server name (<app_server>), the group identifier (<group ID>), and the job type (e.g., either backup or regular).
  • the events uploader manager 722 reads all of the names of events files (available for a particular application server) from the TEFSS 620. At 733, the events uploader manager 722 then assigns each of the worker threads 724 one of the events file names (i.e., assigns the file name of a specific events file to one of the worker threads 724).
  • Each worker thread 724 includes events file reader 726 that is responsible for reading an events file 810 from the TEFSS 620 , and event loader 728 that is responsible for uploading events from that events file 810 to the data store 130 .
  • Each events file reader 726 retrieves an events file 810 from the TEFSS 620 that corresponds to a particular events file name and adds a record in the events uploader job detail table 910 with the current time as the job started time. Each events file reader 726 then provides that events file 810 to a corresponding instance of event loader 728 that writes each event in the events file 810 to the data store 130. Specifically, event loader 728 writes each event in the events file 810 to the event table 920 of the data store 130, and then updates the record in the events uploader job detail table 910 with a retry attempt.
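  • The fan-out just described might look roughly like the following sketch, where the Tefss interface is a hypothetical stand-in for the TEFSS 620 and the comments mark the roles of the events file reader 726 and event loader 728:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of the manager fan-out: list the events file names available for
// an application server and hand each name to a worker that reads the file
// and uploads its events. All interfaces are illustrative placeholders.
interface Tefss {
    List<String> listEventsFileNames(String appServerName);
    byte[] readEventsFile(String fileName);
}

class UploaderManagerSketch {
    private final ExecutorService workerThreads = Executors.newCachedThreadPool();

    void process(Tefss tefss, String appServerName) {
        for (String fileName : tefss.listEventsFileNames(appServerName)) {
            workerThreads.submit(() -> {
                byte[] file = tefss.readEventsFile(fileName); // events file reader 726
                // ...add a job record with the current time as the job started
                // time, then write each event to the event table (event loader 728)
            });
        }
    }
}
```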
  • FIG. 11A is a flow diagram illustrating a method 1100 for writing events files to a temporary events file storage system (TEFSS) 620 according to another embodiment.
  • the method 1100 of FIG. 11A describes a computer-implemented method performed by an application server comprising an indirect events writer 610 that is configured to instantiate a plurality of event capture threads 615 each comprising an instance of an events file writer 710 .
  • the computer-implemented method of FIG. 11A will be described with reference to a single instance of an indirect events writer 610 of a particular application server, but it will be appreciated that each application server in the system can perform similar steps.
  • FIG. 11A will be described with reference to FIGS. 1 through 10 .
  • capture service 202 receives events that are to be stored in a data store 130 from a client application 112 running on the application server 110 , and temporarily stores the events in a bounded buffer 204 that is part of the in-memory buffer service 114 .
  • the bounded buffer 204 is implemented in physical memory at the application server 110 , and comprises a plurality of single-threaded buffer segments 302 .
  • the single-threaded buffer segments 302 are each configured to temporarily store some of the events until a flush operation is performed by the buffer flush regulator 206 .
  • the buffer flush regulator 206 flushes the plurality of single-threaded buffer segments 302 of the bounded buffer 204 , and the consumer executor service 208 instantiates a plurality of consumer threads 308 that each corresponds to a particular single-threaded buffer segment 302 .
  • each consumer thread 308 includes an instance of an events reader 704 and an instance of a direct events writer 708 .
  • the buffer flush regulator 206 can perform the flush operation at regular intervals, for example, once every 10 seconds to flush the in-memory buffer service 114, and events from each of the buffer segments 302 are then passed to the corresponding consumer thread 308.
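  • A minimal sketch of such a periodic flush trigger, assuming a hypothetical flushBufferSegments callback standing in for the buffer flush regulator 206:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of a periodic flush trigger like the 10-second interval described
// above; flushBufferSegments is a hypothetical callback.
class FlushRegulatorSketch {
    static void start(Runnable flushBufferSegments) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(flushBufferSegments, 10, 10, TimeUnit.SECONDS);
    }
}
```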
  • the events reader 704 of each consumer thread 308 reads the events flushed from the particular single-threaded buffer segment that corresponds to that consumer thread 308, adds an identifier to each event, and then passes the result to a direct events writer 708, which is designed to write the events directly (i.e., without any intermediate storage or processing steps) to the data store 130.
  • each direct events writer 708 attempts to directly write each event it has received to the data store 130 .
  • each of the event capture threads 615 includes an instance of an invoker 709 and an instance of a TEFSS events file writer 710 .
  • each invoker 709 can determine if its corresponding direct events writer 708 was able to successfully write events (that were read from the particular single-threaded buffer segment 302) directly to the event table 920 at the data store 130 within the constraints imposed. These constraints can be any combination of time-based, attempt-based, failure-based, or error-based measures that indicate that the direct events writer 708 was unable to write events to the data store 130.
  • the direct events writer 708 can successfully write an event directly to the event table 920 at the data store 130 . In this case, the method 1100 loops to 1110 . However, in some operational scenarios (described above), events cannot be written directly to the data store 130 , and additional mechanisms are invoked to provide additional reliability.
  • when an instance of an invoker 709 at one of the event capture threads 615 determines that a corresponding instance of the direct events writer 708 was unable to successfully write the events directly to an event table 920 at the data store 130, the invoker invokes an instance of the TEFSS events file writer 710 at that event capture thread 615. Then at 1140, the invoker 709 communicates the events that are received from that corresponding instance of the direct events writer 708 to the TEFSS events file writer 710.
  • the TEFSS events file writer 710 of each event capture thread 615 can generate a particular events file 810 that includes a plurality of events received from an in-memory buffer service 114 , and then write that particular events file 810 to the TEFSS 620 .
  • the TEFSS 620 is configured to temporarily store events files for subsequent writing to the data store 130 when the data store 130 becomes available.
  • Each instance of the TEFSS events file writer 710 can serialize the plurality of events received from the particular single-threaded buffer segment 302 into an events file.
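  • The direct-write-then-fallback behavior described above can be sketched as follows; both interfaces are illustrative placeholders rather than the actual components:

```java
import java.util.List;

// Sketch of the fallback path of FIG. 11A: try the direct events writer
// first; if it fails within the imposed constraints, the invoker hands the
// same events to the TEFSS events file writer.
interface DirectEventsWriter { boolean tryWriteDirectly(List<Object> events); }
interface TefssEventsFileWriter { void writeEventsFile(List<Object> events); }

class InvokerSketch {
    private final DirectEventsWriter direct;
    private final TefssEventsFileWriter tefssWriter;

    InvokerSketch(DirectEventsWriter direct, TefssEventsFileWriter tefssWriter) {
        this.direct = direct;
        this.tefssWriter = tefssWriter;
    }

    void handle(List<Object> events) {
        if (!direct.tryWriteDirectly(events)) {   // data store unavailable
            tefssWriter.writeEventsFile(events);  // serialize to the TEFSS instead
        }
    }
}
```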
  • each events file written to the TEFSS 620 is subsequently tracked via an uploader job record at the uploader job detail table 910 (FIGS. 9 and 10).
  • FIG. 11B is a flow diagram illustrating a method performed by an events file uploader service 720 for transferring events files from the temporary events file storage system (TEFSS) 620 to a data store 130 according to another embodiment.
  • the computer-implemented method of FIG. 11B will be described with reference to a single instance of an events file uploader service 720 of a particular application server, but it will be appreciated that each application server in the system can perform similar steps.
  • FIG. 11B will be described with reference to FIGS. 1-10 .
  • the events file uploader service 720 includes an events uploader manager 722 that maintains each of the uploader job records that are stored in the events uploader job detail table 910 , and a plurality of worker threads 724 .
  • the events uploader manager 722 can receive or retrieve job detail information from one or more particular uploader job records (stored at the events uploader job detail table 910). Based on the job detail information retrieved, the events uploader manager 722 determines at least one events file for the application server that needs to be retrieved from the TEFSS 620.
  • the events uploader manager 722 can read, from the TEFSS 620, the events file names for all of the events files that are available for a particular application server and still need to be retrieved.
  • the events uploader manager 722 can then assign each worker thread 724 a name of a particular events file. This way, each events file name(s) retrieved from the TEFSS 620 can be assigned to one instance of the worker thread 724 . As described above, each worker thread 724 comprises a particular events file reader 726 and a particular event loader 728 .
  • each of the events file readers 726 can use the events file name to read/retrieve an appropriate events file 810 (that corresponds to a particular events file name) from the TEFSS 620, and then provide the retrieved events file to a corresponding instance of the event loader 728.
  • the events uploader manager 722 creates an uploader job record that points to the particular events file 810 stored at the TEFSS 620 , and writes the uploader job record to the events uploader job detail table 910 maintained at the data store 130 .
  • This uploader job record can include the information indicated above.
  • each event loader 728 can upload/write each of the events from the particular events file 810 to the event table 920 of the data store 130 .
  • the events uploader manager 722 updates the uploader job record at the events uploader job detail table 910 with a retry attempt.
  • Each event loader 728 will attempt a certain number of retries in the event of a transport failure, and if the event loader 728 is unable to write events from the particular events file 810 to the event table 920 , the event loader 728 will wait for a time period before retrying to write events from the particular events file 810 to the event table 920 .
  • the period between retry attempts can be set using an exponential backoff algorithm (e.g., where the period is set per a formula such as 2^(retry-1) * 60 * 1000 milliseconds).
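  • A sketch of such a retry loop with this backoff period (maxRetries is an assumed parameter, not something the patent specifies):

```java
// Sketch of a retry loop with the backoff period noted above,
// 2^(retry-1) * 60 * 1000 milliseconds between attempts.
class RetrySketch {
    static boolean uploadWithRetries(Runnable upload, int maxRetries) throws InterruptedException {
        for (int retry = 1; retry <= maxRetries; retry++) {
            try {
                upload.run();
                return true;                                     // upload succeeded
            } catch (RuntimeException transportFailure) {
                if (retry == maxRetries) break;                  // retries exhausted
                long waitMs = (1L << (retry - 1)) * 60L * 1000L; // 1 min, 2 min, 4 min, ...
                Thread.sleep(waitMs);
            }
        }
        return false;
    }
}
```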
  • the events file uploader service 720 can operate in a regular mode or a backup mode.
  • the regular events file uploader service 720 runs at an application server to transfer events files created by that application server from the TEFSS 620 to the data store 130 .
  • the regular events file uploader service 720 runs continuously at the application server and does not terminate.
  • the regular events file uploader service 720 is implemented using a continuously running daemon thread.
  • the backup events file uploader service 720 can run periodically or according to a schedule.
  • the backup events file uploader service 720 can be implemented as a time-based job scheduler utility that allows tasks/jobs to be automatically run at regular intervals (e.g., periodically at fixed times, dates, or intervals) by the backup daemon.
  • the backup events file uploader service 720 runs at an application server to transfer events files that were created by another application server from the TEFSS 620 to the data store 130 .
  • Embodiments of the regular events file uploader service 720 will be described below with reference to FIGS. 12A and 12B, and embodiments of the backup events file uploader service 720 will be described with reference to FIG. 12C.
  • FIG. 12A is a flow diagram illustrating a method 1200 performed by the events file uploader service 720 when operating in regular mode for transferring events files from the temporary events file storage system (TEFSS) 620 to a data store 130 according to another embodiment.
  • the method 1200 begins at 1202 when the regular uploader service starts.
  • the events uploader manager 722 reads the most recent uploader job record for the current application server from the events uploader job detail table 910 .
  • the regular events uploader manager 722 searches the most recent uploader job record for the application server for a starting group identifier that uniquely identifies a group of events files generated by the application server during a particular time window.
  • each group identifier is associated with a time window of a certain duration (e.g., 10 minutes).
  • the group identifier has a format: Events-MMddYYYYHHmm.
  • the minute part (mm) represents a minute range of a certain duration. For example, if the time window is set to 10 minutes, then the minute part can be 0, 10, 20, 30, 40 or 50.
  • a group identifier with a minute part of 0 would represent that the events file was generated during a time window that spans anywhere between the start of the 0th minute up to, but not including, the 10th minute.
  • a group identifier with a minute part of 20 would represent that the events file was generated during a time window that spans anywhere between the start of the 20th minute up to, but not including, the 30th minute.
  • a group identifier with a minute part of 50 would represent that the events file was generated during a time window that spans anywhere between the start of the 50th minute up to, but not including, the 60th minute.
  • the significance of this is that if a group identifier has a minute part (mm) that matches the minute window of the current time, then that group identifier is the group identifier for the current time window.
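  • For example, a group identifier could be derived from a timestamp by truncating the minute part to its 10-minute window boundary, as in this illustrative sketch:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Sketch of deriving a group identifier: the minute part is truncated to
// the start of its 10-minute window, yielding the Events-MMddYYYYHHmm form
// described above.
class GroupIdSketch {
    static String groupIdFor(LocalDateTime t) {
        LocalDateTime windowStart = t.withMinute((t.getMinute() / 10) * 10)
                                     .withSecond(0).withNano(0);
        return "Events-" + DateTimeFormatter.ofPattern("MMddyyyyHHmm").format(windowStart);
    }
    // groupIdFor(LocalDateTime.of(2015, 12, 21, 14, 37))
    //   -> "Events-122120151430" (the 30-39 minute window)
}
```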
  • One exemplary implementation of 1208 will be described in greater detail below with reference to FIG. 12B .
  • the regular events uploader manager 722 determines from the most recent uploader job record whether a status for the most recent uploader job record indicates running or completed.
  • if the status of the most recent uploader job record indicates running, this means that the job is in a running state (e.g., the process which is consuming the events file has not reached the completed state), and the method 1200 proceeds from 1209 to 1220 as will be described below.
  • the method 1200 proceeds from 1209 to 1220 , where the regular events uploader manager 722 upserts the most recent uploader job record for the application server at the events uploader job detail table 910 with updated job detail information to indicate that the regular events uploader manager 722 has started to process and read the events file.
  • the file name is already available in uploader job record 1010 . The method then proceeds to 1222 as described below.
  • if the status of the most recent uploader job record indicates completed, the method 1200 proceeds to 1210, where the regular events uploader manager 722 searches the TEFSS 620 for an events file that includes the starting group identifier.
  • the events uploader manager 722 determines if an events file that is associated with the starting group identifier was able to be read from the TEFSS 620 .
  • the method proceeds to 1214 , where the events uploader manager 722 determines the next available group ID that uniquely identifies another group of events files generated by the application server during another particular time window.
  • the events uploader manager 722 determines whether this next possible group identifier is for a current time window (or “corresponds to” the current time window).
  • the method 1200 proceeds to 1218 where the events uploader manager 722 waits for a time period (e.g., 10 minutes) before the method 1200 loops back to 1210, where the regular events uploader manager 722 searches the TEFSS 620 for an events file that includes the next possible group identifier.
  • the regular events uploader manager 722 assigns a worker thread 724 for the particular events file that was read (at 1212 ).
  • the worker thread 724 includes a particular events file reader 726 and a particular event loader 728 .
  • the events file reader 726 upserts the most recent uploader job record for the application server at the events uploader job detail table 910 with updated job detail information.
  • the term “upserting” refers to a combination of updating (i.e., refreshing an existing record that already exists) and inserting (i.e., inserting a new record if a match is not found). In other words, one data set can be used to update existing records at the same time new records are inserted.
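  • A toy illustration of these upsert semantics, using an in-memory map as a stand-in for the events uploader job detail table 910 and the illustrative UploaderJobRecord class sketched earlier:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy illustration of upsert semantics as defined above: refresh the record
// when the row key already exists, insert a new record otherwise.
class JobDetailTableSketch {
    private final Map<String, UploaderJobRecord> rows = new ConcurrentHashMap<>();

    void upsert(String rowKey, UploaderJobRecord updated) {
        rows.compute(rowKey, (key, existing) -> {
            if (existing == null) {
                return updated;                       // insert: no matching record
            }
            existing.status = updated.status;         // update: refresh existing record
            existing.lastUpdated = updated.lastUpdated;
            existing.retries = updated.retries;
            return existing;
        });
    }
}
```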
  • the method 1200 then proceeds to 1222, where the events file reader 726 reads the events file, deserializes the events from the events file and sends the events to the event loader 728.
  • the event loader 728 loads and saves each of the events from the events file to a row of the event table 920 maintained at the data store 130 .
  • the events uploader manager 722 updates the status of the uploader job record that corresponds to the events file to indicate completed (in the events uploader job detail table 910 ).
  • the events uploader manager 722 deletes the events file from the TEFSS 620 .
  • the method 1200 then loops to 1210, where the regular events uploader manager 722 searches the TEFSS 620 for another events file that includes the group identifier.
  • more than one events file can have the same group identifier, so the regular events uploader manager 722 searches the TEFSS 620 for more events files having that group identifier. If the regular events uploader manager 722 determines that there is no events file available for a given application server for that group identifier and it is not for the current time window, then the regular events uploader manager 722 will move to the next possible group identifier based on the next minute-range window.
  • FIG. 12B is a flow diagram illustrating a method 1240 performed by the events file uploader service 720 for searching an uploader job record for a starting group identifier when operating in regular mode according to another embodiment.
  • the method 1240 illustrates one exemplary implementation for performing step 1208 of FIG. 12A .
  • the events uploader manager 722 reads the most recent uploader job record for the current application server (from the events uploader job detail table 910 ), and determines whether the most recent uploader job record for the application server is null (i.e., there is no uploader job record against this application server name in the events uploader job detail table 910 ).
  • if the most recent uploader job record is not null, the method 1240 proceeds to 1247, where the events uploader manager 722 reads the group identifier from the most recent uploader job record for the application server, and the method then proceeds to 1210, where the regular events uploader manager 722 searches the TEFSS 620 for an events file that includes the starting group identifier.
  • if the most recent uploader job record is null, the method 1240 proceeds to 1248, where the events uploader manager 722 attempts to read a marker file for the application server from the TEFSS 620, and then determines, at 1250, whether the marker file for the application server was read from the TEFSS 620.
  • the marker file is a simple text file that records the first group identifier when the uploader service starts for the first time on an application server.
  • if the marker file was read, the method 1240 proceeds to 1255, where the regular events uploader manager 722 reads a group identifier stored in the marker file. The method 1240 then proceeds to 1210, where the regular events uploader manager 722 searches the TEFSS 620 for an events file that includes the starting group identifier.
  • if the marker file could not be read, the method 1240 proceeds to 1252, where the regular events uploader manager 722 reads the oldest group identifier stored in a cache at the application server that stores group identifiers for that application server.
  • the method 1240 then proceeds to 1254, where the regular events uploader manager 722 creates a marker file with the group identifier that was retrieved from the cache at the application server.
  • the method 1240 then proceeds to 1210, where the regular events uploader manager 722 searches the TEFSS 620 for an events file that includes the starting group identifier.
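  • The starting-group-identifier lookup of FIG. 12B can be summarized in a sketch like the following, where every interface is an illustrative placeholder rather than an actual component:

```java
import java.util.Optional;

// Sketch of the FIG. 12B lookup order: most recent job record first, then
// the marker file, then the oldest cached group identifier (creating the
// marker file for next time).
interface JobDetailLookup { Optional<String> latestGroupId(String appServer); }
interface MarkerFiles {
    Optional<String> read(String appServer);
    void create(String appServer, String groupId);
}
interface GroupIdCache { String oldestGroupId(String appServer); }

class StartingGroupIdSketch {
    static String startingGroupId(String appServer, JobDetailLookup jobs,
                                  MarkerFiles markers, GroupIdCache cache) {
        return jobs.latestGroupId(appServer)                // 1247: from the job record
            .or(() -> markers.read(appServer))              // 1255: from the marker file
            .orElseGet(() -> {
                String oldest = cache.oldestGroupId(appServer); // 1252: from the cache
                markers.create(appServer, oldest);              // 1254: persist marker
                return oldest;
            });
    }
}
```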
  • FIG. 12C is a flow diagram illustrating a method 1270 performed by the events file uploader service 720 when operating in backup-job mode for transferring events files from the temporary events file storage system (TEFSS) 620 to a data store 130 according to another embodiment.
  • the backup job based uploader service is a back-up service that is used when one or more of the uploader services 720 (running at another application server) is unable to operate in the regular mode described above with respect to FIGS. 12A and 12B.
  • two application servers will be referenced that are part of a cluster of application servers.
  • the “first application server” refers to an application server that has created an events file and then become unavailable.
  • the “second application server” refers to another application server that runs a backup events file uploader service 720.
  • the first application server can generate a particular events file 810 and write that particular events file to the TEFSS 620 , which can temporarily store the events files for subsequent writing to the data store 130 .
  • when the backup events file uploader service 720 that runs at the second application server determines that the first application server is inactive (e.g., has crashed), the backup events file uploader service 720 transfers events files that were created by the first application server from the TEFSS 620 to the data store 130.
  • the events uploader manager 722 of the backup events file uploader service 720 can retrieve job detail information from a particular uploader job record, and then, based on the job detail information retrieved, read one or more events files for the first application server from the TEFSS 620 .
  • the backup events file uploader service 720 can then write the events from each events file to an event table 920 maintained at the data store 130 .
  • the method 1270 begins at 1272 when the backup job-based uploader service starts.
  • the events uploader manager 722 of the second application server attempts to find an application server in the cluster that has been inactive for a certain duration (e.g., 30 minutes in one implementation). For example, in one embodiment, the events uploader manager 722 of the second application server searches the events uploader job detail table 910 for all inactive application servers (i.e., all application servers that do not have an uploader job record that was started or updated within the certain duration), and randomly selects one of the application servers.
  • the events uploader manager 722 determines whether an inactive application server in the cluster has been found. While this could be any application server in the cluster, for purposes of the following example, it will be assumed that the events uploader manager 722 has determined that the first application server is inactive. However, it is noted that if the events uploader manager 722 of the second application server cannot find an inactive application server, then method 1270 proceeds to 1298 where the backup events file uploader service 720 terminates.
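  • A sketch of this inactive-server selection (assuming Java 16+ and an assumed JobActivity summary type; the inactivity window is a parameter, e.g. 30 minutes):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.Optional;
import java.util.Random;

// Sketch of the backup mode's first step: a server is considered inactive
// when its most recent uploader job record was neither started nor updated
// within the inactivity window; one such server is then chosen at random.
record JobActivity(String appServer, Instant lastStartedOrUpdated) {}

class InactiveServerPicker {
    static Optional<String> pick(List<JobActivity> latestPerServer,
                                 Duration inactivityWindow, Instant now) {
        List<String> inactive = latestPerServer.stream()
                .filter(a -> a.lastStartedOrUpdated().isBefore(now.minus(inactivityWindow)))
                .map(JobActivity::appServer)
                .toList();
        return inactive.isEmpty()
                ? Optional.empty()
                : Optional.of(inactive.get(new Random().nextInt(inactive.size())));
    }
}
```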
  • the backup events uploader manager 722 reads the most recent uploader job record for the first application server from the events uploader job detail table 910 .
  • the backup events uploader manager 722 searches the most recent uploader job record for the first application server for a starting group identifier that uniquely identifies a group of events files generated by the first application server during a particular time window.
  • each group identifier is associated with a time window of a certain duration (e.g., 10 minutes). If a group identifier has a minute part (mm) that is the same as a current time, then that group identifier is the group identifier for the current time window.
  • backup events uploader manager 722 determines from the most recent uploader job record whether the status of the uploader job record indicates that it is running or completed.
  • when the backup events uploader manager 722 determines (at 1280) that the status of the most recent uploader job record indicates completed, this means that the events file recorded in this uploader job record was processed successfully and it is time to search for and process the next available events file. The method 1270 therefore proceeds to 1282, where the backup events uploader manager 722 of the second application server searches the TEFSS 620 for an events file that includes the starting group identifier.
  • the backup events uploader manager 722 determines whether an events file that is associated with the starting group identifier was able to be read from the TEFSS 620 .
  • the method proceeds to 1294 , where the backup events uploader manager 722 determines a next possible group identifier that uniquely identifies another group of events files generated by the first application server during another particular time window, and the method 1270 proceeds to 1296 .
  • the backup events uploader manager 722 determines whether the next possible group identifier is for a current time window (or “corresponds to” the current time window).
  • if the next possible group identifier is not for the current time window, the method 1270 proceeds to 1282, where the backup events uploader manager 722 searches the TEFSS 620 for an events file that includes the next possible group identifier.
  • if the next possible group identifier corresponds to the current time window, the method 1270 proceeds to 1298, where the backup events uploader manager 722 terminates the events file uploader service 720 at the second application server.
  • the method proceeds to 1285 .
  • the backup events uploader manager 722 assigns a worker thread 724 for the particular events file that was read (at 1284 ).
  • the worker thread 724 includes a particular events file reader 726 and a particular event loader 728 .
  • the backup events uploader manager 722 upserts the most recent uploader job record for the first application server at the events uploader job detail table 910 with updated job detail information.
  • the method 1270 then proceeds to 1286, where the events file reader 726 reads the events file, deserializes the events from the events file, and sends the events to the event loader 728.
  • the event loader 728 loads and saves each of the events from the events file to a row of the event table 920 maintained at the data store 130 .
  • the backup events uploader manager 722 updates the status of the uploader job record (in the events uploader job detail table 910 ) that corresponds to the events file to indicate completed.
  • the events uploader manager 722 deletes the events file from the TEFSS 620 .
  • the method 1270 then loops to 1282, where the backup events uploader manager 722 searches the TEFSS 620 for another events file that includes the group identifier.
  • the backup events uploader manager 722 upserts the most recent uploader job record for the first application server at the events uploader job detail table 910 with updated job detail information to indicate that the backup events uploader manager 722 has started to process and read the events file. In this case, the file name is already available in uploader job record 1010 . The method then proceeds to 1286 as described above.
  • the systems and methods described above with reference to FIGS. 1-12 may be implemented in any other type of computing environment, such as one with multiple servers, one with a single server, a multi-tenant server environment, a single-tenant server environment, or some combination of the above.
  • FIG. 13 shows a block diagram of an example of an environment 1310 in which an on-demand database service can be used in accordance with some implementations.
  • the environment 1310 includes user systems 1312 , a network 1314 , a database system 1316 (also referred to herein as a “cloud-based system”), a processor system 1317 , an application platform 1318 , a network interface 1320 , tenant database 1322 for storing tenant data 1323 , system database 1324 for storing system data 1325 , program code 1326 for implementing various functions of the system 1316 , and process space 1328 for executing database system processes and tenant-specific processes, such as running applications as part of an application hosting service.
  • environment 1310 may not have all of these components or systems, or may have other components or systems instead of, or in addition to, those listed above.
  • the environment 1310 is an environment in which an on-demand database service exists.
  • An on-demand database service, such as that which can be implemented using the system 1316, is a service that is made available to users outside of the enterprise(s) that own, maintain or provide access to the system 1316. As described above, such users generally do not need to be concerned with building or maintaining the system 1316. Instead, resources provided by the system 1316 may be available for such users' use when the users need services provided by the system 1316; that is, on the demand of the users.
  • Some on-demand database services can store information from one or more tenants into tables of a common database image to form a multi-tenant database system (MTS).
  • the term “multi-tenant database system” can refer to those systems in which various elements of hardware and software of a database system may be shared by one or more customers or tenants. For example, a given application server may simultaneously process requests for a great number of customers, and a given database table may store rows of data such as feed items for a potentially much greater number of customers.
  • a database image can include one or more database objects.
  • a relational database management system (RDBMS) or the equivalent can execute storage and retrieval of information against the database object(s).
  • Application platform 1318 can be a framework that allows the applications of system 1316 to execute, such as the hardware or software infrastructure of the system 1316 .
  • the application platform 1318 enables the creation, management and execution of one or more applications developed by the provider of the on-demand database service, users accessing the on-demand database service via user systems 1312 , or third party application developers accessing the on-demand database service via user systems 1312 .
  • the system 1316 implements a web-based customer relationship management (CRM) system.
  • the system 1316 includes application servers configured to implement and execute CRM software applications as well as provide related data, code, forms, renderable web pages and documents and other information to and from user systems 1312 and to store to, and retrieve from, a database system related data, objects, and Web page content.
  • data for multiple tenants may be stored in the same physical database object in tenant database 1322 .
  • tenant data is arranged in the storage medium(s) of tenant database 1322 so that data of one tenant is kept logically separate from that of other tenants so that one tenant does not have access to another tenant's data, unless such data is expressly shared.
  • the system 1316 also implements applications other than, or in addition to, a CRM application.
  • the system 1316 can provide tenant access to multiple hosted (standard and custom) applications, including a CRM application.
  • User (or third party developer) applications which may or may not include CRM, may be supported by the application platform 1318 .
  • the application platform 1318 manages the creation and storage of the applications into one or more database objects and the execution of the applications in one or more virtual machines in the process space of the system 1316 .
  • each system 1316 is configured to provide web pages, forms, applications, data and media content to user (client) systems 1312 to support the access by user systems 1312 as tenants of system 1316 .
  • system 1316 provides security mechanisms to keep each tenant's data separate unless the data is shared.
  • the servers of an MTS may be located in close proximity to one another (for example, in a server farm located in a single building or campus), or they may be distributed at locations remote from one another (for example, one or more servers located in city A and one or more servers located in city B).
  • each MTS could include one or more logically or physically connected servers distributed locally or across one or more geographic locations.
  • the term “server” is meant to refer to a computing device or system, including processing hardware and process space(s), an associated storage medium such as a memory device or database, and, in some instances, a database application (for example, an OODBMS or RDBMS) as is well known in the art. It should also be understood that “server system” and “server” are often used interchangeably herein.
  • database objects described herein can be implemented as part of a single database, a distributed database, a collection of distributed databases, a database with redundant online or offline backups or other redundancies, etc., and can include a distributed database or storage network and associated processing intelligence.
  • the network 1314 can be or include any network or combination of networks of systems or devices that communicate with one another.
  • the network 1314 can be or include any one or any combination of a LAN (local area network), WAN (wide area network), telephone network, wireless network, cellular network, point-to-point network, star network, token ring network, hub network, or other appropriate configuration.
  • the network 1314 can include a TCP/IP (Transmission Control Protocol and Internet Protocol) network, such as the global internetwork of networks often referred to as the “Internet” (with a capital “I”).
  • the Internet will be used in many of the examples herein. However, it should be understood that the networks that the disclosed implementations can use are not so limited, although TCP/IP is a frequently implemented protocol.
  • the user systems 1312 can communicate with system 1316 using TCP/IP and, at a higher network level, other common Internet protocols to communicate, such as HTTP, FTP, AFS, WAP, etc.
  • each user system 1312 can include an HTTP client commonly referred to as a “web browser” or simply a “browser” for sending and receiving HTTP signals to and from an HTTP server of the system 1316 .
  • the HTTP server can be implemented as the sole network interface 1320 between the system 1316 and the network 1314, but other techniques can be used in addition to or instead of it.
  • the network interface 1320 between the system 1316 and the network 1314 includes load sharing functionality, such as round-robin HTTP request distributors to balance loads and distribute incoming HTTP requests evenly over a number of servers.
  • each of the servers can have access to the MTS data; however, other alternative configurations may be used instead.
  • the user systems 1312 can be implemented as any computing device(s) or other data processing apparatus or systems usable by users to access the database system 1316 .
  • any of user systems 1312 can be a desktop computer, a work station, a laptop computer, a tablet computer, a handheld computing device, a mobile cellular phone (for example, a “smartphone”), or any other Wi-Fi-enabled device, wireless access protocol (WAP)-enabled device, or other computing device capable of interfacing directly or indirectly to the Internet or other network.
  • each user system 1312 typically executes an HTTP client, for example, a web browsing (or simply “browsing”) program, such as a web browser based on the WebKit platform, Microsoft's Internet Explorer browser, Netscape's Navigator browser, Opera's browser, Mozilla's Firefox browser, or a WAP-enabled browser in the case of a cellular phone, PDA or other wireless device, or the like, allowing a user (for example, a subscriber of on-demand services provided by the system 1316 ) of the user system 1312 to access, process and view information, pages and applications available to it from the system 1316 over the network 1314 .
  • Each user system 1312 also typically includes one or more user input devices, such as a keyboard, a mouse, a trackball, a touch pad, a touch screen, a pen or stylus or the like, for interacting with a graphical user interface (GUI) provided by the browser on a display (for example, a monitor screen, liquid crystal display (LCD), light-emitting diode (LED) display, among other possibilities) of the user system 1312 in conjunction with pages, forms, applications and other information provided by the system 1316 or other systems or servers.
  • the user interface device can be used to access data and applications hosted by system 1316 , and to perform searches on stored data, and otherwise allow a user to interact with various GUI pages that may be presented to a user.
  • implementations are suitable for use with the Internet, although other networks can be used instead of or in addition to the Internet, such as an intranet, an extranet, a virtual private network (VPN), a non-TCP/IP based network, any LAN or WAN or the like.
  • the users of user systems 1312 may differ in their respective capacities, and the capacity of a particular user system 1312 can be entirely determined by permissions (permission levels) for the current user of such user system. For example, where a salesperson is using a particular user system 1312 to interact with the system 1316 , that user system can have the capacities allotted to the salesperson. However, while an administrator is using that user system 1312 to interact with the system 1316 , that user system can have the capacities allotted to that administrator. Where a hierarchical role model is used, users at one permission level can have access to applications, data, and database information accessible by a lower permission level user, but may not have access to certain applications, database information, and data accessible by a user at a higher permission level. Thus, different users generally will have different capabilities with regard to accessing and modifying application and database information, depending on the users' respective security or permission levels (also referred to as “authorizations”).
  • each user system 1312 and some or all of its components are operator-configurable using applications, such as a browser, including computer code executed using a central processing unit (CPU) such as an Intel Pentium® processor or the like.
  • the system 1316 (and additional instances of an MTS, where more than one is present) and all of its components can be operator-configurable using application(s) including computer code to run using the processor system 1317 , which may be implemented to include a CPU, which may include an Intel Pentium® processor or the like, or multiple CPUs.
  • the system 1316 includes tangible computer-readable media having non-transitory instructions stored thereon/in that are executable by or used to program a server or other computing system (or collection of such servers or computing systems) to perform some of the implementations of the processes described herein.
  • computer program code 1326 can implement instructions for operating and configuring the system 1316 to intercommunicate and to process web pages, applications and other data and media content as described herein.
  • the computer code 1326 can be downloadable and stored on a hard disk, but the entire program code, or portions thereof, also can be stored in any other volatile or non-volatile memory medium or device as is well known, such as a ROM or RAM, or provided on any media capable of storing program code, such as any type of rotating media including floppy disks, optical discs, digital versatile disks (DVD), compact disks (CD), microdrives, and magneto-optical disks, and magnetic or optical cards, nanosystems (including molecular memory ICs), or any other type of computer-readable medium or device suitable for storing instructions or data.
  • program code may be transmitted and downloaded from a software source over a transmission medium, for example, over the Internet, or from another server, as is well known, or transmitted over any other existing network connection as is well known (for example, extranet, VPN, LAN, etc.) using any communication medium and protocols (for example, TCP/IP, HTTP, HTTPS, Ethernet, etc.) as are well known.
  • computer code for the disclosed implementations can be realized in any programming language that can be executed on a server or other computing system, such as, for example, C, C++, HTML or any other markup language, Java™, JavaScript, ActiveX, VBScript or any other scripting language, and many other programming languages as are well known.
  • JavaTM is a trademark of Sun Microsystems, Inc.
  • FIG. 14 shows a block diagram of example implementations of elements of FIG. 13 and example interconnections between these elements according to some implementations. That is, FIG. 14 also illustrates environment 1310, but in FIG. 14, various elements of the system 1316 and various interconnections between such elements are shown with more specificity according to some more specific implementations. Elements from FIG. 13 that are also shown in FIG. 14 will use the same reference numbers in FIG. 14 as were used in FIG. 13. Additionally, in FIG. 14, the user system 1312 includes a processor system 1412A, a memory system 1412B, an input system 1412C, and an output system 1412D. The processor system 1412A can include any suitable combination of one or more processors.
  • the memory system 1412 B can include any suitable combination of one or more memory devices.
  • the input system 1412 C can include any suitable combination of input devices, such as one or more touchscreen interfaces, keyboards, mice, trackballs, scanners, cameras, or interfaces to networks.
  • the output system 1412 D can include any suitable combination of output devices, such as one or more display devices, printers, or interfaces to networks.
  • the network interface 1320 of FIG. 13 is implemented as a set of HTTP application servers 1400 1 - 1400 N .
  • Each application server 1400, also referred to herein as an “app server,” is configured to communicate with tenant database 1322 and the tenant data 1423 therein, as well as system database 1324 and the system data 1425 therein, to serve requests received from the user systems 1412.
  • the tenant data 1423 can be divided into individual tenant storage spaces 1413 , which can be physically or logically arranged or divided.
  • user storage 1414 and application metadata 1416 can similarly be allocated for each user. For example, a copy of a user's most recently used (MRU) items can be stored to user storage 1414. Similarly, a copy of MRU items for an entire organization that is a tenant can be stored to tenant storage space 1413.
  • the process space 1328 includes system process space 1402 , individual tenant process spaces 1404 and a tenant management process space 1410 .
  • the application platform 1318 includes an application setup mechanism 1438 that supports application developers' creation and management of applications. Such applications and others can be saved as metadata into tenant database 1322 by save routines 1436 for execution by subscribers as one or more tenant process spaces 1404 managed by tenant management process 1410 , for example. Invocations to such applications can be coded using PL/SOQL 1434 , which provides a programming language style interface extension to API 1432 . A detailed description of some PL/SOQL language implementations is discussed in commonly assigned U.S. Pat. No.
  • the system 1316 of FIG. 14 also includes a user interface (UI) 1430 and an application programming interface (API) 1432 that expose system 1316 resident processes to users or developers at user systems 1412.
  • the environment 1310 may not have the same elements as those listed above or may have other elements instead of, or in addition to, those listed above.
  • Each application server 1400 can be communicably coupled with tenant database 1322 and system database 1324 , for example, having access to tenant data 1423 and system data 1425 , respectively, via a different network connection.
  • one application server 1400 1 can be coupled via the network 1314 (for example, the Internet)
  • another application server 1400 N can be coupled via a direct network link
  • another application server (not illustrated) can be coupled by yet a different network connection.
  • Transmission Control Protocol and Internet Protocol (TCP/IP) are examples of typical protocols that can be used for communicating between application servers 1400 and the system 1316.
  • each application server 1400 is configured to handle requests for any user associated with any organization that is a tenant of the system 1316 . Because it can be desirable to be able to add and remove application servers 1400 from the server pool at any time and for various reasons, in some implementations there is no server affinity for a user or organization to a specific application server 1400 .
  • an interface system implementing a load balancing function (for example, an F5 Big-IP load balancer) is communicably coupled between the application servers 1400 and the user systems 1412 to distribute requests to the application servers 1400.
  • the load balancer uses a least-connections algorithm to route user requests to the application servers 1400 .
  • system 1316 can be a multi-tenant system in which system 1316 handles storage of, and access to, different objects, data and applications across disparate users and organizations.
  • one tenant can be a company that employs a sales force where each salesperson uses system 1316 to manage aspects of their sales.
  • a user can maintain contact data, leads data, customer follow-up data, performance data, goals and progress data, etc., all applicable to that user's personal sales process (for example, in tenant database 1322 ).
  • because the data and the applications to access, view, modify, report, transmit, calculate, etc., can be maintained and accessed by a user system 1412 having little more than network access, the user can manage his or her sales efforts and cycles from any of many different user systems. For example, when a salesperson is visiting a customer and the customer has Internet access in their lobby, the salesperson can obtain critical updates regarding that customer while waiting for the customer to arrive in the lobby.
  • the user systems 1412 (which also can be client systems) communicate with the application servers 1400 to request and update system-level and tenant-level data from the system 1316 .
  • Such requests and updates can involve sending one or more queries to tenant database 1322 or system database 1324 .
  • the system 1316 (for example, an application server 1400 in the system 1316 ) can automatically generate one or more SQL statements (for example, one or more SQL queries) designed to access the desired information.
  • System database 1324 can generate query plans to access the requested data from the database.
  • the term “query plan” generally refers to one or more operations used to access information in a database system.
  • Each database can generally be viewed as a collection of objects, such as a set of logical tables, containing data fitted into predefined or customizable categories.
  • a “table” is one representation of a data object, and may be used herein to simplify the conceptual description of objects and custom objects according to some implementations. It should be understood that “table” and “object” may be used interchangeably herein.
  • Each table generally contains one or more data categories logically arranged as columns or fields in a viewable schema. Each row or element of a table can contain an instance of data for each category defined by the fields.
  • a CRM database can include a table that describes a customer with fields for basic contact information such as name, address, phone number, fax number, etc.
  • Another table can describe a purchase order, including fields for information such as customer, product, sale price, date, etc.
  • standard entity tables can be provided for use by all tenants.
  • such standard entities can include tables for case, account, contact, lead, and opportunity data objects, each containing pre-defined fields.
  • the term “entity” also may be used interchangeably with “object” and “table.”
  • tenants are allowed to create and store custom objects, or may be allowed to customize standard entities or objects, for example by creating custom fields for standard objects, including custom index fields.
  • Commonly assigned U.S. Pat. No. 7,779,039, titled CUSTOM ENTITIES AND FIELDS IN A MULTI-TENANT DATABASE SYSTEM, by Weissman et al., issued on Aug. 17, 2010, and hereby incorporated by reference in its entirety and for all purposes, teaches systems and methods for creating custom objects as well as customizing standard objects in a multi-tenant database system.
  • all custom entity data rows are stored in a single multi-tenant physical table, which may contain multiple logical tables per organization. It is transparent to customers that their multiple “tables” are in fact stored in one large table or that their data may be stored in the same table as the data of other customers.
  • FIG. 15A shows a system diagram illustrating example architectural components of an on-demand database service environment 1500 according to some implementations.
  • a client machine communicably connected with the cloud 1504 can communicate with the on-demand database service environment 1500 via one or more edge routers 1508 and 1512 .
  • a client machine can be any of the examples of user systems 1312 described above.
  • the edge routers can communicate with one or more core switches 1520 and 1524 through a firewall 1516 .
  • the core switches can communicate with a load balancer 1528 , which can distribute server load over different pods, such as the pods 1540 and 1544 .
  • the pods 1540 and 1544 which can each include one or more servers or other computing resources, can perform data processing and other operations used to provide on-demand services. Communication with the pods can be conducted via pod switches 1532 and 1536 . Components of the on-demand database service environment can communicate with database storage 1556 through a database firewall 1548 and a database switch 1552 .
  • accessing an on-demand database service environment can involve communications transmitted among a variety of different hardware or software components.
  • the on-demand database service environment 1500 is a simplified representation of an actual on-demand database service environment. For example, while only one or two devices of each type are shown in FIGS. 15A and 15B , some implementations of an on-demand database service environment can include anywhere from one to several devices of each type. Also, the on-demand database service environment need not include each device shown in FIGS. 15A and 15B , or can include additional devices not shown in FIGS. 15A and 15B .
  • one or more of the devices in the on-demand database service environment 1500 can be implemented on the same physical device or on different hardware. Some devices can be implemented using hardware or a combination of hardware and software. Thus, terms such as “data processing apparatus,” “machine,” “server” and “device” as used herein are not limited to a single hardware device, rather references to these terms can include any suitable combination of hardware and software configured to provide the described functionality.
  • the cloud 1504 is intended to refer to a data network or multiple data networks, often including the Internet.
  • Client machines communicably connected with the cloud 1504 can communicate with other components of the on-demand database service environment 1500 to access services provided by the on-demand database service environment.
  • client machines can access the on-demand database service environment to retrieve, store, edit, or process information.
  • the edge routers 1508 and 1512 route packets between the cloud 1504 and other components of the on-demand database service environment 1500 .
  • the edge routers 1508 and 1512 can employ the Border Gateway Protocol (BGP).
  • the BGP is the core routing protocol of the Internet.
  • the edge routers 1508 and 1512 can maintain a table of IP networks or ‘prefixes’, which designate network reachability among autonomous systems on the Internet.
  • the firewall 1516 can protect the inner components of the on-demand database service environment 1500 from Internet traffic.
  • the firewall 1516 can block, permit, or deny access to the inner components of the on-demand database service environment 1500 based upon a set of rules and other criteria.
  • the firewall 1516 can act as one or more of a packet filter, an application gateway, a stateful filter, a proxy server, or any other type of firewall.
  • the core switches 1520 and 1524 are high-capacity switches that transfer packets within the on-demand database service environment 1500 .
  • the core switches 1520 and 1524 can be configured as network bridges that quickly route data between different components within the on-demand database service environment.
  • the use of two or more core switches 1520 and 1524 can provide redundancy or reduced latency.
  • the pods 1540 and 1544 perform the core data processing and service functions provided by the on-demand database service environment. Each pod can include various types of hardware or software computing resources. An example of the pod architecture is discussed in greater detail with reference to FIG. 15B .
  • communication between the pods 1540 and 1544 is conducted via the pod switches 1532 and 1536 .
  • the pod switches 1532 and 1536 can facilitate communication between the pods 1540 and 1544 and client machines communicably connected with the cloud 1504 , for example via core switches 1520 and 1524 .
  • the pod switches 1532 and 1536 may facilitate communication between the pods 1540 and 1544 and the database storage 1556 .
  • the load balancer 1528 can distribute workload between the pods 1540 and 1544 . Balancing the on-demand service requests between the pods can assist in improving the use of resources, increasing throughput, reducing response times, or reducing overhead.
  • the load balancer 1528 may include multilayer switches to analyze and forward traffic.
  • the database firewall 1548 can act as a computer application firewall operating at the database application layer of a protocol stack.
  • the database firewall 1548 can protect the database storage 1556 from application attacks such as structured query language (SQL) injection, database rootkits, and unauthorized information disclosure.
  • the database firewall 1548 includes a host using one or more forms of reverse proxy services to proxy traffic before passing it to a gateway router.
  • the database firewall 1548 can inspect the contents of database traffic and block certain content or database requests.
  • the database firewall 1548 can work on the SQL application level atop the TCP/IP stack, managing applications' connection to the database or SQL management interfaces as well as intercepting and enforcing packets traveling to or from a database network or application interface.
  • communication with the database storage 1556 is conducted via the database switch 1552 .
  • the multi-tenant database storage 1556 can include more than one hardware or software component for handling database queries. Accordingly, the database switch 1552 can direct database queries transmitted by other components of the on-demand database service environment (for example, the pods 1540 and 1544) to the correct components within the database storage 1556.
  • the database storage 1556 is an on-demand database system shared by many different organizations as described above with reference to FIG. 13 and FIG. 14 .
  • FIG. 15B shows a system diagram further illustrating example architectural components of an on-demand database service environment according to some implementations.
  • the pod 1544 can be used to render services to a user of the on-demand database service environment 1500 .
  • each pod includes a variety of servers or other systems.
  • the pod 1544 includes one or more content batch servers 1564 , content search servers 1568 , query servers 1582 , file force servers 1586 , access control system (ACS) servers 1580 , batch servers 1584 , and app servers 1588 .
  • the pod 1544 also can include database instances 1590 , quick file systems (QFS) 1592 , and indexers 1594 . In some implementations, some or all communication between the servers in the pod 1544 can be transmitted via the switch 1536 .
  • the app servers 1588 include a hardware or software framework dedicated to the execution of procedures (for example, programs, routines, scripts) for supporting the construction of applications provided by the on-demand database service environment 1500 via the pod 1544 .
  • the hardware or software framework of an app server 1588 is configured to execute operations of the services described herein, including performance of the blocks of various methods or processes described herein.
  • two or more app servers 1588 can be included and cooperate to perform such methods, or one or more other servers described herein can be configured to perform the disclosed methods.
  • the content batch servers 1564 can handle requests internal to the pod. Some such requests can be long-running or not tied to a particular customer. For example, the content batch servers 1564 can handle requests related to log mining, cleanup work, and maintenance tasks.
  • the content search servers 1568 can provide query and indexer functions. For example, the functions provided by the content search servers 1568 can allow users to search through content stored in the on-demand database service environment.
  • the file force servers 1586 can manage requests for information stored in the File force storage 1598 .
  • the File force storage 1598 can store information such as documents, images, and basic large objects (BLOBs). By managing requests for information using the file force servers 1586 , the image footprint on the database can be reduced.
  • the query servers 1582 can be used to retrieve information from one or more file storage systems. For example, the query servers 1582 can receive requests for information from the app servers 1588 and transmit information queries to the NFS 1596 located outside the pod.
  • the pod 1544 can share a database instance 1590 configured as a multi-tenant environment in which different organizations share access to the same database. Additionally, services rendered by the pod 1544 may call upon various hardware or software resources.
  • the ACS servers 1580 control access to data, hardware resources, or software resources.
  • the batch servers 1584 process batch jobs, which are used to run tasks at specified times. For example, the batch servers 1584 can transmit instructions to other servers, such as the app servers 1588 , to trigger the batch jobs.
  • the QFS 1592 is an open source file storage system available from Sun Microsystems® of Santa Clara, Calif.
  • the QFS can serve as a rapid-access file storage system for storing and accessing information available within the pod 1544 .
  • the QFS 1592 can support some volume management capabilities, allowing many disks to be grouped together into a file storage system. File storage system metadata can be kept on a separate set of disks, which can be useful for streaming applications where long disk seeks cannot be tolerated.
  • the QFS system can communicate with one or more content search servers 1568 or indexers 1594 to identify, retrieve, move, or update data stored in the network file storage systems 1596 or other storage systems.
  • one or more query servers 1582 communicate with the NFS 1596 to retrieve or update information stored outside of the pod 1544 .
  • the NFS 1596 can allow servers located in the pod 1544 to access files over a network in a manner similar to how local storage is accessed.
  • queries from the query servers 1582 are transmitted to the NFS 1596 via the load balancer 1528 , which can distribute resource requests over various resources available in the on-demand database service environment.
  • the NFS 1596 also can communicate with the QFS 1592 to update the information stored on the NFS 1596 or to provide information to the QFS 1592 for use by servers located within the pod 1544 .
  • the pod includes one or more database instances 1590 .
  • the database instance 1590 can transmit information to the QFS 1592 . When information is transmitted to the QFS, it can be available for use by servers within the pod 1544 without using an additional database call.
  • database information is transmitted to the indexer 1594 .
  • Indexer 1594 can provide an index of information available in the database 1590 or QFS 1592 . The index information can be provided to file force servers 1586 or the QFS 1592 .
  • FIG. 16 illustrates a diagrammatic representation of a machine in the exemplary form of a computer system 1600 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed.
  • the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, or the Internet.
  • the machine may operate in the capacity of a server machine in a client-server network environment.
  • the machine may be a personal computer (PC), a set-top box (STB), a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
  • computer system 1600 may represent application server 110 , as shown in FIGS. 2-4 and 6 .
  • the exemplary computer system 1600 includes a processing device (processor) 1602 , a main memory 1604 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM)), a static memory 1606 (e.g., flash memory, static random access memory (SRAM)), and a data storage device 1618 , which communicate with each other via a bus 1630 .
  • Processing device 1602 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device 1602 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets.
  • the processing device 1602 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like.
  • the computer system 1600 may further include a network interface device 1608 .
  • the computer system 1600 also may include a video display unit 1610 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 1612 (e.g., a keyboard), a cursor control device 1614 (e.g., a mouse), and a signal generation device 1616 (e.g., a speaker).
  • the data storage device 1618 may include a computer-readable medium 1628 on which is stored one or more sets of instructions 1622 (e.g., instructions of in-memory buffer service 114 ) embodying any one or more of the methodologies or functions described herein.
  • the instructions 1622 may also reside, completely or at least partially, within the main memory 1604 and/or within processing logic 1626 of the processing device 1602 during execution thereof by the computer system 1600 , the main memory 1604 and the processing device 1602 also constituting computer-readable media.
  • the instructions may further be transmitted or received over a network 1620 via the network interface device 1608 .
  • While the computer-readable storage medium 1628 is shown in an exemplary embodiment to be a single medium, the term “computer-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
  • the term “computer-readable storage medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present invention.
  • the term “computer-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
  • Embodiments of the invention also relate to an apparatus for performing the operations herein.
  • This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer.
  • a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions.

Abstract

Computer-implemented methods and systems are provided. The system includes a data store that is configured to store events in an event table, a temporary events file storage system (TEFSS), and a cluster of application servers. The cluster includes a first application server that generates events, and a second application server that includes an events file uploader service. When the first application server is unable to directly write events to the data store, an indirect events writer generates events file(s), and writes the events file(s) to the TEFSS. Each events file includes a plurality of events flushed from an in-memory buffer service at the first application server. When the events file uploader service determines that the first application server is inactive, it reads the events file(s) from the TEFSS, and writes the events from each of the events files to the data store.

Description

RELATED APPLICATIONS
This application claims the benefit of U.S. Provisional Application No. 62/218,871, filed Sep. 15, 2015, the entire contents of which are hereby incorporated by reference herein.
TECHNICAL FIELD
This disclosure relates to the field of multi-tenant server operation, and in particular to an in-memory buffer service for a server that provides improved reliability.
BACKGROUND
A data buffer is a region of a physical memory storage used to temporarily store data while it is being moved from one place to another. In an application server implementing a data buffer, data objects may be stored in a buffer as they are retrieved from a client device or application before they are processed or sent elsewhere for more permanent storage. Buffers can be implemented in a fixed memory location in hardware, or by using a virtual data buffer in software, pointing at a location in the physical memory. Buffers are typically used when there is a difference between the rate at which data is received and the rate at which it can be processed, or in the case where these rates are variable. A buffer can be used to adjust timing by implementing a queue algorithm in memory, simultaneously writing data into the queue at one rate and reading it at another rate.
High-throughput, scalable streams of continuous events require transportation and queuing to a large data store, such as a non-relational, distributed database like HBase.
To address this issue and provide that high throughput, an In-Memory Buffer Service (IMBS) was recently developed as described, for example, in U.S. patent application Ser. No. 14/705,750, filed May 6, 2015, entitled “In-Memory Buffer Service,” and assigned to the assignee of the present invention, which is incorporated herein by reference in its entirety. The IMBS is a store and forward based service for objects. The IMBS is designed to provide a very low latency API to write objects to data store 130 (e.g., currently it takes 0.08 microseconds to write an object). The IMBS keeps events in memory until the number of objects reaches a configured limit or a given time has elapsed (e.g., 10 seconds). In one implementation, the IMBS can include a capture service running on an application server that receives events from a client application running on the application server to be stored in a data store. The IMBS stores the events in an in-memory bounded buffer on the application server. The in-memory bounded buffer is responsible for supporting a concurrently writable API, which stores events in memory, and includes a buffer flush regulator that manages flushing of the buffer. The in-memory bounded buffer includes a plurality of single-threaded segments, and the capture service can write events to each segment in parallel. When the number of events stored in the in-memory bounded buffer reaches a predefined limit, the in-memory bounded buffer provides a notification to the buffer flush regulator. The in-memory bounded buffer receives a request to flush the events in the in-memory bounded buffer from a consumer executor service. The consumer executor service is responsible for consuming the events in the in-memory bounded buffer using a dynamically sized thread pool of consumer threads that read the segments of the bounded buffer in parallel, allowing the events to be written directly to the data store. Events can be consumed in parallel fashion to allow for high throughput.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the present invention, which, however, should not be taken to limit the present invention to the specific embodiments, but are for explanation and understanding only.
FIG. 1 is a block diagram illustrating a computing environment with an in-memory buffer service, according to an embodiment.
FIG. 2 is a block diagram illustrating an application server with an in-memory buffer service, according to an embodiment.
FIG. 3 is a block diagram illustrating an application server with an in-memory buffer service, according to an embodiment.
FIG. 4 is a block diagram illustrating an application server with dual in-memory bounded buffers, according to an embodiment.
FIG. 5 is a flow diagram illustrating a method for in-memory buffering of event data, according to an embodiment.
FIG. 6 is a block diagram illustrating an application server with an in-memory buffer service according to another embodiment.
FIG. 7A is a block diagram illustrating one instance of a consumer thread and an event capture thread of FIG. 6 according to an embodiment.
FIG. 7B is a block diagram illustrating an events file uploader service of FIG. 6 according to an embodiment.
FIG. 8 is a block diagram illustrating an exemplary implementation of the temporary events file storage system (TEFSS) of FIG. 6 according to an embodiment.
FIG. 9 is a block diagram illustrating an exemplary implementation of the data store of FIG. 6 according to an embodiment.
FIG. 10 is a data structure diagram illustrating a schema used for each uploader job record that is stored at the events uploader job detail table according to an embodiment.
FIG. 11A is a flow diagram illustrating a method for writing events files to a temporary events file storage system (TEFSS) according to another embodiment.
FIG. 11B is a flow diagram illustrating a method performed by an events file uploader service for transferring events files from the temporary events file storage system (TEFSS) to a data store according to another embodiment.
FIG. 12A is a flow diagram illustrating a method performed by the events file uploader service when operating in regular mode for transferring events files from the temporary events file storage system (TEFSS) to the data store according to another embodiment.
FIG. 12B is a flow diagram illustrating a method performed by the events file uploader service when operating in regular mode according to another embodiment.
FIG. 12C is a flow diagram illustrating a method performed by the events file uploader service when operating in backup mode according to another embodiment.
FIG. 13 shows a block diagram of an example environment in which an on-demand database service can be used in accordance with some embodiments.
FIG. 14 shows a block diagram of example implementations of elements of FIG. 13 and example interconnections between these elements according to some embodiments.
FIG. 15A shows a system diagram of example architectural components of an on-demand database service environment according to some embodiments.
FIG. 15B shows a system diagram further illustrating example architectural components of an on-demand database service environment according to some embodiments.
FIG. 16 is a block diagram illustrating an exemplary computer system, according to an embodiment.
DETAILED DESCRIPTION
Embodiments are described for an in-memory buffer service. In certain high volume event systems, it may be beneficial to ingest and process events as quickly and efficiently as possible (i.e., in real-time). As used herein, the term “event” refers to a tuple (or list of values or elements) which represents something that happened. An event is immutable (or irreversible) and timestamped. The “something” that happened can be anything. In one context, an event can be a tuple that characterizes an action taken by a user with respect to data, such as an action taken on data that describes an object. For instance, an event can refer to data that characterizes an action performed by a user interacting with a client application being executed at the application server. Events can be monitored and recorded (or logged) to capture user actions such as the use of applications, windows opened, system commands executed, check boxes clicked, text entered/edited, URLs visited and nearly every other on-screen event.
Events can include standard events and custom events. Depending on the implementation, processing of events may include ingesting, processing, and persisting potentially millions of events over the span of an hour or billions of events over the span of a week. For instance, a certain system may store up to a week's worth of details from sets of operations that are executed as a single unit (e.g., “Apex transactions”) in an open source, non-relational, distributed database such as Apache HBase™, or other data store. This may be accomplished by pointing a debug log event handler that supplies limit usage information, or any other program or component that generates a high volume of events or other data elements, at the data store. In other embodiments, some other data store may be used, such as a NoSQL database, a non-opensource database, a relational database, a non-distributed database, or other type of data store.
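For illustration only, an event as described above (an immutable, timestamped tuple) might be modeled as shown below; the field names (eventType, values) are assumptions rather than the patent's schema:

```java
import java.time.Instant;
import java.util.List;

// Hypothetical event tuple: immutable once constructed, and timestamped.
public final class Event {
    private final Instant timestamp;    // when the "something" happened
    private final String eventType;     // assumed label, e.g. "URL_VISITED"
    private final List<String> values;  // the tuple's remaining elements

    public Event(Instant timestamp, String eventType, List<String> values) {
        this.timestamp = timestamp;
        this.eventType = eventType;
        this.values = List.copyOf(values);  // defensive, unmodifiable copy
    }

    public Instant timestamp() { return timestamp; }
    public String eventType()  { return eventType; }
    public List<String> values() { return values; }
}
```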
In an implementation that averages 7.3 billion Apex executions per month (or 243 million per day) being stored in the data store, storing one row per transaction limit with 8 types of limits would require storing 58.4 billion rows per month or about 1.8 billion rows per day across an entire service. Thus, if only raw event data were kept around for a week, that would require 13.1 billion rows at a time before being deleted. In other implementations, there may be up to 15 types of limits, which would result in even more rows being stored. This high volume situation could benefit significantly from a new way for handling event data at scale with high throughput.
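As a rough check of the arithmetic above (assuming a 31-day month):

```latex
7.3\times10^{9}\ \tfrac{\text{transactions}}{\text{month}} \times 8\ \text{limit types}
  = 58.4\times10^{9}\ \tfrac{\text{rows}}{\text{month}},\qquad
\frac{58.4\times10^{9}}{31\ \text{days}} \approx 1.88\times10^{9}\ \tfrac{\text{rows}}{\text{day}},\qquad
1.88\times10^{9} \times 7 \approx 13.1\times10^{9}\ \text{rows per week}
```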
In one embodiment, an in-memory buffer service running on a multi-tenant application server provides a low latency application program interface (API) that writes objects, such as event records, to the data store. In other embodiments, the application server may not serve multiple clients, but rather is utilized by a single client that generates a significant volume of event data. The in-memory buffer service provides a “store and forward” based service for data objects, where those objects (e.g., events) are temporarily stored in an in-memory bounded buffer before being forwarded to a data store for longer term storage. A capture service can store events in a bounded buffer, where they may be kept in memory until the number of objects reaches a predefined limit, or the events have been in memory for a predetermined period of time (e.g., 5 seconds, 10 seconds, 30 seconds, 1 minute, etc.). The in-memory buffer provides a very low latency API to write objects. For instance, in some examples, writing to the bounded buffer can be done in as little as 0.08 microseconds.
The in-memory buffer may be responsible for supporting a concurrently writable API, which enables the storing of events in memory and manages flushing of the buffer. In some embodiments, the in-memory buffer may include a plurality of buffer segments, where each segment is single threaded. Each segment can support high read consistency by waiting until all (or most) write threads are complete before read threads are initiated. Thus, each segment may only be read once. Additionally, the concurrent bounded buffer, composed of segments, may be used to store events concurrently from various threads. The bounded buffer may also maintain a notion of load factor and may support two types of overflow policies. In some embodiments, the bounded buffer keeps the latest object by replacing the oldest object with the most recent object (i.e., first in first out). In another embodiment, the bounded buffer drops the latest object if the buffer is full. A buffer flush regulator may further be used to regulate the flushing of the buffer. The regulating may be based on size and time, and a flush queues up the buffered events for consumption and writing to the data store. For example, a buffer flush may be initiated when the buffer load reaches some predefined limit (e.g., 80% capacity). In one embodiment, the concurrent bounded buffer may provide a notification when the number of objects reaches a predefined limit that triggers buffer flushing.
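As a concrete illustration of a single-threaded segment that tracks a load factor and supports the two overflow policies described above, consider this minimal sketch; the class and method names are illustrative assumptions, not the patent's API:

```java
import java.util.ArrayDeque;
import java.util.Deque;

enum OverflowPolicy { KEEP_LATEST, DROP_LATEST }

// One segment of the bounded buffer; the real buffer holds many such segments
// so that capture threads can write to different segments in parallel.
final class BufferSegment {
    private final Deque<Object> events = new ArrayDeque<>();
    private final int capacity;
    private final OverflowPolicy policy;

    BufferSegment(int capacity, OverflowPolicy policy) {
        this.capacity = capacity;
        this.policy = policy;
    }

    synchronized void append(Object event) {
        if (events.size() >= capacity) {
            if (policy == OverflowPolicy.DROP_LATEST) {
                return;             // buffer full: drop the newly received event
            }
            events.pollFirst();     // KEEP_LATEST: evict the oldest event (FIFO)
        }
        events.addLast(event);
    }

    synchronized double loadFactor() {
        return (double) events.size() / capacity;  // e.g., flush when this hits 0.8
    }
}
```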
Additionally, a consumer executor service is responsible for consuming the in-memory buffer and uses a dynamically sized thread pool to consume (i.e., process) the objects in parallel fashion in order to maximize throughput. The consumer executor service may include a service thread that initiates automatic restarts if a main thread is interrupted. The service may also include an asynchronous API for starting and stopping a thread. The service thread further may use, in one implementation, Java's ThreadPool to get worker threads to run the consumer tasks concurrently and in a reliable manner. The service thread also may iteratively call the blockedGetAndReset( ) API of the buffer and may assign a set of data to the consumer task. The consumer task may be eventually run by the ThreadPool's worker thread.
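The service-thread loop might look roughly like the sketch below. Only blockedGetAndReset() is named in the text; the surrounding types and wiring are assumptions:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Assumed facade over the bounded buffer; only blockedGetAndReset() appears in the text.
interface InMemoryBuffer {
    List<Object> blockedGetAndReset() throws InterruptedException;
}

final class ServiceThreadSketch implements Runnable {
    private final InMemoryBuffer buffer;
    private final ExecutorService workers = Executors.newCachedThreadPool();
    private volatile boolean running = true;

    ServiceThreadSketch(InMemoryBuffer buffer) { this.buffer = buffer; }

    @Override public void run() {
        while (running) {
            try {
                // Blocks until the flush regulator releases a batch of buffered events.
                List<Object> batch = buffer.blockedGetAndReset();
                workers.submit(() -> consume(batch));  // a pooled worker runs the consumer task
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;  // in the described design a supervisor would restart this thread
            }
        }
        workers.shutdown();
    }

    private void consume(List<Object> batch) {
        // Consumer-task logic (direct write, then MQ fallback) is sketched next.
    }

    void stop() { running = false; }
}
```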
A consumer factory of the consumer executor service allows a user to customize a consumer task. A consumer task may first try to write to the data store (e.g., HBase™, Bigtable™, MongoDB™, etc.) directly within a given time in order to reduce the load on an asynchronous message delivery mechanism (e.g., a message queue (MQ)) and to make data available instantaneously. If writing to the data store fails, however, the consumer task may enqueue the objects in the MQ, which eventually writes the objects to the data store via an MQ Handler. In some embodiments, a shutdown hook is used to close the service properly when a shutdown of the application server is requested. A log of statistics may also be kept, and the consumer executor service may be restarted if it was terminated for unknown reasons.
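The try-direct-then-enqueue behavior of a consumer task can be sketched as follows; DataStore and MessageQueue are assumed stand-in interfaces, and the 5-second deadline is illustrative:

```java
import java.util.List;
import java.util.concurrent.TimeUnit;

interface DataStore    { void put(List<Object> events, long timeout, TimeUnit unit) throws Exception; }
interface MessageQueue { void enqueue(List<Object> events); }

final class ConsumerTaskSketch implements Runnable {
    private final List<Object> events;
    private final DataStore dataStore;
    private final MessageQueue mq;

    ConsumerTaskSketch(List<Object> events, DataStore dataStore, MessageQueue mq) {
        this.events = events;
        this.dataStore = dataStore;
        this.mq = mq;
    }

    @Override public void run() {
        try {
            // Direct write keeps latency low and makes data available instantly.
            dataStore.put(events, 5, TimeUnit.SECONDS);
        } catch (Exception directWriteFailed) {
            // Fallback: the MQ handler writes these to the data store later.
            mq.enqueue(events);
        }
    }
}
```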
I. In-Memory Buffer Service
FIG. 1 is a block diagram illustrating a computing environment with an in-memory buffer service, according to an embodiment. In one embodiment, network environment 100 includes application server 110, client devices 120, 122, 124 and data store 130. Application server 110 and client devices 120, 122, 124 may be connected through a series of one or more networks 140, which may be, for example, a local area network (LAN), a wide area network (WAN), a global area network (GAN) such as the Internet, or a combination of such networks. In another embodiment, application server 110 and client devices 120, 122, 124 may have a direct connection to each of the other devices in the network. The illustrated embodiment shows one application server 110, and three client devices 120, 122, 124; however, in other embodiments, there may be any number of servers or client devices, and environment 100 may include additional and/or different components.
Each of client devices 120, 122, 124 may be, for example, a personal computer (PC), workstation, laptop computer, tablet computer, mobile phone, smartphone, personal digital assistant (PDA) or the like. Client devices 120, 122, 124 may communicate with application server 110 to access resources on application server, such as client application 112. For example a user may access client application 112 through a web browser or other HTTP client application on the client device.
In one embodiment, application server 110 may be any computing device, such as computing system 1600, described below with respect to FIG. 16. In one embodiment, application server 110 may be a multi-tenant application server designed to provide access to a number of client applications, such as client application 112, to one or more client devices, such as client devices 120, 122, 124. In another embodiment, application server 110 may be a single-tenant application server designed to service a single client. Client application 112 and other resources provided by application server 110, such as processing resources, storage resources, etc., may be maintained by application server 110 and made available to the users of client devices 120, 122, 124 as needed (i.e., “on-demand”). This application server 110 can include various elements of hardware and software of a database system that may be shared by one or more customers or tenants. For example, application server 110 may simultaneously process requests for a great number of customers. Application server 110 may include an application platform including a framework that allows the applications to execute, such as the hardware or software infrastructure of the system. In one embodiment, the application platform enables the creation, management and execution of one or more applications, such as client application 112, developed by the provider of the application server, customers accessing the application server 110 via client devices 120, 122, 124, or third party application developers.
In one embodiment, application server 110 includes in-memory buffer service 114. In-memory buffer service 114 can ingest and process events generated by client application 112, buffer those events and eventually store the events in data store 130. In one embodiment, data store 130 provides an application programming interface (API) which can be called by the in-memory buffer service 114 in order to store the events in data store 130. In one embodiment, data store 130 may be an open source, non-relational, distributed database such as Apache HBase™, Bigtable™, MongoDB™, or other data store. Examples of events generated by client application 112 may include errors, exceptions, faults, failures, crashes, incidents, or other occurrences. For example, client application 112 may include a user interface layer that presents a user interface visible on one of client devices 120, 122, 124. Through selection of a user interface element, the user may initiate some processing operation in a logical layer of the client application that hits some hard limit defined by the application server 110 (e.g., number of processing cycles consumed per day, amount of storage resources consumed), and page rendering is stopped. The reaching of this hard limit may trigger the creation of an event by client application 112, which is recorded for possible future review. Given the volume at which such events are potentially generated, conventional means for recording and storing these events may result in an unacceptable level of latency. As such, in one embodiment, in-memory buffer service 114 can ingest and process the events, buffer the events and eventually store the events in data store 130. The buffering of the events in memory before storage in data store 130 can allow a high volume of events to be processed in near real-time with minimal latency and without adversely affecting performance of the application server 110 or client application 112. Additional details of the in-memory buffer service 114 are provided below.
FIG. 2 is a block diagram illustrating an application server with an in-memory buffer service, according to an embodiment. As described above, application server 110 may include client application 112, in-memory buffer service 114 and data store 130. In one embodiment, data store 130 may be internal to application server 110. In another embodiment, data store 130 may be external to application server 110 and may be connected to application server 110 over a network or other connection. In other embodiments, application server 110 may include different and/or additional components which are not shown in order to simplify the description. In one embodiment, data store 130 may be embodied by one or more mass storage devices which can include, for example, flash memory, magnetic or optical disks, or tape drives; read-only memory (ROM); random-access memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM); flash memory; or any other type of storage medium.
Client application 112 may be any type of computer application program that generates events. For example, client application 112 may be an entertainment application, productivity application, business application, social networking application or other type of application. In one embodiment, in-memory buffer service 114 processes events for storage in data store 130. In other embodiments, in-memory buffer service 114 may process any other type of data object for storage in data store 130 or elsewhere. In one embodiment, a capture service 202 running on application server 110 receives events from client application 112 that are to be stored in data store 130. Capture service 202 temporarily stores the received events in bounded buffer 204. The bounded buffer 204 may include a plurality of single-threaded segments, to which the capture service 202 can write the events in parallel. In one embodiment, bounded buffer 204 may include 16 single-threaded segments, each of which can be written in parallel with a different event generated by client application 112. The size of bounded buffer 204 is configurable according to the particular implementation. In one embodiment, the buffer size may be approximately 10 megabytes (MB) to 20 MB. In other embodiments, the buffer may have a different size, such as 1 MB, 50-100 MB, or 1 terabyte (TB).
In one embodiment, in-memory buffer service 114 further includes a buffer flush regulator 206. Buffer flush regulator 206 controls when bounded buffer 204 is emptied (i.e., “flushed”) for consumption by consumer executor service 208 and storage in data store 130. In one embodiment, logic associated with bounded buffer 204 monitors the load on bounded buffer 204 and provides a notification to the buffer flush regulator 206 when the number of events stored in the bounded buffer 204 reaches a predefined limit (e.g., 80% full) or when a predefined amount of time has passed since the contents of the bounded buffer 204 were last written to data store 130 (e.g., 10 seconds). In one embodiment, consumer executor service 208 periodically sends a request for buffer flushing to buffer flush regulator 206. Buffer flush regulator 206 determines whether a notification has been received from bounded buffer 204 indicating that either the predefined size limit or the predefined time limit has been reached. If not, buffer flush regulator 206 delays the request. This way, if the buffer flush regulator 206 has not received any notification from the single-threaded segments 302, the buffer flush regulator 206 makes the buffer flush request from the consumer executor service 208 wait until either a notification arrives or the time delay is over. If the notification has been received, buffer flush regulator 206 grants the request and consumer executor service 208 may consume the events in the bounded buffer 204 using a dynamically sized thread pool of consumer threads to read the segments of the bounded buffer 204 in parallel. By consuming the events, consumer executor service 208 reads the events from bounded buffer 204 and writes the events to data store 130.
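The gating behavior can be captured in a small monitor. A minimal sketch, assuming illustrative names and timing values:

```java
// Sketch of the flush gate: the consumer's request proceeds only after the
// buffer has signaled its size or time limit, otherwise it is delayed.
final class BufferFlushRegulatorSketch {
    private boolean flushNeeded = false;

    // Called by the bounded buffer when ~80% full or ~10 s since the last flush.
    synchronized void notifyFlushNeeded() {
        flushNeeded = true;
        notifyAll();
    }

    // Called by the consumer executor service before it consumes the buffer.
    synchronized void awaitFlushGrant(long maxDelayMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + maxDelayMillis;
        long remaining;
        while (!flushNeeded && (remaining = deadline - System.currentTimeMillis()) > 0) {
            wait(remaining);        // delay the request until notified or timed out
        }
        flushNeeded = false;        // grant the request and reset for the next cycle
    }
}
```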
In some situations, all of the segments of bounded buffer 204 may be filled with events and new events are received by capture service 202 before buffer flush regulator 206 can empty the contents of bounded buffer 204. Bounded buffer 204 may handle this situation in a number of different ways according to a defined overflow policy. In one embodiment, bounded buffer 204 may implement a “keep latest” overflow policy where the oldest event in bounded buffer 204 is overwritten with the newly received event from client application 112. In another embodiment, bounded buffer 204 may implement a “drop latest” overflow policy, where the newly received event is prevented from being stored in bounded buffer 204.
FIG. 3 is a block diagram illustrating an application server with an in-memory buffer service, according to an embodiment. In one embodiment, application server 110 includes in-memory buffer service 114 and consumer executor service 208. In one embodiment, consumer executor service 208 may be part of in-memory buffer service 114; however, in another embodiment, consumer executor service 208 may be a separate, standalone, service. In one embodiment, an in-memory bounded buffer utilized by the in-memory buffer service 114 includes a plurality of single-threaded segments 302, to which capture service 202 can write the events in parallel. In one embodiment, the bounded buffer may include 16 single-threaded segments 302, each of which can be written in parallel with a different event or other data object. In other embodiments, the bounded buffer may have some other number of segments, including more or less than 16 segments.
In one embodiment, in-memory buffer service 114 further includes buffer flush regulator 206. Buffer flush regulator 206 controls when bounded buffer 204 is emptied (i.e., “flushed”) for consumption by consumer executor service 208 and storage in data store 130. In one embodiment, in-memory buffer service 114 monitors the load on the buffer segments 302 and provides a notification to the buffer flush regulator 206 when a certain portion or percentage of the buffer segments 302 are full (e.g., 80% full) or when a predefined amount of time has passed since the contents of the buffer segments 302 were flushed (e.g., 10 seconds). In one embodiment, a main service thread 304 in consumer executor service 208 may periodically send a request for buffer flushing to buffer flush regulator 206. Buffer flush regulator 206 may determine whether a notification has been received from bounded buffer 204 indicating that either the predefined size limit or the predefined time limit has been reached. If not, buffer flush regulator 206 delays the request. If the notification has been received, buffer flush regulator 206 grants the request and consumer executor service 208 may consume the events in the bounded buffer 204 using a dynamically sized thread pool 306 of consumer threads 308 to read the segments of the bounded buffer 204 in parallel. Dynamically sized thread pool 306 can add or remove consumer threads 308 so that the number of consumer threads 308 matches the number of buffer segments 302 being consumed. For example, if all 16 segments 302 are being read, dynamically sized thread pool 306 can have 16 consumer threads 308. If, however, only 8 buffer segments 302 contain events, then thread pool 306 need only include 8 threads 308. The consumer threads 308 read the events from buffer segments 302 in parallel and write the events to data store 130.
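To illustrate the pool sizing, the sketch below drains each non-empty segment on its own thread, so the thread count always matches the number of occupied segments; SegmentDrain and writeToDataStore are hypothetical names:

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

final class SegmentDrain {
    // One consumer thread per non-empty segment, all reading in parallel.
    static void consume(List<List<Object>> nonEmptySegments,
                        Consumer<List<Object>> writeToDataStore) throws InterruptedException {
        int threads = nonEmptySegments.size();  // e.g., 8 threads for 8 occupied segments
        if (threads == 0) return;
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        CountDownLatch done = new CountDownLatch(threads);
        for (List<Object> segment : nonEmptySegments) {
            pool.submit(() -> {
                try {
                    writeToDataStore.accept(segment);  // direct write path
                } finally {
                    done.countDown();
                }
            });
        }
        done.await();    // all segments consumed
        pool.shutdown();
    }
}
```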
FIG. 4 is a block diagram illustrating an application server with dual in-memory bounded buffers, according to an embodiment. In one embodiment, application server 110 includes in-memory buffer service 114 and consumer executor service 208. In one embodiment, in-memory buffer service 114 utilizes two or more bounded buffers 204, 404 to buffer events received from a client application. In one embodiment, each bounded buffer 204, 404 may include a plurality of single-threaded segments, each of which can be written in parallel with a different event or other data object. The dual bounded buffers 204, 404 can be used alternately to prevent a service interruption. For example, once bounded buffer 204 has been filled with event data and is in the process of being flushed by consumer executor service 208, additional events may be received from the client application or other source. Since bounded buffer 204 is not available, those new events cannot be stored in bounded buffer 204. Instead, bounded buffer 404 may be utilized to store the newly received events. Bounded buffer 404 can function as the current buffer, storing new events until it is ready to be flushed. Buffer flush regulator 206 can initiate a flush of bounded buffer 404 using the same conditions described above with respect to bounded buffer 204. Once bounded buffer 404 is flushed, in-memory buffer service 114 can return to using bounded buffer 204 as the current buffer. The process can repeat, alternating between the available buffers to prevent service interruption and the loss of any event data. In other embodiments, there may be more than two bounded buffers which are cycled through in a similar fashion. For example, a given server may have three, four, or more buffers available. In one embodiment, a server may initially have some number of buffers (e.g., two), but additional buffers may be allocated as the need arises. Thus, if event data is being received at a rate that cannot be serviced using the current number of buffers, the in-memory buffer service or other logic may allocate additional buffers in order to accommodate the higher volume of events.
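The alternating-buffer scheme amounts to an atomic "current buffer" pointer that is swapped at flush time. A minimal sketch, using a simplified unbounded stand-in (SimpleBuffer) for the segmented bounded buffer:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicReference;

// Simplified stand-in for the segmented bounded buffer (unbounded here for brevity).
final class SimpleBuffer<T> {
    final Queue<T> events = new ConcurrentLinkedQueue<>();
    void append(T e) { events.add(e); }
}

final class DualBuffer<T> {
    private final SimpleBuffer<T> a = new SimpleBuffer<>();
    private final SimpleBuffer<T> b = new SimpleBuffer<>();
    private final AtomicReference<SimpleBuffer<T>> current = new AtomicReference<>(a);

    void append(T event) {
        current.get().append(event);   // new events always land in the current buffer
    }

    // Called at flush time: swap first, then drain the retired buffer while
    // its twin keeps accepting newly received events.
    SimpleBuffer<T> swapForFlush() {
        SimpleBuffer<T> retired = current.get();
        current.set(retired == a ? b : a);
        return retired;
    }
}
```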
Under normal operation, the threads of the dynamically sized thread pool 306 in consumer executor service 208 may write the events read from either bounded buffer 204 or bounded buffer 404 directly to data store 130. Depending on the implementation, however, the data store 130 may be a distributed system, and writes to it may take a significant period of time. In such a case, or if all or a portion of the data store 130 is temporarily unavailable, consumer executor service 208 may enqueue the events from the bounded buffer 204 or 404 in a message queue 406 for subsequent writing to data store 130 after data store 130 becomes available. In one embodiment, consumer executor service 208 may first try to write directly to data store 130, but if data store 130 does not respond within some period of time, message queue 406 may be utilized. Message queue 406 may be any available in-memory or out-of-memory data structure that can temporarily accommodate the events before they are stored in data store 130. In one embodiment, message queue 406 may be large enough to store the contents of one bounded buffer 204 at a time. In another embodiment, message queue 406 may be large enough to store the contents of multiple buffers concurrently.
FIG. 5 is a flow diagram illustrating a method for in-memory buffering of event data, according to an embodiment. The method 500 may be performed by processing logic that comprises hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processor to perform hardware simulation), firmware, or a combination thereof. The processing logic is configured to receive event data for storage in a data store and buffer the event data in an in-memory bounded buffer prior to storage in the data store. In one embodiment, method 500 may be performed by in-memory buffer service, as shown in FIGS. 1-4.
Referring to FIG. 5, at block 505, method 500 receives events from client application 112. In one embodiment, a capture service 202 running on application server 110 receives events from client application 112 that are to be stored in data store 130. In another embodiment, capture service 202 may receive generic data objects from some other source for in-memory buffering.
At block 510, method 500 determines whether in-memory bounded buffer 204 is full. In one embodiment, bounded buffer 204 may include 16 single-threaded segments, each of which can be written in parallel with a different event generated by client application 112. In one embodiment, in-memory buffer service 114 determines whether each of the buffer segments has been written with an event. If each segment contains an event, then in-memory buffer service 114 determines that the bounded buffer 204 is full. If there are one or more segments of the bounded buffer 204 that do not contain events, then in-memory buffer service 114 determines that the bounded buffer 204 is not full.
If in-memory bounded buffer 204 is not full, at block 515, method 500 stores the received events in the in-memory bounded buffer 204. In one embodiment, capture service 202 writes the received events to one or more of the segments 302 of in-memory bounded buffer 204 in parallel.
If in-memory bounded buffer 204 is full, at block 520, method 500 applies a buffer overflow policy prior to storing the events. In one embodiment, bounded buffer 204 may implement a “keep latest” overflow policy where the oldest event in bounded buffer 204 is overwritten with the newly received event from client application 112. In another embodiment, bounded buffer 204 may implement a “drop latest” overflow policy, where the newly received event is prevented from being stored in bounded buffer 204.
At block 525, method 500 determines if a predefined limit has been reached. In one embodiment, in-memory buffer service 114 monitors the load on bounded buffer 204 and determines when the number of events stored in the bounded buffer 204 reaches a predefined limit (e.g., 80% full) or when a predefined amount of time has passed since the contents of the bounded buffer 204 were last written to data store 130 (e.g., 10 seconds). If the predefined limit has been reached, at block 530, method 500 provides a notification to buffer flush regulator 206. In one embodiment, in-memory buffer service 114 provides the notification to buffer flush regulator 206.
At block 535, method 500 receives a buffer flush request from consumer executor service 208. In one embodiment, consumer executor service 208 may periodically send a request for buffer flushing to buffer flush regulator 206. The period with which the request is sent can be configurable depending on the particular implementation.
At block 540, method 500 determines whether the in-memory bounded buffer 204 is ready to be flushed. Buffer flush regulator 206 may determine whether a notification has been received from bounded buffer 204, at block 530, indicating that either the predefined size limit or the predefined time limit have been reached. If not, buffer flush regulator 206 denies the request.
If the in-memory bounded buffer 204 is ready to be flushed, at block 545, method 500 determines whether the data store 130 is available. Depending on the implementation, the data store 130 may be a distributed system and possibly take a significant period of time to be written. In such a case, all or a portion of the data store 130 may be temporarily unavailable.
If the data store 130 is available, at block 550, method 500 consumes the events from the in-memory bounded buffer 204 by writing the events directly to the data store 130. In one embodiment, consumer executor service 208 may consume the events in the bounded buffer 204 using a dynamically sized thread pool of consumer threads to read the segments of the bounded buffer 204 in parallel. By consuming the events, consumer executor service 208 reads the events from bounded buffer 204 and writes the events to data store 130.
If the data store is not available, at block 555, method 500 enqueues the events from in-memory bounded buffer 204 in a message queue 406 for subsequent writing to the data store 130 after the data store 130 becomes available. Message queue 406 may be any available in-memory or out-of-memory data structure that can temporarily accommodate the events before they are stored in data store 130. In one embodiment, message queue 406 may be large enough to store the contents of one bounded buffer 204 at a time. In another embodiment, message queue 406 may be large enough to store the contents of multiple buffers concurrently.
Because the in-memory buffer service is implemented in memory, it is inherently lossy. For example, if the application server 110 crashes, then objects/events that the server 110 is holding in memory (and that have not been written to the data store 130) can be lost. To help address this issue, the IMBS can implement the message queue 310 to improve reliability as described above.
However, due to limits in scalability, the reliability of the message queue 310 can be somewhat limited. The MQ 310 operates as a message queue backed by a large relational/transactional data store 130 with guaranteed message delivery. Enqueuing events in the MQ 310 can have relatively high latency. This can be problematic when the volume of events is high (e.g., on the order of 200+ million events per day). Because of high latency during periods of high load, the buffer can start to overflow, since the worker threads of the consumer executor service are busy enqueuing events. For instance, the buffer 114 starts to overflow when the consumer 208 does not finish consuming within the given time.
Further, in some situations, the data store 130 may not be available. For example, the data store 130 might undergo upgrades or other maintenance tasks that make it unavailable such that events cannot be written, which reduces reliability of the system. According to one estimate, when the large data store 130 is unavailable, IMBS could potentially drop up to 80% of events.
These issues can be problematic and unacceptable in certain use cases or environments, for instance, in compliance use cases where auditors want to know about access (e.g., where it is necessary to track events such as who exported the customer list). Thus, although the IMBS has many benefits, it would be desirable to improve its reliability. To address these issues, the in-memory buffer service illustrated in FIG. 3 can be modified as shown in FIG. 6.
FIG. 6 is a block diagram illustrating an application server 110 with an in-memory buffer service according to another embodiment. FIG. 6 also illustrates a data store 130 that is configured to store events in an event table 920, and a temporary events file storage system (TEFSS) 620, both of which are external to the application server 110. In contrast to FIG. 3, FIG. 6 includes three additional elements, namely an indirect events writer 610, an events file uploader service 720, and the TEFSS 620, that can help improve reliability of the IMBS. As will be explained in greater detail below, the TEFSS 620 can temporarily store groups of events (generated by a client application 120 that runs at the application server 110) as events files for subsequent writing of the events to the data store 130. In addition, minor modifications are also made at the data store 130, as will be described below.
When the consumer threads 308 are unable to write events directly to the event table 920 at the data store 130 within a certain time, the indirect events writer 610 can be invoked. The indirect events writer 610 includes a dynamically sized thread pool of event capture threads 615. Each event capture thread 615 can create an events file that includes events from a corresponding consumer thread 308. Each event capture thread 615 can temporarily write the events, in batches, as events files, to the TEFSS 620. For example, each event capture thread 615 can serialize multiple events into events files that can then be temporarily stored at the TEFSS 620.
The events file uploader service 720 is a separate daemon thread running in the application server 110. The events file uploader service 720 can retrieve, based on job detail information stored in an events uploader job detail table (not illustrated) at the data store 130, the events files from the TEFSS 620, and then write/upload the events from each of those retrieved events files to an event table (not illustrated) at the data store 130 at a later time.
By adding a secondary storage option, the TEFSS 620, the event capture capability of the IMBS can be improved by significantly reducing loss of events without creating additional latency. For instance, in one implementation, it has been observed that the improved IMBS can reduce the 80% loss (described above) down to 0.02% loss without introducing additional latency. The temporary events file storage system (TEFSS) 620 can help solve the scalability problem with the MQ 310 because the TEFSS 620 is not backed by a large relational/transactional data store 130, and is not transactional in nature. Rather, the TEFSS 620 operates as a separate file storage system that provides much higher throughput and lower latency. To explain further, the TEFSS 620 is a data store that can store unstructured data as files (e.g., files that contain arbitrary and possibly unrelated data, with no ordering or organizing structure required). In contrast to the MQ 310, the TEFSS 620 provides much higher throughput and has much lower latency. Moreover, the TEFSS 620 is available for writing regardless of whether the tenant database is in read-only mode, which increases reliability of the system. To explain further, the MQ relies on the tenant database to persist its metadata and data. If the tenant database is in read-only mode (e.g., during an upgrade), events cannot be enqueued. By contrast, the TEFSS 620 does not go into a read-only mode, so events can be enqueued at any time.
FIG. 7A is a block diagram illustrating one instance of a consumer thread 308 and an event capture thread 615 of FIG. 6 according to an embodiment. Due to space constraints in FIG. 7A, only one instance each of the consumer thread 308 and the event capture thread 615 is illustrated; however, it is to be appreciated from FIG. 6 that there can be multiple instances of each that execute in parallel at any particular time. FIG. 7A will be described with reference to FIGS. 8-12C. As illustrated in FIG. 7A, the consumer thread 308 includes an events reader 704 and a direct events writer 708. The event capture thread 615 includes an invoker 709 and a TEFSS events file writer 710. The events reader 704, the direct events writer 708 and the invoker 709 are consumers in a “consumer framework” of the consumer executor service 208. To explain further, as described above, the consumer executor service 208 is an execution engine that is responsible for reading events from the in-memory buffer 114 and managing how many worker threads will run. The consumer framework provides the logic to determine what to do with events read from the in-memory buffer 114. The consumer executor service 208 passes the events read from the in-memory buffer 114, in batches, to the consumer framework, and worker threads provide an execution engine to the consumer framework. As will be described in greater detail below, the consumer framework is a management system for the consumers and specifies the order in which each of the consumers will be executed to process the events.
As described above with reference to FIGS. 1-5, the capture service 202 receives events from the client application 120 running on the application server, and provides the events to the in-memory buffer service 114. The in-memory buffer service 114 temporarily holds the events in a bounded buffer 204 that is implemented in physical memory at the application server. The bounded buffer includes a plurality of single-threaded buffer segments 302 that are each configured to temporarily hold some of the events received from the capture service 202 until a flush operation is performed. The buffer flush regulator 206 will perform a flush operation at regular intervals, for example, once every 10 seconds, to flush the in-memory buffer 114. The consumer executor service 208 includes a dynamically sized thread pool of consumer threads that execute in parallel. Each of the consumer threads 308 corresponds to a particular single-threaded buffer segment 302. When the flush operation occurs, events stored in one of the buffer segments 302 are passed to a corresponding consumer thread 308.
Each consumer thread 308 can include an instance of an events reader 704 that reads events flushed from a corresponding one of the single-threaded buffer segments 302 (FIG. 6), assigns a unique identifier to each event that will be used at the data store 130, adds that unique identifier to each event, and then passes the result to the direct events writer 708. Each consumer thread 308 can also include an instance of a direct events writer 708 that is designed to write the events directly to the data store 130. In most scenarios, the direct events writer 708 can successfully write the events received from the events reader 704 directly to the data store 130. However, in some cases, the direct events writer 708 cannot write the events directly to the data store 130. For example, the data store 130 can have periods of lower responsiveness. For instance, in some cases, as the data gets written to the data store 130, there are maintenance tasks (e.g., splitting storage units (regions) as they get too large). Another example is periodic re-organization (or compaction) of data, which takes care of things like deleted records.
To provide additional reliability in situations where the direct events writer 708 is unable to successfully write events to the data store 130, the indirect events writer 610 provides a number of event capture threads 615, one corresponding to each of the consumer threads 308. Each event capture thread 615 includes an instance of an invoker 709 and an instance of a TEFSS events file writer 710. Each invoker 709 can determine whether a corresponding instance of the direct events writer 708 was able to successfully write events (that it read from a particular single-threaded buffer segment 302) to an event table at the data store 130. Each invoker 709 also receives the events from a particular single-threaded buffer segment 302 via the direct events writer 708. When an invoker 709 determines that the corresponding instance of the direct events writer 708 was unable to successfully write events directly to the data store 130 (e.g., the invoker 709 receives an indication that the direct events writer 708 is unable to write, or has failed to write, events to the data store 130), the invoker 709 invokes an instance of the TEFSS events file writer 710 at that event capture thread 615. The invoker 709 then communicates the events to the TEFSS events file writer 710, which serializes the events from the particular single-threaded buffer segment 302 into a single events file, and then writes the events file to the TEFSS 620. Examples of events files stored at the TEFSS 620 are illustrated in FIG. 8.
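The fallback decision made by the invoker 709 can be sketched as follows; the boolean result parameter and the TefssEventsFileWriter interface are hypothetical simplifications of the indication described above:

```java
import java.util.List;

// Hypothetical sketch of the fallback path: if the direct write failed,
// the same batch of events is routed to the TEFSS events file writer,
// which serializes it into a single events file.
interface TefssEventsFileWriter {
    void writeEventsFile(List<Event> events);
}

class Invoker {
    private final TefssEventsFileWriter tefssWriter;

    Invoker(TefssEventsFileWriter tefssWriter) {
        this.tefssWriter = tefssWriter;
    }

    void onDirectWriteResult(boolean directWriteSucceeded, List<Event> events) {
        if (!directWriteSucceeded) {
            // Direct write to the data store failed; fall back to the TEFSS.
            tefssWriter.writeEventsFile(events);
        }
    }
}
```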
FIG. 8 is a block diagram illustrating an exemplary implementation of the TEFSS 620 of FIG. 6 according to an embodiment. As illustrated in FIG. 8, the TEFSS 620 can store a plurality of events files 810-1 . . . 810-3.
Each events file 810 written to the TEFSS 620 by the TEFSS events file writer 710 can include a group identifier (/group ID/) that uniquely identifies a group of events files that were generated during a particular time window, a file name (not explicitly labeled in FIG. 8), and file information (not illustrated in FIG. 8). The file name can include an application server name (<app_serverX_name>) of the application server that generated the events in the events file, a creation date (<create_date>) for each of the events in the events file, where the creation date is specified as MMddyyyy-HHmm:SSS (where MM is the month, dd is the day, yyyy is the year, HH is the hour, mm is the minute and SSS is the second when the events file was created), and a thread identifier (thread_ID) that specifies the name of the event capture thread writing the events file. The group ID identifies the group that the events file uploader service 720 inspects to identify an events file to process. A new group identifier (/group ID/) can be generated at regular intervals, for example, once every 10 minutes in an implementation where separate events files 810 are generated once every ten minutes. The application server name (<app_serverX_name>), the creation date (<create_date>) and the thread identifier (thread_ID) make up the file name for the particular events file. The file information in each events file 810 can include a file type (e.g., Gzip File) and events that are serialized using standard approaches for serializing objects or a collection of objects, such as JSON, Avro, or protocol buffers, in accordance with some exemplary implementations.
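As a concrete illustration, the file naming scheme might be assembled as below. The path layout, the separator characters, and the use of a seconds field in the formatter are assumptions, since the exact format string is not spelled out in the text:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Hypothetical sketch of the events file naming scheme described above:
// /<group ID>/<app_serverX_name>_<create_date>_<thread_ID>
class EventsFileNamer {
    // Month, day, year, hour, minute, second of file creation.
    private static final DateTimeFormatter CREATE_DATE =
            DateTimeFormatter.ofPattern("MMddyyyy-HHmm:ss");

    static String eventsFilePath(String groupId, String appServerName,
                                 LocalDateTime createdAt, String threadId) {
        String fileName = appServerName + "_"
                + CREATE_DATE.format(createdAt) + "_" + threadId;
        return "/" + groupId + "/" + fileName;
    }
}
```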
FIG. 7B is a block diagram illustrating an events file uploader service 720 of FIG. 6 according to an embodiment. The events file uploader 720 is responsible for retrieving events files from the TEFSS 620 and uploading the events to the data store 130 in a reliable manner. The events file uploader 720 includes an events uploader manager 722 and a plurality of worker threads 724. Although only two instances of worker threads 724 are illustrated in FIG. 7B due to space constraints, it should be appreciated that in a practical implementation the events file uploader 720 can be multi-threaded and that the events uploader manager 722 can instantiate any number of worker thread 724 instances needed to process any number of events files 810. The events uploader manager 722 creates and maintains uploader job records that are stored in the events uploader job detail table at the data store 130. Each uploader job record points to a particular events file 810 stored at the TEFSS 620. As will be described in greater detail below, the events uploader manager 722 instantiates one instance of a worker thread 724 for each events file name that the events uploader manager 722 retrieves from the TEFSS 620.
FIG. 9 is a block diagram illustrating an exemplary implementation of the data store 130 of FIG. 6 according to an embodiment. The data store 130 includes an events uploader job detail table 910 and an event table 920. The events uploader job detail table 910 includes a number of rows (only two rows are shown in FIG. 9 for simplicity of illustration). Each row of the events uploader job detail table 910 is used to store an uploader job record that includes job detail information that points to a particular events file 810. The events uploader manager 722 creates and maintains each of the uploader job records that are stored in the uploader job detail table 910. As will be explained in more detail below, the events file uploader service 720 can use job detail information from the uploader job records to retrieve particular events files associated with those particular uploader job records. On the other hand, the event table 920 is the destination where events are eventually stored after they are retrieved by the events file uploader service 720 from the TEFSS 620, and uploaded by the events file uploader service 720 to the event table 920 of the data store 130. An example of the events uploader job detail table 910 is illustrated in FIG. 10.
FIG. 10 is a data structure diagram illustrating a schema used for each uploader job record that is stored at the events uploader job detail table 910 according to an embodiment. As noted above, information in each row of the events uploader job detail table 910 (FIG. 9) is referred to herein as an “uploader job record.” A single row 1010 is illustrated in FIG. 10 for simplicity. Each uploader job record 1010 stores job detail information related to a particular events file that the TEFSS events file writer 710 has written to the TEFSS 620.
In the non-limiting embodiment that is illustrated in FIG. 10, an uploader job record 1010 for an events file includes various columns of metadata that include, but are not limited to: a cluster identifier (<Cluster ID>) that uniquely identifies a collection of instances that are grouped as a cluster of application servers, an instance identifier (<instance_id>) that uniquely identifies a collection of application servers grouped as an instance, an application server name (<app_server_name>) of a particular application server, a group identifier (<group ID>) that uniquely identifies a group of events files that were generated during a particular time window, a file name (<file_name>) of the particular events file, a job started date (<job_started_date>) that specifies when the particular events file was created, a number of events (<Num_Events>) that specifies a number of events for this particular events file, a last updated time that indicates when the particular events file was last updated, a number of retry attempts that specifies how many times the events file uploader service 720 has attempted to process the particular events file, a job type (<job_type>) that indicates either regular or backup, and a status that indicates either running or completed. Although not illustrated, it should be appreciated that the uploader job record 1010 for each events file can include other fields such as file size information regarding the file size of the particular events file, a time-to-live (TTL) value, etc.
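For illustration, the columns above can be modeled as a plain Java class; the field names and types are hypothetical, and the enums simply mirror the regular/backup and running/completed values named in the text:

```java
import java.time.Instant;

// Hypothetical sketch of one row of the events uploader job detail table 910.
class UploaderJobRecord {
    String clusterId;        // cluster of application servers
    String instanceId;       // collection of app servers grouped as an instance
    String appServerName;    // application server that created the events file
    String groupId;          // group of events files for one time window
    String fileName;         // name of the events file in the TEFSS
    Instant jobStartedDate;  // when processing of the events file started
    int numEvents;           // number of events in the events file
    Instant lastUpdatedTime; // when this record was last updated
    int retryAttempts;       // how many times the uploader has tried this file
    JobType jobType;
    JobStatus status;

    enum JobType { REGULAR, BACKUP }
    enum JobStatus { RUNNING, COMPLETED }
}
```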
An events file uploader service 720 runs on each application server of a cloud computing service provider. A particular instance of the events file uploader service 720 that runs on a particular application server of that cloud computing service provider can be identified by the combination of:
(<Cluster ID>)/(<instance_id>)/(<app_server_name>)/(<job_type>).
Each row of the events uploader job detail table 910 includes a row key that is used by an events uploader manager 722 to look up job detail information for a particular events file. A row key for a row of the events uploader job detail table 910 is defined by the combination of the instance identifier (<instance_id>), the application server name (<app_server>), the group identifier (<group ID>), the file name (<file_name>), and the job started date (<job_started_date>).
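Using the hypothetical record class above, the row key composition might look like this (the "/" separator is an assumption; the field order follows the text):

```java
// Hypothetical sketch: compose the row key for the events uploader job
// detail table from the five fields listed above.
class JobRecordKeys {
    static String rowKey(UploaderJobRecord r) {
        return String.join("/",
                r.instanceId,                  // instance identifier
                r.appServerName,               // application server name
                r.groupId,                     // group identifier
                r.fileName,                    // events file name
                r.jobStartedDate.toString());  // job started date
    }
}
```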
Referring again to FIG. 7B, at communication 730, the events uploader manager 722 receives or retrieves at least some of the job detail information from the events uploader job detail table 910 that is stored at the data store 130. The events uploader manager 722 is invoked with the application server name (<app_server>), the group identifier (<group ID>), and the job type (e.g., either backup or regular).
At communication 732, the events uploader manager 722 reads all of the names of events files (available for a particular application server) from the TEFSS 620. At 733, the events uploader manager 722 then assigns each of the worker threads 724 one of the events file names (i.e., assigns a file name of a specific events file to one of the worker threads 724).
Each worker thread 724 includes an events file reader 726 that is responsible for reading an events file 810 from the TEFSS 620, and an event loader 728 that is responsible for uploading events from that events file 810 to the data store 130.
Each events file reader 726 retrieves an events file 810 from the TEFSS 620 that corresponds to a particular events file name and adds a record in the events uploader job detail table 910 with the current time as the job started time. Each events file reader 726 then provides that events file 810 to a corresponding instance of the event loader 728 that writes each event in the events file 810 to the data store 130. Specifically, the event loader 728 writes each event in the events file 810 to the event table 920 of the data store 130, and then updates the record in the events uploader job detail table 910 with a retry attempt.
FIG. 11A is a flow diagram illustrating a method 1100 for writing events files to a temporary events file storage system (TEFSS) 620 according to another embodiment. The method 1100 of FIG. 11A describes a computer-implemented method performed by an application server comprising an indirect events writer 610 that is configured to instantiate a plurality of event capture threads 615 each comprising an instance of an events file writer 710. The computer-implemented method of FIG. 11A will be described with reference to a single instance of an indirect events writer 610 of a particular application server, but it will be appreciated that each application server in the system can perform similar steps. FIG. 11A will be described with reference to FIGS. 1 through 10.
Prior to describing FIG. 11A, it is noted that, as described above with reference to FIGS. 1-6, capture service 202 receives events that are to be stored in a data store 130 from a client application 112 running on the application server 110, and temporarily stores the events in a bounded buffer 204 that is part of the in-memory buffer service 114. The bounded buffer 204 is implemented in physical memory at the application server 110, and comprises a plurality of single-threaded buffer segments 302. The single-threaded buffer segments 302 are each configured to temporarily store some of the events until a flush operation is performed by the buffer flush regulator 206.
At 1110, the buffer flush regulator 206 flushes the plurality of single-threaded buffer segments 302 of the bounded buffer 204, and the consumer executor service 208 instantiates a plurality of consumer threads 308 that each corresponds to a particular single-threaded buffer segment 302. As shown in FIG. 7A, each consumer thread 308 includes an instance of an events reader 704 and an instance of a direct events writer 708. The buffer flush regulator 206 can perform the flush operation at regular intervals, for example, once every 10 seconds, to flush the in-memory buffer 114, and events from each of the buffer segments 302 are then passed to the corresponding consumer thread 308.
At 1115, the events reader 704 of each consumer thread 308 reads the events flushed from the particular single-threaded buffer segment that corresponds to that consumer thread 308, adds a unique identifier to each event, and then passes the result to the direct events writer 708, which is designed to write the events directly (i.e., without any intermediate storage or processing steps) to the data store 130.
At 1120, each direct events writer 708 attempts to directly write each event it has received to the data store 130.
As shown in FIG. 7A, each of the event capture threads 615 includes an instance of an invoker 709 and an instance of a TEFSS events file writer 710.
At 1130, each invoker 709 can determine if its corresponding direct events writer 708 was able to successfully write events (that were read from the particular single-threaded buffer segment 302) directly to the event table 920 at the data store 130 within the imposed constraints. These constraints can be any combination of time-based, attempt-based, failure-based, or error-based measures that indicate that the direct events writer 708 was unable to write events to the data store 130.
In many “normal” operational scenarios, the direct events writer 708 can successfully write an event directly to the event table 920 at the data store 130. In this case, the method 1100 loops to 1110. However, in some operational scenarios (described above), events cannot be written directly to the data store 130, and additional mechanisms are invoked to provide additional reliability.
Whenever an instance of an invoker 709 at one of the event capture threads 615 determines that a corresponding instance of the direct events writer 708 was unable to successfully write the events directly to the event table 920 at the data store 130, the invoker invokes an instance of the TEFSS events file writer 710 at that event capture thread 615. Then, at 1140, the invoker 709 communicates the events that were received from that corresponding instance of the direct events writer 708 to the TEFSS events file writer 710.
At 1150, the TEFSS events file writer 710 of each event capture thread 615 can generate a particular events file 810 that includes a plurality of events received from the in-memory buffer service 114, and then write that particular events file 810 to the TEFSS 620. The TEFSS 620 is configured to temporarily store events files for subsequent writing to the data store 130 when the data store 130 becomes available. Each instance of the TEFSS events file writer 710 can serialize the plurality of events received from the particular single-threaded buffer segment 302 into an events file. In addition, at 1150, an uploader job record corresponding to the events file can be created at the events uploader job detail table 910 (FIGS. 9 and 10).
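The serialization step at 1150 might look like the following sketch, which gzip-compresses a hand-assembled JSON array. The trivial string-based JSON here is purely illustrative; per the text, a real implementation would use a standard serialization approach such as JSON, Avro, or protocol buffers:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.stream.Collectors;
import java.util.zip.GZIPOutputStream;

// Hypothetical sketch of step 1150: serialize a flushed batch of events
// into a single gzip-compressed events file destined for the TEFSS.
class GzipEventsFileSerializer {
    void writeEventsFile(List<Event> events, OutputStream tefssFile) throws IOException {
        // Assemble a minimal JSON array of the batch (illustrative only).
        String json = events.stream()
                .map(e -> "{\"id\":\"" + e.uniqueId + "\",\"payload\":\"" + e.payload + "\"}")
                .collect(Collectors.joining(",", "[", "]"));
        try (GZIPOutputStream gzip = new GZIPOutputStream(tefssFile)) {
            gzip.write(json.getBytes(StandardCharsets.UTF_8));
        }
    }
}
```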
FIG. 11B is a flow diagram illustrating a method performed by an events file uploader service 720 for transferring events files from the temporary events file storage system (TEFSS) 620 to a data store 130 according to another embodiment. The computer-implemented method of FIG. 11B will be described with reference to a single instance of an events file uploader service 720 of a particular application server, but it will be appreciated that each application server in the system can perform similar steps. FIG. 11B will be described with reference to FIGS. 1-10. As illustrated in FIG. 7B, the events file uploader service 720 includes an events uploader manager 722 that maintains each of the uploader job records that are stored in the events uploader job detail table 910, and a plurality of worker threads 724.
At 1160, the events uploader manager 722 can receive or retrieve job detail information from one or more particular uploader job records (stored at the events uploader job detail table 910). Based on the job detail information retrieved, the events uploader manager 722 determines at least one events file for the application server that needs to be retrieved from the TEFSS 620.
At 1170, the events uploader manager 722 can read the events file names for all of the events files that are available for a particular application server and need to be retrieved from the TEFSS 620.
At 1180, the events uploader manager 722 can then assign each worker thread 724 a name of a particular events file. This way, each events file name retrieved from the TEFSS 620 can be assigned to one instance of a worker thread 724. As described above, each worker thread 724 comprises a particular events file reader 726 and a particular event loader 728.
At 1185, each of the events file readers 726 can use the events file name to read/retrieve an appropriate events file 810 (that corresponds to the particular events file name) from the TEFSS 620, and then provide the retrieved events file to a corresponding instance of the event loader 728. In addition, at 1185, the events uploader manager 722 creates an uploader job record that points to the particular events file 810 stored at the TEFSS 620, and writes the uploader job record to the events uploader job detail table 910 maintained at the data store 130. This uploader job record can include the information indicated above.
At 1190, each event loader 728 can upload/write each of the events from the particular events file 810 to the event table 920 of the data store 130. In addition, at 1190, the events uploader manager 722 updates the uploader job record at the events uploader job detail table 910 with a retry attempt. Each event loader 728 will attempt a certain number of retries in the event of a transport failure, and if the event loader 728 is unable to write events from the particular events file 810 to the event table 920, the event loader 728 will wait for a time period before retrying to write events from the particular events file 810 to the event table 920. In one, non-limiting implementation, the period between retry attempts can be set using an exponential backoff algorithm (e.g., where the period is set per a formula such as 2^(retry-1)*60*1000 milliseconds).
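The retry loop with the exponential backoff formula above can be sketched as follows; MAX_RETRIES and the TransportException type are assumptions, while the backoff period matches the example formula (note that 1L << (retry - 1) computes 2^(retry-1)):

```java
import java.util.List;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the event loader's retry behavior on transport failure.
class RetryingEventLoader {
    private static final int MAX_RETRIES = 5;  // assumed bound; not in the text

    void uploadWithRetries(List<Event> events) throws InterruptedException {
        for (int retry = 1; retry <= MAX_RETRIES; retry++) {
            try {
                writeToEventTable(events);  // upload the events to the event table 920
                return;                     // success; no further retries needed
            } catch (TransportException e) {
                // Wait 2^(retry-1) * 60 * 1000 milliseconds before retrying.
                long backoffMillis = (1L << (retry - 1)) * 60L * 1000L;
                TimeUnit.MILLISECONDS.sleep(backoffMillis);
            }
        }
    }

    void writeToEventTable(List<Event> events) throws TransportException {
        // Placeholder for the actual write to the data store 130.
    }
}

class TransportException extends Exception { }
```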
The events file uploader service 720 can operate in a regular mode or a backup mode. The regular events file uploader service 720 runs at an application server to transfer events files created by that application server from the TEFSS 620 to the data store 130. The regular events file uploader service 720 runs continuously at the application server and does not terminate. For instance, in one embodiment, the regular events file uploader service 720 is implemented using a continuously running daemon thread.
By contrast, the backup events file uploader service 720 can run periodically or according to a schedule. For instance, in one embodiment, the backup events file uploader service 720 can be implemented as a time-based job scheduler utility that allows tasks/jobs to be automatically run at regular intervals (e.g., periodically at fixed times, dates, or intervals) by the backup daemon. The backup events file uploader service 720 runs at an application server to transfer events files that were created by another application server from the TEFSS 620 to the data store 130. This way, when an application server that is part of the cluster is unavailable (e.g., has crashed) and is unable to transfer its own events files from the TEFSS 620 to the data store 130, another application server in that cluster that is available can run the backup events file uploader service 720 to transfer events files that were created by the unavailable application server from the TEFSS 620 to the data store 130.
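The two launch modes might be contrasted in code as follows; the thread name, the 30-minute schedule, and the Runnable parameters are hypothetical:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: the regular uploader runs as a never-terminating
// daemon thread, while the backup uploader runs on a time-based schedule.
class UploaderServiceLauncher {
    void startRegularUploader(Runnable regularUploaderLoop) {
        Thread t = new Thread(regularUploaderLoop, "regular-events-file-uploader");
        t.setDaemon(true);  // continuously running daemon thread
        t.start();
    }

    void scheduleBackupUploader(Runnable backupUploaderJob) {
        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        // Run the backup job at a fixed interval (30 minutes is an assumption).
        scheduler.scheduleAtFixedRate(backupUploaderJob, 30, 30, TimeUnit.MINUTES);
    }
}
```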
Embodiments of the regular events file uploader service 720 will be described below with reference to FIGS. 12A and 12B, and embodiments of the backup events file uploader service 720 will be described with reference to FIG. 12C.
FIG. 12A is a flow diagram illustrating a method 1200 performed by the events file uploader service 720 when operating in regular mode for transferring events files from the temporary events file storage system (TEFSS) 620 to a data store 130 according to another embodiment.
The method 1200 begins at 1202 when the regular uploader service starts.
At 1204, the events uploader manager 722 reads the most recent uploader job record for the current application server from the events uploader job detail table 910.
At 1208, the regular events uploader manager 722 searches the most recent uploader job record for the application server for a starting group identifier that uniquely identifies a group of events files generated by the application server during a particular time window. To explain further, each group identifier is associated with a time window of a certain duration (e.g., 10 minutes). In one embodiment, the group identifier has the format Events-MMddYYYYHHmm. The minute part (mm) represents a minute range of a certain duration. For example, if the time window is set to 10 minutes, then the minute part can be 0, 10, 20, 30, 40 or 50. A minute part of 0 would represent that the events file was generated during a time window that spans anywhere between the start of the hour up to, but not including, the 10th minute, whereas a minute part of 20 would represent that the events file was generated during a time window that spans anywhere between the start of the 20th minute up to, but not including, the 30th minute, and a minute part of 50 would represent that the events file was generated during a time window that spans anywhere between the start of the 50th minute up to, but not including, the 60th minute. The significance of this is that if a group identifier has a minute part (mm) that matches the current time's window, then that group identifier is the group identifier for the current time window. One exemplary implementation of 1208 will be described in greater detail below with reference to FIG. 12B.
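The group identifier arithmetic above can be made concrete with a short sketch; the formatter pattern follows the Events-MMddYYYYHHmm format in the text, and flooring the minute to a multiple of 10 is the windowing rule described:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Hypothetical sketch: derive a group identifier by flooring the minute
// to the start of its 10-minute window.
class GroupIds {
    private static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("MMddyyyyHH");

    static String groupIdFor(LocalDateTime time) {
        int windowStart = (time.getMinute() / 10) * 10;  // 0, 10, 20, 30, 40 or 50
        return String.format("Events-%s%02d", FMT.format(time), windowStart);
    }

    // A group identifier is "for the current time window" when it equals the
    // identifier computed from the current time.
    static boolean isCurrentWindow(String groupId) {
        return groupId.equals(groupIdFor(LocalDateTime.now()));
    }
}
```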
At 1209, the regular events uploader manager 722 determines from the most recent uploader job record whether a status for the most recent uploader job record indicates running or completed. When the status of the most recent uploader job record indicates running, this means that the job is in a running state (e.g., the process which is consuming the events file has not reached the completed state), and the method 1200 proceeds from 1209 to 1220 as will be described below.
When the status of the most recent uploader job record indicates running, the method 1200 proceeds from 1209 to 1220, where the regular events uploader manager 722 upserts the most recent uploader job record for the application server at the events uploader job detail table 910 with updated job detail information to indicate that the regular events uploader manager 722 has started to process and read the events file. In this case, the file name is already available in the uploader job record 1010. The method then proceeds to 1222 as described below.
When the status of the most recent uploader job record indicates completed, this means the events file recorded in this uploader job record has been processed successfully and it is time to search for and process the next available events file. Therefore, when the status of the most recent uploader job record indicates completed, the method 1200 proceeds to 1210, where the regular events uploader manager 722 searches the TEFSS 620 for an events file that includes the starting group identifier.
At 1212, the events uploader manager 722 determines if an events file that is associated with the starting group identifier was able to be read from the TEFSS 620. When the events uploader manager 722 determines (at 1212) that an events file that is associated with the starting group identifier was not able to be read from the TEFSS 620, the method proceeds to 1214, where the events uploader manager 722 determines the next possible group identifier that uniquely identifies another group of events files generated by the application server during another particular time window. At 1216, the events uploader manager 722 determines whether this next possible group identifier is for a current time window (or "corresponds to" the current time window).
When the events uploader manager 722 determines (at 1216) that this next possible group ID is for the current time window, then the method 1200 proceeds to 1218, where the events uploader manager 722 waits for a time period (e.g., 10 minutes) before the method 1200 loops back to 1210, where the regular events uploader manager 722 searches the TEFSS 620 for an events file that includes the next possible group identifier. When the events uploader manager 722 determines (at 1216) that this next possible group ID is not for the current time window, then the method 1200 loops to 1210, where the regular events uploader manager 722 searches the TEFSS 620 for an events file that includes the next possible group identifier.
When the events file that includes the starting group identifier was able to be read from the TEFSS 620 (at 1212), the regular events uploader manager 722 assigns a worker thread 724 to the particular events file that was read. The worker thread 724 includes a particular events file reader 726 and a particular event loader 728.
At 1220, the events file reader 726 upserts the most recent uploader job record for the application server at the events uploader job detail table 910 with updated job detail information. As used herein, the term “upserting” refers to a combination of updating (i.e., refreshing an existing record that already exists) and inserting (i.e., inserting a new record if a match is not found). In other words, one data set can be used to update existing records at the same time new records are inserted. The method 1200 then proceeds to 1222, where the events file reader 726 reads the events file, serializes the events from the events file and sends the events to the event loader 728.
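The upsert semantics defined above can be sketched against an in-memory stand-in for the job detail table; the map-based table and the choice of fields copied on update are assumptions:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of "upserting" an uploader job record: update the row
// if the key already exists, otherwise insert a new row.
class JobDetailTable {
    private final Map<String, UploaderJobRecord> rows = new ConcurrentHashMap<>();

    void upsert(String rowKey, UploaderJobRecord updated) {
        rows.compute(rowKey, (key, existing) -> {
            if (existing == null) {
                return updated;  // insert: no matching record was found
            }
            // Update: refresh the existing record in place.
            existing.status = updated.status;
            existing.lastUpdatedTime = updated.lastUpdatedTime;
            existing.retryAttempts = updated.retryAttempts;
            return existing;
        });
    }
}
```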
At 1224, the event loader 728 loads and saves each of the events from the events file to a row of the event table 920 maintained at the data store 130. At 1226, the events uploader manager 722 updates the status of the uploader job record that corresponds to the events file to indicate completed (in the events uploader job detail table 910). At 1228, the events uploader manager 722 deletes the events file from the TEFSS 620.
The method 1200 then loops to 1210, where the regular events uploader manager 722 searches the TEFSS 620 for another events file that includes the group identifier. To explain further, more than one events file can have the same group identifier, so the regular events uploader manager 722 searches the TEFSS 620 for more events files having the group identifier. If the regular events uploader manager 722 determines that there is no events file available for a given application server for that group identifier, and the group identifier is not for the current time window, then the regular events uploader manager 722 will move to the next possible group identifier based on the next minute range window.
FIG. 12B is a flow diagram illustrating a method 1240 performed by the events file uploader service 720 for searching an uploader job record for a starting group identifier when operating in regular mode according to another embodiment. The method 1240 illustrates one exemplary implementation for performing step 1208 of FIG. 12A.
At 1246, the events uploader manager 722 reads the most recent uploader job record for the current application server (from the events uploader job detail table 910), and determines whether the most recent uploader job record for the application server is null (i.e., there is no uploader job record against this application server name in the events uploader job detail table 910).
When the events uploader manager 722 determines (at 1246) that the most recent uploader job record is not null (i.e., that an uploader job record for the current application server was successfully read), the method 1240 proceeds to 1247, where the events uploader manager 722 reads the group identifier from the most recent uploader job record for the application server, and the method then proceeds to 1210 (FIG. 12A), where the regular events uploader manager 722 searches the TEFSS 620 for an events file that includes the starting group identifier.
When the events uploader manager 722 determines (at 1246) that the most recent uploader job record is null, the method 1240 proceeds to 1248, where the events uploader manager 722 attempts to read a marker file for the application server from the TEFSS 620, and then determines, at 1250, whether the marker file for the application server was read from the TEFSS 620. In one embodiment, the marker file is a simple text file which records the first group identifier when the uploader service starts for the first time on an application server.
When the regular events uploader manager 722 determines that the marker file for the application server was read from the TEFSS 620, the method 1240 proceeds to 1255, where the regular events uploader manager 722 reads a group identifier stored in the marker file. The method then proceeds to 1210 (FIG. 12A), where the regular events uploader manager 722 searches the TEFSS 620 for an events file that includes the starting group identifier.
When the events uploader manager 722 determines that a marker file for the application server was not able to be read from the TEFSS 620, the method 1240 proceeds to 1252, where the regular events uploader manager 722 reads the oldest group identifier stored in a cache at the application server that stores group identifiers for that application server.
The method 1240 then proceeds to 1254, where the regular events uploader manager 722 creates a marker file with the group identifier that was retrieved from the cache at the application server. The method then proceeds to 1210 (FIG. 12A), where the regular events uploader manager 722 searches the TEFSS 620 for an events file that includes the starting group identifier.
FIG. 12C is a flow diagram illustrating a method 1270 performed by the events file uploader service 720 when operating in backup-job mode for transferring events files from the temporary events file storage system (TEFSS) 620 to a data store 130 according to another embodiment. In contrast to FIG. 12A, the backup job-based uploader service is a backup service that is used when an uploader service 720 running at another application server is unable to operate in the regular mode described above with respect to FIGS. 12A and 12B. In the description of FIG. 12C, two application servers will be referenced that are part of a cluster of application servers. To distinguish between them, one application server that has created an events file and then become unavailable will be referred to as a "first" application server, and another application server that runs a backup events file uploader service 720 will be referred to as a "second" application server. As described above with reference to FIGS. 1-6, when the first application server is unable to directly write events to an event table 920 at the data store 130, the first application server can generate a particular events file 810 and write that particular events file to the TEFSS 620, which can temporarily store the events files for subsequent writing to the data store 130. As will be described in detail below, when the backup events file uploader service 720 that runs at the second application server determines that the first application server is inactive (e.g., has crashed), the backup events file uploader service 720 transfers events files that were created by the first application server from the TEFSS 620 to the data store 130. Among other things, the events uploader manager 722 of the backup events file uploader service 720 can retrieve job detail information from a particular uploader job record, and then, based on the job detail information retrieved, read one or more events files for the first application server from the TEFSS 620. The backup events file uploader service 720 can then write the events from each events file to the event table 920 maintained at the data store 130.
The method 1270 begins at 1272 when the backup job-based uploader service starts.
At 1274, the events uploader manager 722 of the second application server attempts to find an application server in the cluster that has been inactive for a certain duration (e.g., 30 minutes in one implementation). For example, in one embodiment, the events uploader manager 722 of the second application server searches the events uploader job detail table 910 for all inactive application servers (i.e., all application servers that do not have an uploader job record that was started or updated within the certain duration), and randomly selects one of the application servers.
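Step 1274 might be sketched as a scan over the most recent job record per application server; the 30-minute threshold comes from the example above, while the stream pipeline and random selection are illustrative assumptions:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.Optional;
import java.util.Random;
import java.util.stream.Collectors;

// Hypothetical sketch of step 1274: find application servers whose most
// recent uploader job record has not been started or updated within the
// inactivity threshold, then pick one at random.
class InactiveServerFinder {
    private static final Duration INACTIVITY_THRESHOLD = Duration.ofMinutes(30);
    private final Random random = new Random();

    Optional<String> findInactiveServer(List<UploaderJobRecord> mostRecentRecords) {
        Instant cutoff = Instant.now().minus(INACTIVITY_THRESHOLD);
        List<String> inactive = mostRecentRecords.stream()
                .filter(r -> r.lastUpdatedTime.isBefore(cutoff))
                .map(r -> r.appServerName)
                .collect(Collectors.toList());
        if (inactive.isEmpty()) {
            return Optional.empty();  // no inactive server; the backup job terminates
        }
        return Optional.of(inactive.get(random.nextInt(inactive.size())));
    }
}
```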
At 1276, the events uploader manager 722 determines whether an inactive application server in the cluster has been found. While this could be any application server in the cluster, for purposes of the following example, it will be assumed that the events uploader manager 722 has determined that the first application server is inactive. However, it is noted that if the events uploader manager 722 of the second application server cannot find an inactive application server, then method 1270 proceeds to 1298 where the backup events file uploader service 720 terminates.
At 1277, the backup events uploader manager 722 reads the most recent uploader job record for the first application server from the events uploader job detail table 910. At 1278, the backup events uploader manager 722 searches the most recent uploader job record for the first application server for a starting group identifier that uniquely identifies a group of events files generated by the first application server during a particular time window. As explained above, each group identifier is associated with a time window of a certain duration (e.g., 10 minutes). If a group identifier has a minute part (mm) that is the same as a current time, then that group identifier is the group identifier for the current time window.
At 1280, the backup events uploader manager 722 determines from the most recent uploader job record whether the status of the uploader job record indicates that it is running or completed.
When the status of the most recent uploader job record indicates running, this means that the job is in a running state (e.g., the process which is consuming the events file has not reached the completed state), and the method 1270 proceeds from 1280 to 1285 as will be described below.
When the backup events uploader manager 722 determines (at 1280) that the status of the most recent uploader job record indicates completed, this means the events file recorded in this uploader job record has been processed successfully and it is time to search for and process the next available events file. Therefore, when the status of the most recent uploader job record indicates completed, the method 1270 proceeds to 1282, where the backup events uploader manager 722 of the second application server searches the TEFSS 620 for an events file that includes the starting group identifier.
At 1284, the backup events uploader manager 722 determines whether an events file that is associated with the starting group identifier was able to be read from the TEFSS 620.
When the events uploader manager 722 determines (at 1284) that an events file that is associated with the starting group identifier was not able to be read from the TEFSS 620, the method proceeds to 1294, where the backup events uploader manager 722 determines a next possible group identifier that uniquely identifies another group of events files generated by the first application server during another particular time window, and the method 1270 proceeds to 1296. At 1296, the backup events uploader manager 722 determines whether the next possible group identifier is for a current time window (or "corresponds to" the current time window). When the next possible group identifier is determined (at 1296) not to be for the current time window, the method 1270 proceeds to 1282, where the backup events uploader manager 722 searches the TEFSS 620 for an events file that includes the next possible group identifier. By contrast, when the next possible group identifier is determined (at 1296) to be for the current time window, the method 1270 proceeds to 1298, where the backup events uploader manager 722 terminates the events file uploader service 720 at the second application server.
When the events uploader manager 722 determines (at 1284) that an events file that includes the starting group identifier was able to be read from the TEFSS 620, the method proceeds to 1285. The backup events uploader manager 722 assigns a worker thread 724 to the particular events file that was read (at 1284). The worker thread 724 includes a particular events file reader 726 and a particular event loader 728. At 1285, the backup events uploader manager 722 upserts the most recent uploader job record for the first application server at the events uploader job detail table 910 with updated job detail information. The method 1270 then proceeds to 1286, where the events file reader 726 reads the events file, serializes the events from the events file, and sends the events to the event loader 728.
At 1288, the event loader 728 loads and saves each of the events from the events file to a row of the event table 920 maintained at the data store 130. At 1290, the backup events uploader manager 722 updates the status of the uploader job record (in the events uploader job detail table 910) that corresponds to the events file to indicate completed. At 1292, the events uploader manager 722 deletes the events file from the TEFSS 620.
Because more than one events file can have the same group identifier, the method 1270 then loops to 1282, where the backup events uploader manager 722 searches the TEFSS 620 for another events file that includes the group identifier.
As noted above, when the status of the most recent uploader job record indicates running, the method 1270 proceeds from 1280 to 1285. At 1285, the backup events uploader manager 722 upserts the most recent uploader job record for the first application server at the events uploader job detail table 910 with updated job detail information to indicate that the backup events uploader manager 722 has started to process and read the events file. In this case, the file name is already available in the uploader job record 1010. The method then proceeds to 1286 as described above.
II. Example System Overview
The following description is of one example of a system in which the features described above may be implemented. The components of the system described below are merely one example and should not be construed as limiting. The features described above with respect to FIGS. 1-12 may be implemented in any other type of computing environment, such as one with multiple servers, one with a single server, a multi-tenant server environment, a single-tenant server environment, or some combination of the above.
FIG. 13 shows a block diagram of an example of an environment 1310 in which an on-demand database service can be used in accordance with some implementations. The environment 1310 includes user systems 1312, a network 1314, a database system 1316 (also referred to herein as a “cloud-based system”), a processor system 1317, an application platform 1318, a network interface 1320, tenant database 1322 for storing tenant data 1323, system database 1324 for storing system data 1325, program code 1326 for implementing various functions of the system 1316, and process space 1328 for executing database system processes and tenant-specific processes, such as running applications as part of an application hosting service. In some other implementations, environment 1310 may not have all of these components or systems, or may have other components or systems instead of, or in addition to, those listed above.
In some implementations, the environment 1310 is an environment in which an on-demand database service exists. An on-demand database service, such as that which can be implemented using the system 1316, is a service that is made available to users outside of the enterprise(s) that own, maintain or provide access to the system 1316. As described above, such users generally do not need to be concerned with building or maintaining the system 1316. Instead, resources provided by the system 1316 may be available for such users' use when the users need services provided by the system 1316; that is, on the demand of the users. Some on-demand database services can store information from one or more tenants into tables of a common database image to form a multi-tenant database system (MTS). The term “multi-tenant database system” can refer to those systems in which various elements of hardware and software of a database system may be shared by one or more customers or tenants. For example, a given application server may simultaneously process requests for a great number of customers, and a given database table may store rows of data such as feed items for a potentially much greater number of customers. A database image can include one or more database objects. A relational database management system (RDBMS) or the equivalent can execute storage and retrieval of information against the database object(s).
Application platform 1318 can be a framework that allows the applications of system 1316 to execute, such as the hardware or software infrastructure of the system 1316. In some implementations, the application platform 1318 enables the creation, management and execution of one or more applications developed by the provider of the on-demand database service, users accessing the on-demand database service via user systems 1312, or third party application developers accessing the on-demand database service via user systems 1312.
In some implementations, the system 1316 implements a web-based customer relationship management (CRM) system. For example, in some such implementations, the system 1316 includes application servers configured to implement and execute CRM software applications as well as provide related data, code, forms, renderable web pages and documents and other information to and from user systems 1312 and to store to, and retrieve from, a database system related data, objects, and Web page content. In some MTS implementations, data for multiple tenants may be stored in the same physical database object in tenant database 1322. In some such implementations, tenant data is arranged in the storage medium(s) of tenant database 1322 so that data of one tenant is kept logically separate from that of other tenants so that one tenant does not have access to another tenant's data, unless such data is expressly shared. The system 1316 also implements applications other than, or in addition to, a CRM application. For example, the system 1316 can provide tenant access to multiple hosted (standard and custom) applications, including a CRM application. User (or third party developer) applications, which may or may not include CRM, may be supported by the application platform 1318. The application platform 1318 manages the creation and storage of the applications into one or more database objects and the execution of the applications in one or more virtual machines in the process space of the system 1316.
According to some implementations, each system 1316 is configured to provide web pages, forms, applications, data and media content to user (client) systems 1312 to support the access by user systems 1312 as tenants of system 1316. As such, system 1316 provides security mechanisms to keep each tenant's data separate unless the data is shared. If more than one MTS is used, they may be located in close proximity to one another (for example, in a server farm located in a single building or campus), or they may be distributed at locations remote from one another (for example, one or more servers located in city A and one or more servers located in city B). As used herein, each MTS could include one or more logically or physically connected servers distributed locally or across one or more geographic locations. Additionally, the term “server” is meant to refer to a computing device or system, including processing hardware and process space(s), an associated storage medium such as a memory device or database, and, in some instances, a database application (for example, OODBMS or RDBMS) as is well known in the art. It should also be understood that “server system” and “server” are often used interchangeably herein. Similarly, the database objects described herein can be implemented as part of a single database, a distributed database, a collection of distributed databases, a database with redundant online or offline backups or other redundancies, etc., and can include a distributed database or storage network and associated processing intelligence.
The network 1314 can be or include any network or combination of networks of systems or devices that communicate with one another. For example, the network 1314 can be or include any one or any combination of a LAN (local area network), WAN (wide area network), telephone network, wireless network, cellular network, point-to-point network, star network, token ring network, hub network, or other appropriate configuration. The network 1314 can include a TCP/IP (Transfer Control Protocol and Internet Protocol) network, such as the global internetwork of networks often referred to as the “Internet” (with a capital “I”). The Internet will be used in many of the examples herein. However, it should be understood that the networks that the disclosed implementations can use are not so limited, although TCP/IP is a frequently implemented protocol.
The user systems 1312 can communicate with system 1316 using TCP/IP and, at a higher network level, other common Internet protocols to communicate, such as HTTP, FTP, AFS, WAP, etc. In an example where HTTP is used, each user system 1312 can include an HTTP client commonly referred to as a “web browser” or simply a “browser” for sending and receiving HTTP signals to and from an HTTP server of the system 1316. Such an HTTP server can be implemented as the sole network interface 1320 between the system 1316 and the network 1314, but other techniques can be used in addition to or instead of these techniques. In some implementations, the network interface 1320 between the system 1316 and the network 1314 includes load sharing functionality, such as round-robin HTTP request distributors to balance loads and distribute incoming HTTP requests evenly over a number of servers. In MTS implementations, each of the servers can have access to the MTS data; however, other alternative configurations may be used instead.
The user systems 1312 can be implemented as any computing device(s) or other data processing apparatus or systems usable by users to access the database system 1316. For example, any of user systems 1312 can be a desktop computer, a work station, a laptop computer, a tablet computer, a handheld computing device, a mobile cellular phone (for example, a “smartphone”), or any other Wi-Fi-enabled device, wireless access protocol (WAP)-enabled device, or other computing device capable of interfacing directly or indirectly to the Internet or other network. The terms “user system” and “computing device” are used interchangeably herein with one another and with the term “computer.” As described above, each user system 1312 typically executes an HTTP client, for example, a web browsing (or simply “browsing”) program, such as a web browser based on the WebKit platform, Microsoft's Internet Explorer browser, Netscape's Navigator browser, Opera's browser, Mozilla's Firefox browser, or a WAP-enabled browser in the case of a cellular phone, PDA or other wireless device, or the like, allowing a user (for example, a subscriber of on-demand services provided by the system 1316) of the user system 1312 to access, process and view information, pages and applications available to it from the system 1316 over the network 1314.
Each user system 1312 also typically includes one or more user input devices, such as a keyboard, a mouse, a trackball, a touch pad, a touch screen, a pen or stylus or the like, for interacting with a graphical user interface (GUI) provided by the browser on a display (for example, a monitor screen, liquid crystal display (LCD), light-emitting diode (LED) display, among other possibilities) of the user system 1312 in conjunction with pages, forms, applications and other information provided by the system 1316 or other systems or servers. For example, the user interface device can be used to access data and applications hosted by system 1316, and to perform searches on stored data, and otherwise allow a user to interact with various GUI pages that may be presented to a user. As discussed above, implementations are suitable for use with the Internet, although other networks can be used instead of or in addition to the Internet, such as an intranet, an extranet, a virtual private network (VPN), a non-TCP/IP based network, any LAN or WAN or the like.
The users of user systems 1312 may differ in their respective capacities, and the capacity of a particular user system 1312 can be entirely determined by permissions (permission levels) for the current user of such user system. For example, where a salesperson is using a particular user system 1312 to interact with the system 1316, that user system can have the capacities allotted to the salesperson. However, while an administrator is using that user system 1312 to interact with the system 1316, that user system can have the capacities allotted to that administrator. Where a hierarchical role model is used, users at one permission level can have access to applications, data, and database information accessible by a lower permission level user, but may not have access to certain applications, database information, and data accessible by a user at a higher permission level. Thus, different users generally will have different capabilities with regard to accessing and modifying application and database information, depending on the users' respective security or permission levels (also referred to as “authorizations”).
According to some implementations, each user system 1312 and some or all of its components are operator-configurable using applications, such as a browser, including computer code executed using a central processing unit (CPU) such as an Intel Pentium® processor or the like. Similarly, the system 1316 (and additional instances of an MTS, where more than one is present) and all of its components can be operator-configurable using application(s) including computer code to run using the processor system 1317, which may be implemented to include a CPU, which may include an Intel Pentium® processor or the like, or multiple CPUs.
The system 1316 includes tangible computer-readable media having non-transitory instructions stored thereon/in that are executable by or used to program a server or other computing system (or collection of such servers or computing systems) to perform some of the implementation of processes described herein. For example, computer program code 1326 can implement instructions for operating and configuring the system 1316 to intercommunicate and to process web pages, applications and other data and media content as described herein. In some implementations, the computer code 1326 can be downloadable and stored on a hard disk, but the entire program code, or portions thereof, also can be stored in any other volatile or non-volatile memory medium or device as is well known, such as a ROM or RAM, or provided on any media capable of storing program code, such as any type of rotating media including floppy disks, optical discs, digital versatile disks (DVD), compact disks (CD), microdrives, and magneto-optical disks, and magnetic or optical cards, nanosystems (including molecular memory ICs), or any other type of computer-readable medium or device suitable for storing instructions or data. Additionally, the entire program code, or portions thereof, may be transmitted and downloaded from a software source over a transmission medium, for example, over the Internet, or from another server, as is well known, or transmitted over any other existing network connection as is well known (for example, extranet, VPN, LAN, etc.) using any communication medium and protocols (for example, TCP/IP, HTTP, HTTPS, Ethernet, etc.) as are well known. It will also be appreciated that computer code for the disclosed implementations can be realized in any programming language that can be executed on a server or other computing system such as, for example, C, C++, HTML, any other markup language, Java™, JavaScript, ActiveX, any other scripting language, such as VBScript, and many other programming languages as are well known may be used. (Java™ is a trademark of Sun Microsystems, Inc.).
FIG. 14 shows a block diagram of example implementations of elements of FIG. 13 and example interconnections between these elements according to some implementations. That is, FIG. 14 also illustrates environment 1310, but in FIG. 14, various elements of the system 1316 and various interconnections between such elements are shown with more specificity according to some more specific implementations. Elements from FIG. 13 that are also shown in FIG. 14 will use the same reference numbers in FIG. 14 as were used in FIG. 13. Additionally, in FIG. 14, the user system 1312 includes a processor system 1412A, a memory system 1412B, an input system 1412C, and an output system 1412D. The processor system 1412A can include any suitable combination of one or more processors. The memory system 1412B can include any suitable combination of one or more memory devices. The input system 1412C can include any suitable combination of input devices, such as one or more touchscreen interfaces, keyboards, mice, trackballs, scanners, cameras, or interfaces to networks. The output system 1412D can include any suitable combination of output devices, such as one or more display devices, printers, or interfaces to networks.
In FIG. 14, the network interface 1320 of FIG. 13 is implemented as a set of HTTP application servers 1400 1-1400 N. Each application server 1400, also referred to herein as an “app server,” is configured to communicate with tenant database 1322 and the tenant data 1423 therein, as well as system database 1324 and the system data 1425 therein, to serve requests received from the user systems 1412. The tenant data 1423 can be divided into individual tenant storage spaces 1413, which can be physically or logically arranged or divided. Within each tenant storage space 1413, tenant data 1414 and application metadata 1416 can similarly be allocated for each user. For example, a copy of a user's most recently used (MRU) items can be stored to user storage 1414. Similarly, a copy of MRU items for an entire organization that is a tenant can be stored to tenant storage space 1413.
The process space 1328 includes system process space 1402, individual tenant process spaces 1404 and a tenant management process space 1410. The application platform 1318 includes an application setup mechanism 1438 that supports application developers' creation and management of applications. Such applications and others can be saved as metadata into tenant database 1322 by save routines 1436 for execution by subscribers as one or more tenant process spaces 1404 managed by tenant management process 1410, for example. Invocations to such applications can be coded using PL/SOQL 1434, which provides a programming language style interface extension to API 1432. A detailed description of some PL/SOQL language implementations is discussed in commonly assigned U.S. Pat. No. 7,730,478, titled METHOD AND SYSTEM FOR ALLOWING ACCESS TO DEVELOPED APPLICATIONS VIA A MULTI-TENANT ON-DEMAND DATABASE SERVICE, by Craig Weissman, issued on Jun. 1, 2010, and hereby incorporated by reference in its entirety and for all purposes. Invocations to applications can be detected by one or more system processes, which manage retrieving application metadata 1416 for the subscriber making the invocation and executing the metadata as an application in a virtual machine.
The system 1316 of FIG. 14 also includes a user interface (UI) 1430 and an application programming interface (API) 1432 that expose system 1316 resident processes to users or developers at user systems 1412. In some other implementations, the environment 1310 may not have the same elements as those listed above or may have other elements instead of, or in addition to, those listed above.
Each application server 1400 can be communicably coupled with tenant database 1322 and system database 1324, for example, having access to tenant data 1423 and system data 1425, respectively, via a different network connection. For example, one application server 1400 1 can be coupled via the network 1314 (for example, the Internet), another application server 1400 N can be coupled via a direct network link, and another application server (not illustrated) can be coupled by yet a different network connection. Transmission Control Protocol and Internet Protocol (TCP/IP) are examples of typical protocols that can be used for communicating between application servers 1400 and the system 1316. However, it will be apparent to one skilled in the art that other transport protocols can be used to optimize the system 1316 depending on the network interconnections used.
In some implementations, each application server 1400 is configured to handle requests for any user associated with any organization that is a tenant of the system 1316. Because it can be desirable to be able to add and remove application servers 1400 from the server pool at any time and for various reasons, in some implementations there is no server affinity for a user or organization to a specific application server 1400. In some such implementations, an interface system implementing a load balancing function (for example, an F5 Big-IP load balancer) is communicably coupled between the application servers 1400 and the user systems 1412 to distribute requests to the application servers 1400. In one implementation, the load balancer uses a least-connections algorithm to route user requests to the application servers 1400. Other examples of load balancing algorithms, such as round robin and observed-response-time, also can be used. For example, in some instances, three consecutive requests from the same user could hit three different application servers 1400, and three requests from different users could hit the same application server 1400. In this manner, by way of example, system 1316 can be a multi-tenant system in which system 1316 handles storage of, and access to, different objects, data and applications across disparate users and organizations.
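To make the least-connections choice concrete, here is a minimal Java sketch of the routing decision. It is not the F5 Big-IP implementation or any product's actual logic; the class and method names are assumptions for illustration.

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

class LeastConnectionsBalancer {
    static final class AppServer {
        final String name;
        final AtomicInteger activeConnections = new AtomicInteger();
        AppServer(String name) { this.name = name; }
    }

    private final List<AppServer> pool;

    LeastConnectionsBalancer(List<AppServer> pool) { this.pool = pool; }

    // Route a request to the server currently holding the fewest connections.
    AppServer route() {
        AppServer best = pool.get(0);
        for (AppServer s : pool) {
            if (s.activeConnections.get() < best.activeConnections.get()) {
                best = s;
            }
        }
        best.activeConnections.incrementAndGet();
        return best;
    }

    // Called when a request completes so the count stays accurate.
    void release(AppServer s) { s.activeConnections.decrementAndGet(); }
}

Because the decision depends only on momentary connection counts, nothing ties a user to a particular server, which matches the absence of server affinity described above.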
In one example storage use case, one tenant can be a company that employs a sales force where each salesperson uses system 1316 to manage aspects of their sales. A user can maintain contact data, leads data, customer follow-up data, performance data, goals and progress data, etc., all applicable to that user's personal sales process (for example, in tenant database 1322). In an example of an MTS arrangement, because all of the data and the applications to access, view, modify, report, transmit, calculate, etc., can be maintained and accessed by a user system 1412 having little more than network access, the user can manage his or her sales efforts and cycles from any of many different user systems. For example, when a salesperson is visiting a customer and the customer has Internet access in their lobby, the salesperson can obtain critical updates regarding that customer while waiting for the customer to arrive in the lobby.
While each user's data can be stored separately from other users' data regardless of the employers of each user, some data can be organization-wide data shared or accessible by several users or all of the users for a given organization that is a tenant. Thus, there can be some data structures managed by system 1316 that are allocated at the tenant level while other data structures can be managed at the user level. Because an MTS can support multiple tenants including possible competitors, the MTS can have security protocols that keep data, applications, and application use separate. Also, because many tenants may opt for access to an MTS rather than maintain their own system, redundancy, up-time, and backup are additional functions that can be implemented in the MTS. In addition to user-specific data and tenant-specific data, the system 1316 also can maintain system level data usable by multiple tenants or other data. Such system level data can include industry reports, news, postings, and the like that are sharable among tenants.
In some implementations, the user systems 1412 (which also can be client systems) communicate with the application servers 1400 to request and update system-level and tenant-level data from the system 1316. Such requests and updates can involve sending one or more queries to tenant database 1322 or system database 1324. The system 1316 (for example, an application server 1400 in the system 1316) can automatically generate one or more SQL statements (for example, one or more SQL queries) designed to access the desired information. System database 1324 can generate query plans to access the requested data from the database. The term “query plan” generally refers to one or more operations used to access information in a database system.
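As a hedged sketch of that query-generation step, the Java fragment below emits a tenant-scoped SQL statement over JDBC. The contact table, its columns, and the organization_id filter are assumptions for illustration, not the actual schema or query generator.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class TenantQueryService {
    // Generate and run a statement scoped to the requesting tenant.
    ResultSet fetchContacts(Connection db, String orgId) throws SQLException {
        PreparedStatement ps = db.prepareStatement(
            "SELECT name, phone FROM contact WHERE organization_id = ?");
        ps.setString(1, orgId);   // every generated query filters on the tenant
        return ps.executeQuery(); // the database builds its own query plan
    }
}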
Each database can generally be viewed as a collection of objects, such as a set of logical tables, containing data fitted into predefined or customizable categories. A “table” is one representation of a data object, and may be used herein to simplify the conceptual description of objects and custom objects according to some implementations. It should be understood that “table” and “object” may be used interchangeably herein. Each table generally contains one or more data categories logically arranged as columns or fields in a viewable schema. Each row or element of a table can contain an instance of data for each category defined by the fields. For example, a CRM database can include a table that describes a customer with fields for basic contact information such as name, address, phone number, fax number, etc. Another table can describe a purchase order, including fields for information such as customer, product, sale price, date, etc. In some MTS implementations, standard entity tables can be provided for use by all tenants. For CRM database applications, such standard entities can include tables for case, account, contact, lead, and opportunity data objects, each containing pre-defined fields. As used herein, the term “entity” also may be used interchangeably with “object” and “table.”
In some MTS implementations, tenants are allowed to create and store custom objects, or may be allowed to customize standard entities or objects, for example by creating custom fields for standard objects, including custom index fields. Commonly assigned U.S. Pat. No. 7,779,039, titled CUSTOM ENTITIES AND FIELDS IN A MULTI-TENANT DATABASE SYSTEM, by Weissman et al., issued on Aug. 17, 2010, and hereby incorporated by reference in its entirety and for all purposes, teaches systems and methods for creating custom objects as well as customizing standard objects in a multi-tenant database system. In some implementations, for example, all custom entity data rows are stored in a single multi-tenant physical table, which may contain multiple logical tables per organization. It is transparent to customers that their multiple “tables” are in fact stored in one large table or that their data may be stored in the same table as the data of other customers.
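The single-physical-table arrangement can be pictured with a small sketch: generic "flex" columns hold the stored values, and per-tenant metadata maps each logical custom field to a physical slot. All names below are hypothetical and serve only to illustrate the idea.

import java.util.Map;

class CustomObjectRow {
    String orgId;                    // tenant that owns the row
    String objectTypeId;             // which logical "table" the row belongs to
    String[] flex = new String[10];  // generic value columns shared by all tenants
}

class CustomObjectMapper {
    // Per-tenant metadata: logical field name -> flex column index.
    private final Map<String, Integer> fieldToSlot;

    CustomObjectMapper(Map<String, Integer> fieldToSlot) {
        this.fieldToSlot = fieldToSlot;
    }

    // The physical layout stays invisible to the customer; metadata resolves
    // the logical field to its physical slot in the shared table.
    String read(CustomObjectRow row, String fieldName) {
        return row.flex[fieldToSlot.get(fieldName)];
    }
}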
FIG. 15A shows a system diagram illustrating example architectural components of an on-demand database service environment 1500 according to some implementations. A client machine communicably connected with the cloud 1504, generally referring to one or more networks in combination, as described herein, can communicate with the on-demand database service environment 1500 via one or more edge routers 1508 and 1512. A client machine can be any of the examples of user systems 1312 described above. The edge routers can communicate with one or more core switches 1520 and 1524 through a firewall 1516. The core switches can communicate with a load balancer 1528, which can distribute server load over different pods, such as the pods 1540 and 1544. The pods 1540 and 1544, which can each include one or more servers or other computing resources, can perform data processing and other operations used to provide on-demand services. Communication with the pods can be conducted via pod switches 1532 and 1536. Components of the on-demand database service environment can communicate with database storage 1556 through a database firewall 1548 and a database switch 1552.
As shown in FIGS. 15A and 15B, accessing an on-demand database service environment can involve communications transmitted among a variety of different hardware or software components. Further, the on-demand database service environment 1500 is a simplified representation of an actual on-demand database service environment. For example, while only one or two devices of each type are shown in FIGS. 15A and 15B, some implementations of an on-demand database service environment can include anywhere from one to several devices of each type. Also, the on-demand database service environment need not include each device shown in FIGS. 15A and 15B, or can include additional devices not shown in FIGS. 15A and 15B.
Additionally, it should be appreciated that one or more of the devices in the on-demand database service environment 1500 can be implemented on the same physical device or on different hardware. Some devices can be implemented using hardware or a combination of hardware and software. Thus, terms such as “data processing apparatus,” “machine,” “server” and “device” as used herein are not limited to a single hardware device; rather, references to these terms can include any suitable combination of hardware and software configured to provide the described functionality.
The cloud 1504 is intended to refer to a data network or multiple data networks, often including the Internet. Client machines communicably connected with the cloud 1504 can communicate with other components of the on-demand database service environment 1500 to access services provided by the on-demand database service environment. For example, client machines can access the on-demand database service environment to retrieve, store, edit, or process information. In some implementations, the edge routers 1508 and 1512 route packets between the cloud 1504 and other components of the on-demand database service environment 1500. For example, the edge routers 1508 and 1512 can employ the Border Gateway Protocol (BGP). The BGP is the core routing protocol of the Internet. The edge routers 1508 and 1512 can maintain a table of IP networks or ‘prefixes’, which designate network reachability among autonomous systems on the Internet.
In some implementations, the firewall 1516 can protect the inner components of the on-demand database service environment 1500 from Internet traffic. The firewall 1516 can block, permit, or deny access to the inner components of the on-demand database service environment 1500 based upon a set of rules and other criteria. The firewall 1516 can act as one or more of a packet filter, an application gateway, a stateful filter, a proxy server, or any other type of firewall.
In some implementations, the core switches 1520 and 1524 are high-capacity switches that transfer packets within the on-demand database service environment 1500. The core switches 1520 and 1524 can be configured as network bridges that quickly route data between different components within the on-demand database service environment. In some implementations, the use of two or more core switches 1520 and 1524 can provide redundancy or reduced latency.
In some implementations, the pods 1540 and 1544 perform the core data processing and service functions provided by the on-demand database service environment. Each pod can include various types of hardware or software computing resources. An example of the pod architecture is discussed in greater detail with reference to FIG. 15B. In some implementations, communication between the pods 1540 and 1544 is conducted via the pod switches 1532 and 1536. The pod switches 1532 and 1536 can facilitate communication between the pods 1540 and 1544 and client machines communicably connected with the cloud 1504, for example via core switches 1520 and 1524. Also, the pod switches 1532 and 1536 may facilitate communication between the pods 1540 and 1544 and the database storage 1556. In some implementations, the load balancer 1528 can distribute workload between the pods 1540 and 1544. Balancing the on-demand service requests between the pods can assist in improving the use of resources, increasing throughput, reducing response times, or reducing overhead. The load balancer 1528 may include multilayer switches to analyze and forward traffic.
In some implementations, access to the database storage 1556 is guarded by a database firewall 1548. The database firewall 1548 can act as a computer application firewall operating at the database application layer of a protocol stack. The database firewall 1548 can protect the database storage 1556 from application attacks such as structured query language (SQL) injection, database rootkits, and unauthorized information disclosure. In some implementations, the database firewall 1548 includes a host using one or more forms of reverse proxy services to proxy traffic before passing it to a gateway router. The database firewall 1548 can inspect the contents of database traffic and block certain content or database requests. The database firewall 1548 can work on the SQL application level atop the TCP/IP stack, managing applications' connection to the database or SQL management interfaces as well as intercepting and enforcing packets traveling to or from a database network or application interface.
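As a deliberately naive sketch of that content inspection, the fragment below screens statements against a few injection signatures before they are forwarded. A real database firewall parses traffic far more thoroughly; the patterns and names here are illustrative assumptions only.

import java.util.regex.Pattern;

class DatabaseFirewallScreen {
    // Crude signatures for demonstration; not a complete or safe filter.
    private static final Pattern SUSPICIOUS = Pattern.compile(
        "(;\\s*drop\\s+table|union\\s+select|--|/\\*)",
        Pattern.CASE_INSENSITIVE);

    // Permit a statement only if no signature matches.
    boolean permit(String sql) {
        return !SUSPICIOUS.matcher(sql).find();
    }
}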
In some implementations, communication with the database storage 1556 is conducted via the database switch 1552. The multi-tenant database storage 1556 can include more than one hardware or software component for handling database queries. Accordingly, the database switch 1552 can direct database queries transmitted by other components of the on-demand database service environment (for example, the pods 1540 and 1544) to the correct components within the database storage 1556. In some implementations, the database storage 1556 is an on-demand database system shared by many different organizations as described above with reference to FIG. 13 and FIG. 14.
FIG. 15B shows a system diagram further illustrating example architectural components of an on-demand database service environment according to some implementations. The pod 1544 can be used to render services to a user of the on-demand database service environment 1500. In some implementations, each pod includes a variety of servers or other systems. The pod 1544 includes one or more content batch servers 1564, content search servers 1568, query servers 1582, file force servers 1586, access control system (ACS) servers 1580, batch servers 1584, and app servers 1588. The pod 1544 also can include database instances 1590, quick file systems (QFS) 1592, and indexers 1594. In some implementations, some or all communication between the servers in the pod 1544 can be transmitted via the switch 1536.
In some implementations, the app servers 1588 include a hardware or software framework dedicated to the execution of procedures (for example, programs, routines, scripts) for supporting the construction of applications provided by the on-demand database service environment 1500 via the pod 1544. In some implementations, the hardware or software framework of an app server 1588 is configured to execute operations of the services described herein, including performance of the blocks of various methods or processes described herein. In some alternative implementations, two or more app servers 1588 can be included and cooperate to perform such methods, or one or more other servers described herein can be configured to perform the disclosed methods.
The content batch servers 1564 can handle requests internal to the pod. Some such requests can be long-running or not tied to a particular customer. For example, the content batch servers 1564 can handle requests related to log mining, cleanup work, and maintenance tasks. The content search servers 1568 can provide query and indexer functions. For example, the functions provided by the content search servers 1568 can allow users to search through content stored in the on-demand database service environment. The file force servers 1586 can manage requests for information stored in the File force storage 1598. The File force storage 1598 can store information such as documents, images, and basic large objects (BLOBs). By managing requests for information using the file force servers 1586, the image footprint on the database can be reduced. The query servers 1582 can be used to retrieve information from one or more file storage systems. For example, the query servers 1582 can receive requests for information from the app servers 1588 and transmit information queries to the NFS 1596 located outside the pod.
The pod 1544 can share a database instance 1590 configured as a multi-tenant environment in which different organizations share access to the same database. Additionally, services rendered by the pod 1544 may call upon various hardware or software resources. In some implementations, the ACS servers 1580 control access to data, hardware resources, or software resources. In some implementations, the batch servers 1584 process batch jobs, which are used to run tasks at specified times. For example, the batch servers 1584 can transmit instructions to other servers, such as the app servers 1588, to trigger the batch jobs.
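A minimal sketch of that time-triggered pattern follows, with a ScheduledExecutorService standing in for the batch servers and the instruction sent to an app server reduced to a Runnable. The schedule and names are assumptions for illustration.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class BatchScheduler {
    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();

    // Trigger a batch job on an app server at a fixed time interval,
    // here once every 24 hours after an initial one-hour delay.
    void scheduleCleanup(Runnable triggerOnAppServer) {
        scheduler.scheduleAtFixedRate(triggerOnAppServer, 1, 24, TimeUnit.HOURS);
    }
}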
In some implementations, the QFS 1592 is an open source file storage system available from Sun Microsystems® of Santa Clara, Calif. The QFS can serve as a rapid-access file storage system for storing and accessing information available within the pod 1544. The QFS 1592 can support some volume management capabilities, allowing many disks to be grouped together into a file storage system. File storage system metadata can be kept on a separate set of disks, which can be useful for streaming applications where long disk seeks cannot be tolerated. Thus, the QFS system can communicate with one or more content search servers 1568 or indexers 1594 to identify, retrieve, move, or update data stored in the network file storage systems 1596 or other storage systems.
In some implementations, one or more query servers 1582 communicate with the NFS 1596 to retrieve or update information stored outside of the pod 1544. The NFS 1596 can allow servers located in the pod 1544 to access files over a network in a manner similar to how local storage is accessed. In some implementations, queries from the query servers 1582 are transmitted to the NFS 1596 via the load balancer 1528, which can distribute resource requests over various resources available in the on-demand database service environment. The NFS 1596 also can communicate with the QFS 1592 to update the information stored on the NFS 1596 or to provide information to the QFS 1592 for use by servers located within the pod 1544.
In some implementations, the pod includes one or more database instances 1590. The database instance 1590 can transmit information to the QFS 1592. When information is transmitted to the QFS, it can be available for use by servers within the pod 1544 without using an additional database call. In some implementations, database information is transmitted to the indexer 1594. Indexer 1594 can provide an index of information available in the database 1590 or QFS 1592. The index information can be provided to file force servers 1586 or the QFS 1592.
FIG. 16 illustrates a diagrammatic representation of a machine in the exemplary form of a computer system 1600 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server machine in a client-server network environment. The machine may be a personal computer (PC), a set-top box (STB), a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. In one embodiment, computer system 1600 may represent application server 110, as shown in FIGS. 2-4 and 6.
The exemplary computer system 1600 includes a processing device (processor) 1602, a main memory 1604 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM)), a static memory 1606 (e.g., flash memory, static random access memory (SRAM)), and a data storage device 1618, which communicate with each other via a bus 1630.
Processing device 1602 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device 1602 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processing device 1602 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like.
The computer system 1600 may further include a network interface device 1608. The computer system 1600 also may include a video display unit 1610 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 1612 (e.g., a keyboard), a cursor control device 1614 (e.g., a mouse), and a signal generation device 1616 (e.g., a speaker).
The data storage device 1618 may include a computer-readable medium 1628 on which is stored one or more sets of instructions 1622 (e.g., instructions of in-memory buffer service 114) embodying any one or more of the methodologies or functions described herein. The instructions 1622 may also reside, completely or at least partially, within the main memory 1604 and/or within processing logic 1626 of the processing device 1602 during execution thereof by the computer system 1600, the main memory 1604 and the processing device 1602 also constituting computer-readable media. The instructions may further be transmitted or received over a network 1620 via the network interface device 1608.
While the computer-readable storage medium 1628 is shown in an exemplary embodiment to be a single medium, the term “computer-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable storage medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present invention. The term “computer-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
The preceding description sets forth numerous specific details such as examples of specific systems, components, methods, and so forth, in order to provide a good understanding of several embodiments of the present invention. It will be apparent to one skilled in the art, however, that at least some embodiments of the present invention may be practiced without these specific details. In other instances, well-known components or methods are not described in detail or are presented in simple block diagram format in order to avoid unnecessarily obscuring the present invention. Thus, the specific details set forth are merely exemplary. Particular implementations may vary from these exemplary details and still be contemplated to be within the scope of the present invention.
In the above description, numerous details are set forth. It will be apparent, however, to one of ordinary skill in the art having the benefit of this disclosure, that embodiments of the invention may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the description.
Some portions of the detailed description are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as “determining”, “identifying”, “adding”, “selecting” or the like, refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Embodiments of the invention also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.
It is to be understood that the above description is intended to be illustrative, and not restrictive. Many other embodiments will be apparent to those of skill in the art upon reading and understanding the above description. The scope of the invention should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.

Claims (28)

What is claimed:
1. A system, comprising:
a data store that is configured to store events;
a temporary events file storage system (TEFSS); and
a cluster of application servers each being configured to run a client application that generates events, wherein the cluster comprises:
a first application server comprising:
a capture service configured to: receive events generated by the client application running on the first application server and to provide the events to an in-memory buffer service that is configured to temporarily hold the events in a bounded buffer that is implemented in physical memory at the first application server, wherein the bounded buffer comprises a plurality of single-threaded buffer segments that are each configured to temporarily hold some of the events received from the capture service until a flush operation is performed, wherein each particular single-threaded buffer segment holds a plurality of events;
a consumer executor service configured to: instantiate a plurality of consumer threads that each correspond to a particular single-threaded buffer segment, wherein each consumer thread comprises: an instance of an events reader that is configured to read events flushed from a corresponding single-threaded buffer segment and to add an identifier to each of the events; and an instance of a direct events writer that is configured to attempt to directly write each event to the data store;
an indirect events writer that is invoked when events generated by the first application server are unable to be directly written to the data store, comprising:
a plurality of event capture threads each being configured to: generate a particular events file that includes a plurality of events flushed from the in-memory buffer service, and write that particular events file to the TEFSS; and
a second application server comprising:
an events file uploader service that is configured to: determine that the first application server is inactive; read at least one events file from the TEFSS; and write the events from each of the events files that was read to the data store.
2. The system according to claim 1, wherein each consumer thread provides the events to a corresponding one of the event capture threads, and wherein each of the event capture threads comprises:
an instance of an events file writer; and
an instance of an invoker that is configured to:
determine whether a corresponding instance of the direct events writer was able to successfully directly write each event, that was read from a corresponding single-threaded buffer segment, to an event table at the data store; and
invoke the instance of an events file writer at that event capture thread, whenever the invoker determines that the corresponding instance of the direct events writer was unable to successfully write each event directly to the data store, to serialize the plurality of events from the particular single-threaded buffer segment to generate an events file and to write the events file to the TEFSS.
3. The system according to claim 1, wherein each events file comprises:
a file name;
a group identifier that uniquely identifies a group of events files that were generated during a particular time window; and
file information comprising a file type, and a plurality of events that are serialized.
4. The system according to claim 3, wherein the file name comprises: an application server name of the application server that generated the events in the events file; a creation date for each of the events in the events file; and a thread identifier that specifies a name of an event capture thread writing the events file.
5. The system according to claim 1, wherein the data store further comprises:
an events uploader job detail table that comprises a plurality of rows, wherein each row is configured to store an uploader job record that includes job detail information that points to a particular events file stored in the TEFSS, and further comprising:
wherein the events file uploader service comprises:
an events uploader manager that is configured to:
create an uploader job record that points to the particular events file stored at the TEFSS, and
write the uploader job record to the events uploader job detail table maintained at the data store.
6. The system according to claim 5, wherein each uploader job record comprises:
a cluster identifier that uniquely identifies a collection of instances that are grouped as a cluster;
an instance identifier that uniquely identifies a collection of application servers grouped as an instance;
an application server name of a particular application server;
a group identifier that uniquely identifies a group of events files that were generated during a particular time window;
a job started date that specifies when the particular events file was created; and
a job type that indicates either regular or backup.
7. The system according to claim 6, wherein each uploader job record further comprises: a file name of the particular events file, and wherein each row of the events uploader job detail table includes a row key that is used by an events uploader manager to look up job detail information for a particular event file, wherein the row key is defined by the combination of the instance identifier, the application server name, the group identifier, the file name, and the job started date.
8. The system according to claim 6, wherein the cluster identifier, the instance identifier, the application server name, and the job type are used to identify a particular instance of the events file uploader service that runs on a particular application server of the cluster.
9. The system according to claim 6, wherein each uploader job record further comprises:
a number of events that specifies a number of events for this particular events file;
a last updated time that indicates when the particular events file was last updated;
a number of retry attempts that specifies how many times the events file uploader service has attempted to process the particular events file; and
a status that indicates either running or completed.
10. The system according to claim 1, wherein the events file uploader service further comprises:
an events uploader manager that is configured to:
maintain each of the uploader job records that are stored in the events uploader job detail table; and
retrieve job detail information from particular uploader job records stored at the events uploader job detail table of the data store.
11. The system according to claim 10, wherein the events uploader manager is further configured to:
find an application server in the cluster that has been inactive for a certain duration, wherein the events uploader manager determines that the first application server has been inactive for the certain duration;
read, from the events uploader job detail table, the most recent uploader job record for the first application server;
search the most recent uploader job record for a starting group identifier that uniquely identifies a group of events files generated by the first application server during a particular time window; and
determine from the most recent uploader job record whether a status for the most recent uploader job record indicates running or completed.
12. The system according to claim 11, wherein the events uploader manager is further configured to:
when the status of the most recent uploader job record indicates completed: search the TEFSS for an events file that includes the starting group identifier; and
determine whether an events file that is associated with the starting group identifier was able to be read from the TEFSS.
13. The system according to claim 12, when the events file that is associated with the starting group identifier was not able to be read from the TEFSS, wherein the events uploader manager is further configured to:
determine a next possible group identifier that uniquely identifies another group of events files generated by the first application server during another particular time window;
determine whether the next possible group identifier is for a current time window;
search, when the next possible group identifier is not for the current time window, the TEFSS for an events file that includes the next possible group identifier, or
terminate the events file uploader service when the next possible group identifier is for the current time window.
14. The system according to claim 12, when the events file that includes the starting group identifier was able to be read from the TEFSS, wherein the events uploader manager is further configured to:
instantiate a worker thread and assign the worker thread to the particular events file that was read, wherein the worker thread comprises: a particular events file reader and a particular event loader; and
upsert the most recent uploader job record for the first application server with updated job detail information at the events uploader job detail table of the data store;
wherein the events file reader is configured to:
read at least one events file for the first application server from the TEFSS based on the job detail information retrieved from the events uploader job detail table, serialize the events from the events file, and send the events to the event loader.
15. The system according to claim 14, wherein the events loader is configured to:
load and save each of the events from the events file to a row of an event table maintained at the data store;
wherein the events uploader manager is further configured to:
update, in the events uploader job detail table, the status of the uploader job record that corresponds to the events file to indicate completed;
delete the events file from the TEFSS; and
search the TEFSS for another events file that includes the starting group identifier.
16. The system according to claim 11, when it is determined from the most recent uploader job record that the status for the most recent uploader job record indicates running, wherein the events uploader manager is further configured to:
assign a worker thread for the particular events file that was read, wherein the worker thread comprises: a particular events file reader and a particular event loader; and
upsert the most recent uploader job record for the first application server with updated job detail information at the events uploader job detail table of the data store;
wherein the events file reader is configured to:
read the events file, serialize the events from the events file and send the events to the event loader.
17. The system according to claim 16, wherein the events loader is configured to:
load and save each of the events from the events file to a row of an event table maintained at the data store;
wherein the events uploader manager is further configured to:
update, in the events uploader job detail table, the status of the uploader job record that corresponds to the events file to indicate completed;
delete the events file from the TEFSS; and
wherein the events uploader manager is further configured to search the TEFSS for another events file that includes the starting group identifier.
18. A non-transitory computer-readable medium including instructions, which when executed by a processing system at an application server having an in-memory buffer service, are configurable to cause the processing system to perform a method, comprising:
receiving, at a capture service at the application server, events from a client application running on the application server, wherein the events are to be stored in a data store;
storing, via the capture service, the events in the in-memory buffer service comprising: a bounded buffer that is implemented in physical memory at the application server, wherein the bounded buffer comprises a plurality of single-threaded buffer segments that are each configured to temporarily store some of the events until a flush operation is performed;
flushing the plurality of single-threaded buffer segments of the bounded buffer, wherein each particular single-threaded buffer segment includes the plurality of events;
instantiating, via a consumer executor service at the application server, a plurality of consumer threads that each corresponds to a particular single-threaded buffer segment, wherein each consumer thread comprises: an instance of an events reader and an instance of a direct events writer;
reading, at the events reader of each consumer thread, the events flushed from the particular single-threaded buffer segment that corresponds to that consumer thread;
adding, at each direct events writer, an identifier to each of the events;
at each direct events writer, attempting to directly write each event to the data store;
instantiating, at an indirect events writer, a plurality of event capture threads each comprising an events file writer:
generating, at each events file writer, a particular events file that comprises a plurality of events received from the in-memory buffer service when those events are unable to be directly written to the data store; and
writing, via each events file writer, the particular events file to a temporary events file storage system (TEFSS) that is configured to temporarily store the events files for subsequent writing to the data store.
19. The non-transitory computer-readable medium of claim 18, wherein each of the event capture threads further comprises: an invoker, and wherein the method further comprises:
determining, at each invoker, whether a corresponding instance of the direct events writer was able to successfully write the events, that were read from the particular single-threaded buffer segment, directly to an event table at the data store;
whenever an instance of an invoker at one of the event capture threads determines that a corresponding instance of the direct events writer was unable to successfully write the events directly to the data store, invoking an instance of the events file writer at that event capture thread, and receiving the plurality of events from that corresponding instance of the direct events writer for that events file writer; and
serializing, at each instance of the events file writer, the plurality of events from the particular single-threaded buffer segment into an events file.
20. The non-transitory computer-readable medium of claim 18, wherein each events file comprises:
a file name comprising: an application server name of the application server that generated the events in the events file; a creation date for each of the events in the events file; and a thread identifier that specifies a name of an event capture thread writing the events file;
a group identifier that uniquely identifies a group of events files that were generated during a particular time window; and
a plurality of events that are serialized.
21. A non-transitory computer-readable medium including instructions, which when executed by a processing system at a first application server, are configurable to cause the processing system to perform a method for storing events in a data store comprising an events uploader job detail table, wherein the first application server comprises: an events file uploader service that includes an events uploader manager, and the method comprising:
determining that a second application server is inactive, wherein determining that the second application server is inactive, comprises: finding, via the events uploader manager, the second application server in a cluster of application servers that has been inactive for a certain duration;
reading one or more events files from a temporary events file storage system (TEFSS), wherein each events file was written to the TEFSS by an indirect events writer of the second application server, and wherein each events file comprises a plurality of events, wherein the step of reading one or more events files from the TEFSS, comprises:
reading, from an events uploader job detail table via the events uploader manager, a most recent uploader job record for the second application server;
searching, via the events uploader manager, the most recent uploader job record for the second application server for a starting group identifier that uniquely identifies a group of events files generated by the second application server during a particular time window; and
determining, via the events uploader manager, from the most recent uploader job record whether a status for the most recent uploader job record indicates running or completed; and
writing each event from each of the events files that were read to the data store.
22. The non-transitory computer-readable medium of claim 21,
wherein the events uploader job detail table comprises a plurality of rows, wherein each row is configured to store an uploader job record that includes job detail information that points to a particular events file stored in the TEFSS, and
wherein the events uploader manager maintains each of the uploader job records that are stored in the events uploader job detail table.
23. The non-transitory computer-readable medium of claim 22, wherein the first application server and the second application server are part of a cluster of application servers, wherein each uploader job record comprises:
a cluster identifier that uniquely identifies a collection of instances that are grouped as a cluster;
an instance identifier that uniquely identifies a collection of application servers grouped as an instance;
an application server name of a particular application server;
a group identifier that uniquely identifies a group of events files that were generated during a particular time window.
24. The non-transitory computer-readable medium of claim 21, when the status of the most recent uploader job record indicates completed:
searching, via the events uploader manager of the first application server, the TEFSS for an events file that includes the starting group identifier; and
determining, via the events uploader manager of the first application server, whether an events file that is associated with the starting group identifier was able to be read from the TEFSS.
25. The non-transitory computer-readable medium of claim 24, when the events file that is associated with the starting group identifier was not able to be read from the TEFSS:
determining, via the events uploader manager of the first application server, a next possible group identifier that uniquely identifies another group of events files generated by the second application server during another particular time window;
determining, via the events uploader manager of the first application server, whether the next possible group identifier is for a current time window;
when the next possible group identifier is for the current time window, terminating the events file uploader service at the first application server; and
when the next possible group identifier is not for the current time window, searching, via the events uploader manager of the first application server, the TEFSS for an events file that includes the next possible group identifier.
26. The non-transitory computer-readable medium of claim 24, when the events file that includes the starting group identifier was able to be read from the TEFSS:
assigning, via the events uploader manager of the first application server, a worker thread for the particular events file that was read, wherein the worker thread comprises: a particular events file reader and a particular event loader; and
upserting, via the events uploader manager of the first application server, the most recent uploader job record for the second application server at the events uploader job detail table of the data store with updated job detail information; and
at each particular events file reader:
reading at least one events file for the second application server from the TEFSS based on the job detail information retrieved, serializing the events from the events file, and sending the events to the event loader;
wherein the step of writing each event from each of the events files that were read to the data store, comprises:
loading and saving, via the events loader, each of the events from the events file to a row of an event table maintained at the data store; and
further comprising:
updating in the events uploader job detail table, via the events uploader manager, the status of the uploader job record that corresponds to the events file to indicate completed;
deleting, via the events uploader manager, the events file from the TEFSS; and
searching, via the events uploader manager, the TEFSS for another events file that includes the starting group identifier.
27. The non-transitory computer-readable medium of claim 21, when it is determined from the most recent uploader job record that the status for the most recent uploader job record indicates running, further comprising:
assigning, via the events uploader manager, a worker thread for the particular events file that was read, wherein the worker thread comprises: a particular events file reader and a particular event loader; and
upserting, at the events uploader job detail table of the data store via the events uploader manager, the most recent uploader job record for the second application server with updated job detail information; and
at the events file reader: reading the events file that includes the starting group identifier, serializing the events from the events file and sending the events to the event loader;
wherein the step of writing the plurality of events from each events file that was read to an event table maintained at the data store, comprises:
loading and saving, via the events loader, each of the events from the events file to a row of the event table maintained at the data store; and further comprising:
updating, in the events uploader job detail table via the events uploader manager, the status of the uploader job record that corresponds to the events file to indicate completed; deleting, via the events uploader manager, the events file from the TEFSS; and searching, via the events uploader manager, the TEFSS for another events file that includes the starting group identifier.
28. A computer-implemented method, comprising:
at a first application server:
receiving, at a capture service, events generated by a client application running on the first application server and providing the events to an in-memory buffer service, wherein the in-memory buffer service is configured to temporarily hold the events in a bounded buffer that is implemented in physical memory at the first application server, wherein the bounded buffer comprises a plurality of single-threaded buffer segments that are each configured to temporarily hold some of the events received from the capture service until a flush operation is performed, wherein each particular single-threaded buffer segment holds a plurality of events;
instantiating, via a consumer executor service, a plurality of consumer threads that each correspond to a particular single-threaded buffer segment, wherein each consumer thread comprises: an instance of an events reader that is configured to read events flushed from a corresponding single-threaded buffer segment and to add an identifier to each of the events; and an instance of a direct events writer that is configured to attempt to directly write each event to a data store; and
invoking an indirect events writer when events generated by the client application running on the first application server are unable to be directly written to a data store, wherein the indirect events writer comprises: a plurality of event capture threads each comprising an instance of an events file writer;
generating, at each events file writer of each event capture thread, a particular events file that comprises a plurality of events received from the in-memory buffer service when those events are unable to be directly written to the data store;
writing, via each events file writer of each event capture thread, the particular events file generated by that event capture thread to a temporary events file storage system (TEFSS) that is configured to temporarily store one or more events files for subsequent writing to the data store; and
at an events file uploader service of a second application server:
determining that the first application server is inactive;
reading at least one events file from the TEFSS that was written by the indirect events writer of the first application server; and
writing the plurality of events from each events file that was read to the data store.
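Read together, claims 1, 18, 21 and 28 recite a pipeline that the following minimal Java sketch tries to make concrete: events accumulate in a bounded, segmented in-memory buffer; on flush, a consumer attempts a direct write to the data store and falls back to writing an events file to the TEFSS; and a backup uploader on a second application server later replays that file. This is a hedged illustration only; the class names, the 1024-entry segment capacity, and the file-name scheme are assumptions, not the patented implementation.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class EventPipelineSketch {
    interface DataStore { void write(List<String> events) throws Exception; }
    interface Tefss {                        // temporary events file storage
        void writeFile(String name, List<String> events);
        List<String> readFile(String name);
    }

    // One single-threaded buffer segment of the bounded buffer.
    static final class BufferSegment {
        private final BlockingQueue<String> q = new ArrayBlockingQueue<>(1024);
        boolean offer(String event) { return q.offer(event); } // bounded: rejects when full
        List<String> flush() {
            List<String> drained = new ArrayList<>();
            q.drainTo(drained);
            return drained;
        }
    }

    // Consumer thread body: direct write first, indirect write as fallback.
    static void consume(BufferSegment segment, DataStore store, Tefss tefss,
                        String appServerName, String threadId) {
        List<String> events = segment.flush();
        if (events.isEmpty()) return;
        try {
            store.write(events);             // direct events writer
        } catch (Exception dataStoreUnavailable) {
            // Indirect events writer: persist an events file whose name carries
            // the app server name, a creation timestamp, and the writing thread.
            String fileName = appServerName + "-" + System.currentTimeMillis()
                    + "-" + threadId;
            tefss.writeFile(fileName, events);
        }
    }

    // Backup uploader on a second application server: once the first server is
    // detected as inactive, replay its events files into the data store.
    static void uploadFor(String fileName, Tefss tefss, DataStore store)
            throws Exception {
        store.write(tefss.readFile(fileName));
        // A fuller sketch would then delete the file from the TEFSS and mark
        // the corresponding uploader job record completed, as the claims recite.
    }
}

In this sketch the events file name concatenates the application server name, a creation timestamp, and the writing thread's identifier, mirroring the file-name fields recited in claims 4 and 20.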
US14/976,125 2015-09-15 2015-12-21 System having in-memory buffer service, temporary events file storage system and backup events file uploader service Active US9658801B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/976,125 US9658801B2 (en) 2015-09-15 2015-12-21 System having in-memory buffer service, temporary events file storage system and backup events file uploader service

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201562218871P 2015-09-15 2015-09-15
US14/976,125 US9658801B2 (en) 2015-09-15 2015-12-21 System having in-memory buffer service, temporary events file storage system and backup events file uploader service

Publications (2)

Publication Number Publication Date
US20170075627A1 US20170075627A1 (en) 2017-03-16
US9658801B2 true US9658801B2 (en) 2017-05-23

Family

ID=58238844

Family Applications (6)

Application Number Title Priority Date Filing Date
US14/976,115 Active US9632849B2 (en) 2015-09-15 2015-12-21 System having in-memory buffer service, temporary events file storage system and events file uploader service
US14/976,125 Active US9658801B2 (en) 2015-09-15 2015-12-21 System having in-memory buffer service, temporary events file storage system and backup events file uploader service
US15/443,219 Active US10037233B2 (en) 2015-09-15 2017-02-27 System having in-memory buffer service, temporary events file storage system and events file uploader service
US15/946,596 Active US10185603B2 (en) 2015-09-15 2018-04-05 System having in-memory buffer service, temporary events file storage system and backup events file uploader service
US16/202,239 Active US10496456B2 (en) 2015-09-15 2018-11-28 System having in-memory buffer service, temporary events file storage system and events file uploader service
US16/674,197 Active US10891175B2 (en) 2015-09-15 2019-11-05 System having in-memory buffer service, temporary events file storage system and events file uploader service

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US14/976,115 Active US9632849B2 (en) 2015-09-15 2015-12-21 System having in-memory buffer service, temporary events file storage system and events file uploader service

Family Applications After (4)

Application Number Title Priority Date Filing Date
US15/443,219 Active US10037233B2 (en) 2015-09-15 2017-02-27 System having in-memory buffer service, temporary events file storage system and events file uploader service
US15/946,596 Active US10185603B2 (en) 2015-09-15 2018-04-05 System having in-memory buffer service, temporary events file storage system and backup events file uploader service
US16/202,239 Active US10496456B2 (en) 2015-09-15 2018-11-28 System having in-memory buffer service, temporary events file storage system and events file uploader service
US16/674,197 Active US10891175B2 (en) 2015-09-15 2019-11-05 System having in-memory buffer service, temporary events file storage system and events file uploader service

Country Status (1)

Country Link
US (6) US9632849B2 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10579445B2 (en) 2018-04-06 2020-03-03 Salesforce.Com, Inc. Dynamic routing of events to destinations
USD894213S1 (en) 2018-12-14 2020-08-25 Salesforce.Com, Inc. Display screen or portion thereof with graphical user interface
US11068448B2 (en) 2019-01-07 2021-07-20 Salesforce.Com, Inc. Archiving objects in a database environment
US11281847B2 (en) 2016-09-30 2022-03-22 Salesforce.Com, Inc. Generating content objects using an integrated development environment
US11699352B2 (en) 2018-09-24 2023-07-11 Salesforce, Inc. Implementing an achievement platform using a database system
US11714694B2 (en) 2019-11-08 2023-08-01 Salesforce, Inc. Error notification mechanism for streaming events
US11816356B2 (en) 2021-07-06 2023-11-14 Pure Storage, Inc. Container orchestrator-aware storage system

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170228409A1 (en) * 2016-02-08 2017-08-10 Red Hat, Inc. In-memory journaling
US10467241B2 (en) 2017-03-24 2019-11-05 Ca, Inc. Dynamically provisioning instances of a single-tenant application for multi-tenant use
US11663205B2 (en) 2017-05-04 2023-05-30 Salesforce, Inc. Technologies for asynchronous querying
JP6751064B2 (en) * 2017-09-20 2020-09-02 株式会社東芝 Data search system, data search method, and program
US10496460B2 (en) 2017-11-15 2019-12-03 Bank Of America Corporation System for technology anomaly detection, triage and response using solution data modeling
US10749791B2 (en) * 2017-11-15 2020-08-18 Bank Of America Corporation System for rerouting electronic data transmissions based on generated solution data models
CN108574609B (en) * 2017-12-29 2021-05-14 视联动力信息技术股份有限公司 Transmission monitoring method and device
US10848473B1 (en) * 2018-02-02 2020-11-24 Cox Communications, Inc. Secure management of security keys to control access to a protected communication network
CN110427582A (en) * 2018-04-28 2019-11-08 华为技术有限公司 The read method and device of file cache
US11023835B2 (en) 2018-05-08 2021-06-01 Bank Of America Corporation System for decommissioning information technology assets using solution data modelling
US10970406B2 (en) 2018-05-08 2021-04-06 Bank Of America Corporation System for mitigating exposure associated with identified unmanaged devices in a network using solution data modelling
US10977283B2 (en) 2018-05-08 2021-04-13 Bank Of America Corporation System for mitigating intentional and unintentional exposure using solution data modelling
US10936984B2 (en) 2018-05-08 2021-03-02 Bank Of America Corporation System for mitigating exposure associated with identified impacts of technological system changes based on solution data modelling
US10761748B2 (en) * 2018-08-06 2020-09-01 Datera, Incorporated Future write in distributed system maintaining data storage integrity
US11233874B2 (en) 2018-08-06 2022-01-25 Vmware, Inc. Ordinary write in distributed system maintaining data storage integrity
US10783044B2 (en) 2018-09-24 2020-09-22 Salesforce, Inc. Method and apparatus for a mechanism of disaster recovery and instance refresh in an event recordation system
US11385945B2 (en) 2018-12-03 2022-07-12 Salesforce.Com, Inc. Method and system for event consumer management in an aggregated event platform
CN111382133B (en) * 2018-12-28 2023-04-11 广东亿迅科技有限公司 Distributed high-performance quasi-real-time data flow calculation method and device
US10776186B2 (en) 2019-02-11 2020-09-15 Salesforce.Com, Inc. Method and system for detection and handling of discontinuities in event delivery systems
US11061958B2 (en) * 2019-11-14 2021-07-13 Jetblue Airways Corporation Systems and method of generating custom messages based on rule-based database queries in a cloud platform
US11048519B2 (en) 2019-11-22 2021-06-29 T-Mobile Usa, Inc. System and method for asynchronous distribution of operations that require synchronous execution
US11570099B2 (en) 2020-02-04 2023-01-31 Bank Of America Corporation System and method for autopartitioning and processing electronic resources
US11526489B1 (en) * 2020-06-03 2022-12-13 Egnyte, Inc. System and method for grouping and executing transactions while ensuring hybrid cloud cache integrity and consistency
JP7406461B2 (en) * 2020-06-19 2023-12-27 株式会社オービック Data processing device, data processing method, and data processing program
US11481120B2 (en) 2020-11-30 2022-10-25 Red Hat, Inc. Reducing wear on zoned storage devices for storing digital data

Citations (132)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5577188A (en) 1994-05-31 1996-11-19 Future Labs, Inc. Method to provide for virtual screen overlay
US5608872A (en) 1993-03-19 1997-03-04 Ncr Corporation System for allowing all remote computers to perform annotation on an image and replicating the annotated image on the respective displays of other computers
US5649104A (en) 1993-03-19 1997-07-15 Ncr Corporation System for allowing user of any computer to draw image over that generated by the host computer and replicating the drawn image to other computers
US5715450A (en) 1995-09-27 1998-02-03 Siebel Systems, Inc. Method of selecting and presenting data from a database using a query language to a user of a computer system
US5821937A (en) 1996-02-23 1998-10-13 Netsuite Development, L.P. Computer method for updating a network design
US5831610A (en) 1996-02-23 1998-11-03 Netsuite Development L.P. Designing networks
US5873096A (en) 1997-10-08 1999-02-16 Siebel Systems, Inc. Method of maintaining a network of partially replicated database system
US5918159A (en) 1997-08-04 1999-06-29 Fomukong; Mundi Location reporting satellite paging system with optional blocking of location reporting
US5963953A (en) 1998-03-30 1999-10-05 Siebel Systems, Inc. Method and system for product configuration
US6092083A (en) 1997-02-26 2000-07-18 Siebel Systems, Inc. Database management system which synchronizes an enterprise server and a workgroup user client using a docking agent
US6161149A (en) 1998-03-13 2000-12-12 Groupserve, Inc. Centrifugal communication and collaboration method
US6169534B1 (en) 1997-06-26 2001-01-02 Upshot.Com Graphical user interface for customer information management
US6178425B1 (en) 1997-02-26 2001-01-23 Siebel Systems, Inc. Method of determining the visibility to a remote database client of a plurality of database transactions using simplified visibility rules
US6216135B1 (en) 1997-02-26 2001-04-10 Siebel Systems, Inc. Method of determining visibility to a remote database client of a plurality of database transactions having variable visibility strengths
US6233617B1 (en) 1997-02-26 2001-05-15 Siebel Systems, Inc. Determining the visibility to a remote database client
US6266669B1 (en) 1997-02-28 2001-07-24 Siebel Systems, Inc. Partially replicated distributed database with multiple levels of remote clients
US6295530B1 (en) 1995-05-15 2001-09-25 Andrew M. Ritchie Internet service of differently formatted viewable data signals including commands for browser execution
US20010044791A1 (en) 2000-04-14 2001-11-22 Richter James Neal Automated adaptive classification system for bayesian knowledge networks
US6324568B1 (en) 1999-11-30 2001-11-27 Siebel Systems, Inc. Method and system for distributing objects over a network
US6324693B1 (en) 1997-03-12 2001-11-27 Siebel Systems, Inc. Method of synchronizing independently distributed software and database schema
US6336137B1 (en) 2000-03-31 2002-01-01 Siebel Systems, Inc. Web client-server system and method for incompatible page markup and presentation languages
USD454139S1 (en) 2001-02-20 2002-03-05 Rightnow Technologies Display screen for a computer
US6367077B1 (en) 1997-02-27 2002-04-02 Siebel Systems, Inc. Method of upgrading a software application in the presence of user modifications
US6393605B1 (en) 1998-11-18 2002-05-21 Siebel Systems, Inc. Apparatus and system for efficient delivery and deployment of an application
US20020072951A1 (en) 1999-03-03 2002-06-13 Michael Lee Marketing support database management method, system and program product
US20020082892A1 (en) 1998-08-27 2002-06-27 Keith Raffel Method and apparatus for network-based sales force management
US6434550B1 (en) 2000-04-14 2002-08-13 Rightnow Technologies, Inc. Temporal updates of relevancy rating of retrieved information in an information search system
US6446089B1 (en) 1997-02-26 2002-09-03 Siebel Systems, Inc. Method of using a cache to determine the visibility to a remote database client of a plurality of database transactions
US20020143997A1 (en) 2001-03-28 2002-10-03 Xiaofei Huang Method and system for direct server synchronization with a computing device
US20020140731A1 (en) 2001-03-28 2002-10-03 Pavitra Subramaniam Engine to present a user interface based on a logical structure, such as one for a customer relationship management system, across a web site
US20020162090A1 (en) 2001-04-30 2002-10-31 Parnell Karen P. Polylingual simultaneous shipping of software
US20020165742A1 (en) 2000-03-31 2002-11-07 Mark Robins Feature centric release manager method and system
US20030004971A1 (en) 2001-06-29 2003-01-02 Gong Wen G. Automatic generation of data models and accompanying user interfaces
US20030018830A1 (en) 2001-02-06 2003-01-23 Mingte Chen Adaptive communication application programming interface
US20030018705A1 (en) 2001-03-31 2003-01-23 Mingte Chen Media-independent communication server
US6535909B1 (en) 1999-11-18 2003-03-18 Contigo Software, Inc. System and method for record and playback of collaborative Web browsing session
US20030066031A1 (en) 2001-09-28 2003-04-03 Siebel Systems, Inc. Method and system for supporting user navigation in a browser environment
US20030066032A1 (en) 2001-09-28 2003-04-03 Siebel Systems,Inc. System and method for facilitating user interaction in a browser environment
US20030070000A1 (en) 2001-09-29 2003-04-10 John Coker Computing system and method to implicitly commit unsaved data for a World Wide Web application
US20030070004A1 (en) 2001-09-29 2003-04-10 Anil Mukundan Method, apparatus, and system for implementing a framework to support a web-based application
US20030069936A1 (en) 2001-10-09 2003-04-10 Warner Douglas K. Method for routing electronic correspondence based on the level and type of emotion contained therein
US20030070005A1 (en) 2001-09-29 2003-04-10 Anil Mukundan Method, apparatus, and system for implementing view caching in a framework to support web-based applications
US20030074418A1 (en) 2001-09-29 2003-04-17 John Coker Method, apparatus and system for a mobile web client
US6553563B2 (en) 1998-11-30 2003-04-22 Siebel Systems, Inc. Development tool, method, and system for client server applications
US6560461B1 (en) 1997-08-04 2003-05-06 Mundi Fomukong Authorized location reporting paging system
US6574635B2 (en) 1999-03-03 2003-06-03 Siebel Systems, Inc. Application instantiation based upon attributes and values stored in a meta data repository, including tiering of application layers objects and components
US6577726B1 (en) 2000-03-31 2003-06-10 Siebel Systems, Inc. Computer telephony integration hotelling method and system
US6601087B1 (en) 1998-11-18 2003-07-29 Webex Communications, Inc. Instant document sharing
US6604117B2 (en) 1996-03-19 2003-08-05 Siebel Systems, Inc. Method of maintaining a network of partially replicated database system
US20030151633A1 (en) 2002-02-13 2003-08-14 David George Method and system for enabling connectivity to a data system
US20030159136A1 (en) 2001-09-28 2003-08-21 Huang Xiao Fei Method and system for server synchronization with a computing device
US6621834B1 (en) 1999-11-05 2003-09-16 Raindance Communications, Inc. System and method for voice transmission over network protocols
US20030189600A1 (en) 2002-03-29 2003-10-09 Prasad Gune Defining an approval process for requests for approval
US20030204427A1 (en) 2002-03-29 2003-10-30 Prasad Gune User interface for processing requests for approval
US20030206192A1 (en) 2001-03-31 2003-11-06 Mingte Chen Asynchronous message push to web browser
US6654032B1 (en) 1999-12-23 2003-11-25 Webex Communications, Inc. Instant sharing of documents on a remote server
US20030225730A1 (en) 2002-06-03 2003-12-04 Rightnow Technologies, Inc. System and method for generating a dynamic interface via a communications network
US6665655B1 (en) 2000-04-14 2003-12-16 Rightnow Technologies, Inc. Implicit rating of retrieved information in an information search system
US6665648B2 (en) 1998-11-30 2003-12-16 Siebel Systems, Inc. State models for monitoring process
US20040001092A1 (en) 2002-06-27 2004-01-01 Rothwein Thomas M. Prototyping graphical user interfaces
US20040010489A1 (en) 2002-07-12 2004-01-15 Rightnow Technologies, Inc. Method for providing search-specific web pages in a network computing environment
US20040015981A1 (en) 2002-06-27 2004-01-22 Coker John L. Efficient high-interactivity user interface for client-server applications
US20040027388A1 (en) 2002-06-27 2004-02-12 Eric Berg Method and apparatus to facilitate development of a customer-specific business process model
US6711565B1 (en) 2001-06-18 2004-03-23 Siebel Systems, Inc. Method, apparatus, and system for previewing search results
US6724399B1 (en) 2001-09-28 2004-04-20 Siebel Systems, Inc. Methods and apparatus for enabling keyboard accelerators in applications implemented via a browser
US6728960B1 (en) 1998-11-18 2004-04-27 Siebel Systems, Inc. Techniques for managing multiple threads in a browser environment
US6728702B1 (en) 2001-06-18 2004-04-27 Siebel Systems, Inc. System and method to implement an integrated search center supporting a full-text search and query on a database
US6732111B2 (en) 1998-03-03 2004-05-04 Siebel Systems, Inc. Method, apparatus, system, and program product for attaching files and other objects to a partially replicated database
US6732095B1 (en) 2001-04-13 2004-05-04 Siebel Systems, Inc. Method and apparatus for mapping between XML and relational representations
US6732100B1 (en) 2000-03-31 2004-05-04 Siebel Systems, Inc. Database access method and system for user role defined access
US20040128001A1 (en) 2002-08-28 2004-07-01 Levin Issac Stephen Method and apparatus for an integrated process modeller
US6763501B1 (en) 2000-06-09 2004-07-13 Webex Communications, Inc. Remote document serving
US6763351B1 (en) 2001-06-18 2004-07-13 Siebel Systems, Inc. Method, apparatus, and system for attaching search results
US6768904B2 (en) 2000-10-11 2004-07-27 Lg Electronics Inc. Data communication method using mobile terminal
US6772229B1 (en) 2000-11-13 2004-08-03 Groupserve, Inc. Centrifugal communication and collaboration method
US6782383B2 (en) 2001-06-18 2004-08-24 Siebel Systems, Inc. System and method to implement a persistent and dismissible search center frame
US20040186860A1 (en) 2003-03-21 2004-09-23 Wen-Hsin Lee Method and architecture for providing data-change alerts to external applications via a push service
US20040193510A1 (en) 2003-03-25 2004-09-30 Catahan Nardo B. Modeling of order data
US20040199536A1 (en) 2003-03-24 2004-10-07 Barnes Leon Maria Theresa Product common object
US20040199543A1 (en) 2003-04-04 2004-10-07 Braud Luke A. Facilitating data manipulation in a browser-based user interface of an enterprise business application
US20040199489A1 (en) 2003-03-24 2004-10-07 Barnes-Leon Maria Theresa Custom common object
US6804330B1 (en) 2002-01-04 2004-10-12 Siebel Systems, Inc. Method and system for accessing CRM data via voice
US6826745B2 (en) 1998-11-30 2004-11-30 Siebel Systems, Inc. System and method for smart scripting call centers and configuration thereof
US6826582B1 (en) 2001-09-28 2004-11-30 Emc Corporation Method and system for using file systems for content management
US6829655B1 (en) 2001-03-28 2004-12-07 Siebel Systems, Inc. Method and system for server synchronization with a computing device via a companion device
US20040249854A1 (en) 2003-03-24 2004-12-09 Barnes-Leon Maria Theresa Common common object
US20040260534A1 (en) 2003-06-19 2004-12-23 Pak Wai H. Intelligent data search
US20040260659A1 (en) 2003-06-23 2004-12-23 Len Chan Function space reservation system
US20040268299A1 (en) 2003-06-30 2004-12-30 Shu Lei Application user interface template with free-form layout
US6842748B1 (en) 2000-04-14 2005-01-11 Rightnow Technologies, Inc. Usage based strength between related information in an information retrieval system
US6850895B2 (en) 1998-11-30 2005-02-01 Siebel Systems, Inc. Assignment manager
US20050050555A1 (en) 2003-08-28 2005-03-03 Exley Richard Mark Universal application network architecture
US20060021019A1 (en) 2004-07-21 2006-01-26 International Business Machines Corporation Method and system for federated provisioning
US7024652B1 (en) * 2003-11-13 2006-04-04 Cadence Design Systems, Inc. System and method for adaptive partitioning of circuit components during simulation
US7062502B1 (en) 2001-12-28 2006-06-13 Kesler John N Automated generation of dynamic data entry user interface for relational database management systems
US7069231B1 (en) 2000-07-20 2006-06-27 Oracle International Corporation Methods and systems for defining, applying and executing customer care relationship plans
US20060184410A1 (en) * 2003-12-30 2006-08-17 Shankar Ramamurthy System and method for capture of user actions and use of capture data in business processes
US20060200520A1 (en) * 1999-11-18 2006-09-07 Todd Vernon System and method for record and playback of collaborative communications session
US7181758B1 (en) 1994-07-25 2007-02-20 Data Innovation, L.L.C. Information distribution and processing system
US7289976B2 (en) 2004-12-23 2007-10-30 Microsoft Corporation Easy-to-use data report specification
US7340411B2 (en) 1998-02-26 2008-03-04 Cook Rachael L System and method for generating, capturing, and managing customer lead information over a computer network
US7356482B2 (en) 1998-12-18 2008-04-08 Alternative Systems, Inc. Integrated change management unit
US7412455B2 (en) 2003-04-30 2008-08-12 Dillon David M Software framework that facilitates design and implementation of database applications
US20090063414A1 (en) 2007-08-31 2009-03-05 Yahoo! Inc. System and method for generating a playlist from a mood gradient
US7508789B2 (en) 1994-04-07 2009-03-24 Data Innovation Llc Information distribution and processing system
US20090100342A1 (en) 2007-10-12 2009-04-16 Gabriel Jakobson Method and system for presenting address and mapping information
US20090177744A1 (en) 2008-01-04 2009-07-09 Yahoo! Inc. Identifying and employing social network relationships
US7620655B2 (en) 2003-05-07 2009-11-17 Enecto Ab Method, device and computer program product for identifying visitors of websites
US7698160B2 (en) 1999-05-07 2010-04-13 Virtualagility, Inc System for performing collaborative tasks
US7779475B2 (en) 2006-07-31 2010-08-17 Petnote Llc Software-based method for gaining privacy by affecting the screen of a computing device
US8014943B2 (en) 2008-05-08 2011-09-06 Gabriel Jakobson Method and system for displaying social networking navigation information
US20110218958A1 (en) 2010-03-08 2011-09-08 Salesforce.Com, Inc. System, method and computer program product for performing one or more actions utilizing a uniform resource locator
US8032297B2 (en) 2008-05-08 2011-10-04 Gabriel Jakobson Method and system for displaying navigation information on an electronic map
US20110247051A1 (en) 2010-04-01 2011-10-06 Salesforce.Com, Inc. System, method and computer program product for performing one or more actions based on a determined access permissions for a plurality of users
US8082301B2 (en) 2006-11-10 2011-12-20 Virtual Agility, Inc. System for supporting collaborative activity
US8095413B1 (en) 1999-05-07 2012-01-10 VirtualAgility, Inc. Processing management information
US20120042218A1 (en) 2010-08-13 2012-02-16 Salesforce.Com, Inc. Debugging site errors by an admin as a guest user in a multi-tenant database environment
US8209308B2 (en) 2006-05-01 2012-06-26 Rueben Steven L Method for presentation of revisions of an electronic document
US20120233137A1 (en) 2006-05-01 2012-09-13 Gabriel Jakobson Presentation of document history in a web browsing application
US8490025B2 (en) 2008-02-01 2013-07-16 Gabriel Jakobson Displaying content associated with electronic mapping systems
US8504945B2 (en) 2008-02-01 2013-08-06 Gabriel Jakobson Method and system for associating content with map zoom function
US8510045B2 (en) 2009-12-22 2013-08-13 Steven L. Rueben Digital maps displaying search-resulting points-of-interest in user delimited regions
US8510664B2 (en) 2008-09-06 2013-08-13 Steven L. Rueben Method and system for displaying email thread information
US20130212497A1 (en) 2012-02-10 2013-08-15 Liveperson, Inc. Analytics driven engagement
US20130218948A1 (en) 2012-02-17 2013-08-22 Gabriel Jakobson Variable speed collaborative web browsing system
US20130218949A1 (en) 2012-02-17 2013-08-22 Gabriel Jakobson Collaborative web browsing system integrated with social networks
US20130218966A1 (en) 2012-02-17 2013-08-22 Gabriel Jakobson Collaborative web browsing system having document object model element interaction detection
US20130247216A1 (en) 2008-11-03 2013-09-19 Salesforce.Com, Inc System, method and computer program product for publicly providing web content of a tenant using a multi-tenant on-demand database service
US8566301B2 (en) 2006-05-01 2013-10-22 Steven L. Rueben Document revisions in a collaborative computing environment
US20130326277A1 (en) * 2012-05-31 2013-12-05 International Business Machines Corporation Data lifecycle management
US8646103B2 (en) 2008-06-30 2014-02-04 Gabriel Jakobson Method and system for securing online identities
US20140136274A1 (en) * 2012-11-12 2014-05-15 Sap Ag Providing multiple level process intelligence and the ability to transition between levels

Family Cites Families (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3014943A (en) 1957-06-24 1961-12-26 Bayer Ag Phosphonic acid esters
NL284199A (en) 1957-10-10
US3032297A (en) 1958-06-06 1962-05-01 Sarl Rech S Etudes Production Aircraft landing gear with tandem wheels
JPH10504127A (en) 1995-06-09 1998-04-14 フィリップス エレクトロニクス ネムローゼ フェンノートシャップ Method for enabling a user to derive a particular information item from a set of information items, and a system for performing such a method
US5983227A (en) 1997-06-12 1999-11-09 Yahoo, Inc. Dynamic page generator
US6236978B1 (en) 1997-11-14 2001-05-22 New York University System and method for dynamic profiling of users in one-to-one applications
AU6410699A (en) 1998-10-13 2000-05-01 Chris Cheah Method and system for controlled distribution of information over a network
US6288717B1 (en) 1999-03-19 2001-09-11 Terry Dunkle Headline posting algorithm
US6907566B1 (en) 1999-04-02 2005-06-14 Overture Services, Inc. Method and system for optimum placement of advertisements on a webpage
US6411949B1 (en) 1999-08-12 2002-06-25 Koninklijke Philips Electronics N.V., Customizing database information for presentation with media selections
US6671714B1 (en) 1999-11-23 2003-12-30 Frank Michael Weyer Method, apparatus and business system for online communications with online and offline recipients
US7136860B2 (en) 2000-02-14 2006-11-14 Overture Services, Inc. System and method to determine the validity of an interaction on a network
EP1307831A4 (en) 2000-07-11 2007-05-09 Yahoo Inc Online playback system with community bias
US7406501B2 (en) 2003-03-24 2008-07-29 Yahoo! Inc. System and method for instant messaging using an e-mail protocol
US7269590B2 (en) 2004-01-29 2007-09-11 Yahoo! Inc. Method and system for customizing views of information associated with a social network user
US7747648B1 (en) 2005-02-14 2010-06-29 Yahoo! Inc. World modeling using a relationship network with communication channels to entities
US7991764B2 (en) 2005-07-22 2011-08-02 Yogesh Chunilal Rathod Method and system for communication, publishing, searching, sharing and dynamically providing a journal feed
US9202241B2 (en) 2005-11-30 2015-12-01 John Nicholas and Kristin Gross System and method of delivering content based advertising
US7856445B2 (en) 2005-11-30 2010-12-21 John Nicholas and Kristin Gross System and method of delivering RSS content based advertising
US7827208B2 (en) 2006-08-11 2010-11-02 Facebook, Inc. Generating a feed of stories personalized for members of a social network
US7945653B2 (en) 2006-10-11 2011-05-17 Facebook, Inc. Tagging digital media
US7853881B1 (en) 2006-07-03 2010-12-14 ISQ Online Multi-user on-line real-time virtual social networks based upon communities of interest for entertainment, information or e-commerce purposes
US7730478B2 (en) 2006-10-04 2010-06-01 Salesforce.Com, Inc. Method and system for allowing access to developed applications via a multi-tenant on-demand database service
WO2008045792A2 (en) 2006-10-06 2008-04-17 Technorati, Inc. Methods and apparatus for conversational advertising
US8073850B1 (en) 2007-01-19 2011-12-06 Wordnetworks, Inc. Selecting key phrases for serving contextually relevant content
US9208251B2 (en) 2007-01-19 2015-12-08 Linkedin Corporation Computer-based evaluation tool for selecting personalized content for users
US8166013B2 (en) 2007-11-05 2012-04-24 Intuit Inc. Method and system for crawling, mapping and extracting information associated with a business using heuristic and semantic analysis
US8489577B2 (en) 2008-03-17 2013-07-16 Fuhu Holdings, Inc. System and method for defined searching and web crawling
CA2665669A1 (en) 2008-05-09 2009-11-09 Research In Motion Limited Method for cell selection in a radio access network
US20100125523A1 (en) 2008-11-18 2010-05-20 Peer 39 Inc. Method and a system for certifying a document for advertisement appropriateness
US20100241576A1 (en) 2009-03-23 2010-09-23 Zohar Beeri System, method and computer program product for social network based transactions
US20110209067A1 (en) 2010-02-19 2011-08-25 Bogess Keandre System and Method for Website User Valuation

Patent Citations (154)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5608872A (en) 1993-03-19 1997-03-04 Ncr Corporation System for allowing all remote computers to perform annotation on an image and replicating the annotated image on the respective displays of other computers
US5649104A (en) 1993-03-19 1997-07-15 Ncr Corporation System for allowing user of any computer to draw image over that generated by the host computer and replicating the drawn image to other computers
US5761419A (en) 1993-03-19 1998-06-02 Ncr Corporation Remote collaboration system including first program means translating user inputs into annotations and running on all computers while second program means runs on one computer
US5819038A (en) 1993-03-19 1998-10-06 Ncr Corporation Collaboration system for producing copies of image generated by first program on first computer on other computers and annotating the image by second program
US7508789B2 (en) 1994-04-07 2009-03-24 Data Innovation Llc Information distribution and processing system
US8457545B2 (en) 1994-04-07 2013-06-04 Online News Link Llc Information distribution and processing system
US5577188A (en) 1994-05-31 1996-11-19 Future Labs, Inc. Method to provide for virtual screen overlay
US7181758B1 (en) 1994-07-25 2007-02-20 Data Innovation, L.L.C. Information distribution and processing system
US6826565B2 (en) 1995-05-15 2004-11-30 Ablaise Limited Method and apparatus for serving files to browsing clients
US6295530B1 (en) 1995-05-15 2001-09-25 Andrew M. Ritchie Internet service of differently formatted viewable data signals including commands for browser execution
US5715450A (en) 1995-09-27 1998-02-03 Siebel Systems, Inc. Method of selecting and presenting data from a database using a query language to a user of a computer system
US5821937A (en) 1996-02-23 1998-10-13 Netsuite Development, L.P. Computer method for updating a network design
US5831610A (en) 1996-02-23 1998-11-03 Netsuite Development L.P. Designing networks
US6189011B1 (en) 1996-03-19 2001-02-13 Siebel Systems, Inc. Method of maintaining a network of partially replicated database system
US6604117B2 (en) 1996-03-19 2003-08-05 Siebel Systems, Inc. Method of maintaining a network of partially replicated database system
US6178425B1 (en) 1997-02-26 2001-01-23 Siebel Systems, Inc. Method of determining the visibility to a remote database client of a plurality of database transactions using simplified visibility rules
US6092083A (en) 1997-02-26 2000-07-18 Siebel Systems, Inc. Database management system which synchronizes an enterprise server and a workgroup user client using a docking agent
US6216135B1 (en) 1997-02-26 2001-04-10 Siebel Systems, Inc. Method of determining visibility to a remote database client of a plurality of database transactions having variable visibility strengths
US6233617B1 (en) 1997-02-26 2001-05-15 Siebel Systems, Inc. Determining the visibility to a remote database client
US6446089B1 (en) 1997-02-26 2002-09-03 Siebel Systems, Inc. Method of using a cache to determine the visibility to a remote database client of a plurality of database transactions
US6684438B2 (en) 1997-02-26 2004-02-03 Siebel Systems, Inc. Method of using cache to determine the visibility to a remote database client of a plurality of database transactions
US20020129352A1 (en) 1997-02-27 2002-09-12 Brodersen Robert A. Method and apparatus for upgrading a software application in the presence of user modifications
US6367077B1 (en) 1997-02-27 2002-04-02 Siebel Systems, Inc. Method of upgrading a software application in the presence of user modifications
US6405220B1 (en) 1997-02-28 2002-06-11 Siebel Systems, Inc. Partially replicated distributed database with multiple levels of remote clients
US6266669B1 (en) 1997-02-28 2001-07-24 Siebel Systems, Inc. Partially replicated distributed database with multiple levels of remote clients
US6754681B2 (en) 1997-02-28 2004-06-22 Siebel Systems, Inc. Partially replicated distributed database with multiple levels of remote clients
US6324693B1 (en) 1997-03-12 2001-11-27 Siebel Systems, Inc. Method of synchronizing independently distributed software and database schema
US6169534B1 (en) 1997-06-26 2001-01-02 Upshot.Com Graphical user interface for customer information management
US5918159A (en) 1997-08-04 1999-06-29 Fomukong; Mundi Location reporting satellite paging system with optional blocking of location reporting
US6560461B1 (en) 1997-08-04 2003-05-06 Mundi Fomukong Authorized location reporting paging system
US5873096A (en) 1997-10-08 1999-02-16 Siebel Systems, Inc. Method of maintaining a network of partially replicated database system
US7340411B2 (en) 1998-02-26 2008-03-04 Cook Rachael L System and method for generating, capturing, and managing customer lead information over a computer network
US6732111B2 (en) 1998-03-03 2004-05-04 Siebel Systems, Inc. Method, apparatus, system, and program product for attaching files and other objects to a partially replicated database
US8015495B2 (en) 1998-03-13 2011-09-06 Groupserve It Trust Llc Centrifugal communication and collaboration method
US6161149A (en) 1998-03-13 2000-12-12 Groupserve, Inc. Centrifugal communication and collaboration method
US5963953A (en) 1998-03-30 1999-10-05 Siebel Systems, Inc. Method and system for product configuration
US20020082892A1 (en) 1998-08-27 2002-06-27 Keith Raffel Method and apparatus for network-based sales force management
US6601087B1 (en) 1998-11-18 2003-07-29 Webex Communications, Inc. Instant document sharing
US6549908B1 (en) 1998-11-18 2003-04-15 Siebel Systems, Inc. Methods and apparatus for interpreting user selections in the context of a relation distributed as a set of orthogonalized sub-relations
US6393605B1 (en) 1998-11-18 2002-05-21 Siebel Systems, Inc. Apparatus and system for efficient delivery and deployment of an application
US6728960B1 (en) 1998-11-18 2004-04-27 Siebel Systems, Inc. Techniques for managing multiple threads in a browser environment
US20050091098A1 (en) 1998-11-30 2005-04-28 Siebel Systems, Inc. Assignment manager
US6665648B2 (en) 1998-11-30 2003-12-16 Siebel Systems, Inc. State models for monitoring process
US6553563B2 (en) 1998-11-30 2003-04-22 Siebel Systems, Inc. Development tool, method, and system for client server applications
US6826745B2 (en) 1998-11-30 2004-11-30 Siebel Systems, Inc. System and method for smart scripting call centers and configuration thereof
US6850895B2 (en) 1998-11-30 2005-02-01 Siebel Systems, Inc. Assignment manager
US7356482B2 (en) 1998-12-18 2008-04-08 Alternative Systems, Inc. Integrated change management unit
US8484111B2 (en) 1998-12-18 2013-07-09 Applications In Internet Time, Llc Integrated change management unit
US6574635B2 (en) 1999-03-03 2003-06-03 Siebel Systems, Inc. Application instantiation based upon attributes and values stored in a meta data repository, including tiering of application layers objects and components
US20030120675A1 (en) 1999-03-03 2003-06-26 Siebel Systems, Inc. Application instantiation based upon attributes and values stored in a meta data repository, including tiering of application layers, objects, and components
US20020072951A1 (en) 1999-03-03 2002-06-13 Michael Lee Marketing support database management method, system and program product
US7698160B2 (en) 1999-05-07 2010-04-13 Virtualagility, Inc System for performing collaborative tasks
US8275836B2 (en) 1999-05-07 2012-09-25 Virtualagility Inc. System and method for supporting collaborative activity
US8095594B2 (en) 1999-05-07 2012-01-10 VirtualAgility, Inc. System for performing collaborative tasks
US8095413B1 (en) 1999-05-07 2012-01-10 VirtualAgility, Inc. Processing management information
US6621834B1 (en) 1999-11-05 2003-09-16 Raindance Communications, Inc. System and method for voice transmission over network protocols
US20060200520A1 (en) * 1999-11-18 2006-09-07 Todd Vernon System and method for record and playback of collaborative communications session
US6535909B1 (en) 1999-11-18 2003-03-18 Contigo Software, Inc. System and method for record and playback of collaborative Web browsing session
US6324568B1 (en) 1999-11-30 2001-11-27 Siebel Systems, Inc. Method and system for distributing objects over a network
US6604128B2 (en) 1999-11-30 2003-08-05 Siebel Systems, Inc. Method and system for distributing objects over a network
US20030187921A1 (en) 1999-11-30 2003-10-02 Siebel Systems, Inc. Method and system for distributing objects over a network
US6654032B1 (en) 1999-12-23 2003-11-25 Webex Communications, Inc. Instant sharing of documents on a remote server
US6732100B1 (en) 2000-03-31 2004-05-04 Siebel Systems, Inc. Database access method and system for user role defined access
US6609150B2 (en) 2000-03-31 2003-08-19 Siebel Systems, Inc. Web client-server system and method for incompatible page markup and presentation languages
US20020165742A1 (en) 2000-03-31 2002-11-07 Mark Robins Feature centric release manager method and system
US6577726B1 (en) 2000-03-31 2003-06-10 Siebel Systems, Inc. Computer telephony integration hotelling method and system
US6336137B1 (en) 2000-03-31 2002-01-01 Siebel Systems, Inc. Web client-server system and method for incompatible page markup and presentation languages
US20010044791A1 (en) 2000-04-14 2001-11-22 Richter James Neal Automated adaptive classification system for bayesian knowledge networks
US6665655B1 (en) 2000-04-14 2003-12-16 Rightnow Technologies, Inc. Implicit rating of retrieved information in an information search system
US6842748B1 (en) 2000-04-14 2005-01-11 Rightnow Technologies, Inc. Usage based strength between related information in an information retrieval system
US6434550B1 (en) 2000-04-14 2002-08-13 Rightnow Technologies, Inc. Temporal updates of relevancy rating of retrieved information in an information search system
US6763501B1 (en) 2000-06-09 2004-07-13 Webex Communications, Inc. Remote document serving
US7069231B1 (en) 2000-07-20 2006-06-27 Oracle International Corporation Methods and systems for defining, applying and executing customer care relationship plans
US6768904B2 (en) 2000-10-11 2004-07-27 Lg Electronics Inc. Data communication method using mobile terminal
US6772229B1 (en) 2000-11-13 2004-08-03 Groupserve, Inc. Centrifugal communication and collaboration method
US20030018830A1 (en) 2001-02-06 2003-01-23 Mingte Chen Adaptive communication application programming interface
USD454139S1 (en) 2001-02-20 2002-03-05 Rightnow Technologies Display screen for a computer
US20020143997A1 (en) 2001-03-28 2002-10-03 Xiaofei Huang Method and system for direct server synchronization with a computing device
US6829655B1 (en) 2001-03-28 2004-12-07 Siebel Systems, Inc. Method and system for server synchronization with a computing device via a companion device
US20020140731A1 (en) 2001-03-28 2002-10-03 Pavitra Subramaniam Engine to present a user interface based on a logical structure, such as one for a customer relationship management system, across a web site
US20030018705A1 (en) 2001-03-31 2003-01-23 Mingte Chen Media-independent communication server
US20030206192A1 (en) 2001-03-31 2003-11-06 Mingte Chen Asynchronous message push to web browser
US6732095B1 (en) 2001-04-13 2004-05-04 Siebel Systems, Inc. Method and apparatus for mapping between XML and relational representations
US20020162090A1 (en) 2001-04-30 2002-10-31 Parnell Karen P. Polylingual simultaneous shipping of software
US6782383B2 (en) 2001-06-18 2004-08-24 Siebel Systems, Inc. System and method to implement a persistent and dismissible search center frame
US6711565B1 (en) 2001-06-18 2004-03-23 Siebel Systems, Inc. Method, apparatus, and system for previewing search results
US6763351B1 (en) 2001-06-18 2004-07-13 Siebel Systems, Inc. Method, apparatus, and system for attaching search results
US6728702B1 (en) 2001-06-18 2004-04-27 Siebel Systems, Inc. System and method to implement an integrated search center supporting a full-text search and query on a database
US20030004971A1 (en) 2001-06-29 2003-01-02 Gong Wen G. Automatic generation of data models and accompanying user interfaces
US6724399B1 (en) 2001-09-28 2004-04-20 Siebel Systems, Inc. Methods and apparatus for enabling keyboard accelerators in applications implemented via a browser
US6826582B1 (en) 2001-09-28 2004-11-30 Emc Corporation Method and system for using file systems for content management
US20030066032A1 (en) 2001-09-28 2003-04-03 Siebel Systems,Inc. System and method for facilitating user interaction in a browser environment
US20030066031A1 (en) 2001-09-28 2003-04-03 Siebel Systems, Inc. Method and system for supporting user navigation in a browser environment
US20030159136A1 (en) 2001-09-28 2003-08-21 Huang Xiao Fei Method and system for server synchronization with a computing device
US20030070004A1 (en) 2001-09-29 2003-04-10 Anil Mukundan Method, apparatus, and system for implementing a framework to support a web-based application
US20030070005A1 (en) 2001-09-29 2003-04-10 Anil Mukundan Method, apparatus, and system for implementing view caching in a framework to support web-based applications
US20030074418A1 (en) 2001-09-29 2003-04-17 John Coker Method, apparatus and system for a mobile web client
US20030070000A1 (en) 2001-09-29 2003-04-10 John Coker Computing system and method to implicitly commit unsaved data for a World Wide Web application
US20030069936A1 (en) 2001-10-09 2003-04-10 Warner Douglas K. Method for routing electronic correspondence based on the level and type of emotion contained therein
US7401094B1 (en) 2001-12-28 2008-07-15 Kesler John N Automated generation of dynamic data entry user interface for relational database management systems
US7062502B1 (en) 2001-12-28 2006-06-13 Kesler John N Automated generation of dynamic data entry user interface for relational database management systems
US6804330B1 (en) 2002-01-04 2004-10-12 Siebel Systems, Inc. Method and system for accessing CRM data via voice
US20030151633A1 (en) 2002-02-13 2003-08-14 David George Method and system for enabling connectivity to a data system
US20030204427A1 (en) 2002-03-29 2003-10-30 Prasad Gune User interface for processing requests for approval
US20030189600A1 (en) 2002-03-29 2003-10-09 Prasad Gune Defining an approval process for requests for approval
US6850949B2 (en) 2002-06-03 2005-02-01 Right Now Technologies, Inc. System and method for generating a dynamic interface via a communications network
US20030225730A1 (en) 2002-06-03 2003-12-04 Rightnow Technologies, Inc. System and method for generating a dynamic interface via a communications network
US20040027388A1 (en) 2002-06-27 2004-02-12 Eric Berg Method and apparatus to facilitate development of a customer-specific business process model
US20040001092A1 (en) 2002-06-27 2004-01-01 Rothwein Thomas M. Prototyping graphical user interfaces
US20040015981A1 (en) 2002-06-27 2004-01-22 Coker John L. Efficient high-interactivity user interface for client-server applications
US20040010489A1 (en) 2002-07-12 2004-01-15 Rightnow Technologies, Inc. Method for providing search-specific web pages in a network computing environment
US20040128001A1 (en) 2002-08-28 2004-07-01 Levin Issac Stephen Method and apparatus for an integrated process modeller
US20040186860A1 (en) 2003-03-21 2004-09-23 Wen-Hsin Lee Method and architecture for providing data-change alerts to external applications via a push service
US20040199536A1 (en) 2003-03-24 2004-10-07 Barnes Leon Maria Theresa Product common object
US20040199489A1 (en) 2003-03-24 2004-10-07 Barnes-Leon Maria Theresa Custom common object
US20040249854A1 (en) 2003-03-24 2004-12-09 Barnes-Leon Maria Theresa Common common object
US20040193510A1 (en) 2003-03-25 2004-09-30 Catahan Nardo B. Modeling of order data
US20040199543A1 (en) 2003-04-04 2004-10-07 Braud Luke A. Facilitating data manipulation in a browser-based user interface of an enterprise business application
US7412455B2 (en) 2003-04-30 2008-08-12 Dillon David M Software framework that facilitates design and implementation of database applications
US20080249972A1 (en) 2003-04-30 2008-10-09 Dillon David M Software framework that facilitates design and implementation of database applications
US7620655B2 (en) 2003-05-07 2009-11-17 Enecto Ab Method, device and computer program product for identifying visitors of websites
US20040260534A1 (en) 2003-06-19 2004-12-23 Pak Wai H. Intelligent data search
US20040260659A1 (en) 2003-06-23 2004-12-23 Len Chan Function space reservation system
US20040268299A1 (en) 2003-06-30 2004-12-30 Shu Lei Application user interface template with free-form layout
US20050050555A1 (en) 2003-08-28 2005-03-03 Exley Richard Mark Universal application network architecture
US7024652B1 (en) * 2003-11-13 2006-04-04 Cadence Design Systems, Inc. System and method for adaptive partitioning of circuit components during simulation
US20060184410A1 (en) * 2003-12-30 2006-08-17 Shankar Ramamurthy System and method for capture of user actions and use of capture data in business processes
US20060021019A1 (en) 2004-07-21 2006-01-26 International Business Machines Corporation Method and system for federated provisioning
US7289976B2 (en) 2004-12-23 2007-10-30 Microsoft Corporation Easy-to-use data report specification
US20120233137A1 (en) 2006-05-01 2012-09-13 Gabriel Jakobson Presentation of document history in a web browsing application
US8566301B2 (en) 2006-05-01 2013-10-22 Steven L. Rueben Document revisions in a collaborative computing environment
US8209308B2 (en) 2006-05-01 2012-06-26 Rueben Steven L Method for presentation of revisions of an electronic document
US7779475B2 (en) 2006-07-31 2010-08-17 Petnote Llc Software-based method for gaining privacy by affecting the screen of a computing device
US8082301B2 (en) 2006-11-10 2011-12-20 Virtual Agility, Inc. System for supporting collaborative activity
US20090063414A1 (en) 2007-08-31 2009-03-05 Yahoo! Inc. System and method for generating a playlist from a mood gradient
US20090100342A1 (en) 2007-10-12 2009-04-16 Gabriel Jakobson Method and system for presenting address and mapping information
US20090177744A1 (en) 2008-01-04 2009-07-09 Yahoo! Inc. Identifying and employing social network relationships
US8490025B2 (en) 2008-02-01 2013-07-16 Gabriel Jakobson Displaying content associated with electronic mapping systems
US8504945B2 (en) 2008-02-01 2013-08-06 Gabriel Jakobson Method and system for associating content with map zoom function
US8032297B2 (en) 2008-05-08 2011-10-04 Gabriel Jakobson Method and system for displaying navigation information on an electronic map
US8014943B2 (en) 2008-05-08 2011-09-06 Gabriel Jakobson Method and system for displaying social networking navigation information
US8646103B2 (en) 2008-06-30 2014-02-04 Gabriel Jakobson Method and system for securing online identities
US8510664B2 (en) 2008-09-06 2013-08-13 Steven L. Rueben Method and system for displaying email thread information
US20130247216A1 (en) 2008-11-03 2013-09-19 Salesforce.Com, Inc System, method and computer program product for publicly providing web content of a tenant using a multi-tenant on-demand database service
US8510045B2 (en) 2009-12-22 2013-08-13 Steven L. Rueben Digital maps displaying search-resulting points-of-interest in user delimited regions
US20110218958A1 (en) 2010-03-08 2011-09-08 Salesforce.Com, Inc. System, method and computer program product for performing one or more actions utilizing a uniform resource locator
US20110247051A1 (en) 2010-04-01 2011-10-06 Salesforce.Com, Inc. System, method and computer program product for performing one or more actions based on a determined access permissions for a plurality of users
US20120042218A1 (en) 2010-08-13 2012-02-16 Salesforce.Com, Inc. Debugging site errors by an admin as a guest user in a multi-tenant database environment
US20130212497A1 (en) 2012-02-10 2013-08-15 Liveperson, Inc. Analytics driven engagement
US20130218948A1 (en) 2012-02-17 2013-08-22 Gabriel Jakobson Variable speed collaborative web browsing system
US20130218949A1 (en) 2012-02-17 2013-08-22 Gabriel Jakobson Collaborative web browsing system integrated with social networks
US20130218966A1 (en) 2012-02-17 2013-08-22 Gabriel Jakobson Collaborative web browsing system having document object model element interaction detection
US20130326277A1 (en) * 2012-05-31 2013-12-05 International Business Machines Corporation Data lifecycle management
US20140136274A1 (en) * 2012-11-12 2014-05-15 Sap Ag Providing multiple level process intelligence and the ability to transition between levels

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11281847B2 (en) 2016-09-30 2022-03-22 Salesforce.Com, Inc. Generating content objects using an integrated development environment
US10579445B2 (en) 2018-04-06 2020-03-03 Salesforce.Com, Inc. Dynamic routing of events to destinations
US11699352B2 (en) 2018-09-24 2023-07-11 Salesforce, Inc. Implementing an achievement platform using a database system
USD894213S1 (en) 2018-12-14 2020-08-25 Salesforce.Com, Inc. Display screen or portion thereof with graphical user interface
US11068448B2 (en) 2019-01-07 2021-07-20 Salesforce.Com, Inc. Archiving objects in a database environment
US11640378B2 (en) 2019-01-07 2023-05-02 salesforce.com,inc. Archiving objects in a database environment
US11714694B2 (en) 2019-11-08 2023-08-01 Salesforce, Inc. Error notification mechanism for streaming events
US11816356B2 (en) 2021-07-06 2023-11-14 Pure Storage, Inc. Container orchestrator-aware storage system

Also Published As

Publication number Publication date
US20170075627A1 (en) 2017-03-16
US20170075735A1 (en) 2017-03-16
US20190095260A1 (en) 2019-03-28
US20180225159A1 (en) 2018-08-09
US20200073732A1 (en) 2020-03-05
US10891175B2 (en) 2021-01-12
US9632849B2 (en) 2017-04-25
US10185603B2 (en) 2019-01-22
US10037233B2 (en) 2018-07-31
US10496456B2 (en) 2019-12-03
US20170168880A1 (en) 2017-06-15

Similar Documents

Publication Publication Date Title
US10891175B2 (en) System having in-memory buffer service, temporary events file storage system and events file uploader service
US9767022B2 (en) In-memory buffer service
US10275281B2 (en) Scheduling jobs for processing log files using a database system
US11669510B2 (en) Parallel processing of disjoint change streams into a single stream
US11044336B2 (en) Systems, methods, and apparatuses for capturing data change events in a cloud based computing environment
US9767040B2 (en) System and method for generating and storing real-time analytics metric data using an in memory buffer service consumer framework
US10725829B2 (en) Scheduling capacity in a data-processing cluster to an application queue by repurposing monitoring-based capacity of a delegator queue for job execution in the application queue
US20160055042A1 (en) Detecting and Managing Flooding of Multi-tenant Message Queues
US11614967B2 (en) Distributed scheduling in a virtual machine environment
US10664192B2 (en) In-memory service with plural buffer type assignment
US20210216502A1 (en) System and method for synchronizing delete operations between primary and secondary databases
US10810230B2 (en) Thread record provider
US10803080B2 (en) Thread record provider
US20230095756A1 (en) Dynamically reconfiguring a database system of a tenant based on risk profile(s) of the tenant
US20220092048A1 (en) Techniques and Architectures for Providing an Extract-Once Framework Across Multiple Data Sources
US20210232465A1 (en) Generation of database backups in response to environmental signals
US10902021B2 (en) Automated self-scaling database system for automatically scaling out read operations and method for implementing the same
US20200097556A1 (en) Automated self-scaling database system and method for implementing the same
US11657027B2 (en) Systems and methods for logically copying data from a source database to one or more target databases
US9632803B2 (en) Managing configuration states in an application server

Legal Events

Date Code Title Description
AS Assignment

Owner name: SALESFORCE.COM, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PRADEEP, AAKASH;TORMAN, ADAM;WARSHAVSKY, ALEX;AND OTHERS;SIGNING DATES FROM 20151217 TO 20151218;REEL/FRAME:037339/0571

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4