US20220207024A1 - Tiered server topology - Google Patents

Tiered server topology

Info

Publication number
US20220207024A1
Authority
US
United States
Prior art keywords
tier
servers
server
updates
database
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US17/316,603
Inventor
Glen Wheeler
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Whisper Capital LLC
Original Assignee
Waggle Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Waggle Corp filed Critical Waggle Corp
Priority to US17/316,603 priority Critical patent/US20220207024A1/en
Priority to CA3204037A priority patent/CA3204037A1/en
Priority to PCT/US2021/065644 priority patent/WO2022147220A1/en
Priority to EP21916475.3A priority patent/EP4272375A1/en
Assigned to WHISPER CAPITAL, LLC reassignment WHISPER CAPITAL, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: Waggle Corporation
Publication of US20220207024A1 publication Critical patent/US20220207024A1/en
Assigned to Waggle Corporation reassignment Waggle Corporation ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: WHEELER, GLEN
Abandoned legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23Updating
    • G06F16/2379Updates performed during online database operations; commit processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24553Query execution of query operations
    • G06F16/24554Unary operations; Data partitioning operations
    • G06F16/24556Aggregation; Duplicate elimination
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2457Query processing with adaptation to user needs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/25Integrating or interfacing systems involving database management systems
    • G06F16/256Integrating or interfacing systems involving database management systems in federated or virtual databases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28Databases characterised by their database models, e.g. relational or object models
    • G06F16/282Hierarchical databases, e.g. IMS, LDAP data stores or Lotus Notes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1001Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/10015Access to distributed or replicated servers, e.g. using brokers

Abstract

The present invention utilizes a database network topology that allows a single shared value to be updated by a multitude of users simultaneously on multiple sub-level servers. The sub-level servers feed higher-level servers, which aggregate the data from the sub-level servers and feed a master server; the master server aggregates all the data updates and feeds the result back to the users. The network topology is best represented as a pyramid.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • Not Applicable
  • FIELD OF THE INVENTION
  • The present invention relates generally to managing user updates to a database, and more specifically to database network topology.
  • BACKGROUND
  • Live events and activities attract much attention, both in person and through media such as television or streaming video. For example, live sporting events are a popular form of entertainment among the masses, among advertisers, and among those who engage on social media.
  • While viewing these live events and activities, the consumer's viewing experience can be enhanced by expressing real-time opinions on the events with other viewers, such as the team they are rooting for, or voting on their favorite singer or dancer in a competition. Additionally, viewers want to know the opinions of other viewers. There is currently no digital platform that enables fans of a show, a team, a sport, a game, an activity, or an event to express their real-time opinions with other viewers and then, in real time, quantitatively analyze those responses.
  • In sporting events, fans are commonly rooting for a particular team or result, which can change throughout the event. Take football, for example: some fans may be rooting for a particular team for the entire event, while some fantasy football players may be rooting for specific players to score, so their opinions on the game may change moment to moment. However, fans don't have a platform on which they can express which team they are rooting for and then quickly understand the viewpoint of millions of others regarding that same moment.
  • Alternatively, fans watching talent or performance shows may be voting for their favorite and least favorite performers. Similarly, allowing viewers to express their opinions on the performances as they unfold, and to quickly understand the viewpoint of millions of other viewers in that same moment, would enhance the experience for the viewers.
  • Existing applications allow viewers to express their opinions on their personal electronic devices (laptops, cell phones, tablets, etc.) and transmit these opinions to the solution's server database(s), which update shared values on the database(s) with the viewers' opinions. Existing applications analyze the data in the database(s) in a series of data packets based on server capacity, one after another, introducing latency, which is defined as the time it takes for a request to travel from the sender (the viewer) to the server and for the server to process that request. The challenge is that databases cannot update shared values simultaneously. While server databases are very fast and can make updates in microseconds, when millions of separate shared-value updates are requested simultaneously, that translates into seconds of latency.
  • Using existing technologies, it is not possible to instantaneously (in less than one second) gather the opinions of the viewers, compute the result, and communicate it back to all the viewers within the moment of the play or voting period, because the number of users updating the data simultaneously can be in the millions. This is one reason why talent- or performance-show voting periods are aligned with commercial breaks: to provide time for all the database updates required to the shared value.
  • While it's true that other very large-scale solutions, such as Twitter and Facebook, must handle millions of simultaneous user connections, those users are not all trying to update a single shared value in a server database (i.e., the number of active users). Additionally, there are solutions, such as a Redis Cluster, that allow distributed servers in a cluster to service millions of users and replicate that data across the cluster. However, those solutions are also designed to provide optimum failover and have the downside of introducing latency as additional servers are added to the mesh network.
  • When a Tweet is sent, it does not have to reach every other Twitter user within one second; it is likely to take several minutes for the Tweet to reach all users. Nor does Twitter have one million users all updating the same Tweet at the same time. Current server topologies do not allow every active user to update the same value at the same time without introducing latency.
  • Therefore, what is needed is a database network topology that will allow the maximum number of people to update the exact same data point within the least amount of time.
  • SUMMARY
  • To accomplish this objective, a database network topology has been developed that allows a single shared value to be updated by a multitude of users simultaneously on multiple sub-level servers. The sub-level servers feed higher-level servers, which aggregate the data from the sub-level servers and feed a master server; the master server aggregates all the data updates and feeds the result back to the users. The network topology is best represented as a pyramid.
  • This database network topology eliminates the previously described latency, which is created by servers analyzing the data in sequential data packets, by creating additional capacity at each tier.
  • Using this database network topology, millions of users can express their opinions on their personal devices (laptops, cellular phones, tablets, etc.), and their opinions, along with the opinions of millions of other users, can be received by the application servers, tabulated in less than one second, and fed back to the individual users.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates a flow chart of the server topology method of the current invention.
  • FIG. 2 illustrates a flow chart of the server topology method of an alternate embodiment of the current invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • The present invention was developed to allow millions of users to update an application database. The application database contains a single shared value which is updated by many users operating a variety of personal devices 104, such as cellular phones, tablets, and laptops. For the application to operate effectively, updates to the database's single shared value must complete within a predetermined timeframe from receipt of the update from the user. Millions of users may be updating the application simultaneously, and the number of simultaneous database updates cannot be allowed to negatively affect the update timeframe of the single shared value.
  • Referring to FIG. 1, the server topology of the present invention utilizes a master server 101, which sits at the top of the server topology. The master server 101 connects to a first sub-layer of slave servers 102. The master server 101 and slave servers 102 all contain databases with the same shared value. The shared value in the master server 101 database is fed the data from the databases of the first tier of slave servers 102 at a predetermined time interval. The time interval is determined by the maximum number of connections that can be serviced within the desired timeframe. A second tier of slave servers 103 (and subsequent tiers of slave servers) with databases containing the same shared value can be added under each of the first-tier slave servers 102 to increase capacity, as sketched below.
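  • As an illustration only, the pyramid can be modeled as a tree of aggregating nodes, each holding its own copy of the shared value. Below is a minimal Python sketch under that assumption; the class and function names are hypothetical, and the patent does not prescribe any particular implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ServerNode:
    """One server in the pyramid; each holds its own copy of the shared value."""
    shared_value: int = 0
    children: List["ServerNode"] = field(default_factory=list)

def build_pyramid(fan_out: int, tiers: int) -> ServerNode:
    """Build a master server with `tiers` levels of slave servers beneath it,
    each server fanning out to `fan_out` servers in the tier below."""
    master = ServerNode()
    frontier = [master]
    for _ in range(tiers):
        next_frontier = []
        for node in frontier:
            node.children = [ServerNode() for _ in range(fan_out)]
            next_frontier.extend(node.children)
        frontier = next_frontier
    return master

# Example: two tiers of slaves, ten wide, gives 10**2 = 100 bottom-tier
# servers; user-facing capacity grows as fan_out ** tiers.
topology = build_pyramid(fan_out=10, tiers=2)
assert len(topology.children) == 10
```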
  • Additional first-tier slave servers 102 will not add to the total processing time, but an additional tier of slave servers will. For example, if the master server 101 can handle 1,000 slave servers or user connections in one second, then each first-tier slave server 102 can also handle 1,000 slave servers or user connections in one second. That means the time it takes the master server 101 database to service all 1,000 of its first-tier slave server 102 databases is the same amount of time it takes each first-tier slave server 102 database to service 1,000 second-tier slave server databases. The final outcome is that increasing the load from 1,000 user connections to 1,000,000 user connections will only double the time, to two seconds.
  • Adding a second tier of slave server databases allows up to one billion user connections to be serviced while only tripling the time it takes the master server 101 to process its user connections. So, if the master server 101 can service 1,000 connections in one second, then using the above-described server topology it will take only two seconds to service one million user connections with a first tier of slave servers 102, and only three seconds to service one billion user connections with a second tier of slave servers 103.
  • Below is a mathematical description of FIG. 1 in the form of EQNs 1-5, where X = the number of users that need to be reached, T = the maximum amount of time to refresh value X, t = the maximum allowed time per level, # = the number of slave server tiers needed, and N = the number of user connections a single server can update in time t. The maximum number of connections that can update X within time T is the number of servers in the last tier multiplied by N, or simply N raised to the next power.

  • X = 1: # = 0   EQN 1

  • X = N: # = 1   EQN 2

  • X = N^2: # = 2   EQN 3

  • X = N^3: # = 3   EQN 4

  • t = T/#   EQN 5
  • Referring to EQNs 1-5, if N = 100, T = 1 second, and X = 800,000, then at the second level X = 100, at the third level X = 10,000, and at the fourth level X = 1,000,000. Since X exceeds 800,000 at # = 3, three tiers of servers are needed. At three tiers of servers (# = 3) and T = 1 second, t would be 333 ms. This calculation is sketched below.
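  • The tier count and per-level time budget from EQNs 1-5 can be computed directly. The following sketch reproduces the worked example above; the function names are illustrative only, not from the patent.

```python
def tiers_needed(n: int, x: int) -> int:
    """Smallest number of slave tiers # such that N ** # >= X (EQNs 1-4)."""
    tiers, capacity = 0, 1
    while capacity < x:
        capacity *= n
        tiers += 1
    return tiers

def time_per_tier(total_time_s: float, tiers: int) -> float:
    """Maximum allowed time per level, t = T / # (EQN 5)."""
    return total_time_s / tiers

# Worked example from the description: N = 100, X = 800,000, T = 1 second.
tiers = tiers_needed(n=100, x=800_000)            # 3, since 100**3 = 1,000,000 >= 800,000
t = time_per_tier(total_time_s=1.0, tiers=tiers)  # ~0.333 s, i.e. 333 ms
```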
  • In order to operate at maximum efficiency, when a slave server sends an update of the single shared value (a data packet containing the single aggregate value of all the updates received by that slave server) to either the database of the server in the tier above it or the database of the master server 101, that slave server's database resets the single shared value to a default value, e.g., zero if the single shared value is performing a counting function. By resetting after sending an update, the slave server database does not expend processing time determining the difference between the current value of the single shared value and its value at the time the slave server database last updated either the server in the tier above it or the master server 101. This reset-on-send behavior is sketched below.
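  • A minimal sketch of the reset-on-send behavior, assuming the shared value performs a counting function (the class and method names are hypothetical, not from the patent):

```python
class SlaveCounter:
    """Aggregates user updates to a counting shared value (default zero)."""

    def __init__(self) -> None:
        self.shared_value = 0

    def receive_update(self, delta: int) -> None:
        # Each incoming user (or lower-tier) update is folded into the
        # local aggregate immediately.
        self.shared_value += delta

    def flush_upward(self) -> int:
        # Send the single aggregate value upward as one packet, then reset
        # to the default so the next interval starts clean -- no processing
        # time is spent diffing against the previously sent value.
        packet, self.shared_value = self.shared_value, 0
        return packet

# At each interval t, a parent simply folds in each child's packet:
#     parent.receive_update(child.flush_upward())
```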
  • Referring to FIG. 1, the flow of data between the databases of the master server 101 and the slave servers 102, 103 is two-way. Therefore, as the slave servers 102, 103 update the single shared value in the databases of their master servers 101, 102, the master servers 101, 102 update the databases of their slave servers 102, 103 with the aggregate value of the single shared value, which is then shared with the users 104.
  • Referring to FIG. 2, in an alternate embodiment of the invention, a separate feedback server with a database 105 is utilized to update the users 104 with the aggregate value of the single shared value. In this embodiment, the flow of database values for the single shared value between the master server 101 and the slave servers 102, 103 is one-way. Therefore, as the slave servers 102, 103 update the single shared value in their master server databases 101, 102, the master server 101 shares the aggregate value of the single shared value with the feedback server(s) 105, which then relay it to the users 104, as sketched below.
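  • A rough sketch of this alternate, one-way flow under the same assumptions (the FeedbackServer class and its methods are hypothetical; the patent describes only the data flow):

```python
class FeedbackServer:
    """Separate server 105 (FIG. 2) that relays the aggregate value to users."""

    def __init__(self) -> None:
        self.aggregate_value = 0

    def receive_from_master(self, aggregate: int) -> None:
        # The master pushes the aggregate here instead of back down the
        # pyramid, keeping the slave tiers' update path one-way.
        self.aggregate_value = aggregate

    def relay_to_users(self, user_callbacks) -> None:
        # Each connected user is notified of the current aggregate value.
        for notify in user_callbacks:
            notify(self.aggregate_value)
```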

Claims (4)

What is claimed is:
1. A method of managing simultaneous user updates to a database to minimize latency comprising:
providing at least one master server, the master server comprising at least one database with a single shared value;
providing a plurality of slave servers, each slave server comprising the database with the single shared value;
having a plurality of users simultaneously updating the single shared value on their personal devices and transmitting the updates to the slave servers;
the master and slave servers being capable of completing a maximum number of database updates within a timeframe;
organizing the master server and slave servers into a server topology comprising a top tier, at least one mid-tier, and a bottom tier, where the top tier comprises the master server, the mid-tier comprises slave servers which update the master server's single shared value, and the bottom tier comprises slave servers which update the mid-tier servers' single shared value;
determining the number of mid-tier and bottom-tier servers in the server topology by comparing the maximum number of database updates the servers can complete within a timeframe against a target timeframe for the master server to receive an update to the single shared value once the user's update to the single shared value is received by a slave server;
whereby a plurality of users simultaneously transmit updates to the single shared value from their devices; these updates are received by the bottom-tier slave servers, which collect and aggregate them into a single update value that is transmitted to the mid-tier servers;
whereby the mid-tier servers receive a plurality of updates from mid-tier and bottom-tier slave servers, collect and aggregate the user updates into a single update value, and transmit that value to either the master server or other mid-tier servers;
whereby the master server receives a plurality of updates from the mid-tier servers, collects and aggregates the user updates into a single update value, and transmits that value to the users.
2. The method of managing simultaneous user updates to a database to minimize latency of claim 1, further comprising the additional step of the slave servers erasing data from the single shared value when the server transmits an update to another server.
3. The method of managing simultaneous user updates to a database to minimize latency of claim 1, whereby the master server transmits updates to the users back through the mid-tier and bottom-tier servers.
4. The method of managing simultaneous user updates to a database to minimize latency of claim 1, whereby the master server transmits updates to the users through a feedback server.
US17/316,603 2020-12-31 2021-05-10 Tiered server topology Abandoned US20220207024A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US17/316,603 US20220207024A1 (en) 2020-12-31 2021-05-10 Tiered server topology
CA3204037A CA3204037A1 (en) 2020-12-31 2021-12-30 Communications network topology for minimizing latency in a many-to-one environment
PCT/US2021/065644 WO2022147220A1 (en) 2020-12-31 2021-12-30 Communications network topology for minimizing latency in a many-to-one environment
EP21916475.3A EP4272375A1 (en) 2020-12-31 2021-12-30 Communications network topology for minimizing latency in a many-to-one environment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063132987P 2020-12-31 2020-12-31
US17/316,603 US20220207024A1 (en) 2020-12-31 2021-05-10 Tiered server topology

Publications (1)

Publication Number Publication Date
US20220207024A1 true US20220207024A1 (en) 2022-06-30

Family

ID=82118685

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/316,603 Abandoned US20220207024A1 (en) 2020-12-31 2021-05-10 Tiered server topology

Country Status (4)

Country Link
US (1) US20220207024A1 (en)
EP (1) EP4272375A1 (en)
CA (1) CA3204037A1 (en)
WO (1) WO2022147220A1 (en)


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015177802A1 (en) * 2014-05-23 2015-11-26 Banerjee Saugata System and method for establishing single window online meaningful access and effective communication
JP6926035B2 (en) * 2018-07-02 2021-08-25 株式会社東芝 Database management device and query partitioning method
US11514341B2 (en) * 2019-05-21 2022-11-29 Azra Analytics, Inc. Systems and methods for sports data crowdsourcing and analytics

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070179931A1 (en) * 2006-01-31 2007-08-02 International Business Machines Corporation Method and program product for automating the submission of multiple server tasks for updating a database
US8433771B1 (en) * 2009-10-02 2013-04-30 Amazon Technologies, Inc. Distribution network with forward resource propagation
US9043325B1 (en) * 2011-06-24 2015-05-26 Google Inc. Collecting useful user feedback about geographical entities
US20180048587A1 (en) * 2016-05-16 2018-02-15 Yang Bai Port switch service

Also Published As

Publication number Publication date
CA3204037A1 (en) 2022-07-07
WO2022147220A1 (en) 2022-07-07
EP4272375A1 (en) 2023-11-08

Similar Documents

Publication Publication Date Title
US9386355B2 (en) Generating alerts for live performances
US20200344320A1 (en) Facilitating client decisions
US9060210B2 (en) Generating excitement levels for live performances
KR102418756B1 (en) User-centric audience channel for live gameplay in multi-player games
US8925007B2 (en) Generating teasers for live performances
US10201755B2 (en) System and method for providing a platform for real time interactive game participation
US20190160383A1 (en) Methods and apparatus for distributed gaming over a mobile device
US10009241B1 (en) Monitoring the performance of a content player
US8874964B1 (en) Detecting problems in content distribution
US20090094656A1 (en) System, method, and apparatus for connecting non-co-located video content viewers in virtual TV rooms for a shared participatory viewing experience
US10056116B2 (en) Data processing system for automatically generating excitement levels with improved response times using prospective data
US20190168114A1 (en) Game of skill played by remote participants utilizing wireless devices in connection with a common game event
US20120089437A1 (en) Interest Profiles for Audio and/or Video Streams
US20160234556A1 (en) System and Method for Organizing, Ranking and Identifying Users as Official Mobile Device Video Correspondents
Basiri et al. Delay-aware resource provisioning for cost-efficient cloud gaming
US11587404B2 (en) Multi-event media feed integration for unified video streaming for sportsbook application
He et al. Crowdtranscoding: Online video transcoding with massive viewers
CN109525627B (en) Data transmission method, data transmission device, storage medium and electronic device
US10862994B1 (en) Facilitating client decisions
US11522969B2 (en) Systems and methods for adjusting storage based on determining content item popularity
US20220207024A1 (en) Tiered server topology
CN109847340B (en) Information processing method, device, equipment and medium
He et al. Utilizing massive viewers for video transcoding in crowdsourced live streaming
CN115738295A (en) Spectator system in an online game
US20140162741A1 (en) Entertainment Fantasy League

Legal Events

Date Code Title Description
AS Assignment

Owner name: WHISPER CAPITAL, LLC, ILLINOIS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:WAGGLE CORPORATION;REEL/FRAME:058505/0782

Effective date: 20211229

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

AS Assignment

Owner name: WAGGLE CORPORATION, FLORIDA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:WHEELER, GLEN;REEL/FRAME:060797/0099

Effective date: 20201231

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION