WO2013148693A1 - Offload processing of data packets - Google Patents

Offload processing of data packets

Info

Publication number
WO2013148693A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
financial market
switch
data packets
messages
Prior art date
Application number
PCT/US2013/033889
Other languages
French (fr)
Inventor
Scott Parsons
David E. Taylor
Original Assignee
Exegy Incorporated
Indeck, Ronald S.
Priority date
Filing date
Publication date
Priority claimed from US13/833,098 external-priority patent/US10121196B2/en
Application filed by Exegy Incorporated, Indeck, Ronald S. filed Critical Exegy Incorporated
Priority to EP13767579.9A priority Critical patent/EP2832045A4/en
Publication of WO2013148693A1 publication Critical patent/WO2013148693A1/en
Priority to US14/195,550 priority patent/US9990393B2/en
Priority to US14/195,510 priority patent/US20140180904A1/en
Priority to US14/195,462 priority patent/US10650452B2/en
Priority to US14/195,531 priority patent/US11436672B2/en
Priority to US15/994,262 priority patent/US10872078B2/en
Priority to US17/903,236 priority patent/US20220414778A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/04Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange

Definitions

  • Accelerated data processing is an ever present need in the art. This need is acutely present in the processing of financial market data to support the trading of financial instruments. However, it should be understood that the need for accelerated data processing is also present for a wide variety of other applications.
  • the process of trading financial instruments may be viewed broadly as proceeding through a cycle as shown in Figure 1. At the heart of this cycle is the exchange, which is responsible for matching up offers to buy and sell financial instruments.
  • Exchanges disseminate market information, such as the appearance of new buy/sell offers and trade transactions, as streams of events known as market data feeds.
  • Trading firms receive market data from the various exchanges upon which they trade. Note that many traders manage diverse portfolios of instruments requiring them to monitor the state of multiple exchanges.
  • Utilizing the data received from the exchange feeds, trading systems make trading decisions and issue buy/sell orders to the financial exchanges. Orders flow into the exchange where they are inserted into a sorted "book" of orders, triggering the publication of one or more events on the market data feeds.
  • FIG. 2 illustrates an exemplary platform that is currently known in the art.
  • the electronic trading platform 200 comprises a plurality of functional units 202 that are configured to carry out various data processing operations, whereby traders at workstations 204 have access to financial data of interest and whereby trade information can be sent to various exchanges or other outside systems via output path 210.
  • the purpose and details of the functions performed by functional units 202 are well-known in the art.
  • a stream 206 of financial data arrives at the system 200 from an external source such as the exchanges themselves (e.g., NYSE, NASDAQ, etc.).
  • the financial data source stream 206 comprises a series of messages that individually represent a new offer to buy or sell a financial instrument, an indication of a completed sale of a financial instrument, notifications of corrections to previously-reported sales of a financial instrument, administrative messages related to such transactions, and the like.
  • a "financial instrument” refers to a contract representing equity ownership, debt or credit, typically in relation to a corporate or governmental entity, wherein the contract is saleable. Examples of “financial instruments” include stocks, bonds, commodities, currency traded on currency markets, etc.
  • Functional units 202 of the system then operate on stream 206 or data derived therefrom to carry out a variety of financial processing tasks.
  • financial market data refers to the data contained in or derived from a series of messages that individually represent a new offer to buy or sell a financial instrument, an indication of a completed sale of a financial instrument, notifications of corrections to previously-reported sales of a financial instrument, administrative messages related to such transactions, and the like.
  • financial market source data refers to a feed of financial market data directly from a data source such as an exchange itself or a third party provider (e.g., a Savvis or BT Radianz provider).
  • financial market secondary data refers to financial market data that has been derived from financial market source data, such as data produced by a feed compression operation, a feed handling operation, an option pricing operation, etc.
  • various processing tasks are offloaded from an electronic trading platform to one or more processors upstream or downstream from the electronic trading platform.
  • upstream in this context is meant to identify a directional flow with respect to data that is moving to an electronic trading platform, in which case an offload processor upstream from the electronic trading platform would process financial market data flowing toward the electronic trading platform.
  • downstream is meant to identify a directional flow with respect to data that is moving away from an electronic trading platform, in which case an offload processor downstream from the electronic trading platform would process financial market data flowing out of the electronic trading platform.
  • the offloaded processing can be moved into the data distribution network itself. For example, one or more of the offloaded financial market data processing tasks described herein can be implemented in one or more network elements of the data distribution network, such as a switch within the data distribution network.
  • a number of market data consumption, normalization, aggregation, enrichment, and distribution functions can be embedded within the elements that comprise the market data feed network 214.
  • these embodiments offload processing tasks typically performed by downstream processing elements 202 such as feed handlers and virtual order books.
  • the inventors also disclose a number of market data distribution functions that can be embedded within the network elements that comprise the financial application data network 208.
  • these embodiments effectively offload processing tasks typically performed by ticker plants, messaging middleware, and downstream applications. Offloading these tasks from traditional platform components and embedding them in network elements may obviate some platform components, improve the performance of some components, reduce the total amount of space and power required by the platform, achieve higher system throughput, and deliver lower latency market data to consuming applications.
  • Figure 1 illustrates an exemplary process cycle for trading financial instruments.
  • Figure 2 illustrates an exemplary electronic trading platform.
  • Figures 3-6 illustrate exemplary embodiments for offload processors that provide repackaging functionality.
  • Figure 7 illustrates an exemplary system where an offload processor is deployed upstream from one or more electronic trading platforms.
  • Figure 8 illustrates an exemplary system where an intelligent feed switch is positioned within the market data feed network of an electronic trading platform.
  • Figure 9 illustrates an exemplary system where conventional switches are used to aggregate financial market data feeds for delivery to an intelligent feed switch.
  • Figure 10 illustrates an exemplary system where conventional switches are used to aggregate financial market data feeds for delivery to multiple intelligent feed switches.
  • Figure 11 depicts an exemplary electronic trading platform with an intelligent feed switch deployed in the market data network.
  • Figure 12 illustrates the system of Figure 11 including a logical diagram of functions performed by a typical feed handler in an electronic trading platform.
  • Figure 13 illustrates the system of Figure 11 but where several functions are offloaded from the feed handler to the intelligent feed switch.
  • Figure 14 illustrates an exemplary electronic trading platform that includes one or more ticker plant components.
  • Figure 15 illustrates the system of Figure 14 but where several functions are offloaded from a ticker plant to the intelligent feed switch.
  • Figure 16 illustrates an exemplary system where latency-sensitive trading applications consume data directly from an intelligent feed switch.
  • Figure 17 illustrates an example of redundant feed arbitration.
  • Figure 18 illustrates an example of a line arbitration offload engine.
  • Figure 19 illustrates an example of a packet mapping offload engine.
  • Figure 20 illustrates an exemplary processing module configured to perform symbol- routing and repackaging.
  • Figure 21 illustrates an exemplary intelligent feed switch that provides multiple ports of 10 Gigabit Ethernet connectivity.
  • Figure 22 illustrates an exemplary intelligent feed switch wherein the switch device is replaced by another FPGA device with a dedicated memory cache.
  • Figure 23 illustrates an exemplary intelligent feed switch wherein a single FPGA device is utilized.
  • Figure 24 illustrates an exemplary intelligent distribution switch positioned within the financial application data network of an electronic trading platform.
  • Figure 25 illustrates an exemplary intelligent distribution switch that hosts one or more distribution functions.
  • Figure 26 illustrates an exemplary system where a feed handler is configured
  • Figure 27 illustrates an exemplary intelligent feed switch that is configured to
  • Figure 28 illustrates an exemplary engine that provides symbol and order mapping.
  • Figures 29-32 illustrate exemplary embodiments for offload processors that provide repackaging functionality with respect to nonfinancial data.
  • Figure 33 illustrates an exemplary system where an offload processor is deployed upstream from multiple data consumers.
  • Figure 34 depicts an exemplary intelligent feed switch for processing nonfinancial data.
  • Figure 35 depicts an exemplary process flow that can be implemented by the intelligent feed switch of Figure 34.
  • an offload processor can be configured to process incoming data packets, where each of at least a plurality of the incoming data packets contain a plurality of financial market data messages, and wherein the financial market data messages comprise a plurality of data fields describing financial market data for a plurality of financial instruments.
  • the payload of each incoming data packet can comprise one or more financial market data messages.
  • Such an offload processor can filter and repackage the financial market data into outgoing data packets where the financial market data that is grouped into outgoing data packets is grouped using a criterion different than the criterion upon which financial market data was grouped into the incoming data packets.
  • the offload processor can alleviate the processing burden on the downstream electronic trading platform(s).
  • Examples of such an offload processor are shown in Figures 3-6.
  • Figure 3 depicts an exemplary offload processor 300 that is configured to receive as an input a consolidated stream of incoming data packets from different financial markets. As shown in Figure 3, each incoming data packet has a payload that contains multiple financial market data messages from the same financial market.
  • a plurality of financial market data messages from the feed for Financial Market 1 are combined in the same packet (e.g., where financial market data message FMDMl(Mkt 1) is a new offer to buy stock for Company A from the NYSE, FMDM2(Mkt 1) is a new offer to sell stock for Company B from the NYSE, and where FMDM3(Mkt 1) is a notification of a completed trade on stock for Company C from the NYSE), while a plurality of financial market data messages from the feed for Financial Market 2 (e.g., NASDAQ) are combined in the same packet, and so on.
  • the offload processor 300 performs financial market data filtering and repackaging between incoming and outgoing data packets such that the outgoing financial market data packets contain financial market data messages that are organized using a different criterion.
  • the offload processor filters and sorts the financial market data from the different markets by a criterion such as which downstream data consumers have expressed an interest in such financial market data.
  • the offload processor 300 can mix payload portions of incoming data packets on a criterion-specific basis to generate outgoing data packets with newly organized payloads.
  • data consumer A may have an interest in all new messages relating to a particular set of financial instruments (e.g., IBM stock, Apple stock, etc.) regardless of which market served as the source of the messages on such instruments.
  • FIG. 3 shows outgoing data packets that are consumer-specific. As can be seen, the payloads of these consumer-specific data packets comprise financial market data messages from different markets that arrived in different incoming data packets.
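  • To make the filtering and repackaging behavior concrete, the following sketch models it in software. It is an illustrative simplification in the spirit of Figure 3, not the disclosed hardware implementation; the message fields, consumer interest sets, and packet size limit are assumptions chosen for the example.

```python
# Simplified model of criterion-based filtering and repackaging: incoming packets
# group messages by source market, while outgoing packets group messages by the
# downstream consumer that expressed interest in the symbol.
# Names, fields, and limits below are illustrative assumptions.
from collections import defaultdict

MAX_MSGS_PER_PACKET = 3  # hypothetical outgoing packet size limit

# Hypothetical consumer interest lists keyed by instrument symbol.
INTEREST = {
    "ConsumerA": {"IBM", "AAPL"},
    "ConsumerB": {"MSFT", "AAPL"},
}

def repackage(incoming_packets):
    """Filter and regroup market-grouped packets into consumer-grouped packets."""
    queues = defaultdict(list)                      # one outgoing queue per consumer
    for packet in incoming_packets:                 # packet: {"market": ..., "messages": [...]}
        for msg in packet["messages"]:
            for consumer, symbols in INTEREST.items():
                if msg["symbol"] in symbols:        # filtering criterion: consumer interest
                    queues[consumer].append(msg)
    outgoing = []
    for consumer, msgs in queues.items():           # repackage each queue into packets
        for i in range(0, len(msgs), MAX_MSGS_PER_PACKET):
            outgoing.append({"consumer": consumer,
                             "messages": msgs[i:i + MAX_MSGS_PER_PACKET]})
    return outgoing

incoming = [
    {"market": "NYSE",   "messages": [{"symbol": "IBM",  "type": "quote"},
                                      {"symbol": "MSFT", "type": "trade"}]},
    {"market": "NASDAQ", "messages": [{"symbol": "AAPL", "type": "quote"}]},
]
print(repackage(incoming))
```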
  • an offload processor can be configured to perform packet mapping functions on incoming data packets from various financial market data feeds.
  • Figure 4 depicts another exemplary embodiment of an offload processor 300 that provides repackaging functionality.
  • the offload processor receives a plurality of streams of incoming data packets, where each stream may be market-specific (e.g., an input stream of data packets from the NYSE on a first port and an input stream of data packets from NASDAQ on a second port).
  • the offload processor 300 of Figure 4 can then repackage the financial market data in these incoming data packets into outgoing data packets as previously discussed.
  • Figure 5 depicts another exemplary embodiment of an offload processor 300 that provides repackaging functionality.
  • the offload processor produces multiple output streams of outgoing data packets, where each output stream may be criterion-specific (e.g., an output stream of data packets destined for Consumer A from a first port and an output stream of data packets destined for Consumer B from a second port, and so on).
  • the stream of incoming data packets can be a consolidated stream as described in connection with Figure 3.
  • Figure 6 depicts another exemplary embodiment of an offload processor 300 that provides repackaging functionality.
  • the offload processor produces multiple output streams of outgoing data packets from multiple input streams of incoming data packets, where the input streams can be like those shown in Figure 4 while the output streams can be like those shown in Figure 5.
  • the output streams produced by the offload processor in Figures 3, 4, 5, and 6 may be delivered by a unicast protocol (a unique stream for each consumer) or a multicast protocol (multiple consumers of the same stream).
  • With a unicast protocol, the consumer-specific output packets would contain the address of the targeted consumer. With a multicast protocol, the consumer-specific output packets would contain the address of the targeted group of consumers (e.g. a UDP multicast address). Note that multiple output streams, unicast or multicast, may be carried on a single network link.
  • the number of network links used to carry the output streams produced by the offload processor may be selected independently of the number of unique output streams.
  • the offload processor 300 can take any of a number of forms, including one or more general purpose processors (GPPs) such as a central processing unit (CPU), reconfigurable logic devices (such as field programmable gate arrays (FPGAs)), application-specific integrated circuits (ASICs), graphics processing units (GPUs), and chip multiprocessors (CMPs). Exemplary embodiments of GPPs include an Intel Xeon processor and an AMD Opteron processor.
  • reconfigurable logic refers to any logic technology whose form and function can be significantly altered (i.e., reconfigured) in the field post-manufacture. This is to be contrasted with a GPP, whose function can change post-manufacture, but whose form is fixed at manufacture.
  • the term "software" refers to data processing functionality that is deployed on a GPP or other processing devices, wherein software cannot be used to change or define the form of the device on which it is loaded.
  • firmware refers to data processing functionality that is deployed on reconfigurable logic or other processing devices, wherein firmware may be used to change or define the form of the device on which it is loaded.
  • the offload processor 300 comprises a reconfigurable logic device such as an FPGA
  • hardware logic will be present on the device that permits fine-grained parallelism with respect to the different operations that the offload processor performs, thereby providing the offload processor with the ability to operate at hardware processing speeds that are orders of magnitude faster than would be possible through software execution on a GPP.
  • processing tasks can be intelligently engineered into processing pipelines deployed as firmware in the hardware logic on the FPGA.
  • downstream pipeline modules can perform a processing task on data that was previously processed by upstream pipelined modules while the upstream pipeline modules are simultaneously performing other processing tasks on new data, thereby providing tremendous throughput gains.
  • other types of offload processors that provide parallelized processing capabilities can also contribute to improved latency and throughput.
  • FIG. 7 depicts an exemplary system where the offload processor 300 is deployed upstream from one or more electronic trading platform(s) (ETP(s)) 700.
  • Each ETP 700 may include one or more data consumers within it, and the outgoing data packets from the offload processor 300 can be customized to each consumer.
  • For example, the offload processor can be configured to perform packet mapping as described below in connection with Figure 19.
  • the offload processor when positioned upstream from an electronic trading platform, can be employed in a network element resident in a data distribution network for financial market data.
  • network elements include repeaters, switches, routers, and firewalls.
  • a repeater embodiment, a single input port and single output port device, may be viewed as a "smart" link where data is processed as it flows through the network link.
  • such a network element can be a network switch.
  • the inventors disclose various embodiments of a network switch that offloads various processing tasks from electronic trading platforms, including embodiments of an intelligent feed switch and embodiments of an intelligent distribution switch, as described below.
  • a common practice in financial exchange and electronic trading platform architecture is to achieve greater scale by "striping the data" across multiple instances of the platform components responsible for data transmission, consumption, and processing. If the data is imagined to flow vertically through a depiction of the overall system, then this approach to scale is often termed “horizontal scaling". This approach is accepted in the industry as the most viable approach from an overall platform perspective, as the escalating rate of market data messages (doubling every 6 to 11 months) is outpacing the technology improvements available to individual components in the platform.
  • Feed sources (typically exchanges) commonly stripe a market data feed across multiple lines, where a given line carries a proper subset of the market data published by the financial exchange.
  • all of the market data updates associated with a given financial instrument are transmitted on a single line.
  • the assignment of a given financial instrument to a line may be static or dynamic. Static assignments typically partition the set of instruments by using the starting characters in an instrument symbol and assigning an alphabet range to a given line. For example, consider a feed partitioned into four lines.
  • Line 0 carries updates for financial instruments whose symbol begins with letters "A” through “F”; line 1 carries updates for symbols beginning with letters “G” through “M”; line 2 carries updates for symbols beginning with letters “N” through “S”; line 3 carries updates for symbols beginning with letters “T” through “Z”.
  • Dynamic line assignments are typically performed as follows: a static mapping line transmits information to feed consumers communicating the number of data lines, the address(es) of the data lines, and the mapping of financial instruments to each data line.
  • Similarly, a financial exchange typically enforces striping across the ports provided for order entry.
  • a financial exchange provides multiple communication ports to which market participants establish connections and enter orders to electronically buy and sell financial instruments.
  • Exchanges define the subset of financial instruments for which orders are accepted on a given port.
  • exchanges statically define the subset of financial instruments by using the starting character(s) in the instrument symbol. They assign an alphabet range to a given port. For example, consider an exchange that provides four ports to a given participant.
  • Port 0 accepts orders for financial instruments whose symbol begins with letters "A” through “F”; port 1 accepts orders for symbols beginning with letters “G” through “M”; port 2 accepts orders for symbols beginning with letters “N” through “S”; port 3 accepts orders for symbols beginning with letters "T” through “Z”.
  • Each market data feed source implements its own striping strategy. Note that some market data feeds are not striped at all and employ a single line. The subsets of financial instruments associated with the lines on one market data feed may be different from the subsets of financial instruments associated with the lines on another market data feed.
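  • As an illustration of the static striping schemes described above, the following sketch maps an instrument symbol to a line (or order-entry port) using the four-way alphabet ranges from the example; the function name and data layout are assumptions made for the example.

```python
# Minimal sketch of static line/port striping by leading symbol character,
# using the four-way alphabet ranges from the example above.
RANGES = [("A", "F"), ("G", "M"), ("N", "S"), ("T", "Z")]  # line/port 0..3

def line_for_symbol(symbol: str) -> int:
    """Return the line (or order-entry port) that carries updates for this symbol."""
    first = symbol[0].upper()
    for line, (lo, hi) in enumerate(RANGES):
        if lo <= first <= hi:
            return line
    raise ValueError(f"no line assigned for symbol {symbol!r}")

assert line_for_symbol("AAPL") == 0
assert line_for_symbol("IBM") == 1
assert line_for_symbol("ORCL") == 2
assert line_for_symbol("TSLA") == 3
```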
  • The Intelligent Feed Switch (IFS) can be implemented on a wide variety of platforms that provide the necessary processing and memory resources, switching resources, and multiple physical network ports. Just as network switches can be built at various scales, from two ports up to thousands of ports, the IFS can be scaled to meet the needs of electronic trading platforms of varying scale. In the embodiment shown in Figure 21, the IFS provides multiple ports of 10 Gigabit Ethernet connectivity, in addition to a 10/100/1000 Ethernet port for management and control. An FPGA that is resident within the switch can provide fine-grained parallel processing resources for offload engines as previously noted. The memory cache provides dedicated high-speed memory resources for the offload engines resident on the FPGA.
  • the memory cache may be implemented in Synchronous Dynamic Random Access Memory (SDRAM), Static Random Access Memory (SRAM), a combination of the two, or other known memory technologies.
  • a dedicated Ethernet switch ASIC increases the port count of the IFS using existing, commodity switching devices and allows traffic to bypass the offload engines in the FPGA.
  • the FPGA is directly connected to the switching device by consuming one or more ports on the switching device. The amount of communication bandwidth between the FPGA and switching device can be scaled by increasing the number of ports dedicated to the interface.
  • the FPGA may also provide one or more ports for external connectivity, adding to the total number of ports available on the IFS.
  • While the ports connected through the switch device provide standard protocol connectivity (e.g. Ethernet), the ports that are directly connected to the FPGA can be leveraged to implement custom protocols. For example, if multiple Intelligent Feed Switches are interconnected, the FPGAs inside the switches may implement a custom protocol that eliminates unnecessary overhead.
  • Similarly, if a downstream server is equipped with a custom Network Interface Card containing an FPGA directly connected to the physical network port(s), a custom protocol can be employed between the IFS and the server.
  • the control processor provides general purpose processing resources to control software.
  • a standard operating system (OS) such as Linux is installed on the control processor.
  • Configuration, control, and monitoring software interfaces with the FPGA device via a standard system bus, preferably PCI Express.
  • the control processor also features a system bus interface to the switch device.
  • FIG 22 shows another embodiment of the IFS wherein the switch device is replaced by another FPGA device with a dedicated memory cache.
  • the peer-to-peer (P2P) interface between the FPGA devices need not utilize a standard network protocol, such as Ethernet, but may use a low-overhead protocol for communicating over high speed device interconnects.
  • This architecture increases the amount of processing resources available for offload functions and allows custom network protocols to be supported on any port.
  • additional FPGAs can be interconnected to scale the number of external ports provided by the IFS.
  • FIG 23 shows another embodiment of the IFS wherein a single FPGA device is utilized. This architecture can minimize cost and complexity. The number of physical ports supported is subject to the capabilities of the selected FPGA device. Note that some devices include embedded general purpose processors capable of hosting configuration, control, and monitoring applications.
  • Note that other processing resources such as chip multi-processors (CMPs), graphics processing units (GPUs), and network processing units (NPUs) may be used in lieu of an FPGA.
  • An example of a network switch platform that may be suitable for use as an intelligent switch to process financial market data is the Arista Application Switch 7124FX from Arista Networks, Inc. of Santa Clara, CA.
  • the IFS can be positioned within the market data feed network of the electronic trading platform.
  • a single IFS may be capable of providing the required number of switch ports, processing capacity, and data throughput.
  • the number of switch ports required depends on the number of physical network links carrying input market data feeds and the number of physical network links connecting to downstream platform components.
  • the amount of processing capacity required depends on the tasks performed by the IFS and the requirements imposed by the input market data feeds.
  • the data throughput depends on the aggregate data rates of input market data feeds and aggregate data rates of output streams delivered to platform components.
  • a multielement network can be constructed that includes the IFS.
  • multiple conventional switch elements can be used to aggregate the data from the physical network links carrying market data feeds.
  • a conventional switch could be used to aggregate data from forty (40) 1 Gigabit Ethernet links into four (4) 10 Gigabit Ethernet links for transfer to the IFS. This reduces the number of upstream ports required by the IFS.
  • multiple Intelligent Feed Switches can be used if the requirements exceed the capacity of a single IFS.
  • multiple IFS elements consume aggregated data from upstream conventional switches, then distribute data to downstream platform elements.
  • the network architectures in Figures 9 and 10 are exemplary but not exhaustive.
  • the IFS can be combined with other switch elements to form large networks, as is well-known in the art.
  • FIG 11 presents a simplified diagram of a conventional electronic trading platform with an IFS deployed in the market data network.
  • the IFS offloads one or more functions from the downstream feed handler components.
  • Figure 12 provides a logical diagram of the functions performed by a typical feed handler in a conventional electronic trading platform. A description of the specific functions and how they can be offloaded to the IFS are described in detail in the sections below.
  • Figure 13 provides a logical diagram of a conventional electronic trading platform with numerous feed handler functions performed by the IFS. Note that the only remaining functions performed by the feed handler components are message parsing, business logic and message normalization, and subscription-based distribution. Note that we later describe an embodiment capable of further offloading the feed handler components from subscription-based distribution.
  • feed handler components can thus receive substantial benefits with no modification by simply having less data to process.
  • feed handler components can also be re-engineered to be more simple, efficient, and performant.
  • the number of discrete feed handler components required by the electronic trading platform can be substantially reduced.
  • the latency associated with market data normalization and distribution can be substantially reduced, resulting in advantages for latency-sensitive trading applications.
  • the amount of space and power required to host the electronic trading platform can be substantially reduced, resulting in simplified system monitoring and maintenance as well as reduced cost.
  • Figure 14 presents a simplified diagram of an electronic trading platform that includes one or more ticker plant components that integrate multiple components in the conventional electronic trading platform.
  • An example of an integrated ticker plant component that leverages hardware acceleration and offload engines is described in the above-referenced and incorporated patents and patent applications (see, for example, U.S. Patent No.
  • the IFS can offload the feed handling tasks reflected in Figure 13, as well as additional functions such as price aggregation, event caching, top-of-book quote generation, and data quality monitoring.
  • a description of these functions and how they can be offloaded to an IFS is provided in subsequent sections. Offloading these functions can boost the capacity of an integrated ticker plant component, reducing the need to horizontally scale.
  • An IFS can also simplify the task of horizontally scaling with multiple integrated ticker plant components.
  • For example, multiple ticker plant components are used and horizontal scaling is achieved by striping the symbol range across the ticker plant components.
  • the first ticker plant is responsible for processing updates for instrument symbols beginning with characters "A" through ' ⁇ ".
  • the IFS is capable of ensuring that the first ticker plant only receives updates for the assigned set of instruments by performing the symbol routing and repackaging functions depicted in Figure 15. Note that other functions predicate the symbol routing function as described subsequently. Striping the data in this way allows each ticker plant component to retain the ability to compute composite, or pan-market, views of financial instruments. Examples of hardware-accelerated processing modules for computing composite quote and order book views are described in the above-referenced and incorporated U.S. Patent No. 7,921,046 and WO Pub. WO 2010/077829.
  • Some latency-sensitive trading applications require minimal data normalization in order to drive their trading strategies. Some of these applications may be able to directly consume data from an IFS, as shown in Figure 16. This eliminates additional network hops and processing from the datapath, thus reducing the latency of the data delivered to the applications. This latency reduction can provide advantages to these latency-sensitive trading applications. Furthermore, one or more of such latency-sensitive trading applications that consume data directly from the IFS can also be optionally configured to consume data from the distribution network to also receive normalized market data from a ticker plant such as a hardware-accelerated low latency ticker plant (see the dashed connection in Figure 16).
  • An example of a situation where such an arrangement would be highly advantageous would be when a trading application takes ultra-low-latency data from a direct feed (e.g., in the same data center) for a local market, as well as data sourced from a consolidated feed for remote markets, such as a futures or foreign exchange market in a different country.
  • the IFS is positioned within the market data feed network, and represents the physical embodiment of that network.
  • the IFS may be configured to offload one or more functions from downstream feed consumers.
  • the same set of functions may not be performed for every feed flowing through the IFS.
  • the way in which each function is performed may vary by feed, as feed sources employ different message formats, field identifiers, datatypes, compression schemes, packet formats, transmission protocols, etc.
  • In order to correctly perform the prescribed functions on a given packet, the IFS must first identify the feed to which a given packet belongs, then retrieve the necessary information about how packets belonging to the given feed are to be handled.
  • the IFS preferably maintains a mapping table using a tuple such as the IP ⁇ source address, destination address, protocol> tuple to identify the feed to which a packet belongs (additional optional members of the tuple may include a source port number, a destination port number, and a transport protocol port number).
  • the embedded processor in the IFS utilizes a hash table, where the ⁇ source address, destination address, protocol> tuple is used as input to the hash function.
  • a content addressable memory (CAM) is another alternative to a hash table for the packet mapping operation. In a hashing
  • a control processor in the IFS configures the hash function and maintains the hash table.
  • the entry in the table contains a feed identifier.
  • the additional information about how packets belonging to the feed should be handled may be stored directly in the hash table, or in a separate table indexed by the feed identifier.
  • the additional information may include one or more of the following pieces of meta-data:
  • Market identification code (MIC): preferably, this code would be a binary enumeration of the ISO 10383 market identification codes (MIC) for the markets supported by the IFS. For example, XNYS is the MIC for the New York Stock Exchange, which may be assigned an enumerated value in order to consume minimal space in the meta-data table and pre-normalized messages.
  • Data source identification code (DSIC): a unique identifier for the feed or data source, e.g. the Consolidated Quote System (CQS), the Consolidated Tape System (CTS), NYSE Quotes, NYSE Trades, NYSE OpenBook Ultra, etc. Each feed, or data source, is assigned a unique tag. Similar to the market codes, the data source codes are assigned an enumerated value in order to consume minimal space in the meta-data table and pre-normalized messages.
  • Line identification code (LIC): a unique identifier for the specific line within the feed. Similar to the MIC and DSIC, each unique line is assigned a unique tag. The line identifiers configured on the IFS are preferably assigned an enumerated value in order to consume minimal space in the meta-data table and pre-normalized messages.
  • This meta-information can be propagated to downstream offload engines in the IFS, along with the packet, as shown in Figure 19.
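  • The following sketch illustrates the packet mapping lookup described above, with a Python dict standing in for the hash table (or CAM) and a second table holding the per-feed meta-data; all addresses, identifiers, and the template name are hypothetical values chosen for the example.

```python
# Sketch of the packet-mapping lookup: the <source address, destination address,
# protocol> tuple identifies the feed, and a second table keyed by feed identifier
# holds per-feed handling meta-data (MIC, DSIC, LIC, parsing template, etc.).
FLOW_TABLE = {
    # (source address, destination address, protocol) -> feed identifier
    ("10.1.1.5", "233.54.12.1", "UDP"): 7,
}

FEED_TABLE = {
    # feed identifier -> handling meta-data
    7: {"MIC": 1,                  # enumerated ISO 10383 market code
        "DSIC": 3,                 # enumerated data source code
        "LIC": 12,                 # enumerated line identifier
        "template": "moldudp64"},  # hypothetical parsing template name
}

def map_packet(src, dst, proto):
    """Return the meta-data used to process packets belonging to this feed."""
    feed_id = FLOW_TABLE.get((src, dst, proto))
    if feed_id is None:
        return None                # unknown flow: forward without offload processing
    return FEED_TABLE[feed_id]

print(map_packet("10.1.1.5", "233.54.12.1", "UDP"))
```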
  • The configuration, control, and table management logic configures the hash function and table entries. This logic is preferably hosted on a co-resident control processor, while the packet mapping engine itself is preferably implemented as a pipelined processing engine.
  • Market data feeds are typically transmitted over the Internet Protocol (IP) using either the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP).
  • TCP provides a reliable point-to-point connection between the feed source and the feed consumer.
  • Feed consumers initiate a connection with the feed source, and the feed source must transmit a copy of all market data updates to each feed consumer.
  • Usage of TCP places a large data replication load on the feed source, therefore it is typically used for lower bandwidth feeds and/or feeds with a restricted set of consumers.
  • a feed handler can terminate the TCP connection, passing along the payload of the TCP packets to the packet parsing and decoding logic.
  • Implementation of the TCP receive logic is commonly provided by the Operating System (OS) or network interface adapter of the system upon which the feed handler is running.
  • OS Operating System
  • redundant TCP connections are not used for financial market data transmission, as TCP provides reliable transmission.
  • UDP does not provide reliable transmission, but does include multicast capability.
  • Multicast allows the sender to transmit a single copy of a datagram to multiple consumers.
  • Multicast leverages network elements to perform the necessary datagram replication.
  • An additional protocol allows multicast consumers to "join” a multicast "group” by specifying the multicast address assigned to the "group". The sender sends a single datagram to the group address and intermediary network elements replicate the datagram as necessary in order to pass a copy of the datagram to the output ports associated with consumers that have joined the multicast group.
  • Datagrams can be lost in transit for a number of reasons: congestion within a network element causes the datagram to be dropped, a fault in a network link corrupts one or more datagrams transiting the link, etc. While there have been numerous reliable multicast protocols proposed from academia and industry, none have found widespread adoption. Most market data feed sources that utilize UDP multicast transmit redundant copies of the feed, an "A side" and a "B side". Note that more than two copies are possible. For each "line" of the feed, there is a dedicated multicast group, an "A" multicast group and a "B” multicast group. Typically, the feed source ensures that each copy of the feed is transmitted by independent systems, and feed consumers ensure that each copy of the feed transits an independent network path. Feed consumers then perform arbitration to recover from data loss on one of the redundant copies of the feed.
  • a packet may contain one or more market data update messages for one or more financial instruments.
  • feed sources assign a monotonically increasing sequence number to each packet transmitted on a given "line". This simplifies the task of detecting data loss on a given line. If the most recently received packet contains a sequence number of 5893, then the sequence number of the next packet should be 5894.
  • feed sources typically transmit identical packets on the redundant multicast groups associated with a line. For example, packet sequence number 3839 on the A and B side of the feed contains the same market data update messages in the same order. This simplifies the arbitration process for feed consumers.
  • Figure 17 provides a simple example of redundant feed arbitration.
  • the sequence of packets for a single pair of redundant lines is shown. Time progresses vertically, with packet 5894 received first from line 1A, packet 5895 received second from line 1A, etc.
  • a line arbiter forwards the packet with the next sequence number, regardless of which "side" the packet arrives on. When the redundant copy of the packet is received on the other side, it is dropped. As depicted in Figure 17, one of the redundant sides typically delivers a packet consistently prior to the other side. If the arbiter receives a packet with a sequence number greater than the expected sequence number, it detects a gap on one of the redundant lines.
  • the arbiter can be configured to wait a configured hold time to see if the missing packet is delivered by the other side.
  • the difference between the arrival times of copies of the same packet on the redundant lines is referred to as the line skew. The hold time can be configured to be greater than the average line skew. If the missing packet does not arrive on the redundant side prior to the expiration of the hold time, then a gap is registered for the particular feed line.
  • the arbiter typically reports the missing sequence numbers to a separate component that manages gap mitigation and recovery. If the feed provides retransmission capabilities, then the arbiter may buffer packets on both sides until the missing packets are returned by the gap recovery component.
  • Note that for some feeds, a packet sequence number may not be monotonically increasing or may not be present at all.
  • arbitration is performed among one or more copies of a UDP multicast feed; however, arbitration can occur among copies of the feed delivered via different transmission protocols (UDP, TCP, etc.). In these scenarios, the content of packets on the redundant copies of the feed may not be identical.
  • the transmitter of packets on the A side may packetize the sequence of market data update messages differently from the transmitter on the B side. This requires the IFS to parse packets prior to performing the arbitration function.
  • The line identification code (LIC) associated with the packet allows the IFS to perform the appropriate line arbitration actions for a given packet. If the packet belongs to an unarbitrated TCP flow, then the packet may bypass the line arbitration and gap detection engine. If the line dictates arbitration at the message level as opposed to the packet level, then the IFS first routes the packet to parsing and decoding engines. The line arbitration and gap detection function may be performed by multiple parallel engines. The LIC may also be used to route the packet to the appropriate engine handling arbitration for the associated feed line. Furthermore, the LIC is used to identify the appropriate arbitration buffer into which the packet should be inserted.
  • Figure 18 provides an example of a line arbitration offload engine, which is preferably implemented in a pipelined processing engine.
  • the arbiter For each input line, the arbiter maintains a packet buffer to store the packets received from the redundant sides of the feed line.
  • the example in Figure 18 demonstrates two-way arbitration; additional buffers are provisioned if multi-way arbitration is performed.
  • the packet buffers in the arbiter may optionally provide for resequencing by inserting each new packet in the proper sequence in the buffer.
  • arbiter functions typically omit resequencing to reduce overhead and complexity.
  • a register is used to maintain the next expected sequence number.
  • the logic compares the sequence number of the packet residing at the head of each packet buffer with the expected sequence number. If a matching sequence number is found, the packet is forwarded. If the sequence number is less than the expected sequence number, the packet is dropped. If the sequence number is greater than the expected sequence number, the other buffer or buffers are examined for the required packet. Note that this may require that multiple packets be read until a match is found, the buffer is empty, or a gap is detected. If a gap is detected, the gap detection and reporting logic resets and then starts the wait timer. If the expected packet sequence number does not arrive before the wait timer exceeds the value in the max hold time register, then a gap is reported to the gap mitigation and recovery engine with the missing packet sequence number range.
  • the gap detection and reporting logic may also report gap information to a control processor or to downstream monitoring applications via generated monitoring messages. If the gap mitigation and recovery engine is configured to request retransmissions, then the arbiter pauses until the gap mitigation and recovery engine passes the missing packet or packets to the arbiter or returns a retransmission timeout signal.
  • the gap mitigation and recovery engine may be hosted on the same device as the arbiter, or it may be hosted on a control processor within the IFS.
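  • The following sketch models the arbitration and gap detection behavior described above at the packet level. It is a simplified software model rather than the pipelined hardware engine of Figure 18: the hold time is modeled as a count of subsequently received packets instead of a wall-clock timer, and gap mitigation/recovery is omitted.

```python
# Simplified packet-level arbiter for a redundant A/B feed line. It forwards each
# sequence number once (whichever side arrives first), drops the late duplicate,
# buffers out-of-order packets, and reports a gap if the expected packet has not
# arrived from either side within the hold interval.
def arbitrate(packets, hold=2):
    """packets: iterable of (side, seq) in arrival order. Yields ('fwd', seq) or ('gap', seq)."""
    expected = None
    buffered = {}                    # out-of-order packets awaiting the missing one
    waiting = 0                      # arrivals seen while the expected packet is missing
    for side, seq in packets:
        if expected is None:
            expected = seq           # initialize from the first packet observed
        if seq < expected:
            continue                 # late duplicate from the slower side: drop
        buffered[seq] = side
        if seq != expected:
            waiting += 1
            if waiting <= hold:
                continue             # keep waiting for the missing packet
            yield ("gap", expected)  # hold interval exceeded: register a gap
            expected = min(buffered) # resume from the next packet actually held
        while expected in buffered:  # flush the buffer in sequence order
            buffered.pop(expected)
            yield ("fwd", expected)
            expected += 1
        waiting = 0

# Packet 5896 is lost on both sides, so a gap is registered for it.
stream = [("A", 5894), ("B", 5894), ("A", 5895), ("B", 5895),
          ("A", 5897), ("B", 5897), ("A", 5898), ("B", 5898)]
print(list(arbitrate(stream)))
# [('fwd', 5894), ('fwd', 5895), ('gap', 5896), ('fwd', 5897), ('fwd', 5898)]
```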
  • the IFS may implement TCP termination logic in order to offload feed handler processing for feeds utilizing TCP for reliable transmission. Various implementations of TCP consumer logic may be used, including implementation in custom hardware logic and commercially available TCP hardware stack modules (e.g., firmware modules that perform TCP endpoint functionality from vendors such as PLDA, Embedded Design Studio, HiTech Global, etc.).
  • TCP feeds processed by the TCP termination logic can bypass the line arbitration and gap detection component, as redundant TCP streams are not typically used.
  • the output protocol can be a protocol such as UDP unicast or multicast, raw Ethernet, or a Remote Direct Memory Access (RDMA) protocol implemented over Ethernet (e.g., RoCE).
  • the IFS can perform one or more "pre-normalization" functions in order to simplify the task of downstream consumers.
  • the IFS preferably decomposes packets into discrete messages.
  • feed sources typically pack multiple update messages in a single packet.
  • the pre-normalization engine in the IFS utilizes the packet parsing templates retrieved by the packet mapping engine. Packet parsing techniques amenable to implementation in hardware and parallel processors are known in the art as described in the above-referenced and incorporated U.S. Patent No. 7,921,046.
  • For feeds that employ FAST compression, the pre-normalization engine must utilize the FAST decoding template in order to decompress and parse the packet into individual messages, as described in the above-referenced and incorporated U.S. Patent No. 7,921,046.
  • The message parsing logic can be configured to preserve the original message format. Extracted fields, such as symbols and order reference numbers, can be added to the meta-data that accompanies the packet as it propagates through the IFS.
  • downstream consumer applications need not be changed when an IFS is introduced in the market data network.
  • an existing feed handler for the NASDAQ TotalView feed need not change, as the format of the messages it processes still conforms to the feed specification. If the symbol-routing and repackaging function is applied, the existing feed handler will simply receive packets with messages associated with the symbol range for which it is responsible, but the message formats will conform to the exchange specification. This function is described in more detail below.
  • the pre-normalization logic can also be configured to offload normalization logic from downstream consumers.
  • the parsing logic can be configured to perform FAST decompression and FIX parsing.
  • For example, the fields in each message can be converted to a prescribed native data type: an ASCII-encoded price field can be converted into a signed 32-bit integer, an ASCII-encoded string can be mapped to a binary index value, etc. The type-converted fields can then be aligned on byte or word boundaries in order to facilitate efficient consumption by consumers.
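  • The following sketch illustrates this kind of field type conversion and alignment; the wire format, scale factor, and field layout are illustrative assumptions rather than any particular feed specification.

```python
# Minimal sketch of field pre-normalization: an ASCII-encoded price is converted
# to a scaled signed 32-bit integer and the resulting fields are packed on word
# boundaries for efficient downstream consumption.
import struct

PRICE_SCALE = 10_000  # hypothetical: 4 implied decimal places

def normalize_price(ascii_price: bytes) -> int:
    """Convert an ASCII price such as b'123.4500' to a scaled signed 32-bit integer."""
    value = round(float(ascii_price.decode("ascii")) * PRICE_SCALE)
    return struct.unpack("<i", struct.pack("<i", value))[0]  # enforce 32-bit range

def pack_fields(symbol_index: int, price: int, size: int) -> bytes:
    """Pack pre-normalized fields into a word-aligned binary record."""
    return struct.pack("<IiI", symbol_index, price, size)    # three 32-bit fields

record = pack_fields(symbol_index=42, price=normalize_price(b"123.4500"), size=300)
assert len(record) == 12 and len(record) % 4 == 0
print(struct.unpack("<IiI", record))
```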
  • the pre-normalization logic can maintain a table of downstream consumers capable of receiving the pre-normalized version of the feed. For example, the IFS may transmit pre-normalized messages on ports 3 through 8, but transmit the raw messages on ports 9 through 12.
  • the IFS can be configured to append fields to the raw message
  • the IFS may append the MIC, DSIC, LIC, and binary symbol index to the message.
  • Additional appended fields may include, but are not limited to, message-based sequence numbers and high-resolution IFS transmit timestamps.
  • the IFS can be configured to perform a symbol mapping function.
  • the symbol mapping function assigns a binary symbol index to the financial instrument associated with the update event. This index provides a convenient way for downstream functions and consuming applications to perform processing on a per symbol basis.
  • An efficient technique for mapping instrument symbols using parallel processing resources in offload engines is described in the above-referenced and incorporated U.S. Patent No. 7,921,046. Note that some feeds provide updates on a per-order basis and some update events do not contain the instrument symbol, but only an order reference number. As shown in Figure 28, feed consumers can maintain a table of active orders in order to map an order reference number to an active order to buy or sell the financial instrument identified by the associated symbol.
  • events that report a new active order include a reference to the symbol for the financial instrument.
  • the symbol is mapped to a symbol ID.
  • the order information and symbol ID are then added to the active order table.
  • the order reference number is used to lookup the order's entry in the active order table that includes the symbol ID.
  • a demultiplexer can receive streaming parsed messages that include a symbol reference or an order reference to identify a message or event type. This type data can determine whether the parsed message is passed to the output line feeding the symbol lookup operation or the output line feeding the order lookup operation.
  • data for new orders can be passed from the symbol lookup to the order lookup for updating the active order table.
  • a multiplexer (MUX) downstream from the symbol lookup and order lookup operations can merge the looked up data (symbol ID, order information, as appropriate) with the parsed messages for delivery downstream.
  • An efficient technique for mapping order reference numbers to the mapped symbol index using parallel processing resources in offload engines is described in the above-referenced and incorporated WO Pub. WO 2010/077829.
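  • The following sketch models the symbol and order mapping flow of Figure 28 in software; the message shapes, field names, and index assignment policy are assumptions made for the example.

```python
# Sketch of symbol/order mapping: add-order events carry a symbol and are mapped
# to a binary symbol index, populating the active order table; subsequent
# order-referenced events (modify/delete/execute) recover the symbol index by
# looking up the order reference number.
symbol_table = {}        # symbol -> symbol index
active_orders = {}       # order reference number -> symbol index

def map_symbol(symbol: str) -> int:
    """Assign (or reuse) a binary symbol index for this instrument symbol."""
    return symbol_table.setdefault(symbol, len(symbol_table))

def map_message(msg: dict) -> dict:
    """Attach a symbol index to a parsed message (the merge stage in Figure 28)."""
    if msg["type"] == "add_order":                   # symbol-referenced event
        idx = map_symbol(msg["symbol"])
        active_orders[msg["order_ref"]] = idx        # update the active order table
    else:                                            # order-referenced event
        idx = active_orders[msg["order_ref"]]
    return {**msg, "symbol_index": idx}

msgs = [
    {"type": "add_order", "order_ref": 1001, "symbol": "IBM", "price": 1250000, "size": 100},
    {"type": "execute",   "order_ref": 1001, "size": 40},
]
for m in msgs:
    print(map_message(m))
```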
  • the computational resources in the IFS can include dedicated high-speed memory interfaces.
  • the IFS may also assign one or more high- precision timestamps. For example, a timestamp may be assigned when the IFS receives a packet, a timestamp may be assigned immediately prior to transmitting a packet, etc.
  • the high-precision timestamp preferably provides nanosecond resolution.
  • the time source used to assign the timestamps should be disciplined with a high-precision time synchronization protocol.
  • Example protocols include the Network Time Protocol (NTP) and the Precision Time Protocol (PTP).
  • the protocol engine can be co-resident with the offload engines in the IFS, but is preferably implemented in a control processor that disciplines a timer in the offload engines.
  • the IFS may also assign additional sequence numbers. For example, the IFS may assign a per-message, per-symbol sequence number. This would provide a monotonically increasing sequence number for each instrument. These additional timestamps and sequence numbers may be appended to raw message formats or included in the pre-normalized message format, as described above.
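  • The following sketch illustrates appending a high-resolution timestamp and a per-symbol sequence number to each message; the timestamp source and field names are illustrative assumptions (a software clock stands in for a PTP/NTP-disciplined hardware timer).

```python
# Sketch of appending a receive timestamp (nanosecond resolution) and a
# per-symbol, monotonically increasing sequence number to each message.
import time
from collections import defaultdict

per_symbol_seq = defaultdict(int)   # symbol index -> last assigned sequence number

def annotate(msg: dict) -> dict:
    """Return the message with an IFS timestamp and per-symbol sequence number appended."""
    per_symbol_seq[msg["symbol_index"]] += 1
    return {**msg,
            "ifs_rx_timestamp_ns": time.monotonic_ns(),   # stand-in for a disciplined timer
            "symbol_seq": per_symbol_seq[msg["symbol_index"]]}

print(annotate({"symbol_index": 42, "type": "quote"}))
print(annotate({"symbol_index": 42, "type": "trade"}))
```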
  • the symbol-based routing allows the IFS to deliver updates for a prescribed set of symbols to downstream components in the electronic trading platform.
  • the IFS can act as a subscription based routing and filtering engine for latency-sensitive applications that consume the raw or pre-normalized updates directly from the IFS.
  • the IFS can facilitate a horizontal scaling strategy by striping the incoming raw feed data by symbol within the market data feed network itself. This allows the IFS to deliver the updates for the prescribed symbol range to downstream feed handler or ticker plant components, without having to rely on additional processing capabilities in those components to perform this function. This can dramatically reduce data delivery latency and increase the processing capacity of those components.
  • Figure 20 depicts an exemplary processing module configured to perform symbol- routing and repackaging.
  • Such a module is preferably implemented as a pipelined processing engine.
  • the symbol-routing and repackaging function first utilizes the symbol index to look up an interest list in the interest list table.
  • additional fields such as the market identification code (MIC) and data source identification code (DSIC) may be used in addition to the symbol index to lookup an interest list.
  • the interest list is stored in the form of a bit vector where the position of each bit corresponds to a downstream consumer.
  • a downstream consumer may be a physical output port, a multicast group, a specific host or server, a specific application (such as a feed handler), etc.
  • the scope of a "consumer" depends on the downstream platform architecture.
  • Associated with each consumer is a message queue that contains the messages destined for the consumer.
  • a fair scheduler ensures that each of the message queues receives fair service.
  • Packetization logic reads multiple updates from the selected message queue and packages the updates into a packet for transmission on the prescribed output port, using the prescribed network address and transport port. Messages can be combined into an outgoing Ethernet frame with appropriate MAC-level, and optionally IP-level headers.
  • the packetization logic constructs maximally sized packets: the logic reads as many messages as possible from the queue until the maximum packet size is reached or the message queue is empty.
  • packetization strategy and destination parameters may be specified via packaging parameters stored in a table.
  • the packetization logic simply performs a lookup using the queue number that it is currently servicing in order to retrieve the appropriate parameters.
  • the interest list and packaging parameter tables are preferably managed by configuration, control, and table management logic hosted on a co-resident control processor.
  • the messages in the newly constructed packets may have been transmitted by their concomitant feed sources in different packets or in the same packet with other messages that are now excluded. This is an example of the IFS constructing a customized "feed" for downstream consumers.
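  • The following sketch models the interest-list lookup, per-consumer queuing, and packetization steps of Figure 20 in software; the bit-vector encoding, queue identities, and packet size limit are illustrative assumptions.

```python
# Sketch of symbol-routing and repackaging: the symbol index selects an
# interest-list bit vector, each set bit enqueues the message for one consumer,
# and per-consumer queues are drained into maximally sized outgoing packets.
MAX_MSGS_PER_PACKET = 4                       # hypothetical packetization limit

interest_table = {                            # symbol index -> interest bit vector
    0: 0b01,                                  # symbol 0: consumer 0 only
    1: 0b11,                                  # symbol 1: consumers 0 and 1
}
queues = {0: [], 1: []}                       # one message queue per consumer

def route(msg: dict) -> None:
    """Fan a pre-normalized message out to every interested consumer's queue."""
    vector = interest_table.get(msg["symbol_index"], 0)
    consumer = 0
    while vector:
        if vector & 1:
            queues[consumer].append(msg)
        vector >>= 1
        consumer += 1

def drain(consumer: int) -> list:
    """Packetize a consumer's queue into maximally sized outgoing packets."""
    q, packets = queues[consumer], []
    while q:
        packets.append({"consumer": consumer,
                        "messages": [q.pop(0) for _ in range(min(MAX_MSGS_PER_PACKET, len(q)))]})
    return packets

for m in [{"symbol_index": 0, "seq": 1}, {"symbol_index": 1, "seq": 2}]:
    route(m)
print(drain(0), drain(1))
```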
  • If downstream consumers are equipped with network interface devices that allow for custom protocol implementation (e.g. an FPGA connected directly to the physical network link), additional optimizations may be implemented by the packetization logic. For example, the Ethernet MAC-level (and above) headers and CRC trailer may be stripped off any packet. By doing so, unnecessary overhead can be removed from packets, reducing packet sizes, reducing data transmission latency, and reducing the amount of processing required to consume the packets. As shown in Figure 16, this optimization may apply to latency-sensitive trading applications, feed handlers, or ticker plants.
  • downstream consumers can consume price-aggregated updates reflecting new price points, changes to existing price points, and deletions of price points from the book. This can reduce the number of update events to downstream consumers.
  • price aggregation may be performed on a per-symbol, per-market basis (e.g. the NASDAQ market only) or on a per-symbol, pan-market basis (e.g. across NASDAQ, NYSE, BATS, ARCA, and Direct Edge) to facilitate virtual order book views.
  • Size filtering is defined as the suppression of an update if the result of the update is a change in aggregate volume (size) at a pre-existing price point, where the amount of the change relative to the most recent update transmitted to consumers is less than a configured threshold. Note that the threshold may be relative to the current volume, e.g. a change in size of 50%.
  • price-aggregated entries can be sorted into a price book view for each symbol.
  • the top N levels of the price-aggregated book represent a top-of-book quote.
  • N is typically one (i.e. only the best bid and offer values), but N may be set to be a small value such as three (3) to enhance the quote with visibility into the next N-1 price levels in the book.
  • the techniques described in these incorporated references can be used to efficiently sort price-aggregated updates into price books and, using parallel processing resources, generate top-of-book quotes when an entry in the top N levels changes.
  • using the symbol-based routing described above, the IFS is capable of transmitting updates only for symbols in which downstream consumers are interested. If a consumer wishes to add a symbol to its set of interest, the consumer would need to wait until a subsequent quote event is transmitted by the feed source in order to receive the current pricing for the associated financial instrument.
  • a simple form of a cache can be efficiently implemented in the IFS in order to allow downstream consumers to immediately receive current pricing data for a financial instrument if its symbol is dynamically added to a consumer's set of interest during a trading session.
  • the IFS can maintain a simple last event cache that stores the most recent quote and most recent trade event received on a per-symbol, per-market basis.
  • a table of events is maintained where an entry is located using the symbol index, MIC, and DSIC.
  • when a consumer adds a symbol to its set of interest, the current quote and trade events in the event cache are transmitted to the consumer. This allows the consumer to receive the current bid, offer, and last traded price information for the instrument.
  • the IFS can also be configured to monitor a wide variety of data quality metrics on a per-symbol, per-market basis.
  • a list of data quality metrics includes but is not limited to:
  • Line gap: packet loss experienced on the line carrying updates for the symbol.
  • the data quality can be reflected in an enumerated value and included in messages transmitted to downstream consumers as an appended field, as previously described. These enumerated data quality states can be used by the IFS and/or downstream consumers to perform a variety of data quality mitigation operations.
  • An example of a data quality mitigation operation is to provide data source failover.
  • when a data source failover occurs, the control logic alters the interest list entries associated with affected instruments and downstream consumers.
  • when the higher-priority data source returns to an acceptable state, the IFS automatically transitions back to the higher-priority data source.
  • the IFS is configured to apply hysteresis to the data source failover function to prevent thrashing between data sources.
  • data source failover may rely on the presence of other functions within the IFS such as synthetic quote generation if failover is to be supported between depth of market feeds and top-of-book quote feeds.
  • monitoring, configuration, and control functions are preferably hosted on a co-resident processor in the IFS. This logic may interface with applications in the electronic platform or remote operations applications. In one embodiment of the IFS, control messages are received from an egress port. This allows one or more applications in the electronic trading platform to specify symbol routing parameters, packet and message parsing templates, prioritized lists of data sources, gap reporting and mitigation parameters, etc.
  • the IFS can also be used by feed sources (exchanges and consolidated feed vendors) to offload many of the functions required in feed generation. These tasks are largely the inverse of those performed by feed consumers. Specifically, the IFS can be configured to encode updates using prescribed encoding templates and transmit the updates on specified multicast groups, output ports, etc. Other functions that are applicable to feed generation include high-resolution timestamping, rate monitoring, and data quality monitoring.
  • the Intelligent Distribution Switch (IDS) can be positioned downstream of market data normalization components in the electronic trading platform.
  • the IDS can be used to offload distribution functions from normalization components such as ticker plants, to offload data consumption and management functions from downstream consumers such as trading applications, and to introduce new capabilities into the distribution network in the electronic trading platform. Examples of distribution capabilities are described in the above-referenced and incorporated U.S. Pat. App. Ser. No. 61/570,670.
  • the IDS architecture can be one of the previously described variants shown in Figures 21, 22, and 23. Note that the number of switch ports and amount of interconnect bandwidth between internal devices (FPGAs, switch ASICs, memory, etc.) may be provisioned differently for an IDS application relative to an IFS application. As shown in Figure 25, the IDS may host one or more distribution functions. The IDS can be used to offload the task of interest-based distribution. The IDS can maintain a mapping from instrument symbol to interest list, an example of such a mapping being described in the above-referenced and incorporated U.S. Patent No. 7,921,046.
  • the IDS makes the requisite copies of the update event and addresses each event for the specified consumer. By offloading this function, upstream components such as ticker plants only need to propagate a single copy of each update event. This reduces the processing resource requirement, or allows the processing resources previously dedicated to interest list maintenance and event replication to be redeployed for other purposes.
  • Data source failover may also be performed by the IDS. Like the previously described IFS failover function, the IDS allows downstream consumers to specify a prioritized list of normalized data sources. When the preferred source becomes unavailable or the data quality transitions to an unacceptable state, the IDS switches to the next highest priority normalized data source.
  • the IDS may also perform customized computations on a per-consumer basis.
  • Example computations include constructing user-defined Virtual Order Books, performing basket computations, computing options prices (and implied volatilities), and generating user-defined Best Bid and Offer (BBO) quotes (see the above-referenced and incorporated U.S. Patent Nos. 7,840,482 and 7,921,046, U.S. Pat. App. Pub. 2009/0182683, and WO Pub. WO 2010/077829 for examples of hardware-accelerated processing modules for such tasks).
  • a ticker plant distributing data to hundreds of consumers may not have the processing capacity to perform hundreds of customized computations, one for each consumer.
  • Examples of other customized per consumer computations include: liquidity target Net Asset Value (NAV) computations, future/spot price transformations, and currency conversions.
  • the IDS may host one or more of the low latency data distribution functions described in the above-referenced and incorporated U.S. Pat. App. Ser. No. 61/570,670.
  • the IDS may perform all of the functions of an Edge Cache. In another embodiment, the IDS may perform all of the functions of a Connection Multiplexer. As such, the IDS includes at least one instance of a multi-class distribution engine (MDE) that includes some permutation of Critical Transmission Engine, Adaptive Transmission Engine, or Metered Transmission Engine.
  • the IDS may also perform per consumer protocol bridging. For example, the upstream connection from the IDS to a ticker plant may use a point-to-point Remote Direct Memory Access (RDMA) protocol.
  • the IDS may be distributing data to a set of consumers via point-to-point connections using the Transmission Control Protocol (TCP) over Internet Protocol (IP), and distributing data to another set of consumers via a proprietary reliable multicast protocol over Unreliable Datagram Protocol (UDP).
  • data packets from a plurality of data feeds arrive on an input link to the offload processor, and the offload processor 300 is configured to provide consumer-specific repackaging of the incoming data packets.
  • regardless of how the messages of the incoming packets may have been organized, the outgoing packets can organize the messages on a consumer-specific or other basis.
  • the incoming data packets may correspond to only a single data feed.
  • Figure 30 depicts an embodiment where the offload processor 300 receives multiple incoming data feeds on multiple input links and provides repackaging for a single output link.
  • Figure 31 depicts an embodiment where the offload processor 300 receives one or more data feeds on a single input link and provides repackaging for multiple output links.
  • Figure 32 depicts an embodiment where the offload processor 300 receives multiple incoming data feeds on multiple input links and provides repackaging for multiple output links.
  • nonfinancial data feeds could be data feeds such as those from social networks (e.g., a Twitter data feed, a Facebook data feed, etc.), content aggregation feeds (e.g., RSS feeds), machine-readable news feeds, and others.
  • Figure 33 depicts how the offload processor 300 can deliver the outgoing reorganized data packets to a plurality of different data consumers.
  • the offload processor 300 can take the form of an intelligent feed switch 3400.
  • Such a switch 3400 can reside in a data distribution network.
  • the intelligent feed switch 3400 can be configured to provide any of a number of data processing operations on incoming messages within the data packets of the one or more incoming data feeds.
  • these data processing operations can be hardware-accelerated. Examples of hardware-accelerated data processing operations that can be performed include data searching, regular expression pattern matching, and approximate pattern matching, among others.
  • suitable hardware acceleration platforms can include reconfigurable logic (e.g., FPGAs) and GPUs.
  • the different data consumers may have a desire to monitor one or more data feeds for data of interest. For example, a consumer may be interested in being notified of or receiving all messages in a data feed that include a particular company name, person's name, sports team, and/or city. Moreover, different data consumers would likely have varying interests with regard to such monitoring efforts.
  • the intelligent feed switch can be configured to perform search operations on the messages in one or more data feeds to find all messages which include data that matches one or more search terms. The messages that match the terms for a given data consumer can then be associated with that data consumer, and the intelligent feed switch can direct such messages to the interested data consumer.
  • Figure 35 illustrates a process flow for such an operation.
  • the intelligent feed switch can implement hardware-accelerated search capabilities as described in the above-referenced and incorporated patents and patent applications to implement the process flow of Figure 35.
  • different consumers may want different messages of interest to them encrypted in a certain fashion.
  • Such encryption operations can also be implemented in the intelligent feed switch, preferably as hardware-accelerated encryption.
  • different consumers may desire different data normalization/quality checking operations be performed on messages of interest to them. Once again, such operations could be implemented in the intelligent feed switch on a consumer-specific basis.
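By way of a non-limiting illustration of the symbol-routing and repackaging behavior described in the bullets above (interest-list lookup via a bit vector, per-consumer message queues, and construction of maximally sized packets), the following software sketch is provided. In the IFS these stages are preferably implemented as pipelined logic in an FPGA; the Python class below, including names such as SymbolRouter and MAX_PACKET_SIZE, is an assumption-laden approximation for explanatory purposes only.

```python
# Illustrative software sketch of the symbol-routing and repackaging pipeline
# described above. In the IFS these stages are preferably implemented as
# pipelined logic in an FPGA; the class, names, and sizes below are
# assumptions for explanation only.
from collections import deque

MAX_PACKET_SIZE = 1400  # assumed maximum outgoing payload size in bytes


class SymbolRouter:
    def __init__(self, num_consumers):
        # interest list table: (symbol_index, MIC, DSIC) -> bit vector, where
        # bit position i corresponds to downstream consumer i
        self.interest = {}
        # one message queue per downstream consumer
        self.queues = [deque() for _ in range(num_consumers)]

    def route(self, symbol_index, mic, dsic, message):
        """Copy a message onto the queue of every interested consumer."""
        bitvec = self.interest.get((symbol_index, mic, dsic), 0)
        consumer = 0
        while bitvec:
            if bitvec & 1:
                self.queues[consumer].append(message)
            bitvec >>= 1
            consumer += 1

    def build_packet(self, consumer):
        """Drain as many messages as possible from the selected queue until the
        maximum packet size is reached or the queue is empty (maximally sized
        packets). Assumes each message is a bytes object smaller than the max."""
        queue, payload, size = self.queues[consumer], [], 0
        while queue and size + len(queue[0]) <= MAX_PACKET_SIZE:
            message = queue.popleft()
            payload.append(message)
            size += len(message)
        return b"".join(payload) if payload else None
```

In this sketch, a fair scheduler would repeatedly select a non-empty queue and call build_packet for it; the resulting payload would then be framed with the MAC-level (and optionally IP-level) headers prescribed by the packaging parameter table.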

Abstract

Various techniques are disclosed for offloading the processing of data packets. For example, incoming data packets can be processed through an offload processor to generate a new stream of outgoing data packets that organize the data in a manner different than the incoming data packets. Furthermore, in an exemplary embodiment, the offloaded processing can be resident in an intelligent switch, such as an intelligent switch upstream or downstream from an electronic trading platform.

Description

Offload Processing of Data Packets
Introduction:
[0001] Accelerated data processing, particularly for data communicated over networks, is an ever present need in the art. This need is acutely present in the processing of financial market data to support the trading of financial instruments. However, it should be understood that the need for accelerated data processing is also present for a wide variety of other applications.
[0002] The process of trading financial instruments may be viewed broadly as proceeding through a cycle as shown in Figure 1. At the top of the cycle is the exchange which is responsible for matching up offers to buy and sell financial instruments. Exchanges disseminate market information, such as the appearance of new buy/sell offers and trade transactions, as streams of events known as market data feeds. Trading firms receive market data from the various exchanges upon which they trade. Note that many traders manage diverse portfolios of instruments requiring them to monitor the state of multiple exchanges. Utilizing the data received from the exchange feeds, trading systems make trading decisions and issue buy/sell orders to the financial exchanges. Orders flow into the exchange where they are inserted into a sorted "book" of orders, triggering the publication of one or more events on the market data feeds.
[0003] In an attempt to promptly deliver financial information to interested parties such as traders, a variety of electronic trading platforms have been developed for the purpose of ostensible "real time" delivery of streaming bid, offer, and trade information for financial instruments to traders. Figure 2 illustrates an exemplary platform that is currently known in the art. As shown in Figure 2, the electronic trading platform 200 comprises a plurality of functional units 202 that are configured to carry out data processing operations such as the ones depicted in units 202, whereby traders at workstations 204 have access to financial data of interest and whereby trade information can be sent to various exchanges or other outside systems via output path 210. The purpose and details of the functions performed by functional units 202 are well-known in the art. A stream 206 of financial data arrives at the system 200 from an external source such as the exchanges themselves (e.g., NYSE,
NASDAQ, etc.) over private data communication lines or from extranet providers such as Savvis or BT Radianz. The financial data source stream 206 comprises a series of messages that individually represent a new offer to buy or sell a financial instrument, an indication of a completed sale of a financial instrument, notifications of corrections to previously-reported sales of a financial instrument, administrative messages related to such transactions, and the like. As used herein, a "financial instrument" refers to a contract representing equity ownership, debt or credit, typically in relation to a corporate or governmental entity, wherein the contract is saleable. Examples of "financial instruments" include stocks, bonds, commodities, currency traded on currency markets, etc. but would not include cash or checks in the sense of how those items are used outside financial trading markets (i.e., the purchase of groceries at a grocery store using cash or check would not be covered by the term "financial instrument" as used herein; similarly, the withdrawal of $100 in cash from an Automatic Teller Machine using a debit card would not be covered by the term "financial instrument" as used herein). Functional units 202 of the system then operate on stream 206 or data derived therefrom to carry out a variety of financial processing tasks. As used herein, the term "financial market data" refers to the data contained in or derived from a series of messages that individually represent a new offer to buy or sell a financial instrument, an indication of a completed sale of a financial instrument, notifications of corrections to previously-reported sales of a financial instrument, administrative messages related to such transactions, and the like. The term "financial market source data" refers to a feed of financial market data directly from a data source such as an exchange itself or a third party provider (e.g., a Savvis or BT Radianz provider). The term "financial market secondary data" refers to financial market data that has been derived from financial market source data, such as data produced by a feed compression operation, a feed handling operation, an option pricing operation, etc.
[0004] Financial data applications require fast access to large volumes of financial market data, and latency is an ever present technical problem in need of ever evolving solutions in the field of processing financial market data. As depicted in Figure 2, the consumption, normalization, aggregation, and distribution of financial market data are key elements in a system that processes financial market data. For a broad spectrum of applications, platform architects seek to minimize the latency of market data processing and distribution, while minimizing the space and power required to host the market data processing and distribution elements. As described in the following patents and patent applications, significant performance, efficiency, and scalability improvements can be achieved by leveraging reconfigurable hardware devices and other types of co-processors to integrate and consolidate market data consumption, normalization, aggregation, enrichment, and distribution functions: U.S. Patent Nos. 7,840,482, 7,921,046, and 7,954,114 as well as the following published patent applications: U.S. Pat. App. Pub. 2007/0174841, U.S. Pat. App. Pub. 2007/0294157, U.S. Pat. App. Pub. 2008/0243675, U.S. Pat. App. Pub. 2009/0182683, U.S. Pat. App. Pub. 2009/0287628, U.S. Pat. App. Pub. 2011/0040701, U.S. Pat. App. Pub. 2011/0178911, U.S. Pat. App. Pub. 2011/0178912, U.S. Pat. App. Pub. 2011/0178917, U.S. Pat. App. Pub. 2011/0178918, U.S. Pat. App. Pub. 2011/0178919, U.S. Pat. App. Pub. 2011/0178957, U.S. Pat. App. Pub. 2011/0179050, U.S. Pat. App. Pub. 2011/0184844, WO Pub. WO 2010/077829, U.S. Pat. App. Pub. 2012/0246052, and U.S. Pat. App. Ser. No. 61/570,670, entitled "Method and Apparatus for Low Latency Data Distribution", filed December 14, 2011, the entire disclosures of each of which are incorporated herein by reference. These concepts can be extended to various market data processing tasks as described in the above-referenced and incorporated patents and patent applications.
Similarly, the above-referenced and incorporated Pat. App. Ser. No. 61/570,670
demonstrates how the systems responsible for the distribution of real-time financial data can be greatly enhanced via the use of novel communication protocols implemented in reconfigurable hardware devices and other types of co-processors.
[0005] In accordance with various embodiments disclosed herein, the inventors further
disclose various methods, apparatuses, and systems for offloading the processing of data packets, including data packets that contain financial market data. In exemplary
embodiments, various processing tasks are offloaded from an electronic trading platform to one or more processors upstream or downstream from the electronic trading platform. It should be understood that the term upstream in this context is meant to identify a directional flow with respect to data that is moving to an electronic trading platform, in which case an offload processor upstream from the electronic trading platform would process financial market data flowing toward the electronic trading platform. Similarly, in this context downstream is meant to identify a directional flow with respect to data that is moving away from an electronic trading platform, in which case an offload processor downstream from the electronic trading platform would process financial market data flowing out of the electronic trading platform.
[0006] In some embodiments, the offloaded processing can be moved into the data
distribution network for financial market data. For example, one or more of the offloaded financial market data processing tasks described herein can be implemented in one or more network elements of the data distribution network, such as a switch within the data distribution network. Disclosed herein are exemplary embodiments where a number of market data consumption, normalization, aggregation, enrichment, and distribution functions can be embedded within the elements that comprise the market data feed network 214.
Conceptually, these embodiments offload processing tasks typically performed by downstream processing elements 202 such as feed handlers and virtual order books. The inventors also disclose a number of market data distribution functions that can be embedded within the network elements that comprise the financial application data network 208.
Conceptually, these embodiments effectively offload processing tasks typically performed by ticker plants, messaging middleware, and downstream applications. Offloading these tasks from traditional platform components and embedding them in network elements may obviate some platform components, improve the performance of some components, reduce the total amount of space and power required by the platform, achieve higher system throughput, and deliver lower latency market data to consuming applications.
[0007] These and other features and advantages of the present invention will be apparent to those having ordinary skill in the art upon review of the teachings in the following description and drawings.
Brief Description of the Drawings:
[0008] Figure 1 illustrates an exemplary process cycle for trading financial instruments.
[0009] Figure 2 illustrates an exemplary electronic trading platform.
[0010] Figures 3-6 illustrate exemplary embodiments for offload processors that provide repackaging functionality.
[0011] Figure 7 illustrates an exemplary system where an offload processor is deployed
upstream from one or more electronic trading platform(s).
[0012] Figure 8 illustrates an exemplary system where an intelligent feed switch is positioned within the market data feed network of an electronic trading platform.
[0013] Figure 9 illustrates an exemplary system where conventional switches are used to aggregate financial market data feeds for delivery to an intelligent feed switch.
[0014] Figure 10 illustrates an exemplary system where conventional switches are used to aggregate financial market data feeds for delivery to multiple intelligent feed switches.
[0015] Figure 11 depicts an exemplary electronic trading platform with an intelligent feed switch deployed in the market data network.
[0016] Figure 12 illustrates the system of Figure 11 including a logical diagram of functions performed by a typical feed handler in an electronic trading platform.
[0017] Figure 13 illustrates the system of Figure 11 but where several functions are offloaded from the feed handler to the intelligent feed switch.
[0018] Figure 14 illustrates an exemplary electronic trading platform that includes one or more ticker plant components.
[0019] Figure 15 illustrates the system of Figure 14 but where several functions are offloaded from a ticker plant to the intelligent feed switch.
[0020] Figure 16 illustrates an exemplary system where latency-sensitive trading applications consume data directly from an intelligent feed switch.
[0021] Figure 17 illustrates an example of redundant feed arbitration.
[0022] Figure 18 illustrates an example of a line arbitration offload engine.
[0023] Figure 19 illustrates an example of a packet mapping offload engine.
[0024] Figure 20 illustrates an exemplary processing module configured to perform symbol-routing and repackaging.
[0025] Figure 21 illustrates an exemplary intelligent feed switch that provides multiple ports of 10 Gigabit Ethernet connectivity.
[0026] Figure 22 illustrates an exemplary intelligent feed switch wherein the switch device is replaced by another FPGA device with a dedicated memory cache.
[0027] Figure 23 illustrates an exemplary intelligent feed switch wherein a single FPGA
device is utilized.
[0028] Figure 24 illustrates an exemplary intelligent distribution switch positioned
downstream of market data normalization components in an electronic trading platform.
[0029] Figure 25 illustrates an exemplary intelligent distribution switch that hosts one or more distribution functions.
[0030] Figure 26 illustrates an exemplary system where a feed handler is configured
to terminate a TCP connection.
[0031] Figure 27 illustrates an exemplary intelligent feed switch that is configured to
implement TCP termination logic.
[0032] Figure 28 illustrates an exemplary engine that provides symbol and order mapping.
[0033] Figures 29-32 illustrate exemplary embodiments for offload processors that provide repackaging functionality with respect to nonfinancial data.
[0034] Figure 33 illustrates an exemplary system where an offload processor is deployed upstream from multiple data consumers.
[0035] Figure 34 depicts an exemplary intelligent feed switch for processing nonfinancial data.
[0036] Figure 35 depicts an exemplary process flow that can be implemented by the intelligent feed switch of Figure 34.
Detailed Description:
[0037] A. Offload Processor:
[0038] Thus, in an exemplary embodiment, the inventors disclose that an offload processor can be configured to process incoming data packets, where each of at least a plurality of the incoming data packets contain a plurality of financial market data messages, and wherein the financial market data messages comprise a plurality of data fields describing financial market data for a plurality of financial instruments. Thus, the payload of each incoming data packet can comprise one or more financial market data messages. Such an offload processor can filter and repackage the financial market data into outgoing data packets where the financial market data that is grouped into outgoing data packets is grouped using a criterion different than the criterion upon which financial market data was grouped into the incoming data packets. This permits the offload processor to serve a valuable role in generating a new set of customized outgoing data packets from incoming data packets. In various exemplary embodiments of such an offload processor, the offload processor can alleviate the processing burden on the downstream electronic trading platform(s).
[0039] Examples of such an offload processor are shown in Figures 3-6. Figure 3 depicts an exemplary offload processor 300 that is configured to receive as an input a consolidated stream of incoming data packets from different financial markets. As shown in Figure 3, each incoming data packet has a payload that contains multiple financial market data messages from the same financial market. Thus, a plurality of financial market data messages from the feed for Financial Market 1 (e.g., NYSE) are combined in the same packet (e.g., where financial market data message FMDMl(Mkt 1) is a new offer to buy stock for Company A from the NYSE, FMDM2(Mkt 1) is a new offer to sell stock for Company B from the NYSE, and where FMDM3(Mkt 1) is a notification of a completed trade on stock for Company C from the NYSE), while a plurality of financial market data messages from the feed for Financial Market 2 (e.g., NASDAQ) are combined in the same packet, and so on. The offload processor 300 performs financial market data filtering and repackaging between incoming and outgoing data packets such that the outgoing financial market data packets contain financial market data messages that are organized using a different criterion. Thus, the offload processor filters and sorts the financial market data from the different markets by a criterion such as which downstream data consumers have expressed an interest in such financial market data. In this fashion, the offload processor 300 can mix payload portions of incoming data packets on a criterion-specific basis to generate outgoing data packets with newly organized payloads. For example, data consumer A may have an interest in all new messages relating a particular set of financial instruments (e.g., IBM stock, Apple stock, etc.) regardless of which market served as the source of the messages on such instruments. Another data consumer, Consumer B, may have similar interests in a different set of financial instruments. In such a case, the offload processor can be configured to re-group the financial market data into the outgoing data packets around the interests of particular downstream consumers. Thus, Figure 3 also shows outgoing data packets that are consumer-specific. As can be seen, the payloads of these consumer-specific data packets comprise financial market data messages from different markets that arrived in different incoming data packets.
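To make the regrouping criterion concrete, the following minimal sketch illustrates how messages arriving in market-organized packets could be regrouped into consumer-specific payloads. The consumer names, interest sets, symbols, and message contents below are hypothetical and are not drawn from the disclosure; the sketch only illustrates the change of grouping criterion described above.

```python
# Minimal sketch: regroup financial market data messages from market-organized
# incoming packets into consumer-specific outgoing payloads. The consumer
# names, interest sets, symbols, and message contents are hypothetical.
incoming_packets = [
    {"market": "Market 1", "messages": [("IBM", "new offer to buy"),
                                        ("AAPL", "completed trade"),
                                        ("XYZ", "new offer to sell")]},
    {"market": "Market 2", "messages": [("IBM", "completed trade"),
                                        ("ABC", "new offer to buy")]},
]

# Hypothetical per-consumer interest lists keyed by instrument symbol.
interest = {
    "Consumer A": {"IBM", "AAPL"},
    "Consumer B": {"ABC", "XYZ"},
}

# Regroup: the payload destined for each consumer mixes messages from different
# markets and different incoming packets, selected by consumer interest.
outgoing = {consumer: [] for consumer in interest}
for packet in incoming_packets:
    for symbol, update in packet["messages"]:
        for consumer, symbols in interest.items():
            if symbol in symbols:
                outgoing[consumer].append((packet["market"], symbol, update))

for consumer, payload in outgoing.items():
    print(consumer, payload)
```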
[0040] Exemplary processing pipelines that can be employed by the offload processor to provide such sorting and repackaging functions are described below in connection with Figures 13, 15, and 20.
[0041] In another exemplary embodiment, an offload processor can be configured to perform packet mapping functions on incoming data packets from various financial market data feeds.
[0042] Figure 4 depicts another exemplary embodiment of an offload processor 300 that provides repackaging functionality. In the example of Figure 4, the offload processor receives a plurality of streams of incoming data packets, where each stream may be market- specific (e.g., an input stream of data packets from the NYSE on a first port and an input stream of data packets from NASDAQ on a second port). The offload processor 300 of Figure 4 can then repackage the financial market data in these incoming data packets into outgoing data packets as previously discussed.
[0043] Figure 5 depicts another exemplary embodiment of an offload processor 300 that provides repackaging functionality. In the example of Figure 5, the offload processor produces multiple output streams of outgoing data packets, where each output stream may be criterion-specific (e.g., an output stream of data packets destined for Consumer A from a first port and an output stream of data packets destined for Consumer B from a second port, and so on). The stream of incoming data packets can be a consolidated stream as described in connection with Figure 3. [0044] Figure 6 depicts another exemplary embodiment of an offload processor 300 that provides repackaging functionality. In the example of Figure 6, the offload processor produces multiple output streams of outgoing data packets from multiple input streams of incoming data packets, where the input streams can be like those shown in Figure 4 while the output streams can be like those shown in Figure 5.
[0045] The output streams produced by the offload processor in Figure 3, 4, 5, and 6 may be delivered by a unicast protocol (a unique stream for each consumer) or a multicast protocol (multiple consumers of the same stream). In the case of a unicast protocol, the consumer- specific output packets would contain the address of the targeted consumer. In the case of a multicast protocol, the consumer-specific output packets would contain the address of the targeted group of consumers (e.g. a UDP multicast address). It should be understood that multiple output streams, unicast or multicast, may be carried on a single network link. The number of network links used to carry the output streams produced by the offload processor may be selected independently of the number of unique output streams.
[0046] The offload processor 300 can take any of a number of forms, including one or more general purpose processors (GPPs), reconfigurable logic devices (such as field
programmable gate arrays (FPGAs)), application-specific integrated circuits (ASICs), graphics processing units (GPUs), and chip multiprocessors (CMPs), as well as
combinations thereof.
[0047] As used herein, the term "general-purpose processor" (or GPP) refers to a hardware device having a fixed form and whose functionality is variable, wherein this variable functionality is defined by fetching instructions and executing those instructions, of which a conventional central processing unit (CPU) is a common example. Exemplary embodiments of GPPs include an Intel Xeon processor and an AMD Opteron processor. As used herein, the term "reconfigurable logic" refers to any logic technology whose form and function can be significantly altered (i.e., reconfigured) in the field post-manufacture. This is to be contrasted with a GPP, whose function can change post-manufacture, but whose form is fixed at manufacture. Furthermore, as used herein, the term "software" refers to data processing functionality that is deployed on a GPP or other processing devices, wherein software cannot be used to change or define the form of the device on which it is loaded, while the term "firmware", as used herein, refers to data processing functionality that is deployed on reconfigurable logic or other processing devices, wherein firmware may be used to change or define the form of the device on which it is loaded. [0048] Thus, in embodiments where the offload processor 300 comprises a reconfigurable logic device such as an FPGA, hardware logic will be present on the device that permits fine-grained parallelism with respect to the different operations that the offload processor performs, thereby providing the offload processor with the ability to operate at hardware processing speeds that are orders of magnitude faster than would be possible through software execution on a GPP. Moreover, by leveraging such fine-grained parallelism, processing tasks can be intelligently engineered into processing pipelines deployed as firmware in the hardware logic on the FPGA. With such a pipeline, downstream pipeline modules can perform a processing task on data that was previously processed by upstream pipelined modules while the upstream pipeline modules are simultaneously performing other processing tasks on new data, thereby providing tremendous throughput gains. Furthermore, other types of offload processors that provide parallelized processing capabilities can also contribute to improved latency and throughput.
[0049] Figure 7 depicts an exemplary system where the offload processor 300 is deployed upstream from one or more electronic trading platform(s) (ETP(s)) 700. Each ETP 700 may include one or more data consumers within it, and the outgoing data packets from the offload processor 300 can be customized to each consumer.
[0050] Furthermore, in additional exemplary embodiments, the offload processor can
perform other functions in addition to or instead of the repackaging operations illustrated by Figures 3-6. For example, the offload processor can be configured to perform packet mapping as described below in connection with Figure 19.
[0051] As noted, when positioned upstream from an electronic trading platform, the offload processor can be employed in a network element resident in a data distribution network for financial market data. Examples of network elements include repeaters, switches, routers, and firewalls. A repeater embodiment, a single input port and single output port device, may be viewed as a "smart" link where data is processed as it flows through the network link. In a preferred embodiment, such a network element can be a network switch. As such, the inventors disclose various embodiments of a network switch that offloads various processing tasks from electronic trading platforms, including embodiments of an intelligent feed switch and embodiments of an intelligent distribution switch, as described below.
[0052] B. Intelligent Feed Switch:
[0053] A common practice in financial exchange and electronic trading platform architecture is to achieve greater scale by "striping the data" across multiple instances of the platform components responsible for data transmission, consumption, and processing. If the data is imagined to flow vertically through a depiction of the overall system, then this approach to scale is often termed "horizontal scaling". This approach is accepted in the industry as the most viable approach from an overall platform perspective, as the escalating rate of market data messages (doubling every 6 to 11 months) is outpacing the technology improvements available to individual components in the platform.
[0054] In order to facilitate data striping, some feed sources (typically exchanges) divide a market data feed into multiple "lines" where a given line carries a proper subset of the market data published by the financial exchange. Typically, all of the market data updates associated with a given financial instrument are transmitted on a single line. The assignment of a given financial instrument to a line may be static or dynamic. Static assignments typically partition the set of instruments by using the starting characters in an instrument symbol and assigning an alphabet range to a given line. For example, consider a feed partitioned into four lines. Line 0 carries updates for financial instruments whose symbol begins with letters "A" through "F"; line 1 carries updates for symbols beginning with letters "G" through "M"; line 2 carries updates for symbols beginning with letters "N" through "S"; line 3 carries updates for symbols beginning with letters "T" through "Z". Dynamic line assignments are typically performed as follows. A static mapping line transmits information to feed consumers communicating the number of data lines, the address(es) of the data lines, and the mapping of financial instruments to each data line.
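For illustration, the static four-line assignment in the example above can be expressed as a simple mapping from an instrument symbol to a line number (the function name, the example symbols, and the use of Python are assumptions for explanatory purposes; the letter ranges mirror the preceding paragraph):

```python
def line_for_symbol(symbol):
    """Static line assignment: map an instrument symbol to one of four feed
    lines based on the alphabet range of its first character, mirroring the
    four-line example above."""
    first = symbol[0].upper()
    if "A" <= first <= "F":
        return 0
    if "G" <= first <= "M":
        return 1
    if "N" <= first <= "S":
        return 2
    return 3  # "T" through "Z"

# Hypothetical symbols used purely to exercise the mapping.
assert line_for_symbol("AAPL") == 0
assert line_for_symbol("IBM") == 1
assert line_for_symbol("ORCL") == 2
assert line_for_symbol("XOM") == 3
```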
[0055] Similarly, financial exchanges typically enforce striping across the ports provided for order entry. A financial exchange provides multiple communication ports to which market participants establish connections and enter orders to electronically buy and sell financial instruments. Exchanges define the subset of financial instruments for which orders are accepted on a given port. Typically, exchanges statically define the subset of financial instruments by using the starting character(s) in the instrument symbol. They assign an alphabet range to a given port. For example, consider an exchange that provides four ports to a given participant. Port 0 accepts orders for financial instruments whose symbol begins with letters "A" through "F"; port 1 accepts orders for symbols beginning with letters "G" through "M"; port 2 accepts orders for symbols beginning with letters "N" through "S"; port 3 accepts orders for symbols beginning with letters "T" through "Z".
[0056] The striping of data by exchanges, across multiple market data feed lines as well as multiple order entry ports, dictates a horizontally scaled architecture for electronic trading platforms. Trading applications are typically responsible for trading a subset of the financial instruments. Each application consumes the market data updates associated with its subset of financial instruments and generates orders for those instruments. Implementing a horizontally scaled system is straightforward for a platform that receives data from and transmits orders to a single market. The design task is significantly complicated when the trading platform receives data from multiple exchanges, computes pan-market views of financial instruments, and transmits orders to multiple exchanges.
[0057] Each market data feed source implements its own striping strategy. Note that some market data feeds are not striped at all and employ a single line. The subsets of financial instruments associated with the lines on one market data feed may be different from the subsets of financial instruments associated with the lines on another market data feed.
Therefore, the updates associated with financial instruments processed by a given
component can be sourced from different sets of lines from each market data feed. These factors significantly complicate the market data processing and distribution components that are responsible for delivering normalized market data to downstream applications, especially when composite, pan-market views of financial instruments are required.
[0058] Disclosed herein are multiple variants of an Intelligent Feed Switch (IFS) that
offloads numerous market data consumption, normalization, aggregation, enrichment, and distribution functions from downstream components such as feed handlers, virtual order books, or more generally, ticker plants. The specific functions performed by variants of the IFS are described in the sections below. As previously mentioned, utilizing an IFS in the market data feed network provides performance, efficiency, functionality, and scalability benefits to electronic trading platforms.
[0059] 1. IFS Architecture:
[0060] The IFS can be implemented on a wide variety of platforms that provide the necessary processing and memory resources, switching resources, and multiple physical network ports. Just as network switches can be built at various scales, from two ports up to thousands of ports, the IFS can be scaled to meet the needs of electronic trading platforms of varying scale. In the embodiment shown in Figure 21, the IFS provides multiple ports of 10 Gigabit Ethernet connectivity, in addition to a 10/100/1000 Ethernet port for management and control. An FPGA that is resident within the switch can provide fine-grained parallel processing resources for offload engines as previously noted. The memory cache provides dedicated high-speed memory resources for the offload engines resident on the FPGA. The memory cache may be implemented in Synchronous Dynamic Random Access Memory (SDRAM), Static Random Access Memory (SRAM), a combination of the two, or other known memory technologies. A dedicated Ethernet switch ASIC increases the port count of the IFS using existing, commodity switching devices and allows traffic to bypass the offload engines in the FPGA. The FPGA is directly connected to the switching device by consuming one or more ports on the switching device. The amount of communication bandwidth between the FPGA and switching device can be scaled by increasing the number of ports dedicated to the interface. The FPGA may also provide one or more ports for external connectivity, adding to the total number of ports available on the IFS. In addition to providing standard protocol connectivity, e.g. Ethernet, the ports that are directly connected to the FPGA can be leveraged to implement custom protocols. For example, if multiple Intelligent Feed
Switches are interconnected, the FPGAs inside the switches may implement a custom protocol that eliminates unnecessary overhead. Similarly, if a custom Network Interface Card (NIC) containing an FPGA directly connected to the physical network port(s) is used in a server connected to the IFS, a custom protocol can be employed between the IFS and the server. The control processor provides general purpose processing resources to control software. A standard operating system (OS) such as Linux is installed on the control processor. Configuration, control, and monitoring software interfaces with the FPGA device via a standard system bus, preferably PCI Express. The control processor also features a system bus interface to the switch device.
[0061] Figure 22 shows another embodiment of the IFS wherein the switch device is replaced by another FPGA device with a dedicated memory cache. Note that the peer-to-peer (P2P) interface between the FPGA devices need not utilize a standard network protocol, such as Ethernet, but may use a low-overhead protocol for communicating over high speed device interconnects. This architecture increases the amount of processing resources available for offload functions and allows custom network protocols to be supported on any port. Also note that additional FPGAs can be interconnected to scale the number of external ports provided by the IFS.
[0062] Figure 23 shows another embodiment of the IFS wherein a single FPGA device is utilized. This architecture can minimize cost and complexity. The number of physical ports supported is subject to the capabilities of the selected FPGA device. Note that some devices include embedded general purpose processors capable of hosting configuration, control, and monitoring applications. [0063] Note that other processing resources such as chip multi-processors (CMPs), graphics processing units (GPUs), and network processing units (NPUs) may be used in lieu of an FPGA.
[0064] An example of a network switch platform that may be suitable for use as an intelligent switch to process financial market data is the Arista Application Switch 7124FX from Arista Networks, Inc. of Santa Clara, CA.
[0065] 2. Platform Architecture with IFS:
[0066] As shown in Figure 8, the IFS can be positioned within the market data feed network of the electronic trading platform. In some market data networks, a single IFS may be capable of providing the required number of switch ports, processing capacity, and data throughput. The number of switch ports required depends on the number of physical network links carrying input market data feeds and the number of physical network links connecting to downstream platform components. The amount of processing capacity required depends on the tasks performed by the IFS and the requirements imposed by the input market data feeds. The data throughput depends on the aggregate data rates of input market data feeds and aggregate data rates of output streams delivered to platform
components.
[0067] If the aforementioned requirements exceed the capacity of a single IFS, then a multielement network can be constructed that includes the IFS. As shown in Figure 9, multiple conventional switch elements can be used to aggregate the data from the physical network links carrying market data feeds. For example, a conventional switch could be used to aggregate data from forty (40) 1 Gigabit Ethernet links into four (4) 10 Gigabit Ethernet links for transfer to the IFS. This reduces the number of upstream ports required by the IFS. As shown in Figure 10, multiple Intelligent Feed Switches can be used if the requirements exceed the capacity of a single IFS. In this example, multiple IFS elements consume aggregated data from upstream conventional switches, then distribute data to downstream platform elements. The network architectures in Figures 9 and 10 are exemplary but not exhaustive. The IFS can be combined with other switch elements to form large networks, as is well-known in the art.
[0068] Figure 11 presents a simplified diagram of a conventional electronic trading platform with an IFS deployed in the market data network. In this arrangement, the IFS offloads one or more functions from the downstream feed handler components. Figure 12 provides a logical diagram of the functions performed by a typical feed handler in a conventional electronic trading platform. The specific functions and how they can be offloaded to the IFS are described in detail in the sections below. Figure 13 provides a logical diagram of a conventional electronic trading platform with numerous feed handler functions performed by the IFS. Note that the only remaining functions performed by the feed handler components are message parsing, business logic and message normalization, and subscription-based distribution. Note that we later describe an embodiment capable of further offloading the feed handler components from subscription-based distribution.
Existing feed handler components can thus receive substantial benefits with no modification by simply having less data to process. Moreover, with a substantially reduced workload, feed handler components can also be re-engineered to be more simple, efficient, and performant. As a result the number of discrete feed handler components required by the electronic trading platform can be substantially reduced. The latency associated with market data normalization and distribution can be substantially reduced, resulting in advantages for latency-sensitive trading applications. Furthermore, the amount of space and power required to host the electronic trading platform can be substantially reduced, resulting in simplified system monitoring and maintenance as well as reduced cost.
[0069] Figure 14 presents a simplified diagram of an electronic trading platform that includes one or more ticker plant components that integrate multiple components in the conventional electronic trading platform. An example of an integrated ticker plant component that leverages hardware acceleration and offload engines is described in the above-referenced and incorporated patents and patent applications (see, for example, U.S. Patent No.
7,921,046, U.S. Pat. App. Pub. 2009/0182683, and WO Pub. WO 2010/077829). Even integrated ticker plant components such as these can benefit from offloading functions to an IFS. As shown in Figure 15, the IFS can offload the feed handling tasks reflected in Figure 13, as well as additional functions such as price aggregation, event caching, top-of-book quote generation, and data quality monitoring. A description of these functions and how they can be offloaded to an IFS is provided in subsequent sections. Offloading these functions can boost the capacity of an integrated ticker plant component, reducing the need to horizontally scale. An IFS can also simplify the task of horizontally scaling with multiple integrated ticker plant components. For example, consider a platform architecture where three ticker plant components are used and horizontal scaling is achieved by striping the symbol range across the ticker plant components. The first ticker plant is responsible for processing updates for instrument symbols beginning with characters "A" through "H". The IFS is capable of ensuring that the first ticker plant only receives updates for the assigned set of instruments by performing the symbol routing and repackaging functions depicted in Figure 15. Note that other functions predicate the symbol routing function as described subsequently. Striping the data in this way allows each ticker plant component to retain the ability to compute composite, or pan-market, views of financial instruments. Examples of hardware-accelerated processing modules for computing composite quote and order book views are described in the above-referenced and incorporated U.S. Patent No. 7,921,046 and WO Pub. WO 2010/077829.
[0070] Some latency-sensitive trading applications require minimal data normalization in order to drive their trading strategies. Some of these applications may be able to directly consume data from an IFS, as shown in Figure 16. This eliminates additional network hops and processing from the datapath, thus reducing the latency of the data delivered to the applications. This latency reduction can provide advantages to these latency-sensitive trading applications. Furthermore, one or more of such latency-sensitive trading
applications that consume data directly from the IFS can also be optionally configured to consume data from the distribution network to also receive normalized market data from a ticker plant such as a hardware-accelerated low latency ticker plant (see the dashed connection in Figure 16). An example of a situation where such an arrangement would be highly advantageous would be when a trading application takes ultra-low-latency data from a direct feed (e.g., in the same data center) for a local market, as well as data sourced from a consolidated feed for remote markets, such as a futures or foreign exchange market in a different country.
[0071] As shown in Figure 8, the IFS is positioned within the market data feed network, and represents the physical embodiment of that network.
[0072] 3. Packet Mapping:
[0073] As shown in Figures 13 and 15, the IFS may be configured to offload one or more functions from downstream feed consumers. The same set of functions may not be performed for every feed flowing through the IFS. Furthermore, the way in which each function is performed may vary by feed, as feed sources employ different message formats, field identifiers, datatypes, compression schemes, packet formats, transmission protocols, etc. In order to correctly perform the prescribed functions on a given packet, the IFS must first identify the feed to which a given packet belongs, then retrieve the necessary information about how packets belonging to the given feed are to be handled. In order to do so, the IFS preferably maintains a mapping table using a tuple such as the IP <source address, destination address, protocol> tuple to identify the feed to which a packet belongs (additional optional members of the tuple may include a source port number, a destination port number, and a transport protocol port number). Preferably, the embedded processor in the IFS utilizes a hash table, where the <source address, destination address, protocol> tuple is used as input to the hash function. However, a content addressable memory (CAM) is another alternative to a hash table for the packet mapping operation. In a hashing
embodiment, preferably, a control processor in the IFS configures the hash function and maintains the hash table. At minimum in this example, the entry in the table contains a feed identifier. The additional information about how packets belonging to the feed should be handled may be stored directly in the hash table, or in a separate table indexed by the feed identifier. The additional information may include one or more of the following pieces of meta-data:
• Market identification code (MIC); a unique identifier for the exchange/market.
Preferably, this code would be a binary enumeration of the ISO 10383 market identification codes (MIC) for the markets supported by the IFS. For example, XNYS is the MIC for the New York Stock Exchange which may be assigned an enumerated value in order to consume minimal space in the meta-data table and pre-normalized messages.
• Data source identification code (DSIC); a unique identifier for the specific feed. Note that multiple feeds may carry market updates for the same market. For example, updates for equities traded on the NYSE are reported by multiple feeds: the Consolidated Quote System (CQS), Consolidated Tape System (CTS), NYSE Quotes, NYSE Trades, NYSE OpenBook Ultra, etc. Each feed, or data source, is assigned a unique tag. Similar to the market codes, the data source codes are assigned an enumerated value in order to consume minimal space in the meta-data table and pre-normalized messages.
• Line identification code (LIC); a unique identifier for the specific line within the feed. Similar to the MIC and DSIC, each unique line is assigned a unique tag. The line identifiers configured on the IFS are preferably assigned an enumerated value in order to consume minimal space in the meta-data table and pre-normalized messages.
• A flag indicating if the feed utilizes FIX/FAST encoding
• FAST decoding templates (if necessary), or template specifying how to parse the packet into messages
• FIX decoding templates, or template specifying how to parse messages into fields
• Template specifying field datatype conversions to perform
• Field identifiers and/or offsets for fields comprising the instrument symbol
• Field identifier or offset for message sequence number (if necessary)
[0074] This meta-information can be propagated to downstream offload engines in the IFS, along with the packet, as shown in Figure 19. The configuration, control, and table management logic configures the hash function and table entries. This logic is preferably hosted on a co-resident control processor, preferably as a pipelined processing engine.
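The following sketch conveys the flavor of the packet-mapping lookup described above: the <source address, destination address, protocol> tuple keys a table whose entry carries the feed identifier and handling meta-data. In the IFS this lookup is preferably performed in hardware via a hash table or CAM; the Python dictionary and all concrete addresses, codes, and field identifiers below are illustrative assumptions.

```python
# Illustrative packet-mapping table keyed by the <source address, destination
# address, protocol> tuple. The entry mirrors the meta-data listed above; all
# concrete addresses, codes, and field identifiers are assumptions.
feed_map = {
    ("10.0.1.5", "233.54.12.1", "UDP"): {
        "feed_id": 7,
        "mic": 1,               # enumerated market identification code (e.g. XNYS)
        "dsic": 3,              # enumerated data source identification code
        "lic": 12,              # enumerated line identification code
        "fix_fast": True,       # flag: feed utilizes FIX/FAST encoding
        "symbol_fields": [55],  # field identifier(s) for the instrument symbol
        "seq_num_field": 34,    # field identifier for the message sequence number
    },
}

def map_packet(src_addr, dst_addr, protocol):
    """Return the handling meta-data for the feed a packet belongs to,
    or None if the flow is not configured."""
    return feed_map.get((src_addr, dst_addr, protocol))

meta = map_packet("10.0.1.5", "233.54.12.1", "UDP")
if meta is not None:
    print("feed", meta["feed_id"], "uses FIX/FAST:", meta["fix_fast"])
```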
[0075] 4. Redundant Feed Arbitration:
[0076] In order to allow a market data feed to be routed across multiple networks, the
Internet Protocol (IP) is ubiquitously used as the network protocol for market data feed distribution. Feed sources typically employ one of two transport protocols: Transmission Control Protocol (TCP) or Unreliable Datagram Protocol (UDP).
[0077] TCP provides a reliable point-to-point connection between the feed source and the feed consumer. Feed consumers initiate a connection with the feed source, and the feed source must transmit a copy of all market data updates to each feed consumer. Usage of TCP places a large data replication load on the feed source, therefore it is typically used for lower bandwidth feeds and/or feeds with a restricted set of consumers. As shown in Figure 26, a feed handler can terminate the TCP connection, passing along the payload of the TCP packets to the packet parsing and decoding logic. Implementation of the TCP receive logic is commonly provided by the Operating System (OS) or network interface adapter of the system upon which the feed handler is running. Typically, redundant TCP connections are not used for financial market data transmission, as TCP provides reliable transmission.
[0078] UDP does not provide reliable transmission, but does include multicast capability.
Multicast allows the sender to transmit a single copy of a datagram to multiple consumers. Multicast leverages network elements to perform the necessary datagram replication. An additional protocol allows multicast consumers to "join" a multicast "group" by specifying the multicast address assigned to the "group". The sender sends a single datagram to the group address and intermediary network elements replicate the datagram as necessary in order to pass a copy of the datagram to the output ports associated with consumers that have joined the multicast group.
[0079] While providing for efficient data distribution, UDP multicast is not reliable.
Datagrams can be lost in transit for a number of reasons: congestion within a network element causes the datagram to be dropped, a fault in a network link corrupts one or more datagrams transiting the link, etc. While there have been numerous reliable multicast protocols proposed from academia and industry, none have found widespread adoption. Most market data feed sources that utilize UDP multicast transmit redundant copies of the feed, an "A side" and a "B side". Note that more than two copies are possible. For each "line" of the feed, there is a dedicated multicast group, an "A" multicast group and a "B" multicast group. Typically, the feed source ensures that each copy of the feed is transmitted by independent systems, and feed consumers ensure that each copy of the feed transits an independent network path. Feed consumers then perform arbitration to recover from data loss on one of the redundant copies of the feed.
[0080] Note that a packet may contain one or more market data update messages for one or more financial instruments. Typically, feed sources assign a monotonically increasing sequence number to each packet transmitted on a given "line". This simplifies the task of detecting data loss on a given line. If the most recently received packet contains a sequence number of 5893, then the sequence number of the next packet should be 5894. When using redundant UDP multicast groups, feed sources typically transmit identical packets on the redundant multicast groups associated with a line. For example, packet sequence number 3839 on the A and B side of the feed contains the same market data update messages in the same order. This simplifies the arbitration process for feed consumers.
[0081] Figure 17 provides a simple example of redundant feed arbitration. The sequence of packets for a single pair of redundant lines is shown. Time progresses vertically, with packet 5894 received first from line 1A, packet 5895 received second from line 1A, etc. A line arbiter forwards the packet with the next sequence number, regardless of which "side" the packet arrives on. When the redundant copy of the packet is received on the other side, it is dropped. As depicted in Figure 17, one of the redundant sides typically delivers a packet consistently prior to the other side. If the arbiter receives a packet with a sequence number greater than the expected sequence number, it detects a potential gap on one of the redundant lines. The arbiter can be configured to wait a specified hold time to see if the missing packet is delivered by the other side. The difference between the arrival times of copies of the same packet on the redundant lines is referred to as the line skew. In order to be effective, the hold time should be configured to be greater than the average line skew. If the missing packet does not arrive on the redundant side prior to the expiration of the hold time, then a gap is registered for the particular feed line.
[0082] When line gaps occur there are a number of recovery and mitigation strategies that can be employed. The arbiter typically reports the missing sequence numbers to a separate component that manages gap mitigation and recovery. If the feed provides retransmission capabilities, then the arbiter may buffer packets on both sides until the missing packets are returned by the gap recovery component.
[0083] Some feeds sequence updates on a per-message basis or a per-message/per-instrument basis. In these cases, a packet sequence number may not be monotonically increasing or may not be present at all. Typically, arbitration is performed among one or more copies of a UDP multicast feed; however, arbitration can occur among copies of the feed delivered via different transmission protocols (UDP, TCP, etc.). In these scenarios, the content of packets on the redundant copies of the feed may not be identical. The transmitter of packets on the A side may packetize the sequence of market data update messages differently from the transmitter on the B side. This requires the IFS to parse packets prior to performing the arbitration function.
[0084] The line identification code (LIC) provided in the meta-data associated with the
packet allows the IFS to perform the appropriate line arbitration actions for a given packet. If the packet belongs to an unarbitrated TCP flow, then the packet may bypass the line arbitration and gap detection engine. If the line dictates arbitration at the message level as opposed to the packet level, then the IFS first routes the packet to parsing and decoding engines. The line arbitration and gap detection function may be performed by multiple parallel engines. The LIC may also be used to route the packet to the appropriate engine handling arbitration for the associated feed line. Furthermore, the LIC is used to identify the appropriate arbitration buffer into which the packet should be inserted.
[0085] Figure 18 provides an example of a line arbitration offload engine, which is preferably implemented as a pipelined processing engine. For each input line, the arbiter maintains a packet buffer to store the packets received from the redundant sides of the feed line. The example in Figure 18 demonstrates two-way arbitration; additional buffers are provisioned if multi-way arbitration is performed. For feeds transmitted via UDP, it is possible for packets on a given multicast group to be delivered out of sequence if the packets traverse different paths through the network. The packet buffers in the arbiter may optionally provide for resequencing by inserting each new packet in the proper sequence in the buffer. However, market data networks are typically carefully designed to minimize latency and tightly control routing, so out-of-sequence delivery is rarely a problem, and arbiter implementations typically omit resequencing to reduce overhead and complexity.
[0086] The compare, select and drop logic in the arbiter performs the core arbitration
function as previously described. A register is used to maintain the next expected sequence number. The logic compares the sequence number of the packet residing at the head of each packet buffer to the expected sequence number. If a matching sequence number is found, the packet is forwarded. If the sequence number is less than the expected sequence number, the packet is dropped. If the sequence number is greater than the expected sequence number, the other buffer or buffers are examined for the required packet. Note that this may require that multiple packets be read until a match is found, the buffer is empty, or a gap is detected. If a gap is detected, the gap detection and reporting logic resets and then starts the wait timer. If the packet with the expected sequence number does not arrive before the wait timer exceeds the value in the max hold time register, then a gap is reported to the gap mitigation and recovery engine with the missing packet sequence number range. Note that the gap detection and reporting logic may also report gap information to a control processor or to downstream monitoring applications via generated monitoring messages. If the gap mitigation and recovery engine is configured to request retransmissions, then the arbiter pauses until the gap mitigation and recovery engine passes the missing packet or packets to the arbiter or returns a retransmission timeout signal. The gap mitigation and recovery engine may be hosted on the same device as the arbiter, or it may be hosted on a control processor within the IFS.
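The following simplified C++ sketch illustrates the compare, select, and drop behavior described above for two-way, packet-level arbitration. It assumes identical sequence numbers on the A and B copies and a hold timer driven externally; the buffer, timer, and gap-reporting interfaces are illustrative assumptions rather than the hardware design.

```cpp
// Minimal software sketch of compare/select/drop line arbitration.
#include <cstdint>
#include <deque>
#include <functional>

struct ArbPacket {
    uint64_t seq;
    // payload omitted for brevity
};

class LineArbiter {
public:
    explicit LineArbiter(uint64_t first_expected) : expected_(first_expected) {}

    // Called when a packet arrives on side 0 (A) or side 1 (B).
    void on_packet(int side, const ArbPacket& p,
                   const std::function<void(const ArbPacket&)>& forward,
                   const std::function<void(uint64_t, uint64_t)>& report_gap) {
        if (p.seq < expected_) return;          // duplicate of a forwarded packet: drop
        buf_[side].push_back(p);
        drain(forward);
        (void)report_gap;                       // gaps reported on hold-time expiry below
    }

    // Called when the configured hold time expires while waiting for a missing packet.
    void on_hold_timeout(const std::function<void(const ArbPacket&)>& forward,
                         const std::function<void(uint64_t, uint64_t)>& report_gap) {
        uint64_t next = lowest_buffered();
        if (next > expected_) {
            report_gap(expected_, next - 1);    // missing packet sequence number range
            expected_ = next;
        }
        drain(forward);
    }

private:
    void drain(const std::function<void(const ArbPacket&)>& forward) {
        bool progress = true;
        while (progress) {
            progress = false;
            for (auto& q : buf_) {
                while (!q.empty() && q.front().seq < expected_) q.pop_front(); // stale copies
                if (!q.empty() && q.front().seq == expected_) {
                    forward(q.front());          // forward the next in-sequence packet
                    q.pop_front();
                    ++expected_;
                    progress = true;
                }
            }
        }
    }

    uint64_t lowest_buffered() const {
        uint64_t low = UINT64_MAX;
        for (const auto& q : buf_)
            if (!q.empty() && q.front().seq < low) low = q.front().seq;
        return low == UINT64_MAX ? expected_ : low;
    }

    uint64_t expected_;
    std::deque<ArbPacket> buf_[2];               // one buffer per redundant side
};
```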
[0087] As shown in Figure 27, the IFS may implement TCP termination logic in order to offload feed handler processing for feeds utilizing TCP for reliable transmission.
Implementation of TCP consumer logic, including implementation in custom hardware logic, is available from hardware logic block vendors that supply TCP hardware stack modules (e.g., firmware modules that perform TCP endpoint functionality, such as those from PLDA, Embedded Design Studio, HiTech Global, etc.). Note that TCP feeds processed by the TCP termination logic can bypass the line arbitration and gap detection component, as redundant TCP streams are not typically used. By terminating the TCP connection in the IFS, the IFS can effectively provide protocol transformation upstream from the feed handler. The output protocol can be a protocol such as UDP unicast or multicast, raw Ethernet, or a Remote Direct Memory Access (RDMA) protocol implemented over Ethernet (e.g., RoCE).
[0088] 5. Feed Pre-Normalization:
[0089] In addition to performing line arbitration and gap detection, mitigation, and recovery, the IFS can perform one or more "pre-normalization" functions in order to simplify the task of downstream consumers. Following line arbitration, the IFS preferably decomposes packets into discrete messages. As previously described, feed sources typically pack multiple update messages in a single packet. Note that each feed may employ a different packetization strategy, therefore, the pre-normalization engine in the IFS utilizes the packet parsing templates retrieved by the packet mapping engine. Packet parsing techniques amenable to implementation in hardware and parallel processors are known in the art as described in the above-referenced and incorporated U.S. Patent No. 7,921,046. If the feed associated with the packet utilizes FAST compression, then the pre-normalization engine must utilize the FAST decoding template in order to decompress and parse the packet into individual messages, as described in the above-referenced and incorporated U.S. Patent No. 7,921,046.
[0090] Once the packet is parsed into discrete messages, specific fields may be extracted from the messages in order to enable additional pre-normalization functions. Template-based parsing in offload engines is also addressed in the above-referenced and incorporated U.S. Patent No. 7,921,046. Discrete messages and message fields are passed to downstream functions. Note that the message parsing engine may only extract specific fields required for downstream functions, as dictated by the templates included in the meta-data for the packet. For example, the parser may only extract the symbol field in order to enable symbol-based routing and repackaging. For some feeds, the symbol mapping function may require extraction of the order reference number in book update events. This can also be specified by the parsing template.
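As an illustrative assumption only, the sketch below shows template-driven decomposition of a packet into messages for one hypothetical framing (a 2-byte length prefix per message) with extraction of a fixed-offset symbol field. Actual feeds use a variety of framings and FAST templates, so this is a sketch of the general pattern rather than any particular feed format.

```cpp
// Sketch: split a packet into messages and extract only the fields named by a
// (hypothetical) parsing template -- here just a fixed-offset symbol field.
#include <cstdint>
#include <string>
#include <vector>

struct ParsedMessage {
    std::string symbol;                 // extracted field of interest
    std::vector<uint8_t> raw;           // original message preserved intact
};

struct ParseTemplate {
    size_t symbol_offset;               // byte offset of the symbol within the message
    size_t symbol_length;               // fixed-width, space-padded symbol field
};

// Assumes each message is prefixed by a 2-byte big-endian length.
std::vector<ParsedMessage> parse_packet(const uint8_t* data, size_t len,
                                         const ParseTemplate& tmpl) {
    std::vector<ParsedMessage> out;
    size_t pos = 0;
    while (pos + 2 <= len) {
        size_t msg_len = (size_t(data[pos]) << 8) | data[pos + 1];
        pos += 2;
        if (pos + msg_len > len) break;                 // truncated packet
        ParsedMessage m;
        m.raw.assign(data + pos, data + pos + msg_len);
        if (tmpl.symbol_offset + tmpl.symbol_length <= msg_len) {
            m.symbol.assign(reinterpret_cast<const char*>(data + pos + tmpl.symbol_offset),
                            tmpl.symbol_length);
            while (!m.symbol.empty() && m.symbol.back() == ' ') m.symbol.pop_back();
        }
        out.push_back(std::move(m));
        pos += msg_len;
    }
    return out;
}
```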
[0091] Note that the message parsing logic can be configured to preserve the original
structure of the message. Extracted fields, such as symbols and order reference numbers, can be added to the meta-data that accompanies the packet as it propagates through the IFS. By preserving the message structure, downstream consumer applications need not be changed when an IFS is introduced in the market data network. For example, an existing feed handler for the NASDAQ TotalView feed need not change, as the format of the messages it processes still conforms to the feed specification. If the symbol-routing and repackaging function is applied, the existing feed handler will simply receive packets with messages associated with the symbol range for which it is responsible, but the message formats will conform to the exchange specification. This function is described in more detail below.
[0092] The pre-normalization logic can also be configured to offload normalization logic from downstream consumers. For example, the parsing logic can be configured to perform FAST decompression and FIX parsing. Per the parsing templates in the meta-data, the fields in each message can be converted to a prescribed native data type. For example, an ASCII-encoded price field can be converted into a signed 32-bit integer, an ASCII-encoded string can be mapped to a binary index value, etc. The type-converted fields can then be aligned on byte or word boundaries in order to facilitate efficient consumption by consumers. The pre-normalization logic can maintain a table of downstream consumers capable of receiving the pre-normalized version of the feed. For example, the IFS may transmit pre-normalized messages on ports 3 through 8, but transmit the raw messages on ports 9 through 12.
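The following sketch illustrates one such datatype conversion, an ASCII-encoded decimal price converted to a signed 32-bit integer with a fixed number of implied decimal places. The scaling convention is an assumption for the example; each feed and consumer would define its own.

```cpp
// Sketch: convert an ASCII decimal price (e.g. "123.45") to a scaled int32.
#include <cstdint>
#include <stdexcept>
#include <string>

int32_t ascii_price_to_int32(const std::string& text, int implied_decimals) {
    bool negative = false;
    size_t i = 0;
    if (i < text.size() && (text[i] == '-' || text[i] == '+')) {
        negative = (text[i] == '-');
        ++i;
    }
    int64_t value = 0;
    int decimals_seen = -1;                       // -1 until the decimal point is found
    for (; i < text.size(); ++i) {
        char c = text[i];
        if (c == '.') { decimals_seen = 0; continue; }
        if (c < '0' || c > '9') throw std::invalid_argument("bad price: " + text);
        value = value * 10 + (c - '0');
        if (decimals_seen >= 0) ++decimals_seen;
    }
    if (decimals_seen < 0) decimals_seen = 0;
    for (; decimals_seen < implied_decimals; ++decimals_seen) value *= 10;
    return static_cast<int32_t>(negative ? -value : value);
}

// Example: ascii_price_to_int32("123.45", 4) == 1234500
```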
[0093] For some feeds, the IFS can be configured to append fields to the raw message,
allowing consuming applications to be extended to leverage the additional fields to reap performance gains, without disrupting the function of existing consumers. For example, the IFS may append the MIC, DSIC, LIC, and binary symbol index to the message. Additional appended fields may include, but are not limited to, message-based sequence numbers and high-resolution IFS transmit timestamps.
[0094] As previously mentioned, the IFS can be configured to perform a symbol mapping function. The symbol mapping function assigns a binary symbol index to the financial instrument associated with the update event. This index provides a convenient way for downstream functions and consuming applications to perform processing on a per symbol basis. An efficient technique for mapping instrument symbols using parallel processing resources in offload engines is described in the above-referenced and incorporated U.S. Patent No. 7,921,046. Note that some feeds provide updates on a per-order basis and some update events do not contain the instrument symbol, but only an order reference number. As shown in Figure 28, feed consumers can maintain a table of active orders in order to map an order reference number to an active order to buy or sell the financial instrument identified by the associated symbol. Note that events that report a new active order include a reference to the symbol for the financial instrument. In this case, the symbol is mapped to a symbol ID. The order information and symbol ID are then added to the active order table. When subsequent order-referenced modify or delete events (that do not contain a symbol) are received, the order reference number is used to lookup the order's entry in the active order table that includes the symbol ID. Thus, as shown in Figure 28, a demultiplexer (DEMUX) can receive streaming parsed messages that include a symbol reference or an order reference to identify a message or event type. This type data can determine whether the parsed message is passed to the output line feeding the symbol lookup operation or the output line feeding the order lookup operation. As shown, data for new orders can be passed from the symbol lookup to the order lookup for updating the active order table. A multiplexer (MUX) downstream from the symbol lookup and order lookup operations can merge the looked up data (symbol ID, order information, as appropriate) with the parsed messages for delivery downstream. An efficient technique for mapping order reference numbers to the mapped symbol index using parallel processing resources in offload engines is described in the above-referenced and incorporated WO Pub. WO 2010/077829. In order to perform the symbol mapping function, the computational resources in the IFS can include dedicated high-speed memory interfaces.
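A minimal sketch of the active order table portion of this flow is shown below: "add order" events populate the table with a symbol ID, and later order-referenced events recover the symbol ID from the order reference number. Event layouts and names are assumptions for illustration.

```cpp
// Sketch of the order-reference to symbol mapping of Figure 28.
#include <cstdint>
#include <optional>
#include <string>
#include <unordered_map>

class ActiveOrderTable {
public:
    // New order: map its symbol to a compact symbol ID and remember the order.
    uint32_t on_add(uint64_t order_ref, const std::string& symbol) {
        uint32_t sym_id = symbol_index(symbol);
        orders_[order_ref] = sym_id;
        return sym_id;
    }

    // Modify/execute event that references an existing order (no symbol present).
    std::optional<uint32_t> on_reference(uint64_t order_ref) const {
        auto it = orders_.find(order_ref);
        if (it == orders_.end()) return std::nullopt;   // unknown order (e.g. pre-session)
        return it->second;
    }

    void on_delete(uint64_t order_ref) { orders_.erase(order_ref); }

private:
    uint32_t symbol_index(const std::string& symbol) {
        auto it = symbols_.find(symbol);
        if (it != symbols_.end()) return it->second;
        uint32_t id = static_cast<uint32_t>(symbols_.size());
        symbols_[symbol] = id;
        return id;
    }

    std::unordered_map<std::string, uint32_t> symbols_;   // symbol -> symbol ID
    std::unordered_map<uint64_t, uint32_t>   orders_;     // order ref -> symbol ID
};
```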
[0095] As part of the pre-normalization function, the IFS may also assign one or more high- precision timestamps. For example, a timestamp may be assigned when the IFS receives a packet, a timestamp may be assigned immediately prior to transmitting a packet, etc. The high-precision timestamp preferably provides nanosecond resolution. In order to provide synchronized timestamps with downstream consumers, the time source used to assign the timestamps should be disciplined with a high-precision time synchronization protocol.
Example protocols include the Network Time Protocol (NTP) and the Precision Time Protocol (PTP). The protocol engine can be co-resident with the offload engines in the IFS, but is preferably implemented in a control processor that disciplines a timer in the offload engines. As part of the pre-normalization function, the IFS may also assign additional sequence numbers. For example, the IFS may assign a per-message, per-symbol sequence number. This would provide a monotonically increasing sequence number for each instrument. These additional timestamps and sequence numbers may be appended to raw message formats or included in the pre-normalized message format, as described above.
[0096] 6. Symbol-Based Routing and Repackaging:
[0097] The symbol-based routing allows the IFS to deliver updates for a prescribed set of symbols to downstream components in the electronic trading platform. As shown in Figure 16, the IFS can act as a subscription based routing and filtering engine for latency-sensitive applications that consume the raw or pre-normalized updates directly from the IFS.
Similarly, the IFS can facilitate a horizontal scaling strategy by striping the incoming raw feed data by symbol within the market data feed network itself. This allows the IFS to deliver the updates for the prescribed symbol range to downstream feed handler or ticker plant components, without having to rely on additional processing capabilities in those components to perform this function. This can dramatically reduce data delivery latency and increase the processing capacity of those components.
[0098] Figure 20 depicts an exemplary processing module configured to perform symbol-routing and repackaging. Such a module is preferably implemented as a pipelined processing engine. As shown in Figure 20, the symbol-routing and repackaging function first utilizes the symbol index to lookup an interest list in the interest list table. Note that additional fields such as the market identification code (MIC) and data source identification code (DSIC) may be used in addition to the symbol index to lookup an interest list. Similar to the interest-based filtering and replication discussed in the above-referenced and incorporated U.S. Patent No. 7,921,046, the interest list is stored in the form of a bit vector where the position of each bit corresponds to a downstream consumer. For the IFS, a downstream consumer may be a physical output port, a multicast group, a specific host or server, a specific application (such as a feed handler), etc. The scope of a "consumer" depends on the downstream platform architecture. Associated with each consumer is a message queue that contains the messages destined for the consumer. A fair scheduler ensures that each of the message queues receives fair service. Packetization logic reads multiple updates from the selected message queue and packages the updates into a packet for transmission on the prescribed output port, using the prescribed network address and transport port. Messages can be combined into an outgoing Ethernet frame with appropriate MAC-level and, optionally, IP-level headers.
[0099] Preferably, the packetization logic constructs maximally sized packets: the logic reads as many messages as possible from the queue until the maximum packet size is reached or the message queue is empty. Note that packetization strategy and destination parameters may be specified via packaging parameters stored in a table. The packetization logic simply performs a lookup using the queue number that it is currently servicing in order to retrieve the appropriate parameters. The interest list and packaging parameter tables are preferably managed by configuration, control, and table management logic hosted on a co-resident control processor.
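The sketch below illustrates the interest-list lookup, per-consumer queueing, and maximally sized packetization just described, using a 64-bit interest vector per symbol. The consumer limit and data structures are simplifying assumptions; headers for the prescribed transport would be added by separate logic.

```cpp
// Sketch of interest-list routing (Figure 20 style) with per-consumer queues.
#include <cstdint>
#include <deque>
#include <vector>

struct Msg { std::vector<uint8_t> bytes; };

class SymbolRouter {
public:
    SymbolRouter(size_t num_symbols, size_t num_consumers)
        : interest_(num_symbols, 0), queues_(num_consumers) {}

    void set_interest(uint32_t symbol_idx, unsigned consumer) {
        interest_[symbol_idx] |= (1ULL << consumer);
    }

    // Route one parsed message to every interested consumer's queue.
    void route(uint32_t symbol_idx, const Msg& m) {
        uint64_t bits = interest_[symbol_idx];
        for (unsigned consumer = 0; consumer < queues_.size() && consumer < 64; ++consumer) {
            if (bits & (1ULL << consumer)) queues_[consumer].push_back(m);
        }
    }

    // Build a maximally sized outgoing packet payload for one consumer.
    std::vector<uint8_t> packetize(unsigned consumer, size_t max_packet_bytes) {
        std::vector<uint8_t> packet;
        auto& q = queues_[consumer];
        while (!q.empty() && packet.size() + q.front().bytes.size() <= max_packet_bytes) {
            packet.insert(packet.end(), q.front().bytes.begin(), q.front().bytes.end());
            q.pop_front();
        }
        return packet;       // transport headers added by downstream logic
    }

private:
    std::vector<uint64_t> interest_;          // one 64-bit interest vector per symbol
    std::vector<std::deque<Msg>> queues_;     // one message queue per consumer
};
```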
[00100] Note that the messages in the newly constructed packets may have been transmitted by their concomitant feed sources in different packets or in the same packet with other messages that are now excluded. This is an example of the IFS constructing a customized "feed" for downstream consumers.
[00101] If downstream consumers are equipped with network interface devices that allow for custom protocol implementation, e.g. an FPGA connected directly to the physical network link, then additional optimizations may be implemented by the packetization logic. For example, the Ethernet MAC-level (and above) headers and CRC trailer may be stripped off any packet. By doing so, unnecessary overhead can be removed from packets, reducing packet sizes, reducing data transmission latency, and reducing the amount of processing required to consume the packets. As shown in Figure 16, this optimization may apply to latency-sensitive trading applications, feed handlers, or ticker plants. [00102] 7. Depth Price Aggregation and Synthetic Quotes:
[00103] With sufficient processing and memory resources, additional data normalization
functions may be performed by the IFS, and thus offloaded from platform components such as feed handlers, virtual order book engines, and ticker plants. One such function is price normalization for order-based depth of market feeds. As described in the above-referenced and incorporated U.S. Patent No. 7,921,046, WO Pub. WO 2010/077829, and U.S. Pat. App. Ser. No. 13/316,332, a number of market data feeds operate at the granularity of individual orders to buy or sell a financial instrument. The majority of real-time updates represent new orders, modifications to existing orders, or deletions of existing orders. As described in these incorporated references, a significant number of market data applications choose to consume the order-based depth of market feeds simply due to the reduced data delivery latency relative to top-of-book or consolidated feeds. However, the applications typically do not require visibility into the individual orders, but rather choose to view pricing information as a limited-depth, price-aggregated book, or as a top-of-book quote. In the above-referenced and incorporated U.S. Patent No. 7,921,046, WO Pub. WO 2010/077829, and U.S. Pat. App. Ser. No. 13/316,332, a number of techniques are disclosed for efficiently performing price aggregation in parallel processing elements such as reconfigurable hardware devices. The same methods can be applied in the context of an intelligent feed switch to offload price aggregation from downstream consumers. For example, rather than consuming the NASDAQ TotalView feed in its raw order-referenced format, downstream consumers can consume price-aggregated updates reflecting new price points, changes to existing price points, and deletions of price points from the book. This can reduce the number of update events to downstream consumers.
[00104] Note that price aggregation may be performed on a per-symbol, per-market basis (e.g.
NASDAQ market only), or on a per-symbol, pan-market basis (e.g. NASDAQ, NYSE, BATS, ARCA, Direct Edge) to facilitate virtual order book views.
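As a simplified software illustration of the price aggregation described above, the sketch below folds individual order events into aggregate size per price level and classifies the resulting price-level update. Keying and update types are assumptions; the incorporated references describe parallel hardware approaches.

```cpp
// Sketch: fold order-level add/modify/delete events into price-level updates.
#include <cstdint>
#include <map>

enum class LevelUpdate : uint8_t { NewLevel, SizeChange, LevelDeleted, NoChange };

class PriceAggregator {
public:
    // delta is positive for added size (new order), negative for reductions.
    LevelUpdate apply(int64_t price, int64_t delta) {
        auto it = levels_.find(price);
        if (it == levels_.end()) {
            if (delta <= 0) return LevelUpdate::NoChange;   // reduction on unknown level
            levels_[price] = delta;
            return LevelUpdate::NewLevel;
        }
        it->second += delta;
        if (it->second <= 0) {
            levels_.erase(it);
            return LevelUpdate::LevelDeleted;
        }
        return LevelUpdate::SizeChange;
    }

    int64_t size_at(int64_t price) const {
        auto it = levels_.find(price);
        return it == levels_.end() ? 0 : it->second;
    }

private:
    std::map<int64_t, int64_t> levels_;   // price -> aggregate size (one side of the book)
};
```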
[00105] A further reduction in the number of updates consumed by downstream consumers can be achieved by performing size filtering. Size filtering is defined as the suppression of an update if the result of the update is a change in aggregate volume (size) at a pre-existing price point, where the amount of the change relative to the most recent update transmitted to consumers is less than a configured threshold. Note that the threshold may be relative to the current volume, e.g. a change in size of 50%. [00106] Again, if sufficient processing and memory resources are deployed within the IFS, a synthetic quote engine can be included. As described in the above-referenced and incorporated U.S. Patent No. 7,921,046, WO Pub. WO 2010/077829, and U.S. Pat. App. Ser. No. 13/316,332, price-aggregated entries can be sorted into a price book view for each symbol. The top N levels of the price-aggregated book represent a top-of-book quote. Note that N is typically one (i.e. only the best bid and offer values), but N may be set to be a small value such as three (3) to enhance the quote with visibility into the next N-1 price levels in the book. The techniques described in these incorporated references can be used to efficiently sort price-aggregated updates into price books and generate top-of-book quotes when an entry in the top N levels changes using parallel processing resources.
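The following sketch illustrates the size-filtering rule defined above: an update to a pre-existing price level is suppressed unless the aggregate size has changed by at least a configured fraction since the last update actually forwarded. The keying on (symbol, price) and the threshold handling are assumptions for the example.

```cpp
// Sketch of size filtering relative to the last forwarded size per price level.
#include <cmath>
#include <cstdint>
#include <map>
#include <utility>

class SizeFilter {
public:
    explicit SizeFilter(double threshold_fraction) : threshold_(threshold_fraction) {}

    // Returns true if the update should be forwarded to downstream consumers.
    bool should_forward(uint32_t symbol_idx, int64_t price, uint64_t new_size) {
        auto key = std::make_pair(symbol_idx, price);
        auto it = last_sent_.find(key);
        if (it == last_sent_.end() || new_size == 0) {       // new or deleted price level
            last_sent_[key] = new_size;
            return true;
        }
        double prev = static_cast<double>(it->second);
        if (prev == 0.0) {                                   // level previously emptied
            it->second = new_size;
            return true;
        }
        double change = std::fabs(static_cast<double>(new_size) - prev) / prev;
        if (change >= threshold_) {
            it->second = new_size;
            return true;
        }
        return false;                                        // suppress small size change
    }

private:
    double threshold_;
    std::map<std::pair<uint32_t, int64_t>, uint64_t> last_sent_;  // last forwarded size
};
```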
[00107] 8. Event Caching:
[00108] As previously described, the IFS is capable of transmitting only updates for symbols in which downstream consumers are interested, using the symbol-based routing described above. If a consumer wishes to add a symbol to its set of interest, the consumer would need to wait until a subsequent quote event is transmitted by the feed source in order to receive the current pricing for the associated financial instrument. A simple form of a cache can be efficiently implemented in the IFS in order to allow downstream consumers to immediately receive current pricing data for a financial instrument if its symbol is dynamically added to its set of interest during a trading session. For feeds that provide top-of-book quote updates and last trade reports, the IFS can maintain a simple last event cache that stores the most recent quote and most recent trade event received on a per-symbol, per-market basis.
Specifically, a table of events is maintained where an entry is located using the symbol index, MIC, and DSIC. When the set of interest changes for a given downstream consumer, the current quote and trade events in the event cache are transmitted to the consumer. This allows the consumer to receive the current bid, offer, and last traded price information for the instrument.
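A minimal sketch of such a last event cache is shown below, keyed on the symbol index, MIC, and DSIC, and returning the cached quote and trade events when a consumer's interest set changes. Key layout and payload representation are assumptions.

```cpp
// Sketch: retain the most recent quote and trade per (symbol, market, data source).
#include <cstdint>
#include <functional>
#include <optional>
#include <unordered_map>
#include <vector>

struct CacheKey {
    uint32_t symbol_idx;
    uint16_t mic;
    uint16_t dsic;
    bool operator==(const CacheKey& o) const {
        return symbol_idx == o.symbol_idx && mic == o.mic && dsic == o.dsic;
    }
};
struct CacheKeyHash {
    size_t operator()(const CacheKey& k) const {
        return std::hash<uint64_t>()((uint64_t(k.symbol_idx) << 32) ^
                                     (uint64_t(k.mic) << 16) ^ k.dsic);
    }
};

struct CachedEvents {
    std::vector<uint8_t> last_quote;   // most recent quote message (raw or pre-normalized)
    std::vector<uint8_t> last_trade;   // most recent trade report
};

class LastEventCache {
public:
    void on_quote(const CacheKey& k, std::vector<uint8_t> msg) { cache_[k].last_quote = std::move(msg); }
    void on_trade(const CacheKey& k, std::vector<uint8_t> msg) { cache_[k].last_trade = std::move(msg); }

    // Invoked when a consumer adds the instrument to its interest set mid-session.
    std::optional<CachedEvents> snapshot(const CacheKey& k) const {
        auto it = cache_.find(k);
        if (it == cache_.end()) return std::nullopt;
        return it->second;
    }
private:
    std::unordered_map<CacheKey, CachedEvents, CacheKeyHash> cache_;
};
```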
[00109] If sufficient processing resources exist in the IFS, a full last value cache (LVC) can be maintained as described in the above-referenced and incorporated U.S. Patent No.
7,921,046.
[00110] 9. Data Quality Monitoring:
[00111] The IFS can also be configured to monitor a wide variety of data quality metrics on a per-symbol, per-market basis. A list of data quality metrics includes, but is not limited to: • Line gap: packet loss experienced on the line carrying updates for the symbol.
• Line dead: the input feed line is detected to be in a "dead" state where no data is being received.
• Locked market: the best bid and offer prices for the instrument on the given market are identical
• Crossed market: the best bid price is larger than the best offer price for the instrument on the given market
[00112] The data quality can be reflected in an enumerated value and included in messages transmitted to downstream consumers as an appended field, as previously described. These enumerated data quality states can be used by the IFS and/or downstream consumers to perform a variety of data quality mitigation operations.
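For illustration, the sketch below shows two of the listed checks, locked and crossed market detection, expressed as a comparison of the best bid and best offer; the enumerated state values are assumptions.

```cpp
// Sketch of per-symbol, per-market locked/crossed market detection.
#include <cstdint>

enum class QuoteQuality : uint8_t { Normal = 0, Locked = 1, Crossed = 2 };

inline QuoteQuality classify_quote(int64_t best_bid, int64_t best_offer) {
    if (best_bid > best_offer) return QuoteQuality::Crossed;  // bid above offer
    if (best_bid == best_offer) return QuoteQuality::Locked;  // bid equals offer
    return QuoteQuality::Normal;
}
```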
[00113] 10. Data Source Failover:
[00114] An example of a data quality mitigation operation is to provide data source failover.
As previously described, there may be multiple data sources for market data updates from a given market, hence the need for a data source identification code (DSIC). Rather than specifying a specific <symbol, market, data source> tuple when establishing interest in an instrument, downstream consumers may specify a <symbol, market> tuple where the "best" data source is selected by the IFS. A prioritized list of data sources for each market is specified in the control logic. When the data quality associated with the current preferred data source for a market transitions to a "poor" quality state, the IFS automatically transitions to the next highest-priority data source for the market. The data quality states that constitute "poor" quality are configured in the control logic. When a data source transition occurs, the control logic alters the interest list entries associated with affected instruments and downstream consumers. Note that if a higher-priority data source transitions out of a "poor" quality state, the IFS automatically transitions back to the higher-priority data source. Preferably, the IFS is configured to apply hysteresis to the data source failover function to prevent thrashing between data sources. Note that data source failover may rely on the presence of other functions within the IFS such as synthetic quote generation if failover is to be supported between depth of market feeds and top-of-book quote feeds.
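The sketch below illustrates prioritized data source selection with a simple hold-down form of hysteresis: the highest-priority source is selected only after it has remained out of the "poor" state for a configured interval. The hold-down mechanism and time units are assumptions for the example.

```cpp
// Sketch of prioritized data source failover with a hold-down hysteresis.
#include <cstdint>
#include <utility>
#include <vector>

struct SourceState {
    uint16_t dsic;            // data source identification code, in priority order
    bool     poor;            // current quality state
    uint64_t good_since_ms;   // timestamp when quality last became good
};

class FailoverSelector {
public:
    FailoverSelector(std::vector<SourceState> sources, uint64_t holddown_ms)
        : sources_(std::move(sources)), holddown_ms_(holddown_ms) {}

    void set_quality(uint16_t dsic, bool poor, uint64_t now_ms) {
        for (auto& s : sources_) {
            if (s.dsic != dsic) continue;
            if (s.poor && !poor) s.good_since_ms = now_ms;  // transition back to good
            s.poor = poor;
        }
    }

    // Highest-priority source that is good and has stayed good through the hold-down.
    uint16_t select(uint64_t now_ms) const {
        for (const auto& s : sources_) {
            if (!s.poor && now_ms - s.good_since_ms >= holddown_ms_)
                return s.dsic;
        }
        return sources_.empty() ? 0 : sources_.back().dsic;  // fall back to lowest priority
    }

private:
    std::vector<SourceState> sources_;   // ordered highest priority first
    uint64_t holddown_ms_;
};
```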
[00115] 11. Monitoring, Configuration, and Control: [00116] The monitoring, configuration, and control logic described above is preferably hosted on a co-resident processor in the IFS. This logic may interface with applications in the electronic trading platform or remote operations applications. In one embodiment of the IFS, control messages are received from an egress port. This allows one or more applications in the electronic trading platform to specify symbol routing parameters, packet and message parsing templates, prioritized lists of data sources, gap reporting and mitigation parameters, etc.
[00117] In addition, a variety of statistics counters and informational registers are maintained by the offload engines that can be accessed by the control logic in the IFS such as per-line packet and message counters, packet and message rates, gap counters and missing sequence registers, packet size statistics, etc. These statistics are made available to the external world via common mechanisms in the art, including SNMP, HTML, etc.
[00118] 12. Feed Generation:
[00119] The IFS can also be used by feed sources (exchanges and consolidated feed vendors) to offload many of the functions required in feed generation. These tasks are largely the inverse of those performed by feed consumers. Specifically, the IFS can be configured to encode updates using prescribed encoding templates and transmit the updates on specified multicast groups, output ports, etc. Other functions that are applicable to feed generation include high-resolution timestamping, rate monitoring, and data quality monitoring.
[00120] C. Intelligent Distribution Switch:
[00121] The same methods and apparatuses can be applied to the task of distributing data throughout the electronic trading platform. As shown in Figure 24, an Intelligent
Distribution Switch (IDS) can be positioned downstream of market data normalization components in the electronic trading platform. The IDS can be used to offload distribution functions from normalization components such as ticker plants, to offload data consumption and management functions from downstream consumers such as trading applications, and to introduce new capabilities into the distribution network in the electronic trading platform. Examples of distribution capabilities are described in the above-referenced and incorporated U.S. Pat. App. Ser. No. 61/570,670.
[00122] The IDS architecture can be one of the previously described variants shown in Figures 21, 22, and 23. Note that the number of switch ports and amount of interconnect bandwidth between internal devices (FPGAs, switch ASICs, memory, etc.) may be provisioned differently for an IDS application, relative to an IFS application. [00123] As shown in Figure 25, the IDS may host one or more distribution functions. The IDS can be used to offload the task of interest-based distribution. The IDS can maintain a mapping from instrument symbol to interest list, an example of such a mapping being described in the above-referenced and incorporated U.S. Patent No. 7,921,046. If point-to-point transmission protocols are in use, then the IDS makes the requisite copies of the update event and addresses each event for the specified consumer. By offloading this function, upstream components such as ticker plants only need to propagate a single copy of each update event. This reduces the processing resource requirement, or allows the processing resources previously dedicated to interest list maintenance and event replication to be redeployed for other purposes.
[00124] Data source failover may also be performed by the IDS. Like the previously
described data source failover function performed in the IFS, the IDS allows downstream consumers to specify a prioritized list of normalized data sources. When the preferred source becomes unavailable or the data quality transitions to an unacceptable state, the IDS switches to the next highest priority normalized data source.
[00125] The IDS may also perform customized computations on a per-consumer basis. Example computations include constructing user-defined Virtual Order Books, performing basket computations, computing options prices (and implied volatilities), and generating user-defined Best Bid and Offer (BBO) quotes (see the above-referenced and incorporated U.S. Patent Nos. 7,840,482 and 7,921,046, U.S. Pat. App. Pub. 2009/0182683, and WO Pub. WO 2010/077829 for examples of hardware-accelerated processing modules for such tasks). Performing these functions in an IDS at the "edge" of the distribution network allows the functions to be customized on a per-consumer basis. Note that a ticker plant distributing data to hundreds of consumers may not have the processing capacity to perform hundreds of customized computations, one for each consumer. Examples of other customized per-consumer computations include: liquidity target Net Asset Value (NAV) computations, future/spot price transformations, and currency conversions.
[00126] Additionally, the IDS may host one or more of the low latency data distribution
functions described in the above-referenced and incorporated U.S. Pat. App. Ser. No.
61/570,670. In one embodiment, the IDS may perform all of the functions of an Edge Cache. In another embodiment, the IDS may perform all of the functions of a Connection Multiplexer. As such, the IDS includes at least one instance of a multi-class distribution engine (MDE) that includes some permutation of Critical Transmission Engine, Adaptive Transmission Engine, or Metered Transmission Engine. [00127] Like the customized per consumer computations, the IDS may also perform per consumer protocol bridging. For example, the upstream connection from the IDS to a ticker plant may use a point-to-point Remote Direct Memory Access (RDMA) protocol. The IDS may be distributing data to a set of consumers via point-to-point connections using the Transmission Control Protocol (TCP) over Internet Protocol (IP), and distributing data to another set of consumers via a proprietary reliable multicast protocol over Unreliable Datagram Protocol (UDP).
[00128] 1. Low Overhead Communication Protocols:
[00129] Note that if intelligent FPGA NICs are used in the consuming machines, then a direct FPGA-to-FPGA wire path exists between the FPGA in the switch and the FPGA in the NIC. This eliminates the need for Ethernet frame headers, IP headers, CRCs, inter-frame spacing and other overhead, and allows the FPGA in the switch to communicate directly with the FPGA in the NIC, without being constrained to specific communication protocols.
[00130] D. Non-Financial Embodiments
[00131] It should be understood that the offload processing techniques described herein can also be applied to data other than financial market data. For example, the packet
reorganization techniques described in connection with Figures 3-6 can be applied to one or more data feeds of non-financial data. Figures 29-32 illustrate such non-financial examples.
[00132] In the embodiment of Figure 29, data packets from a plurality of data feeds arrive on an input link to the offload processor, and the offload processor 300 is configured to provide consumer-specific repackaging of the incoming data packets. Thus, however the messages of the incoming packets may have been organized, the outgoing packets can organize the messages on a consumer-specific or other basis. Moreover, it should be understood that the incoming data packets may correspond to only a single data feed.
[00133] Figure 30 depicts an embodiment where the offload processor 300 receives multiple incoming data feeds on multiple input links and provides repackaging for a single output link.
[00134] Figure 31 depicts an embodiment where the offload processor 300 receives one or more data feeds on a single input link and provides repackaging for multiple output links.
[00135] Figure 32 depicts an embodiment where the offload processor 300 receives multiple incoming data feeds on multiple input links and provides repackaging for multiple output links. [00136] Examples of non-financial data feeds could be data feeds such as those from social networks (e.g., a Twitter data feed, a Facebook data feed, etc.), content aggregation feeds (e.g., RSS feeds), machine-readable news feeds, and others.
[00137] Figure 33 depicts how the offload processor 300 can deliver the outgoing reorganized data packets to a plurality of different data consumers.
[00138] The offload processor 300 can take the form of an intelligent feed switch 3400,
similar to as described above. Such a switch 3400 can reside in a data distribution network. The intelligent feed switch 3400 can be configured to provide any of a number of data processing operations on incoming messages within the data packets of the one or more incoming data feeds. In exemplary embodiments, these data processing operations can be hardware-accelerated data processing operations. Examples of hardware-accelerated data processing operations that can be performed include data processing operations such as data searching, regular expression pattern matching, approximate pattern matching,
encryption/decryption, compression/decompression, rule processing, data indexing, and others, such as those disclosed by U.S. Pat. Nos. 6,711,558, 7,139,743, 7,636,703,
7,702,629, 8,095,508 and U.S. Pat. App. Pubs. 2007/0237327, 2008/0114725,
2009/0060197, and 2009/0287628, the entire disclosures of each of which are
incorporated herein by reference. As previously noted, examples of suitable hardware acceleration platforms can include reconfigurable logic (e.g., FPGAs) and GPUs.
[00139] In an exemplary embodiment, the different data consumers may have a desire to monitor one or more data feeds for data of interest. For example, a consumer may be interested in being notified of or receiving all messages in a data feed that include a particular company name, person's name, sports team, and/or city. Moreover, different data consumers would likely have varying interests with regard to such monitoring efforts. The intelligent feed switch can be configured to perform search operations on the messages in one or more data feeds to find all messages which include data that matches one or more search terms. The messages that match the terms for a given data consumer can then be associated with that data consumer, and the intelligent feed switch can direct such messages to the interested data consumer. Figure 35 illustrates a process flow for such an operation. The intelligent feed switch can implement hardware-accelerated search capabilities as described in the above-referenced and incorporated patents and patent applications to implement the process flow of Figure 35. [00140] In another exemplary embodiment, different consumers may want different messages of interest to them encrypted in a certain fashion. Such encryption operations can also be implemented in the intelligent feed switch, preferably as hardware-accelerated encryption.
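As an illustrative sketch only, the following example registers search terms per consumer and tags each message with the consumers whose terms it matches; plain substring matching stands in here for the exact, approximate, and regular expression matching engines described above, and all names are assumptions.

```cpp
// Sketch of per-consumer term matching for routing messages of interest.
#include <cstdint>
#include <string>
#include <vector>

struct ConsumerTerms {
    uint32_t consumer_id;
    std::vector<std::string> terms;   // e.g. company names, people, teams, cities
};

// Returns the list of consumer IDs whose terms appear in the message.
std::vector<uint32_t> match_consumers(const std::string& message,
                                      const std::vector<ConsumerTerms>& subscriptions) {
    std::vector<uint32_t> interested;
    for (const auto& sub : subscriptions) {
        for (const auto& term : sub.terms) {
            if (message.find(term) != std::string::npos) {
                interested.push_back(sub.consumer_id);
                break;                     // one match is enough for this consumer
            }
        }
    }
    return interested;
}
```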
[00141] In yet another exemplary embodiment, different consumers may desire different data normalization/quality checking operations be performed on messages of interest to them. Once again, such operations could be implemented in the intelligent feed switch on a consumer-specific basis.
[00142] While the present invention has been described above in relation to exemplary
embodiments, various modifications may be made thereto that still fall within the invention's scope, as would be recognized by those of ordinary skill in the art. Such modifications to the invention will be recognizable upon review of the teachings herein. As such, the full scope of the present invention is to be defined solely by the appended claims and their legal equivalents.

Claims

WHAT IS CLAIMED IS:
1. A method for processing data, the method comprising:
receiving, in an offload processor, a plurality of data packets, each data packet of the plurality of received data packets comprising at least one message, wherein the messages are included in the received data packets according to a first criterion, the messages comprising at least one message data field;
the offload processor processing the received data packets to select a plurality of the messages according to a second criterion, the second criterion being different than the first criterion; and
grouping the selected messages into a plurality of outgoing data packets to thereby generate outgoing data packets where each outgoing data packet comprises messages that were commonly selected according to the second criterion.
2. The method of claim 1 wherein the received data packets correspond to a plurality of different data feeds.
3. The method of claim 2 further comprising:
the offload processor searching the messages for data that matches at least one search term to identify matching data within the messages with respect to the at least one search term; and
wherein the processing step comprises the offload processor sorting the messages having the matching data according to the second criterion.
4. The method of claim 3 wherein the at least one search term comprises a plurality of the search terms, each search term being associated with a data consumer such that the search terms are associated with a plurality of different data consumers;
wherein the searching step comprises the offload processor searching the messages for data that matches any of the search terms to identify matching data within the messages with respect to the search terms;
wherein the sorting step comprises the offload processor (1) associating the messages having the matching data with the data consumers that are associated with the search terms for which the matching data was found within the messages, and (2) sorting the messages having the matching data with respect to their associated data consumers; and
wherein the grouping step comprises grouping the sorted messages into the outgoing data packets such that each outgoing data packet comprises messages having matching data that are all associated with the same data consumer.
5. The method of any of claims 2-4 wherein the offload processor comprises a field
programmable gate array (FPGA).
6. The method of claim 5 wherein the data feeds include a plurality of different social network data feeds.
7. The method of claim 5 wherein the data feeds include a content aggregation feed.
8. The method of claim 5 wherein the data feeds include a machine-readable news feed.
9. The method of any of claims 3-8 wherein the searching step comprises the offload processor performing at least one member of the group consisting of an exact matching operation, an approximate match operation, and a regular expression pattern match operation on the messages.
10. The method of any of claims 2-9 further comprising:
the offload processor encrypting at least a portion of the selected messages.
11. The method of claim 10 wherein the encrypting step comprises the offload processor performing different encryption operations on selected messages for a plurality of different data consumers of the outgoing data packets.
12. The method of any of claims 2-11 further comprising:
the offload processor normalizing at least a portion of the selected messages.
13. The method of claim 12 wherein the normalizing step comprises the offload processor performing different normalization operations on selected messages for a plurality of different data consumers of the outgoing data packets.
14. The method of any of claims 1-13 wherein the grouping step comprises the offload processor performing the grouping step.
15. The method of claim 14 wherein the offload processor comprises at least one member of the group consisting of a reconfigurable logic device, a graphics processor unit (GPU), and a chip multiprocessor (CMP).
16. The method of any of claims 1-15 wherein the offload processor comprises a field programmable gate array (FPGA).
17. The method of any of claims 1-16 wherein at least a plurality of the received data packets comprise transmission control protocol (TCP) data packets, and wherein the processing step comprises the offload processor processing the received data packets to perform a TCP termination on the received TCP data packets.
18. The method of any of claims 1-17 wherein the processing step further comprises the offload processor performing data quality monitoring.
19. The method of any of claims 1-18 wherein the grouping step further comprises the offload processor generating the outgoing data packets such that the outgoing data packets utilize a different communication protocol relative to the received data packets.
20. The method of claim 19 further comprising the offload processor communicating the outgoing data packets to a data consumer.
21. The method of claim 20 wherein the offload processor comprises a field programmable gate array (FPGA), and wherein the data consumer comprises an FPGA, the method further comprising: the offload processor generating the outgoing data packets to include a communication protocol that removes standard protocol headers or standard protocol fields from the outgoing data packets that are communicated to the data consumer FPGA.
22. The method of any of claims 1-20 wherein the received data packets arrive at the offload processor such that the messages have already been grouped according to the first criterion.
23. The method of any of claims 1-22 further comprising the offload processor performing the processing step and the grouping step in parallel via a pipelined processing engine.
24. The method of any of claims 1-23 wherein the outgoing data packets comprise a plurality of unicast data packets.
25. The method of any of claims 1-24 wherein the outgoing data packets comprise a plurality of multicast data packets.
26. The method of any of claims 24-25 further comprising distributing the outgoing data packets destined for different consumers over a shared network link.
27. The method of any of claims 2-26 wherein the data feeds include a plurality of different social network data feeds.
28. The method of any of claims 2-27 wherein the data feeds include a content aggregation feed.
29. The method of any of claims 2-28 wherein the data feeds include a machine-readable news feed.
30. The method of any of claims 2-29 wherein the data feeds include a financial market data feed.
31. A method for processing financial market data from at least one financial market data feed, the method comprising:
receiving, in an offload processor, a plurality of data packets corresponding to at least one financial market data feed, each data packet of the plurality of received data packets comprising at least one financial market data message, the financial market data messages being included in the received data packets according to a first criterion, the financial market data messages comprising a plurality of data fields describing financial market data for a plurality of financial instruments;
the offload processor processing the received data packets to select a plurality of the financial market data messages according to a second criterion, the second criterion being different than the first criterion; and
grouping the selected financial market data messages into a plurality of outgoing data packets to thereby generate outgoing data packets where each outgoing data packet comprises financial market data messages that were commonly selected according to the second criterion.
32. The method of claim 31 wherein the data packets correspond to a plurality of financial market data feeds.
33. The method of claim 32 wherein each of at least a plurality of the received data packets comprise a plurality of the financial market data messages.
34. The method of claim 33 wherein the processing step comprises:
the offload processor parsing the received data packets into their constituent financial market data messages, the financial market data messages comprising data indicative of a plurality of symbols for the financial instruments to which the financial market data messages pertain;
the offload processor accessing an interest list, the interest list associating a plurality of data consumers with a plurality of financial instruments of interest to the data consumers; in response to the accessing step, the offload processor determining which data consumers are interested in which financial market data messages based on the symbol data of the financial market data messages; and
the offload processor transmitting a plurality of the financial market data messages to at least one data consumer based on the determining step.
35. The method of claim 33 further comprising the offload processor performing the method steps upstream from an electronic trading platform that serves as a data consumer for at least a plurality of the outgoing data packets, the offload processor thereby offloading processing tasks from the electronic trading platform.
36. The method of claim 35 wherein the first criterion comprises a financial market data feed for the financial market data messages.
37. The method of claim 36 wherein the second criterion comprises an identifier for the financial instruments.
38. The method of claim 37 wherein the financial instrument identifier comprises a financial instrument symbol.
39. The method of claim 34 wherein the transmitting step comprises:
the offload processor storing data for the financial market data messages in a plurality of queues, each queue being associated with a data consumer such that the storing step comprises the offload processor storing data for a particular financial market data message in the queue that is associated with the data consumer determined to have an interest in that particular financial market data message.
40. The method of claim 39 wherein at least a plurality of the queues are further associated with a different set of financial instrument symbols, and wherein the storing step further comprises the offload processor storing data for a particular financial market data message in the queue that is associated with (1) the data consumer determined to have an interest in that particular financial market data message, and (2) the symbol set which encompasses the symbol data for the particular financial market data message.
41. The method of any of claims 31-40 wherein the offload processor comprises a field programmable gate array (FPGA).
42. The method of any of claims 39-41 wherein the grouping step comprises the offload processor generating the outgoing data packets from commonly-queued financial market data, and wherein the transmitting step comprises the offload processor outputting the generated outgoing data packets.
43. The method of claim 42 wherein the grouping step further comprises:
the offload processor selecting a queue from which to generate an outgoing data packet; the offload processor accessing packaging parameter data that is associated with the selected queue; and
the offload processor generating an outgoing data packet from financial market data in the selected queue in accordance with the accessed packaging parameter data.
44. The method of any of claims 31-43 wherein the offload processor comprises at least one member of the group consisting of a reconfigurable logic device, a graphics processor unit (GPU), and a chip multi-processor (CMP).
45. The method of any of claims 31-44 wherein the grouping step comprises the offload processor performing the grouping step.
46. The method of any of claims 31-45 wherein the offload processor comprises a field programmable gate array (FPGA).
47. The method of any of claims 31-46 wherein the first criterion comprises a financial market data feed for the financial market data messages.
48. The method of any of claims 31-47 wherein the second criterion comprises an identifier for the financial instruments.
49. The method of claim 48 wherein the financial instrument identifier comprises a financial instrument symbol.
50. The method of any of claims 31-49 wherein the second criterion comprises a plurality of data consumers having a plurality of varied interests in receiving the financial market data messages.
51. The method of any of claims 31-50 wherein the processing step further comprises the offload processor performing packet mapping on the received data packets.
52. The method of claim 51 wherein the packet mapping performing step comprises: the offload processor determining a financial market data feed associated with a received data packet;
the offload processor accessing metadata associated with the determined financial market data feed, the metadata comprising data for enabling a parsing of that received data packet; and
the offload processor associating the accessed metadata with that received data packet.
53. The method of any of claims 31-52 wherein the processing step further comprises the offload processor performing at least one member of the group consisting of (1) line arbitration, (2) gap detection and (3) gap mitigation on the received data packets.
54. The method of any of claims 31-53 wherein the processing step further comprises the offload processor performing a normalization operation on the financial market data.
55. The method of claim 54 wherein the normalizing performing step further comprises the offload processor performing price normalization on the financial market data.
56. The method of claim 55 wherein the price normalization performing step comprises the offload processor performing aggregated price normalization on the financial market data.
57. The method of claim 56 wherein the aggregated price normalization performing step comprises the offload processor performing the aggregated price normalization on at least one member of the group consisting of (1) a per symbol/per market basis, and (2) a per symbol/pan market basis.
58. The method of any of claims 31-57 wherein the processing step further comprises the offload processor performing filtering on the financial market data, wherein the filtering comprises filtering according to at least one member of the group consisting of (1) a size associated with a financial market data message, (2) a removal condition that defines which financial market data messages should be removed from an outgoing data packet for a data consumer, (3) a symbol associated with a financial market data message, (4) a level or a position for an order corresponding to a financial market data message in an order book, (5) a market status relating to a financial market data message, and (6) a condition code or qualifier associated with a financial market data message.
59. The method of any of claims 31-58 wherein the processing step further comprises the offload processor maintaining an order book based on the financial market data.
60. The method of any of claims 31-59 wherein the processing step further comprises the offload processor generating synthetic quotes from the financial market data.
61. The method of any of claims 31-60 wherein the processing step further comprises the offload processor maintaining a last event cache based on the financial market data.
62. The method of any of claims 31-61 wherein the processing step further comprises the offload processor performing data quality monitoring.
63. The method of any of claims 31-62 wherein the processing step further comprises the offload processor appending additional data to the financial market data messages.
64. The method of any of claims 31-63 wherein the grouping step further comprises the offload processor generating the outgoing data packets such that the outgoing data packets utilize a different communication protocol relative to the received data packets.
65. The method of any of claims 31-64 further comprising the offload processor communicating the outgoing data packets to a data consumer.
66. The method of any of claims 31-65 wherein the offload processor comprises a field programmable gate array (FPGA), and wherein the data consumer comprises an FPGA, the method further comprising:
the offload processor generating the outgoing data packets to include a communication protocol that removes standard protocol headers or standard protocol fields from the outgoing data packets that are communicated to the data consumer FPGA.
67. The method of any of claims 31-66 wherein the received data packets arrive at the offload processor such that the financial market data messages have already been grouped according to the first criterion.
68. The method of any of claims 31-67 further comprising the offload processor performing the processing step and the grouping step in parallel via a pipelined processing engine.
69. The method of any of claims 31-68 further comprising the offload processor performing the method steps upstream from an electronic trading platform that serves as a data consumer for at least a plurality of the outgoing data packets, the offload processor thereby offloading processing tasks from the electronic trading platform.
70. The method of any of claims 31-69 wherein at least a plurality of the received data packets comprise transmission control protocol (TCP) data packets, and wherein the processing step comprises the offload processor processing the received data packets to perform a TCP termination on the received TCP data packets.
71. The method of any of claims 31-70 wherein a plurality of the received data packets comprise User Datagram Protocol (UDP) data packets, and wherein the processing step further comprises the offload processor performing at least one member of the group consisting of (1) line arbitration, (2) gap detection and (3) gap mitigation on the received UDP data packets.
72. The method of any of claims 31-71 wherein the processing step further comprises the offload processor performing size filtering on the financial market data.
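Claims 31-72 recite receiving packets whose messages are organized by a first criterion (typically the originating feed) and emitting outgoing packets organized by a second, different criterion. As a rough software illustration only, the sketch below regroups depacketized messages by instrument symbol and closes each outgoing packet at an assumed payload budget; the types and the budget policy are hypothetical rather than taken from the claims.

#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Illustrative types; real messages carry many more fields.
struct MarketDataMessage {
    std::string feed;    // first criterion: originating feed
    std::string symbol;  // second criterion: instrument identifier
    std::string payload; // encoded message body
};

struct OutgoingPacket {
    std::string key;                          // e.g. symbol or consumer id
    std::vector<MarketDataMessage> messages;  // messages regrouped under the key
};

// Regroup depacketized messages (which arrived feed-by-feed) into outgoing
// packets keyed by symbol, closing a packet when its payload budget is reached.
std::vector<OutgoingPacket> regroupBySymbol(const std::vector<MarketDataMessage>& in,
                                            std::size_t payloadBudget) {
    std::map<std::string, OutgoingPacket> open;   // one open packet per symbol
    std::vector<OutgoingPacket> done;
    for (const auto& m : in) {
        OutgoingPacket& pkt = open[m.symbol];
        pkt.key = m.symbol;
        pkt.messages.push_back(m);
        std::size_t bytes = 0;
        for (const auto& x : pkt.messages) bytes += x.payload.size();
        if (bytes >= payloadBudget) {             // flush a full packet
            done.push_back(std::move(pkt));
            open.erase(m.symbol);
        }
    }
    for (auto& kv : open) done.push_back(std::move(kv.second)); // flush remainders
    return done;
}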
73. A method of providing data to a plurality of data consumers, the method comprising:
receiving, in an offload processor, a plurality of data packets corresponding to a plurality of data feeds, each of a plurality of the received data packets comprising a plurality of feed-specific messages, the messages comprising message data;
the offload processor processing the received data packets to depacketize the messages;
the offload processor analyzing the message data;
the offload processor selecting a plurality of the messages according to a criterion in response to the analyzing step; and
the offload processor packetizing the selected messages to generate a plurality of outgoing data packets for delivery to the data consumers, the outgoing data packets comprising criterion-specific messages such that at least a plurality of the outgoing data packets comprise message data from received data packets corresponding to different data feeds that are grouped into the same outgoing data packets.
74. The method of claim 73 further comprising:
the offload processor performing a protocol transformation to generate a plurality of outgoing data packets of a different protocol than the received data packets for delivery to the data consumers.
75. The method of any of claims 73-74 wherein the data feeds include a plurality of different social network data feeds.
76. The method of any of claims 73-75 wherein the data feeds include a content aggregation feed.
77. The method of any of claims 73-76 wherein the data feeds include a machine-readable news feed.
78. The method of any of claims 73-77 wherein the analyzing step comprises the offload processor searching the message data for data that matches at least one search term to identify matching data within the messages with respect to the at least one search term; and
wherein the sorting step comprises the offload processor sorting the messages having the matching data according to the second criterion.
79. The method of claim 78 wherein the at least one search term comprises a plurality of the search terms, each search term being associated with a data consumer such that the search terms are associated with a plurality of different data consumers;
wherein the searching step comprising the offload processor searching the message data for data that matches any of the search terms to identify matching data within the messages with respect to the search terms;
wherein the sorting step comprises the offload processor (1) associating the messages having the matching data with the data consumers that are associated with the search terms for which the matching data was found within the messages, and (2) sorting the messages having the matching data with respect to their associated data consumers; and
wherein the grouping step comprises grouping the sorted messages into the outgoing data packets such that each outgoing data packet comprises messages having matching data that are all associated with the same data consumer.
80. The method of any of claims 78-79 wherein the searching step comprises the offload processor performing at least one member of the group consisting of an exact matching operation, an approximate match operation, and a regular expression pattern match operation on the message data.
81. The method of any of claims 73-80 further comprising:
the offload processor encrypting at least a portion of the selected messages.
82. The method of claim 81 wherein the encrypting step comprises the offload processor performing different encryption operations on selected messages for a plurality of different data consumers of the outgoing data packets.
83. The method of any of claims 73-82 further comprising:
the offload processor normalizing at least a portion of the selected messages.
84. The method of claim 83 wherein the normalizing step comprises the offload processor performing different normalization operations on selected messages for a plurality of different data consumers of the outgoing data packets.
85. The method of any of claims 73-84 wherein the offload processor comprises at least one member of the group consisting of a reconfigurable logic device, a graphics processor unit (GPU), and a chip multi-processor (CMP).
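Claims 73-85 describe analyzing message data against consumer-associated search terms and repacketizing the matching messages on a per-consumer basis. The sketch below shows the exact-match variant of that routing; the term table and consumer identifiers are assumed for illustration, and approximate or regular-expression matching (claim 80) would substitute for the substring test.

#include <map>
#include <string>
#include <vector>

struct FeedMessage {
    std::string feed;  // e.g. a social network or machine-readable news feed
    std::string text;  // message data to be searched
};

// Each consumer registers one or more search terms (claim 79).
using TermTable = std::multimap<std::string /*term*/, std::string /*consumer*/>;

// Route each message to every consumer whose term appears in the message data,
// producing consumer-specific groups ready for packetization.
std::map<std::string, std::vector<FeedMessage>>
routeByTerm(const std::vector<FeedMessage>& messages, const TermTable& terms) {
    std::map<std::string, std::vector<FeedMessage>> perConsumer;
    for (const auto& m : messages) {
        for (const auto& [term, consumer] : terms) {
            if (m.text.find(term) != std::string::npos)   // exact match operation
                perConsumer[consumer].push_back(m);
        }
    }
    return perConsumer;   // one outgoing, consumer-specific packet stream per consumer
}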
86. A method of providing financial market data to a plurality of data consumers, the method comprising:
receiving, in an offload processor, a plurality of data packets corresponding to a plurality of financial market data feeds, each of a plurality of the received data packets comprising a plurality of feed-specific financial market data messages, the financial market data messages comprising a plurality of data fields describing financial market data for a plurality of financial instruments;
the offload processor processing the received data packets to depacketize the financial market data messages;
the offload processor processing the financial market data of the depacketized financial market data messages to select financial market data according to a criterion; and
the offload processor packetizing the selected financial market data to generate a plurality of outgoing data packets for delivery to the data consumers, the outgoing data packets comprising criterion-specific financial market data such that at least a plurality of the outgoing data packets comprise financial market data from received data packets corresponding to different financial market data feeds that are grouped into the same outgoing data packets.
87. The method of claim 86 wherein a plurality of the received data packets comprise transmission control protocol (TCP) data packets.
88. The method of claim 87 wherein the protocol transformation performing step includes the offload processor performing a TCP termination on the received TCP data packets.
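Claims 87-88 (like claims 70, 101 and 140) refer to performing TCP termination on received TCP packets. The fragment below sketches only the in-order payload reassembly aspect of termination, under the assumption of a simple per-connection sequence counter; handshaking, acknowledgment generation, window management, retransmission, overlapping segments, and sequence-number wraparound are all omitted.

#include <cstdint>
#include <map>
#include <string>

// Minimal per-connection reassembly state: the in-order delivery portion of
// TCP termination only (no handshake, ACKs, or retransmission logic).
class TcpReassembler {
public:
    explicit TcpReassembler(uint32_t initialSeq) : nextSeq_(initialSeq) {}

    // Accept a segment; return any bytes that are now deliverable in order.
    std::string onSegment(uint32_t seq, const std::string& payload) {
        if (seq < nextSeq_) return {};          // stale/duplicate segment, ignore
        pending_[seq] = payload;
        std::string out;
        for (auto it = pending_.find(nextSeq_); it != pending_.end();
             it = pending_.find(nextSeq_)) {
            out += it->second;
            nextSeq_ += static_cast<uint32_t>(it->second.size());
            pending_.erase(it);
        }
        return out;   // contiguous stream bytes handed to the message parser
    }

private:
    uint32_t nextSeq_;                        // next expected sequence number
    std::map<uint32_t, std::string> pending_; // out-of-order segments held back
};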
89. A method comprising:
receiving, in an offload processor, a plurality of data packets corresponding to a plurality of financial market data feeds, the received data packets comprising a plurality of financial market data messages, the financial market data messages comprising a plurality of data fields describing financial market data for a plurality of financial instruments;
the offload processor determining a financial market data feed associated with a received data packet;
the offload processor accessing metadata associated with the determined financial market data feed, the metadata comprising data for enabling a parsing of that received data packet; and
the offload processor associating the accessed metadata with that received data packet.
90. The method of claim 89 wherein the offload processor comprises at least one member of the group consisting of a reconfigurable logic device, a graphics processor unit (GPU), and a chip multiprocessor (CMP).
91. The method of claim 90 wherein the determining step comprises the offload processor analyzing a multiplexed stream of the received data packets to determine the financial market data feed associated with each received data packet.
92. The method of any of claims 89-91 wherein the determining step comprises:
the offload processor accessing a mapping table based on data in a received data packet, the mapping table comprising data that associates a financial market data feed with packet data;
the offload processor determining the financial market data feed associated with that received data packet based on the accessed mapping table.
93. The method of claim 92 wherein the data in the received packet for accessing the mapping table comprises a tuple, wherein the tuple comprises at least two members of the group consisting of an IP source address, destination address, a protocol identifier, a source port number, and a destination port number.
94. The method of any of claims 89-93 wherein the metadata comprises at least one member of the group consisting of (1) a market identification code (MIC), (2) a data source identification code (DSIC), (3) a line identification code (LIC), and (4) a flag for identifying whether the determined financial market data feed employs FIX/FAST encoding.
95. The method of any of claims 89-94 wherein the metadata comprises a packet parsing template.
96. The method of any of claims 89-95 wherein the metadata comprises a financial market data message parsing template.
97. The method of any of claims 89-96 wherein the metadata comprises a data normalization template for financial market data within the financial market data messages.
98. The method of any of claims 89-97 wherein the associating step comprises the offload processor appending the accessed metadata with that received data packet.
99. The method of any of claims 89-97 wherein the associating step comprises the offload processor propagating the accessed metadata along a data path in association with that received data packet.
100. The method of any of claims 89-99 wherein the offload processor comprises a field programmable gate array (FPGA).
101. The method of any of claims 89-100 wherein at least a plurality of the received data packets comprise transmission control protocol (TCP) data packets, the method further comprising the offload processor performing a TCP termination on the received TCP data packets.
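Claims 92-97 describe looking up a mapping table keyed on packet header data (a tuple of addresses, protocol and ports per claim 93) to retrieve feed metadata such as a MIC, DSIC, LIC, a FIX/FAST flag, and parsing templates. The sketch below shows one plausible shape for such a table; the field names and the parse-template identifier are assumptions made for illustration.

#include <cstdint>
#include <map>
#include <optional>
#include <string>
#include <tuple>

// Key built from packet header fields (claim 93): source address, destination
// address, protocol identifier, source port, destination port.
using FiveTuple = std::tuple<uint32_t, uint32_t, uint8_t, uint16_t, uint16_t>;

// Metadata associated with a feed line (claims 94-97); names are illustrative.
struct FeedMetadata {
    std::string mic;            // market identification code
    std::string dsic;           // data source identification code
    std::string lic;            // line identification code
    bool        fixFastEncoded; // whether the feed employs FIX/FAST encoding
    int         parseTemplate;  // id of the packet/message parsing template
};

class PacketMapper {
public:
    void add(const FiveTuple& key, FeedMetadata md) { table_[key] = std::move(md); }

    // Determine the feed for a received packet and hand back the metadata that
    // downstream parsing stages propagate along with the packet (claims 98-99).
    std::optional<FeedMetadata> map(const FiveTuple& key) const {
        auto it = table_.find(key);
        if (it == table_.end()) return std::nullopt;  // unknown line
        return it->second;
    }

private:
    std::map<FiveTuple, FeedMetadata> table_;
};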
102. A method for processing financial market data from at least one financial market data feed, the method comprising:
receiving, in an offload processor, a plurality of data packets corresponding to at least one financial market data feed, each data packet of the plurality of received data packets comprising at least one financial market data message, the financial market data messages being included in the received data packets according to a first criterion, the financial market data messages comprising a plurality of data fields describing financial market data for a plurality of financial instruments;
the offload processor processing the received data packets to select a plurality of the financial market data messages according to a second criterion, the second criterion being different than the first criterion; and
transmitting the selected financial market data messages to at least one data consumer.
103. An apparatus comprising:
an offload processor configured with a capability to perform the method of any of claims 1-102.
104. An intelligent switch for processing financial market data, the switch comprising:
a plurality of ports;
switching logic; and
a processor;
wherein the switching logic and processor are co-resident within the intelligent switch;
at least one of the ports being configured to receive a plurality of incoming data packets, the incoming data packets comprising a plurality of financial market data messages, the financial market data messages comprising data that describes financial market data for a plurality of financial instruments;
at least another of the ports being configured to output a plurality of outgoing data packets, the outgoing data packets comprising data that describes at least a portion of the financial market data;
wherein the switching logic is configured to determine a port for the outgoing data packets with reference to the incoming data packets; and
wherein the processor is configured to perform a processing operation on at least a portion of the data describing the financial market data, the processing operation comprising at least one selected from the group consisting of (1) a packet mapping operation, (2) a line arbitration operation, (3) a gap detection operation, (4) a packet parsing operation, (5) a message parsing operation, (6) a symbol mapping operation, (7) a sequencing operation, (8) a data normalization operation, (9) a symbol routing operation, (10) a repackaging operation, (11) a synthetic quote generation operation, (12) a last event caching operation, (13) an order book maintenance operation, (14) a data quality monitoring operation, (15) a field addition operation, and (16) a data distribution operation.
105. The switch of claim 104 wherein the switching logic is resident on the processor.
106. The switch of claim 105 wherein the processor comprises a field programmable gate array (FPGA).
107. The switch of claim 106 further comprising another FPGA, and wherein the switching logic is resident on an FPGA of the switch.
108. The switch of claim 107 wherein the FPGAs are configured to communicate with each other via a custom interface.
109. The switch of claim 107 wherein the FPGAs are configured to communicate with each other via a PCI-express interface.
110. The switch of claim 107 wherein the FPGAs are configured to communicate with each other via a XAUI interface.
111. The switch of claim 107 wherein the FPGAs are configured to communicate with each other via an Ethernet interface.
112. The switch of claim 104 wherein the switching logic is resident on an application specific integrated circuit (ASIC).
113. The switch of claim 112 wherein the processor is resident on a field programmable gate array (FPGA), wherein the ASIC and the FPGA are configured to communicate with each other via a custom interface.
114. The switch of claim 112 wherein the processor is resident on a field programmable gate array (FPGA), wherein the ASIC and the FPGA are configured to communicate with each other via a PCI-express interface.
115. The switch of claim 112 wherein the processor is resident on a field programmable gate array (FPGA), wherein the ASIC and the FPGA are configured to communicate with each other via a XAUI interface.
116. The switch of claim 112 wherein the processor is resident on a field programmable gate array (FPGA), wherein the ASIC and the FPGA are configured to communicate with each other via an Ethernet interface.
117. The switch of claim 104 further comprising a control processor for providing instructions to the processor for controlling the data processing operation in response to input from an external device.
118. The switch of any of claims 104-117 wherein the processor comprises at least one member of the group consisting of a reconfigurable logic device, a graphics processor unit (GPU), and a chip multi-processor (CMP).
119. The switch of claim 118 wherein the data processing operation comprises a packet mapping operation.
120. The switch of claim 118 wherein the data processing operation comprises a line arbitration operation.
121. The switch of claim 118 wherein the data processing operation comprises a gap detection operation.
122. The switch of claim 118 wherein the data processing operation comprises a packet parsing operation.
123. The switch of claim 118 wherein the data processing operation comprises a message parsing operation.
124. The switch of claim 118 wherein the data processing operation comprises a symbol mapping operation.
125. The switch of claim 118 wherein the data processing operation comprises a sequencing operation.
126. The switch of claim 118 wherein the data processing operation comprises a data normalization operation.
127. The switch of claim 118 wherein the data processing operation comprises a symbol routing operation.
128. The switch of claim 118 wherein the data processing operation comprises a repackaging operation.
129. The switch of claim 118 wherein the data processing operation comprises a synthetic quote generation operation.
130. The switch of claim 118 wherein the data processing operation comprises a last event caching operation.
131. The switch of claim 118 wherein the data processing operation comprises an order book maintenance operation.
132. The switch of claim 118 wherein the data processing operation comprises a data quality monitoring operation.
133. The switch of claim 118 wherein the data processing operation comprises a field addition operation.
134. The switch of claim 118 wherein the data processing operation comprises a data distribution operation.
135. The switch of claim 118 wherein the processor is further configured to perform a plurality of the data processing operations on at least a portion of the data describing the financial market data.
136. The switch of claim 118 wherein the processor comprises a pipelined processing engine that is configured to perform a plurality of the data processing operations in parallel.
137. The switch of claim 136 wherein the processor is configured to perform at least two members of the group consisting of (1) a packet mapping operation, (2) a line arbitration operation downstream from the packet mapping operation, (3) a packet parsing operation downstream from the packet mapping operation, (4) a message parsing operation downstream from the packet parsing operation, (5) a symbol mapping operation downstream from the message parsing operation, (6) a data normalization operation downstream from the symbol mapping operation, (7) a symbol routing operation downstream from the data normalization operation, and (8) a repackaging operation downstream from the symbol routing operation, the switch thereby being configured to generate a stream of customized outgoing data packets that organize the financial market data according to a criterion different than how the incoming data packets organized the financial market data.
138. The switch of claim 137 wherein the processor comprises a field programmable gate array (FPGA), the FPGA comprising pipelined firmware logic for performing the data processing operations in parallel with respect to successively received incoming data packets.
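Claims 136-138 recite a pipelined engine in which the operations of claim 137 are chained so that successive packets occupy different stages concurrently. In FPGA firmware the stages would be independent hardware blocks; the software sketch below conveys only the stage ordering, with each stage reduced to a placeholder that records its name, and is not a representation of the claimed firmware.

#include <functional>
#include <string>
#include <vector>

// A packet flows through the stages in the order recited by claim 137. Each
// stage here is a placeholder; in firmware each would be an independent
// pipeline block handling a different packet on every cycle.
struct Packet {
    std::string data;
    std::vector<std::string> trace;  // which stages have handled this packet
};

using Stage = std::function<Packet(Packet)>;

std::vector<Stage> buildPipeline() {
    auto stage = [](std::string name) -> Stage {
        return [name = std::move(name)](Packet p) {
            p.trace.push_back(name);  // a real stage would transform the packet here
            return p;
        };
    };
    return { stage("packet mapping"),    stage("line arbitration"),
             stage("packet parsing"),    stage("message parsing"),
             stage("symbol mapping"),    stage("data normalization"),
             stage("symbol routing"),    stage("repackaging") };
}

Packet process(Packet p, const std::vector<Stage>& pipeline) {
    for (const auto& s : pipeline) p = s(std::move(p));
    return p;  // customized outgoing packet organized by a new criterion
}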
139. The switch of any of claims 104-138 wherein the plurality of ports comprise a first, second, and third port.
140. The switch of any of claims 104-139 wherein at least a plurality of the received data packets comprise transmission control protocol (TCP) data packets, and wherein the processor is further configured to perform a TCP termination on at least a plurality of the received TCP data packets.
141. The switch of claim 140 wherein a plurality of the received data packets comprise User Datagram Protocol (UDP) data packets, and wherein the processor is further configured to perform at least one member of the group consisting of (1) a line arbitration operation and (2) a gap detection operation on the received UDP data packets.
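Claims 120-121 and 141 recite line arbitration and gap detection, which are commonly applied to redundant (for example A/B) UDP feed lines carrying sequence-numbered messages. The sketch below shows first-arrival arbitration with gap tracking; the sequence-number semantics are an assumption and differ across feeds. A production arbiter would also bound the gap set and trigger gap mitigation (retransmission requests or snapshot recovery), as contemplated by claim 53.

#include <cstdint>
#include <optional>
#include <set>

// First-arrival line arbitration across redundant feed lines with gap detection.
// Whichever line delivers a sequence number first wins; late duplicates from the
// other line are suppressed; holes in the sequence space are reported as gaps.
class LineArbiter {
public:
    // Returns true if this sequence number should be forwarded downstream.
    bool accept(uint64_t seq) {
        if (seq < nextExpected_ || seen_.count(seq)) return false;  // duplicate
        if (seq > nextExpected_) {
            for (uint64_t s = nextExpected_; s < seq; ++s) gaps_.insert(s); // gap detected
        }
        seen_.insert(seq);
        gaps_.erase(seq);                       // a gap may be filled by the other line
        while (seen_.count(nextExpected_)) {    // advance past contiguous deliveries
            seen_.erase(nextExpected_);
            ++nextExpected_;
        }
        return true;
    }

    std::optional<uint64_t> firstOutstandingGap() const {
        if (gaps_.empty()) return std::nullopt;
        return *gaps_.begin();                  // candidate for gap mitigation
    }

private:
    uint64_t nextExpected_ = 1;
    std::set<uint64_t> seen_;   // accepted but not yet contiguous
    std::set<uint64_t> gaps_;   // detected holes awaiting mitigation
};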
142. A method comprising:
processing a plurality of data packets by the switch as set forth in any of claims 104-141, the data packets comprising a plurality of financial market data messages, the financial market data messages comprising data that describes financial market data for a plurality of financial instruments, and wherein at least a plurality of the received data packets comprise transmission control protocol (TCP) data packets.
143. The method of claim 142 wherein the switch is located in a data distribution network upstream from an electronic trading platform that consumes data output by the switch.
144. The method of any of claims 142-143 further comprising:
a latency-sensitive trading application receiving switched financial market data directly from the switch and performing a trading operation in response to the switched financial market data.
145. The method of claim 144 further comprising:
a hardware-accelerated ticker plant processing the financial market data messages to generate normalized financial market data; and
the latency-sensitive trading application further receiving normalized financial market data directly from the hardware-accelerated ticker plant and performing a trading operation in response to the switched financial market data and the normalized financial market data.
146. The method of claim 142 wherein the switch is located in a data distribution network downstream from an electronic trading platform that provides the switch with the incoming data packets.
147. A system comprising:
an intelligent switch for processing financial market data, the switch comprising (1) a plurality of ports, (2) switching logic, and (3) a processor, wherein the switching logic and processor are co-resident within the intelligent switch, wherein at least one of the ports is configured to receive a plurality of incoming data packets, the incoming data packets comprising a plurality of financial market data messages, the financial market data messages comprising data that describes financial market data for a plurality of financial instruments, wherein at least another of the ports being configured to output a plurality of outgoing data packets, the outgoing data packets comprising data that describes at least a portion of the financial market data, wherein the switching logic is configured to determine a port for the outgoing data packets with reference to the incoming data packets, and wherein the processor is configured to perform a processing operation on at least a portion of the data describing the financial market data, the processing operation comprising at least one selected from the group consisting of (1) a packet mapping operation, (2) a line arbitration operation, (3) a gap detection operation, (4) a packet parsing operation, (5) a message parsing operation, (6) a symbol mapping operation, (7) a sequencing operation, (8) a data normalization operation, (9) a symbol routing operation, (10) a repackaging operation, (11) a synthetic quote generation operation, (12) a last event caching operation, (13) an order book maintenance operation, (14) a data quality monitoring operation, (15) a field addition operation, and (16) a data distribution operation;
an electronic trading platform downstream from the intelligent switch, the electronic trading platform configured to consume data output by the switch,
a latency-sensitive trading application configured to receive processed financial market data directly from the intelligent switch and perform a trading operation in response to the processed financial market data from the intelligent switch; and
wherein the switch offloads at least a portion of the data processing operation from the electronic trading platform.
148. The system of claim 147 further comprising:
a hardware-accelerated ticker plant downstream from the intelligent switch and configured to normalize the processed financial market data from the intelligent switch; and
wherein the latency-sensitive trading application is further configured to receive the normalized financial market data directly from the hardware-accelerated ticker plant and perform a trading operation in response to the processed financial market data and the normalized financial market data.
149. A system comprising:
an intelligent switch for processing financial market data, the switch comprising (1) a plurality of ports, (2) switching logic, and (3) a processor, wherein the switching logic and processor are co-resident within the intelligent switch, wherein at least one of the ports is configured to receive a plurality of incoming data packets, the incoming data packets comprising a plurality of financial market data messages, the financial market data messages comprising data that describes financial market data for a plurality of financial instruments, wherein at least another of the ports being configured to output a plurality of outgoing data packets, the outgoing data packets comprising data that describes at least a portion of the financial market data, wherein the switching logic is configured to determine a port for the outgoing data packets with reference to the incoming data packets, and wherein the processor is configured to perform a processing operation on at least a portion of the data describing the financial market data, the processing operation comprising at least one selected from the group consisting of (1) a packet mapping operation, (2) a line arbitration operation, (3) a gap detection operation, (4) a packet parsing operation, (5) a message parsing operation, (6) a symbol mapping operation, (7) a sequencing operation, (8) a data normalization operation, (9) a symbol routing operation, (10) a repackaging operation, (11) a synthetic quote generation operation, (12) a last event caching operation, (13) an order book maintenance operation, (14) a data quality monitoring operation, (15) a field addition operation, and (16) a data distribution operation; a feed handler downstream from the intelligent switch, the feed handler configured to consume data output by the switch,
a latency-sensitive trading application configured to receive processed financial market data directly from the intelligent switch and perform a trading operation in response to the processed financial market data from the intelligent switch; and
wherein the switch offloads at least a portion of the data processing operation from the feed handler.
150. The system of claim 149 further comprising:
a hardware-accelerated ticker plant downstream from the intelligent switch and configured to normalize the processed financial market data from the intelligent switch; and
wherein the latency-sensitive trading application is further configured to receive the normalized financial market data directly from the hardware-accelerated ticker plant and perform a trading operation in response to the processed financial market data and the normalized financial market data.
151. An intelligent feed switch for processing data, the switch comprising:
a plurality of ports;
switching logic; and
a processor;
wherein the switching logic and processor are co-resident within the intelligent switch;
at least one of the ports being configured to receive a plurality of incoming feed-specific data packets, the feed-specific data packets corresponding to a plurality of different data feeds, the incoming feed-specific data packets comprising a plurality of messages, the messages comprising message data;
at least another of the ports being configured to output a plurality of outgoing data packets, the outgoing data packets comprising data that describes at least a portion of the message data;
wherein the processor is configured to analyze the message data of the messages on a data consumer-specific basis and repacketize the messages into a plurality of outgoing data consumer-specific data packets; and
wherein the switching logic is configured to determine a port for the outgoing data packets with reference to the incoming data packets.
152. The switch of claim 151 wherein the processor is further configured to analyze the message data on the data consumer-specific basis by performing a plurality of data consumer-specific search operations on the message data with respect to a plurality of search terms, the search terms being associated with the data consumers to find message data of interest to the data consumers.
153. The switch of claim 152 wherein the search operations comprise at least one member of the group consisting of an exact matching operation, an approximate match operation, and a regular expression pattern match operation.
154. The switch of claim 152 wherein the data feeds include at least one social network data feed.
155. The switch of claim 152 wherein the data feeds include at least one content aggregation feed.
156. The switch of claim 152 wherein the data feeds include at least one machine-readable news feed.
157. The switch of any of claims 151-156 wherein the switching logic is resident on the processor.
158. The switch of claim 157 wherein the processor comprises a field programmable gate array (FPGA).
159. The switch of any of claims 151-158 wherein the processor is further configured to encrypt at least a portion of the messages.
160. The switch of claim 159 wherein the processor is further configured to perform different data consumer-specific encryption operations on message data.
161. The switch of any of claims 151-160 wherein the processor is further configured to normalize at least a portion of the messages.
162. The switch of claim 161 wherein the processor is further configured to perform different data consumer-specific normalization operations on message data.
163. The switch of any of claims 151-162 wherein the switching logic is resident on an application specific integrated circuit (ASIC).
164. The switch of any of claims 151-163 wherein the processor comprises at least one member of the group consisting of a reconfigurable logic device, a graphics processor unit (GPU), and a chip multi-processor (CMP).
165. The switch of any of claims 151-164 wherein the data feeds include at least one social network data feed.
166. The switch of any of claims 151-165 wherein the data feeds include at least one content aggregation feed.
167. The switch of any of claims 151-166 wherein the data feeds include at least one machine-readable news feed.
168. The switch of any of claims 151-167 wherein the switching logic is resident on the processor.
169. The switch of any of claims 151-168 wherein the processor comprises a field programmable gate array (FPGA).
170. The switch of claim 169 further comprising another FPGA, and wherein the switching logic is resident on an FPGA of the switch.
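Claims 152-153 (like claim 80) contemplate exact, approximate, and regular-expression match operations over message data using consumer-associated search terms. Two of the three variants are sketched below with standard library facilities; an approximate matcher, which would typically bound an edit distance between term and message substrings, is not shown, and the example pattern is purely illustrative.

#include <regex>
#include <string>

// Exact match operation: the consumer's search term appears verbatim.
bool exactMatch(const std::string& messageData, const std::string& term) {
    return messageData.find(term) != std::string::npos;
}

// Regular expression pattern match operation, e.g.
// regexMatch(messageText, "(merger|acquisition)").
bool regexMatch(const std::string& messageData, const std::string& pattern) {
    return std::regex_search(messageData, std::regex(pattern));
}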
171. A system comprising:
an intelligent switch for processing financial market data, the switch comprising (1) a plurality of ports, (2) switching logic, and (3) a processor, wherein the switching logic and processor are co-resident within the intelligent switch, wherein at least one of the ports is configured to receive a plurality of incoming data packets, the incoming data packets comprising a plurality of financial market data messages, the financial market data messages comprising data that describes financial market data for a plurality of financial instruments, wherein at least another of the ports being configured to output a plurality of outgoing data packets, the outgoing data packets comprising data that describes at least a portion of the financial market data, wherein the switching logic is configured to determine a port for the outgoing data packets with reference to the incoming data packets, and wherein the processor is configured to perform a processing operation on at least a portion of the data describing the financial market data, the processing operation comprising a TCP termination operation;
an electronic trading platform downstream from the intelligent switch, the electronic trading platform configured to consume data output by the switch,
a latency-sensitive trading application configured to receive processed financial market data directly from the intelligent switch and perform a trading operation in response to the processed financial market data from the intelligent switch; and
wherein the switch offloads at least a portion of the data processing operation from the electronic trading platform.
172. The system of claim 171 further comprising:
a hardware-accelerated ticker plant downstream from the intelligent switch and configured to normalize the processed financial market data from the intelligent switch; and
wherein the latency-sensitive trading application is further configured to receive the normalized financial market data directly from the hardware-accelerated ticker plant and perform a trading operation in response to the processed financial market data and the normalized financial market data.
173. The system of any of claims 171-172 wherein the processing operation further comprises at least one selected from the group consisting of (1) a packet mapping operation, (2) a line arbitration operation, (3) a gap detection operation, (4) a packet parsing operation, (5) a message parsing operation, (6) a symbol mapping operation, (7) a sequencing operation, (8) a data normalization operation, (9) a symbol routing operation, (10) a repackaging operation, (11) a synthetic quote generation operation, (12) a last event caching operation, (13) an order book maintenance operation, (14) a data quality monitoring operation, (15) a field addition operation, and (16) a data distribution operation.
174. A system comprising:
an intelligent switch for processing financial market data, the switch comprising (1) a plurality of ports, (2) switching logic, and (3) a processor, wherein the switching logic and processor are co-resident within the intelligent switch, wherein at least one of the ports is configured to receive a plurality of incoming data packets, the incoming data packets comprising a plurality of financial market data messages, the financial market data messages comprising data that describes financial market data for a plurality of financial instruments, wherein at least another of the ports being configured to output a plurality of outgoing data packets, the outgoing data packets comprising data that describes at least a portion of the financial market data, wherein the switching logic is configured to determine a port for the outgoing data packets with reference to the incoming data packets, and wherein the processor is configured to perform a processing operation on at least a portion of the data describing the financial market data, the processing operation comprising a TCP termination operation;
a feed handler downstream from the intelligent switch, the feed handler configured to consume data output by the switch,
a latency-sensitive trading application configured to receive processed financial market data directly from the intelligent switch and perform a trading operation in response to the processed financial market data from the intelligent switch; and
wherein the switch offloads at least a portion of the data processing operation from the feed handler.
175. The system of claim 174 further comprising:
a hardware-accelerated ticker plant downstream from the intelligent switch and configured to normalize the processed financial market data from the intelligent switch; and
wherein the latency-sensitive trading application is further configured to receive the normalized financial market data directly from the hardware-accelerated ticker plant and perform a trading operation in response to the processed financial market data and the normalized financial market data.
176. The system of any of claims 174-175 wherein the processing operation further comprises at least one selected from the group consisting of (1) a packet mapping operation, (2) a line arbitration operation, (3) a gap detection operation, (4) a packet parsing operation, (5) a message parsing operation, (6) a symbol mapping operation, (7) a sequencing operation, (8) a data normalization operation, (9) a symbol routing operation, (10) a repackaging operation, (11) a synthetic quote generation operation, (12) a last event caching operation, (13) an order book maintenance operation, (14) a data quality monitoring operation, (15) a field addition operation, and (16) a data distribution operation.
PCT/US2013/033889 2012-03-27 2013-03-26 Offload processing of data packets WO2013148693A1 (en)

Priority Applications (7)

Application Number Priority Date Filing Date Title
EP13767579.9A EP2832045A4 (en) 2012-03-27 2013-03-26 Offload processing of data packets
US14/195,550 US9990393B2 (en) 2012-03-27 2014-03-03 Intelligent feed switch
US14/195,510 US20140180904A1 (en) 2012-03-27 2014-03-03 Offload Processing of Data Packets Containing Financial Market Data
US14/195,462 US10650452B2 (en) 2012-03-27 2014-03-03 Offload processing of data packets
US14/195,531 US11436672B2 (en) 2012-03-27 2014-03-03 Intelligent switch for processing financial market data
US15/994,262 US10872078B2 (en) 2012-03-27 2018-05-31 Intelligent feed switch
US17/903,236 US20220414778A1 (en) 2012-03-27 2022-09-06 Intelligent Packet Switch

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201261616181P 2012-03-27 2012-03-27
US61/616,181 2012-03-27
US201361790254P 2013-03-15 2013-03-15
US61/790,254 2013-03-15
US13/833,098 US10121196B2 (en) 2012-03-27 2013-03-15 Offload processing of data packets containing financial market data
US13/833,098 2013-03-15

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/833,098 Continuation-In-Part US10121196B2 (en) 2012-03-27 2013-03-15 Offload processing of data packets containing financial market data

Related Child Applications (4)

Application Number Title Priority Date Filing Date
US14/195,462 Continuation US10650452B2 (en) 2012-03-27 2014-03-03 Offload processing of data packets
US14/195,550 Continuation US9990393B2 (en) 2012-03-27 2014-03-03 Intelligent feed switch
US14/195,510 Continuation US20140180904A1 (en) 2012-03-27 2014-03-03 Offload Processing of Data Packets Containing Financial Market Data
US14/195,531 Continuation US11436672B2 (en) 2012-03-27 2014-03-03 Intelligent switch for processing financial market data

Publications (1)

Publication Number Publication Date
WO2013148693A1 true WO2013148693A1 (en) 2013-10-03

Family

ID=49261170

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2013/033889 WO2013148693A1 (en) 2012-03-27 2013-03-26 Offload processing of data packets

Country Status (2)

Country Link
EP (1) EP2832045A4 (en)
WO (1) WO2013148693A1 (en)

Cited By (7)

Publication number Priority date Publication date Assignee Title
US9396222B2 (en) 2006-11-13 2016-07-19 Ip Reservoir, Llc Method and system for high performance integration, processing and searching of structured and unstructured data using coprocessors
AU2014272791B2 (en) * 2013-05-31 2017-01-12 Nasdaq Technology Ab Apparatus, system, and method of elastically processing message information from multiple sources
US9990393B2 (en) 2012-03-27 2018-06-05 Ip Reservoir, Llc Intelligent feed switch
US10121196B2 (en) 2012-03-27 2018-11-06 Ip Reservoir, Llc Offload processing of data packets containing financial market data
US10650452B2 (en) 2012-03-27 2020-05-12 Ip Reservoir, Llc Offload processing of data packets
US11436672B2 (en) 2012-03-27 2022-09-06 Exegy Incorporated Intelligent switch for processing financial market data
US11935120B2 (en) 2020-06-08 2024-03-19 Liquid-Markets GmbH Hardware-based transaction exchange

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6484209B1 (en) * 1997-10-31 2002-11-19 Nortel Networks Limited Efficient path based forwarding and multicast forwarding
US6710702B1 (en) * 1999-11-22 2004-03-23 Motorola, Inc. Method and apparatus for providing information to a plurality of communication units in a wireless communication system
US20060215691A1 (en) * 2005-03-23 2006-09-28 Fujitsu Limited Network adaptor, communication system and communication method
US20070174841A1 (en) 2006-01-26 2007-07-26 Exegy Incorporated & Washington University Firmware socket module for FPGA-based pipeline processing
US20070294157A1 (en) 2006-06-19 2007-12-20 Exegy Incorporated Method and System for High Speed Options Pricing
US20080243675A1 (en) 2006-06-19 2008-10-02 Exegy Incorporated High Speed Processing of Financial Information Using FPGA Devices
US20090182683A1 (en) 2008-01-11 2009-07-16 Exegy Incorporated Method and System for Low Latency Basket Calculation
US20090287628A1 (en) 2008-05-15 2009-11-19 Exegy Incorporated Method and System for Accelerated Stream Processing
WO2010077829A1 (en) 2008-12-15 2010-07-08 Exegy Incorporated Method and apparatus for high-speed processing of financial market depth data
US20110040776A1 (en) * 2009-08-17 2011-02-17 Microsoft Corporation Semantic Trading Floor
US20110145130A1 (en) * 2000-11-17 2011-06-16 Scale Semiconductor Flg, L.L.C. Global electronic trading system
US20120246052A1 (en) 2010-12-09 2012-09-27 Exegy Incorporated Method and Apparatus for Managing Orders in Financial Markets

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5987432A (en) * 1994-06-29 1999-11-16 Reuters, Ltd. Fault-tolerant central ticker plant system for distributing financial market data
US7219125B1 (en) * 2002-02-13 2007-05-15 Cisco Technology, Inc. Method and apparatus for masking version differences in applications using a data object exchange protocol
US7869442B1 (en) * 2005-09-30 2011-01-11 Nortel Networks Limited Method and apparatus for specifying IP termination in a network element

Patent Citations (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6484209B1 (en) * 1997-10-31 2002-11-19 Nortel Networks Limited Efficient path based forwarding and multicast forwarding
US6710702B1 (en) * 1999-11-22 2004-03-23 Motorola, Inc. Method and apparatus for providing information to a plurality of communication units in a wireless communication system
US20110145130A1 (en) * 2000-11-17 2011-06-16 Scale Semiconductor Flg, L.L.C. Global electronic trading system
US20060215691A1 (en) * 2005-03-23 2006-09-28 Fujitsu Limited Network adaptor, communication system and communication method
US7954114B2 (en) 2006-01-26 2011-05-31 Exegy Incorporated Firmware socket module for FPGA-based pipeline processing
US20070174841A1 (en) 2006-01-26 2007-07-26 Exegy Incorporated & Washington University Firmware socket module for FPGA-based pipeline processing
US20110178957A1 (en) 2006-06-19 2011-07-21 Exegy Incorporated High Speed Processing of Financial Information Using FPGA Devices
US20110178919A1 (en) 2006-06-19 2011-07-21 Exegy Incorporated High Speed Processing of Financial Information Using FPGA Devices
US20110184844A1 (en) 2006-06-19 2011-07-28 Exegy Incorporated High Speed Processing of Financial Information Using FPGA Devices
US7840482B2 (en) 2006-06-19 2010-11-23 Exegy Incorporated Method and system for high speed options pricing
US20110040701A1 (en) 2006-06-19 2011-02-17 Exegy Incorporated Method and System for High Speed Options Pricing
US20110178912A1 (en) 2006-06-19 2011-07-21 Exegy Incorporated High Speed Processing of Financial Information Using FPGA Devices
US7921046B2 (en) 2006-06-19 2011-04-05 Exegy Incorporated High speed processing of financial information using FPGA devices
US20110178917A1 (en) 2006-06-19 2011-07-21 Exegy Incorporated High Speed Processing of Financial Information Using FPGA Devices
US20080243675A1 (en) 2006-06-19 2008-10-02 Exegy Incorporated High Speed Processing of Financial Information Using FPGA Devices
US20110179050A1 (en) 2006-06-19 2011-07-21 Exegy Incorporated High Speed Processing of Financial Information Using FPGA Devices
US20110178911A1 (en) 2006-06-19 2011-07-21 Exegy Incorporated High Speed Processing of Financial Information Using FPGA Devices
US20110178918A1 (en) 2006-06-19 2011-07-21 Exegy Incorporated High Speed Processing of Financial Information Using FPGA Devices
US20070294157A1 (en) 2006-06-19 2007-12-20 Exegy Incorporated Method and System for High Speed Options Pricing
US20090182683A1 (en) 2008-01-11 2009-07-16 Exegy Incorporated Method and System for Low Latency Basket Calculation
US20090287628A1 (en) 2008-05-15 2009-11-19 Exegy Incorporated Method and System for Accelerated Stream Processing
WO2010077829A1 (en) 2008-12-15 2010-07-08 Exegy Incorporated Method and apparatus for high-speed processing of financial market depth data
US20110040776A1 (en) * 2009-08-17 2011-02-17 Microsoft Corporation Semantic Trading Floor
US20120246052A1 (en) 2010-12-09 2012-09-27 Exegy Incorporated Method and Apparatus for Managing Orders in Financial Markets

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2832045A4 *

Cited By (11)

Publication number Priority date Publication date Assignee Title
US9396222B2 (en) 2006-11-13 2016-07-19 Ip Reservoir, Llc Method and system for high performance integration, processing and searching of structured and unstructured data using coprocessors
US10191974B2 (en) 2006-11-13 2019-01-29 Ip Reservoir, Llc Method and system for high performance integration, processing and searching of structured and unstructured data
US11449538B2 (en) 2006-11-13 2022-09-20 Ip Reservoir, Llc Method and system for high performance integration, processing and searching of structured and unstructured data
US9990393B2 (en) 2012-03-27 2018-06-05 Ip Reservoir, Llc Intelligent feed switch
US10121196B2 (en) 2012-03-27 2018-11-06 Ip Reservoir, Llc Offload processing of data packets containing financial market data
US10650452B2 (en) 2012-03-27 2020-05-12 Ip Reservoir, Llc Offload processing of data packets
US10872078B2 (en) 2012-03-27 2020-12-22 Ip Reservoir, Llc Intelligent feed switch
US10963962B2 (en) 2012-03-27 2021-03-30 Ip Reservoir, Llc Offload processing of data packets containing financial market data
US11436672B2 (en) 2012-03-27 2022-09-06 Exegy Incorporated Intelligent switch for processing financial market data
AU2014272791B2 (en) * 2013-05-31 2017-01-12 Nasdaq Technology Ab Apparatus, system, and method of elastically processing message information from multiple sources
US11935120B2 (en) 2020-06-08 2024-03-19 Liquid-Markets GmbH Hardware-based transaction exchange

Also Published As

Publication number Publication date
EP2832045A4 (en) 2015-11-25
EP2832045A1 (en) 2015-02-04

Similar Documents

Publication Publication Date Title
US10872078B2 (en) Intelligent feed switch
US20220414778A1 (en) Intelligent Packet Switch
US10963962B2 (en) Offload processing of data packets containing financial market data
US10650452B2 (en) Offload processing of data packets
US20140180904A1 (en) Offload Processing of Data Packets Containing Financial Market Data
EP2832045A1 (en) Offload processing of data packets
US11563672B2 (en) Financial network
US11374777B2 (en) Feed processing
US9904931B2 (en) FPGA matrix architecture
TWI525570B (en) Field programmable gate array for processing received financial orders and method of using the same
US8868461B2 (en) Electronic trading platform and method thereof
US20130159449A1 (en) Method and Apparatus for Low Latency Data Distribution
JP2007234014A (en) Event multicast platform for scalable content base
US20210359952A1 (en) Technologies for protocol-agnostic network packet segmentation
WO2022031880A1 (en) Local and global quality of service shaper on ingress in a distributed system
US20190228471A1 (en) Apparatus and a method for creating a high speed financial market data message stream
CN109617833A (en) The NAT Data Audit method and system of multithreading user mode network protocol stack system
AU2011201428B2 (en) Single threaded system for matching, computation and broadcasting of market data for stock exchange
CN113162864B (en) RoCE network flow control method, device, equipment and storage medium
US20230316399A1 (en) Electronic Trading System and Method based on Point-to-Point Mesh Architecture
Bulda et al. Dynamic verification of input and output data streams for market data aggregation and quote dissemination systems (Ticker Plant)
CN117560433A (en) DPU (digital versatile unit) middle report Wen Zhuaifa order preserving method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13767579

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2013767579

Country of ref document: EP