US20210350426A1 - Architecture for data processing and user experience to provide decision support


Info

Publication number
US20210350426A1
US20210350426A1 (application US17/313,958)
Authority
US
United States
Prior art keywords: data, company, forecast, user, consumer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US17/313,958
Inventor
Damian Ariel Scavo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NowcastingAI Inc
Original Assignee
NowcastingAI Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NowcastingAI Inc filed Critical NowcastingAI Inc
Priority to US17/313,958 priority Critical patent/US20210350426A1/en
Assigned to Nowcasting.ai, Inc. reassignment Nowcasting.ai, Inc. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SCAVO, DAMIAN ARIEL
Publication of US20210350426A1 publication Critical patent/US20210350426A1/en
Priority to US17/726,357 priority patent/US20230070176A1/en
Current legal status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q 30/0201 Market modelling; Market analysis; Collecting market data
    • G06Q 30/0202 Market predictions or forecasting for commercial activities
    • G06Q 30/0282 Rating or review of business operators or products
    • G06Q 10/00 Administration; Management
    • G06Q 10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q 10/063 Operations research, analysis or management
    • G06Q 10/0633 Workflow analysis
    • G06Q 10/0639 Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q 10/06393 Score-carding, benchmarking or key performance indicator [KPI] analysis
    • G06Q 40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/04 Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange
    • G06Q 40/06 Asset management; Financial planning or analysis
    • G06Q 40/12 Accounting
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/21 Design, administration or maintenance of databases
    • G06F 16/215 Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
    • G06F 16/24 Querying
    • G06F 16/245 Query processing
    • G06F 16/2458 Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F 16/2465 Query processing support for facilitating data mining operations in structured databases
    • G06F 16/25 Integrating or interfacing systems involving database management systems
    • G06F 16/254 Extract, transform and load [ETL] procedures, e.g. ETL data flows in data warehouses
    • G06F 16/258 Data format conversion from or to a database
    • G06F 16/28 Databases characterised by their database models, e.g. relational or object models
    • G06F 16/284 Relational databases
    • G06F 16/285 Clustering or classification
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/55 Push-based network services

Definitions

  • the present disclosure relates to systems and methods for measuring data (e.g., revenue) and determining present and future trends, and providing a recommendation based on the measured data and determined trends.
  • data analysts typically study and analyze trends in economic data manually. For example, these analysts may manually study data in order to determine when to buy or sell a stock, or what the unemployment rate looks like in any given period.
  • the data used to determine these metrics may be unreliable due to the lack of availability of such proprietary information.
  • data which should not necessarily be correlated with a particular economic metric may not have been filtered out when analyzing the data, thereby introducing inaccuracies within the analysis.
  • the server apparatus may include means for storing collected data and historical data, and means for executing operations to achieve functions associated with the features disclosed in the detailed description.
  • a system for processing data to generate an output, the system comprising: an automated tagging system that receives data from a plurality of alternate data providers, each of the plurality of data providers having different types of data; wherein the automated tagging system standardizes the different types of data received from the different data providers by performing filtering, de-duplication, normalization, and classification; wherein the automated tagging system performs updating and feedback by providing an artificial intelligence system that employs neural networks that use back propagation for periodic updates to a training phase; a company financial data unit that provides published information comprising annual reports, press releases, information from the social media of spokespersons or executives, and published pricing information; a modelling system that receives the standardized data of the automated tagging system and the company financial data, the modelling system including one or more handle generators and forecast builders that are applied to the artificial intelligence-based system that employs the neural networks to generate, as an output, a forecast; a revenue prediction unit that receives the output as the forecast, comprising one or more revenue predictions, to generate a
  • the different types of data comprise financial transaction information, location information, or consumer behavior information.
  • the input to the modelling system includes proprietary data associated with the merger and acquisition information of the company, and a fiscal quarters calendar of the company.
  • the final output comprises a recommendation to buy, hold or sell, as a transaction, a score, or an execution instruction associated with the transaction without involving the user.
  • a system for processing data to generate an output, the system comprising: a plurality of inputs, each of the inputs being received from a data source, and each of the inputs being of a different type; a big data system configured to receive each of the plurality of inputs, the big data system including a plurality of adapters corresponding to the plurality of inputs, each of the adapters configured to normalize, deduplicate and classify the data received from the plurality of inputs to generate outputs, and a modeling system that receives the generated outputs and provides the generated outputs to a plurality of multi-panel generators, multi-forecast builders and forecaster modules to generate a revenue prediction; and an access point that provides the revenue prediction in the format of a file, an API, a console or a custom format.
  • data miners and analysts supervise and update the plurality of adapters.
  • outputs of the plurality of adapters are combined with external data comprising financial reports or other publicly available information associated with historical or present characteristics of an entity.
  • a computer-implemented method of processing data comprising: a first phase associated with data processing, the first phase comprising receiving input data from a plurality of sources of different types, normalizing the received data by mapping the received input data from the organic formats in which the data was received to a standardized data format that provides consistency across the organic formats by accounting for the differences in the input data from the plurality of sources of different types, and deduplication, to generate normalized data; applying tagging rules to the normalized data, the tagging rules comprising rules that are specific to a credit card or debit card, geo-fencing rules for GPS data, and rules associated with browser history or application usage, to generate tagged data; a second phase associated with development of a panel and calibration, the second phase comprising performing panelization by establishing a sample of users as a panel based on one or more of input data churn rate, user transaction patterns, census data balancing, or other rules that associate a characteristic of user transaction behavior with a transaction grouping the
  • the input data comprises timeseries data from multiple vendors, including credit card transactions, debit card transactions or other electronic purchase transactions.
  • the tagging comprises, for a series of financial transactions, labeling each of the financial transactions by applying the tagging rules to the series of financial transactions, wherein the tagging rules include an inclusive filter or an exclusive filter, and further comprising applying natural language processing based on neural networks.
  • FIG. 1 illustrates functions of an architecture according to an example implementation of the present application.
  • FIG. 2 illustrates a structure of an architecture according to an example implementation of the present application.
  • FIG. 3 illustrates a process associated with an architecture according to an example implementation of the present application.
  • FIG. 4 illustrates example alternative data types according to an example implementation of the present application.
  • FIG. 5 illustrates example alternative dataset processing according to an example implementation of the present application.
  • FIG. 6 illustrates example automatic tagging according to an example implementation of the present application.
  • FIG. 7 illustrates user experiences according to an example implementation of the present application for a watchlist.
  • FIG. 8 illustrates processes associated with generation of the user experiences according to an example implementation of the present application for the watchlist.
  • FIG. 9 illustrates user experiences according to an example implementation of the present application for a forecast analysis.
  • FIG. 10 illustrates processes associated with generation of the user experiences according to an example implementation of the present application for the forecast analysis.
  • FIG. 11 illustrates user experiences according to an example implementation of the present application for the forecast analysis.
  • FIG. 12 illustrates user experiences according to an example implementation of the present application for an alert chain.
  • FIG. 13 illustrates processes associated with generation of the user experiences according to an example implementation of the present application for the alert chain.
  • FIG. 14 illustrates processes associated with generation of the user experiences according to an example implementation of the present application for automated trading.
  • FIGS. 15-16 illustrate processes associated with generation of the mobile user experiences according to an example implementation of the present application for automated trading.
  • FIGS. 17-21 illustrate various example implementations.
  • FIGS. 22A-22B illustrate various example implementations.
  • FIGS. 23-24 illustrate various example implementations.
  • FIG. 25 illustrates an example environment according to an example implementation of the present application.
  • FIG. 26 illustrates an example computing environment with an example computer device suitable for use in some example implementations.
  • when considering purchasing a particular stock, a purchaser such as a trader may consider corporate information related to the company associated with the stock. This corporate information may include, at a very high level, how much revenue a company is taking in, and how much money the company lost for a particular period of time (e.g., quarter, month, year).
  • the cleaned, normalized data may be present data that is compared to historical data, to provide an analysis of current trends.
  • While the above example implementation describes determining trends related to the stock market, other metrics may be determined (e.g., obtained deterministically), including a regional, national, or universal unemployment rate, public and non-public company revenues and market shares, consumer behavior across several companies, a basket of stocks (e.g., more than one particular stock such as an entire stock portfolio), electronic indices, restaurant indices, how particular sectors in the workforce are performing, inflation, and trends for mutual funds. In particular, any data or information which may have an economic impact may be measured.
  • aspects of the example implementation are directed to apparatuses, methods and systems for receiving a plurality of data inputs from different data providers having different types of data.
  • the different types of data are received in a big data system, which standardizes the data by performing normalization, de-duplication, and classification, and optionally other processes.
  • the standardized data is provided to a multi-panel generator that builds a panel, builds a forecast model, and generates a forecast.
  • the forecast is output to an access point, where a user may access the forecast.
  • the user experience may involve receiving the result of the forecast as a recommendation on a watchlist showing multiple entities, a detailed performance and recommendation report for one of the multiple entities, and/or an alert chain that provides the user with an alert that may be triggered by one or more conditions.
  • the data is processed continuously and in real time, such that the user is automatically provided with real-time updates to the recommendations, watchlist and/or alert chain.
  • the user may have the opportunity to execute a transaction, either manually or automatically, as well as to provide real-time feedback on the system.
  • an architecture is provided, as explained in greater detail below. Further, the user experience associated with the architecture is also described in greater detail.
  • FIGS. 1-3, for example implementations described herein, relate to an architecture, including system elements and associated functions, as well as operations.
  • FIG. 1 illustrates a schematic description of the architecture according to an example implementation.
  • data is received from a plurality of alternate data providers.
  • the details of the plurality of alternate data providers will be described below in greater detail.
  • some of the data providers may include providers of financial transaction information, location information such as GPS data, consumer behavior information such as social media or online publication information, or other data.
  • an automated tagging system receives the data from the plurality of data providers, and performs tagging, or labeling of the data. More specifically, the data may be standardized across the different types of data that were received from the different data providers.
  • the standardization processes may include, but are not limited to, filtering, de-duplication, normalization, and classification or labeling. Further details of the automated tagging system are provided.
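  • The standardization steps named above can be illustrated with a short sketch. The following Python example is a minimal, hypothetical adapter, not the patent's actual implementation: the field names, the keyword-to-brand table, and the filtering rule are assumptions used only to show filtering, de-duplication, normalization, and classification in sequence.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Transaction:
    """Common schema that every vendor's organic format is mapped onto."""
    txn_id: str
    description: str
    amount_cents: int
    timestamp: str  # ISO-8601

# Hypothetical keyword-to-brand table used for the classification step.
BRAND_KEYWORDS = {"acme coffee": "ACME", "acme cafe": "ACME"}

def standardize(records: list[dict]) -> list[dict]:
    seen: set[str] = set()
    out: list[dict] = []
    for rec in records:
        # Filtering: drop records that cannot contribute to a revenue signal.
        if rec.get("amount_cents", 0) <= 0:
            continue
        # Normalization: map the vendor's fields onto the common schema.
        txn = Transaction(
            txn_id=str(rec["id"]),
            description=rec["desc"].strip().lower(),
            amount_cents=int(rec["amount_cents"]),
            timestamp=rec["ts"],
        )
        # De-duplication: keep only the first occurrence of a transaction id.
        if txn.txn_id in seen:
            continue
        seen.add(txn.txn_id)
        # Classification: attach a brand label via a simple keyword rule.
        brand = next((b for k, b in BRAND_KEYWORDS.items() if k in txn.description), "UNKNOWN")
        out.append({"txn": txn, "brand": brand})
    return out
```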
  • manual or automatic processes may be provided for the automatic tagging system to be updated, and for checking and auditing of the process.
  • the input to the updating and supervising process may be provided to a manual source, such as a data miner or an analyst.
  • an artificial intelligence system that employs machine learning, such as neural networks, may use back propagation techniques for periodic updates to the training phase.
  • financial data associated with a company is provided. More specifically, published information such as annual reports, press releases, information from the social media of spokespersons or executives, published pricing information, and other information as would be understood by those skilled in the art is provided.
  • the standardized data of the automated tagging system 103 and the company financial data 107 are provided to a modeling system 109 .
  • the modeling system 109 includes one or more handle generators and forecast builders, which are applied to a model.
  • the model may include, but is not limited to an artificial intelligence-based system that applies machine learning, in the form of neural networks, to receive the standardized data of the automated tagging system 103 and the company financial data 107 , apply them to the neural network, and generate, as an output, a forecast.
  • an input to the modeling system may include, but is not limited to, proprietary data associated with the merger and acquisition information of the company, and the fiscal quarters calendar of the company.
  • an output of the forecast is provided as one or more revenue predictions.
  • the company financial data that is based on the standardized data, as well as the proprietary data, is fed into the machine learning model to generate the revenue predictions.
  • a final output is provided in the form of a recommendation.
  • the final output may be a recommendation to buy, hold or sell, as a transaction.
  • the final output may be a score.
  • the final output may be the execution of the transaction itself, without involving the user.
  • a rule-based or other deterministic approach may be employed, in combination with the probabilistic approach of the machine learning, as disclosed above.
  • FIG. 2 illustrates an architecture 200 according to the example implementations.
  • Inputs 201 include data providers 201 - 207 provided to a big data system 211 , which is in turn associated with an access point 213 .
  • data is received as inputs 201 from a plurality of N data providers 203 - 207 .
  • at the data source, the data is acquired from one or more data providers, such as the N data providers 203 - 209 , as well as any other data providers as would be understood by those skilled in the art.
  • N corresponding adapters 215 - 221 are provided.
  • the adapters 215 - 221 may normalize 223 , deduplicate 225 and classify 227 the data, as described herein.
  • data is provided by the data providers 203 - 209 , and processed by the data adapters 215 - 221 .
  • the output of the data adapters 215 - 221 is provided to an automatic tagging system, such that resulting output data has been normalized, de-duplicated and classified, as disclosed above.
  • These aspects of the example implementations are automated; however, data miners and analysts may optionally supervise the process, and continue to update the process.
  • the resulting output data may be combined with company financial data, such as financial reports or other publicly available information associated with characteristics of the company, either historical or present.
  • the modeling system may include plural multi-panel generators 229 , 231 , 233 , multi-forecast builders 235 , 237 , 239 , and forecast modules 241 , 243 , 245 .
  • the modeling system generates, as its output forecast, a revenue prediction, based on financial data, proprietary data and machine learning. More specifically, the financial data and proprietary data includes the above described inputs of the automated tagging system and the company financial data.
  • the data may include financial reports, merger and acquisition information, and quarterly fiscal performance.
  • the output is the forecast, which may be provided in the format of a file, an API, a console or a custom format. Accordingly, the output is provided to a user at the user access point.
  • Example user experiences associated with the access point are described in greater detail herein.
  • FIG. 3 illustrates a process 300 associated with the foregoing structures and functions according to the example implementations. More specifically, the process is divided into a first phase 301 associated with data processing, a second phase 303 associated with development of the panel and calibration, and a third phase 305 , associated with creating and applying a prediction model, and generating a forecast.
  • the input data is from a plurality of sources of different types.
  • the input data may include timeseries data from multiple vendors, such as credit card transactions, debit card transactions or other electronic purchase transactions.
  • the data is normalized. More specifically, the input data of operation 307 is mapped from the organic vendor data format in which it was received, to an internal data format that provides consistency across vendors. For example but not by way of limitation, the normalization involves accounting for the differences in the different alternate data sets, and standardizing that data to a common standard. Additional aspects may include de-duplication or other processes to standardize the data.
  • the tagging may involve the application of tagging rules on the normalized data.
  • the rules may include, but are not limited to, rules that are specific to a credit card or debit card, geo-fencing rules for GPS data, rules associated with browser history or application usage, or other rules as would be understood by those skilled in the art. For example, but not by way of limitation, for a series of financial transactions, each of the financial transactions may be labeled, or tagged.
  • one or more tagging rules 313 may be applied.
  • the tagging rules may be considered to be an inclusive filter, or an exclusive filter. Further, in addition to a rule-based approach such as a filter, a natural language processing approach may be taken that incorporates artificial intelligence, such as machine learning models that involve neural networks.
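  • As a concrete illustration of the inclusive and exclusive filters described above, the following minimal Python sketch applies hypothetical rule patterns to a normalized transaction description; the neural NLP fallback mentioned above is represented only by a stub, and the specific patterns are assumptions.

```python
import re

INCLUSIVE_RULES = {"ACME": re.compile(r"\bacme\b")}              # tag when this matches
EXCLUSIVE_RULES = {"ACME": re.compile(r"\bacme\s+rentals\b")}    # suppress the tag when this matches
NEVER_MATCH = re.compile(r"(?!)")                                # placeholder for brands with no exclusive rule

def tag_transaction(description: str) -> str | None:
    text = description.lower()
    for brand, include in INCLUSIVE_RULES.items():
        exclude = EXCLUSIVE_RULES.get(brand, NEVER_MATCH)
        if include.search(text) and not exclude.search(text):
            return brand
    # No deterministic rule fired: fall back to a learned NLP classifier (stubbed here).
    return None

print(tag_transaction("ACME #1042 POS PURCHASE"))  # -> "ACME"
print(tag_transaction("ACME RENTALS DEPOSIT"))     # -> None (excluded)
```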
  • a panelization operation 315 is performed.
  • a sample of users is established as a panel.
  • the users are chosen to match criteria associated with the panel.
  • the panel may be created in a manner that is associated with the input data churn rate.
  • Other examples of panelization rules are shown at 317 , and those rules may be based on user transaction patterns, census data balancing, or other rules that associate a characteristic of user transaction behavior with a transaction.
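  • A minimal Python sketch of the panelization step follows; the churn and transaction-count thresholds and the field names are illustrative assumptions, meant only to show how rules such as those above select a stable sample of users as the panel.

```python
def build_panel(users: list[dict], max_churn: float = 0.05, min_txns: int = 12) -> list[str]:
    """Select user ids whose data feed is stable enough to serve as a panel."""
    panel = []
    for user in users:
        stable = user["monthly_churn_rate"] <= max_churn       # low churn in the input feed
        active = user["txn_count_last_quarter"] >= min_txns    # consistent transaction pattern
        if stable and active:
            panel.append(user["user_id"])
    return panel

users = [
    {"user_id": "u1", "monthly_churn_rate": 0.02, "txn_count_last_quarter": 40},
    {"user_id": "u2", "monthly_churn_rate": 0.20, "txn_count_last_quarter": 8},
]
print(build_panel(users))  # -> ['u1']
```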
  • a grouping process is performed at 319 . More specifically, the panels are grouped by symbol. Brands associated with the data in the tagging process are assigned to symbols associated with a company. It is noted that this data may change over time, due to mergers, acquisitions, spinoffs, bankruptcies, rebranding, listing, delisting or other financial events. Thus, the events are assigned in real time to be included at the time of assigning the brand to the symbol at 319 .
  • the symbol may be chosen from a database, such as those shown at 321 .
  • one or more corrections are applied to the grouped data. For example but not by way of limitation, as shown in 325 , patterns that are specifically associated with a financial institution are applied to calibrate the grouping. Examples of such patterns may include weekend postings of information, posting delays typically associated with a financial institution, as well as pending but not yet posted transactions. Additionally, as shown at 327 , anomalies are removed. Examples of anomalies that are removed may include, but are not limited to, anomalous transactions or anomalous users.
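  • The corrections above can be sketched as a small calibration pass. In the hypothetical Python example below, weekend postings are attributed to the preceding business day, a typical institution posting delay is undone, and anomalous amounts are dropped; the one-day delay and the anomaly threshold are assumptions.

```python
from datetime import date, timedelta

def calibrate(txns: list[dict], posting_delay_days: int = 1, anomaly_cents: int = 5_000_000) -> list[dict]:
    out = []
    for t in txns:
        posted: date = t["posted_date"]
        # Weekend postings are attributed to the preceding Friday.
        if posted.weekday() >= 5:
            posted -= timedelta(days=posted.weekday() - 4)
        # Undo the institution's typical posting delay.
        effective = posted - timedelta(days=posting_delay_days)
        # Remove anomalous transactions before aggregation.
        if abs(t["amount_cents"]) >= anomaly_cents:
            continue
        out.append({**t, "effective_date": effective})
    return out

txns = [{"posted_date": date(2021, 5, 8), "amount_cents": 2599}]  # Saturday posting
print(calibrate(txns)[0]["effective_date"])                       # -> 2021-05-06
```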
  • a prediction model is generated.
  • historical data associated with a company, such as the fundamentals of the company and the stock price, as well as historical measurements that may have previously been provided by the example implementations, are used as training data. Accordingly, the prediction model is trained based on this training data, as shown at 331 .
  • a forecast is generated.
  • features associated with the open quarter 335 are applied to derive a prediction of a future stock price or a future company fundamental metric. While the features are disclosed as being directed to an open quarter, example implementations are not limited thereto, and other open periods may be substituted therefor.
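  • As a simplified stand-in for the prediction model described above (the patent describes neural-network models), the Python sketch below fits a linear model on hypothetical historical panel measurements versus reported fundamentals, then applies it to open-quarter features to produce a forecast; all numbers and feature names are illustrative assumptions.

```python
import numpy as np

# Historical quarters: [panel spend index, panel user index] -> reported revenue ($M), hypothetical.
X_hist = np.array([[1.00, 1.00], [1.08, 1.03], [1.15, 1.07], [1.22, 1.12]])
y_hist = np.array([500.0, 540.0, 575.0, 610.0])

# Training phase (331): least-squares fit with an intercept term.
A = np.hstack([X_hist, np.ones((len(X_hist), 1))])
coef, *_ = np.linalg.lstsq(A, y_hist, rcond=None)

# Open-quarter features (335) produce the forecast for the open period.
x_open = np.array([1.30, 1.18, 1.0])
print(float(x_open @ coef))  # forecast revenue for the open quarter
```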
  • the hardware provides for continuous large-scale data inputs, such that the model is continuously receiving data and is updated automatically and in real time.
  • the processing capability of the hardware must be sufficient to permit such processing.
  • a GPU may be used for the artificial intelligence processing.
  • an NPU (neural processing unit) may also be used.
  • One or both of these units may be used in a processor that is located remotely, such as in a distributed server system or a cloud computing system.
  • external data fetching may be performed by copying data from an external third party (e.g., vendor), and storing the data in a cloud storage container.
  • the data fetching process may be managed by a scheduling server, and/or a serverless compute service that executes operations to manage the external data storage and the associated compute resources.
  • batch computing is the execution of a series of executable instructions (“jobs”) on one or more processors without manual intervention by a user, e.g., automatically.
  • Input parameters may be pre-defined through scripts, command-line arguments, control files, or job control language.
  • a batch job may be associated with completion of preceding jobs, or the availability of certain inputs. Thus, the sequencing and scheduling of multiple jobs is critical.
  • batch processing may not be performed with interactive processing.
  • the batch management processor or service may permit a user to create a job queue and job definition, and then to execute the job definition and review the results.
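  • Because a job may depend on completion of preceding jobs, the queue has to be executed in dependency order. The Python sketch below uses the standard-library graphlib module to order a hypothetical set of pipeline jobs; the job names and dependencies are assumptions.

```python
from graphlib import TopologicalSorter

# Map each job to the jobs it depends on (hypothetical pipeline).
job_deps = {
    "normalize": {"fetch_vendor_data"},
    "deduplicate": {"normalize"},
    "tag": {"deduplicate"},
    "build_panel": {"tag"},
    "forecast": {"build_panel", "fetch_company_financials"},
}

for job in TopologicalSorter(job_deps).static_order():
    print("run", job)  # submit to the batch queue in dependency order
```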
  • a batch cluster includes 256 CPUs, and an ETL-dedicated server having 64 cores and 312 GB of RAM. The number of running instances may be 1.
  • the foregoing ETL infrastructure may also be applied to the process of insight extraction.
  • an API is provided for data access.
  • the REST API which conforms to a REST style architecture and allows for interaction with RESTful resources, may be executed on a service.
  • the service may include, but is not limited to, hardware such as 1 vCPU, 2GB RAM, 10GB SSD disk, and a minimum of two running instances.
  • the API may be exposed to the Internet via an application load balancer, which is elastic and permits configuration and routing of an incoming end-user to applications based in the cloud, optionally pushing traffic across multiple targets in multiple availability zones.
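  • A minimal sketch of such a data-access API, assuming Flask, is shown below; the route, payload fields, and in-memory store are illustrative only, since the patent does not specify an API schema.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical forecast store populated by the modeling pipeline.
FORECASTS = {"ACME": {"metric": "revenue", "forecast": 652.4, "score": "overperforming"}}

@app.route("/v1/forecast/<symbol>")
def get_forecast(symbol: str):
    data = FORECASTS.get(symbol.upper())
    if data is None:
        return jsonify({"error": "unknown symbol"}), 404
    return jsonify({"symbol": symbol.upper(), **data})

if __name__ == "__main__":
    app.run()  # in deployment this service would sit behind the load balancer and CDN
```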
  • the caching layer may be provided by a fast content delivery network (CDN) service, which may securely deliver the data described herein with low latency and high transfer speeds.
  • containers may be run without having to manage servers or clusters of instances, such that there is no need to provision, configure, or scale clusters on virtual machines to execute operations associated with containers.
  • the architecture is configured to receive a plurality of alternative data sets.
  • examples of the alternative data inputs 400 may include, but are not limited to, credit card transactions and debit card transactions 401 , mobile device usage information 403 , geolocation data 405 , social data and sentiment data 407 , and web traffic and Internet data 409 .
  • external data fetching may be performed by copying data from an external third party (e.g., vendor), and storing the data in a cloud storage container.
  • the data fetching process may be managed by a scheduling server, and/or a serverless compute service that executes operations to manage the external data storage and the associated compute resources.
  • Batch computing is the execution of a series of executable instructions (“jobs”) on one or more processors without manual intervention by a user, e.g., automatically.
  • Input parameters may be pre-defined through scripts, command-line arguments, control files, or job control language.
  • a batch job may be associated with completion of preceding jobs, or the availability of certain inputs. Thus, the sequencing and scheduling of multiple jobs is critical.
  • batch processing may not be performed with interactive processing.
  • the batch management processor or service may permit a user to create a job queue and job definition, and then to execute the job definition and review the results.
  • a batch cluster includes 256 CPUs, and an ETL-dedicated server having 64 cores and 312 GB of RAM.
  • the number of running instances may be 1.
  • the foregoing ETL infrastructure may also be applied to the process of insight extraction.
  • an API is provided for data access.
  • the REST API which conforms to a REST style architecture and allows for interaction with RESTful resources, may be executed on a service.
  • the service may include, but is not limited to, hardware such as 1 vCPU, 2GB RAM, 10GB SSD disk, and a minimum of two running instances.
  • the API may be exposed to the Internet via an application load balancer, which is elastic and permits configuration and routing of an incoming end-user to applications based in the cloud, optionally pushing traffic across multiple targets in multiple availability zones.
  • the caching layer may be provided by a fast content delivery network (CDN) service, which may securely deliver the data described herein with low latency and high transfer speeds.
  • containers may be run without having to manage servers or clusters of instances, such that there is no need to provision, configure, or scale clusters on virtual machines to execute operations associated with containers.
  • FIG. 5 illustrates the processing of the alternative data sets according to the example implementations. More specifically, at 500 , the receiving and processing of the alternative data sets is disclosed.
  • the alternative data sets 501 may include, but are not limited to, data having alternative types. For example but not by way of limitation, GPS data 503 , data associated with financial transactions 505 , vaccination data 507 , satellite image data 509 , app usage data 511 , and browsing history information 513 are all types of data that might be a part of the alternative data sets 501 .
  • the alternative data sets 501 are not limited to the foregoing examples, and other data sets may also be included as would be understood by those skilled in the art.
  • the multiple sources of data have an ETL (extract, transform, load) operation performed, to extract, transform and load the data. Accordingly, the features associated with the data types one through m at 515 are extracted into corresponding features one through m as shown at 517 .
  • the process is subjected to a modeling operation.
  • the modeling operation includes, as shown at 521 , selection of the model, selection of the features, performing the training phase on the model, and performing model testing.
  • the modeling operation 519 may be performed on an artificial intelligence system that uses neural networks and machine learning.
  • a validation step is performed, also known as backtesting.
  • For example, the historical data associated with stock prices, company fundamentals, and historical decisions or events, as disclosed at 525 , are applied to the model that was generated.
  • a determination is made, based on the validation of 523 , whether the application of the historical information successfully validates the model. If the model was not successfully validated, or in other words the backtesting results were not found to be acceptable, the process returns to 519 , and the modeling is again performed.
  • if the model was successfully validated, the operation proceeds. More specifically, the operation proceeds to the modeling operation 527 and the forecasting operation 529 , as discussed above.
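  • The validation loop above can be sketched as a simple walk-forward backtest: fit on earlier quarters, predict the next, and accept the model only when the error stays inside a tolerance, otherwise return to the modeling step. The data, linear model, and tolerance in this Python example are illustrative assumptions.

```python
import numpy as np

def backtest(X: np.ndarray, y: np.ndarray, tolerance: float = 0.05) -> bool:
    errors = []
    for split in range(2, len(y)):
        # Fit on quarters before the split, with an intercept term.
        A = np.hstack([X[:split], np.ones((split, 1))])
        coef, *_ = np.linalg.lstsq(A, y[:split], rcond=None)
        # Predict the held-out quarter and record the relative error.
        pred = np.append(X[split], 1.0) @ coef
        errors.append(abs(pred - y[split]) / abs(y[split]))
    return float(np.mean(errors)) <= tolerance  # acceptable -> proceed to forecasting

X = np.array([[1.00], [1.05], [1.12], [1.20], [1.26]])
y = np.array([100.0, 105.0, 112.0, 120.0, 126.0])
print(backtest(X, y))  # True -> proceed to 527/529; False -> repeat the modeling at 519
```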
  • external data fetching may be performed by copying data from an external third party (e.g., vendor), and storing the data in a cloud storage container.
  • the data fetching process may be managed by a scheduling server, and/or a serverless compute service that executes operations to manage the external data storage and the associated compute resources.
  • the extraction, transformation and loading of data as described herein may be executed by a batch management processor or service.
  • Batch computing is the execution of a series of executable instructions (“jobs”) on one or more processors without manual intervention by a user, e.g., automatically.
  • Input parameters may be pre-defined through scripts, command-line arguments, control files, or job control language.
  • a batch job may be associated with completion of preceding jobs, or the availability of certain inputs. Thus, the sequencing and scheduling of multiple jobs is critical.
  • batch processing may not be performed with interactive processing.
  • the batch management processor or service may permit a user to create a job queue and job definition, and then to execute the job definition and review the results.
  • a batch cluster includes 256 CPUs, and an ETL-dedicated server having 64 cores and 312 GB of RAM. The number of running instances may be 1.
  • an API is provided for data access.
  • the REST API which conforms to a REST style architecture and allows for interaction with RESTful resources, may be executed on a service.
  • the service may include, but is not limited to, hardware such as 1 vCPU, 2GB RAM, 10GB SSD disk, and a minimum of two running instances.
  • the API may be exposed to the Internet via an application load balancer, which is elastic and permits configuration and routing of an incoming end-user to applications based in the cloud, optionally pushing traffic across multiple targets in multiple availability zones.
  • the caching layer may be provided by a fast content delivery network (CDN) service, which may securely deliver the data described herein with low latency and high transfer speeds.
  • containers may be run without having to manage servers or clusters of instances, such that there is no need to provision, configure, or scale clusters on virtual machines to execute operations associated with containers.
  • FIG. 6 illustrates automatic tagging 600 in accordance with the example implementations.
  • the transactions are received and the alternate data sets are processed as explained above.
  • normalization is performed. More specifically, features are extracted from the transactions that are relevant for the brand classifier.
  • brand classification is performed. More specifically, the extracted features of the normalization 603 are applied to a classification layered model. In particular, a rule-based approach is applied that performs the extraction in a deterministic manner, and the rule-based approach is mixed with an artificial intelligence approach, such as machine learning based on neural networks, that is probabilistic in nature. Accordingly, the brand classification 605 is performed in a mixed deterministic and probabilistic model.
  • a verification step is performed to verify that the classifier is accurate. More specifically, a sample of the classified data is verified, to confirm that the labeling was correctly applied with respect to the brand. If necessary, the classifier is retrained. Optionally, this verification and retraining operation at 607 may be performed iteratively, until the brand classification has been verified to a threshold confidence level.
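  • The verify-and-retrain loop can be sketched as follows; the classifier interface (predict / retrain) and the sample size are hypothetical stand-ins, used only to show sampling the classified output, checking accuracy against reviewed labels, and retraining until a confidence threshold is met.

```python
import random

def verify_and_retrain(classifier, reviewed_sample: list[tuple[str, str]],
                       threshold: float = 0.95, max_rounds: int = 5) -> bool:
    for _ in range(max_rounds):
        # Draw a sample of (description, expected brand) pairs that were manually reviewed.
        checked = random.sample(reviewed_sample, k=min(100, len(reviewed_sample)))
        correct = sum(classifier.predict(text) == brand for text, brand in checked)
        if correct / len(checked) >= threshold:
            return True                 # brand classification verified
        classifier.retrain(checked)     # feed reviewed examples back into training
    return False                        # still below threshold after max_rounds
```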
  • the brands are classified with respect to companies.
  • the companies may be private companies, public companies, or other traded organizations having similar features to public or private companies.
  • a mixture of rule-based, deterministic operations and probabilistic, artificial intelligence operations such as machine learning and neural networks, are employed so that the brands are classified to companies.
  • the company classifier is verified to ensure that sampled data has been accurately labeled with respect to the classification of the company. If necessary, retraining, and optionally iterative retraining, may be performed until the company classification has been verified to a threshold confidence level.
  • the tagged transaction is considered to have been generated at 613 .
  • the labeling is performed in a manner such that the data has been automatically labeled.
  • An output of this process may be used in the panel generation, forecast building, and forecast.
  • the example implementations may have various advantages and benefits over the related art approaches.
  • the related art approaches may suffer from problems or disadvantages, such as incorrect tagging of names that are common (e.g., DENNYS or SPRINT), names that are short (e.g., AMC or BOOT), names that are specific (e.g., TACO or TARGET), and names that are similar (e.g., FIVE BELOW or BIG FIVE).
  • entities may be omitted if they do not follow a clear standard.
  • Examples of the lack of a clear standard resulting in company omissions include the use of abbreviations instead of the full name, a change in the name over time, slight differences in names due to a difference in the timing of the acquisition of the store, or typographical errors in the name of the store.
  • Other examples of related art errors include the assignment of the transactions to the wrong ticker (company indicia), even if the transactions clearly are associated with a different company.
  • external data fetching may be performed by copying data from an external third party (e.g., vendor), and storing the data in a cloud storage container.
  • the data fetching process may be managed by a scheduling server, and/or a serverless compute service that executes operations to manage the external data storage and the associated compute resources.
  • the extraction, transformation and loading of data as described herein may be executed by a batch management processor or service.
  • Batch computing is the execution of a series of executable instructions (“jobs”) on one or more processors without manual intervention by a user, e.g., automatically.
  • Input parameters may be pre-defined through scripts, command-line arguments, control files, or job control language.
  • a batch job may be associated with completion of preceding jobs, or the availability of certain inputs. Thus, the sequencing and scheduling of multiple jobs is critical.
  • batch processing may not be performed with interactive processing.
  • the batch management processor or service may permit a user to create a job queue and job definition, and then to execute the job definition and review the results.
  • a batch cluster includes 256 CPUs, and an ETL-dedicated server having 64 cores and 312 GB of RAM. The number of running instances may be 1.
  • an API is provided for data access.
  • the REST API which conforms to a REST style architecture and allows for interaction with RESTful resources, may be executed on a service.
  • the service may include, but is not limited to, hardware such as 1 vCPU, 2GB RAM, 10GB SSD disk, and a minimum of two running instances.
  • the API may be exposed to the Internet via an application load balancer, which is elastic and permits configuration and routing of an incoming end-user to applications based in the cloud, optionally pushing traffic across multiple targets in multiple availability zones.
  • the caching layer may be provided by a fast content delivery network (CDN) service, which may securely deliver the data described herein with low latency and high transfer speeds.
  • containers may be run without having to manage servers or clusters of instances, such that there is no need to provision, configure, or scale clusters on virtual machines to execute operations associated with containers.
  • the outputs of the example implementations described herein may be provided in a user experience, in a manner that provides for the user to visualize information generated by the architecture.
  • a user may be provided with a watchlist that is generated to provide the user with decision-support information associated with the outputs of the architecture, such as a prediction, forecast, recommendation or the like.
  • detailed analysis of an entry on the watchlist may be provided, along with a specific recommendation, and detailed metrics regarding the basis of the recommendation, and optionally compared with a recommendation provided by an external benchmark.
  • example implementations described herein may be used to provide a chain of alerts, or optionally, a decision support or automated decision tool, which combines a deterministic rules-based approach with the above described aspects, including but not limited to the artificial intelligence approaches.
  • Data processing associated with user experiences may be processed using the hardware disclosed herein. More specifically, but not by way of limitation, external data fetching according to the example implementations described herein may be performed by copying data from an external third party (e.g., vendor), and storing the data in a cloud storage container.
  • the data fetching process may be managed by a scheduling server, and/or a serverless compute service that executes operations to manage the external data storage and the associated compute resources.
  • the extraction, transformation and loading of data as described herein may be executed by a batch management processor or service. Batch computing is the execution of a series of executable instructions (“jobs”) on one or more processors without manual intervention by a user, e.g., automatically.
  • Input parameters may be pre-defined through scripts, command-line arguments, control files, or job control language.
  • a batch job may be associated with completion of preceding jobs, or the availability of certain inputs. Thus, the sequencing and scheduling of multiple jobs is critical.
  • batch processing may not be performed with interactive processing.
  • the batch management processor or service may permit a user to create a job queue and job definition, and then to execute the job definition and review the results.
  • a batch cluster includes 256 CPUs, and an ETL-dedicated server having 64 cores and 312 GB of RAM. The number of running instances may be 1.
  • the foregoing ETL infrastructure may also be applied to the process of insight extraction. Further, an API is provided for data access.
  • the REST API which conforms to a REST style architecture and allows for interaction with RESTful resources, may be executed on a service.
  • the service may include, but is not limited to, hardware such as 1 vCPU, 2GB RAM, 10GB SSD disk, and a minimum of two running instances.
  • the API may be exposed to the Internet via an application load balancer, which is elastic and permits configuration and routing of an incoming end-user to applications based in the cloud, optionally pushing traffic across multiple targets in multiple availability zones.
  • the caching layer may be provided by a fast content delivery network (CDN) service, which may securely deliver the data described herein with low latency and high transfer speeds.
  • containers may be run without having to manage servers or clusters of instances, such that there is no need to provision, configure, or scale clusters on virtual machines to execute operations associated with containers.
  • FIG. 7 illustrates a user experience associated with the example implementations.
  • the user experience provides a watchlist of companies, which may be selected by a user, for example, or suggested to the user based on preferences associated with the user.
  • the watchlist includes, in a first column 701 , a name of the company, including the company name and trading symbol.
  • a stock price is provided, along with information on the performance, such as change in share price as total amount and percentage.
  • a type of critical data associated with the performance that is being used as an input in the example implementations is identified.
  • the first row is associated with a first company, which may be a large retail company having a physical and online presence for retail sales, for which revenue is a critical indicator of performance.
  • for a second row, the number of users associated with the product is a critical indicator of performance.
  • for a third row, which is a retail restaurant that users may visit, same-store visits are provided as a critical indicator of performance.
  • for a fourth row, which is a large automotive manufacturer with an international presence, US revenues are provided as a critical indicator of performance.
  • for the fifth row, which is a social media company, engagement of users (e.g., time spent on the platform) is provided as a critical indicator of performance.
  • a performance associated with the critical indicator of performance is shown.
  • for the first row, the critical indicator of revenue is showing a performance increase of 0.4%; for the second row, the number of users is showing an increase of 3%; for the third row, same-store sales are showing a decrease of 10%; for the fourth row, the US revenues are showing an increase of 6.2%; and for the last row, user engagement is showing an increase of 3%.
  • the example user experience provides a user with information on a critical indicator of performance, as well as the actual performance based on the input data as explained above.
  • based on the critical indicator and the performance, the system generates a score in yet another column 709 .
  • the score provides an indication for the user of the performance of the company, which the user can apply as a form of a recommendation. For example, but not by way of limitation, in the case of the first row in FIG. 7 , a growth of 0.4% revenue is associated with a score of neutral, whereas in the second row, an increase in the number of users of 3% is associated with over performing. In the third row, a 10% decrease in same store sales is associated with an underperforming score. In the fourth row, an increase of 6.2% in US revenues is associated with an over performing score. In the last row, an increase of 3% engagement is associated with an over performing score.
  • the user experience provides the user with information on the critical indicator of performance, which is determined by the system based on the type of company and the available data among the plurality of the data streams. Further, the actual performance of the critical indicator for each of the companies is provided, along with a determination of the score.
  • the information of the first row such as revenue information may be generated based on the input credit card information that is received as a data source, as explained elsewhere in this disclosure. That information may be used to determine a revenue associated with the company, and may be used to calculate the performance.
  • optionally, a numerical score, such as a performance rating from 1 to 10, may be provided.
  • This information and score are generated based on the input data that is provided, as well as observations of what is characterized as an appropriate score relative to that company.
  • each company may have a different score determination based on company attributes such as industry or company size, to determine the amount of variation in performance that is necessary to provide the score.
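  • One way to read the scores in FIG. 7 is as a threshold band on the critical indicator's change, with the band varying by company attributes such as industry. The Python sketch below uses hypothetical thresholds; it is only an illustration of the mapping, not the scoring model itself.

```python
# Hypothetical per-industry bands for what counts as a meaningful change.
THRESHOLDS = {"retail": 0.02, "social_media": 0.01, "default": 0.02}

def score(indicator_change: float, industry: str = "default") -> str:
    band = THRESHOLDS.get(industry, THRESHOLDS["default"])
    if indicator_change > band:
        return "overperforming"
    if indicator_change < -band:
        return "underperforming"
    return "neutral"

print(score(0.004, "retail"))    # 0.4% revenue growth   -> neutral
print(score(0.062, "retail"))    # 6.2% revenue growth   -> overperforming
print(score(-0.10, "retail"))    # -10% same-store sales -> underperforming
```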
  • the example implementations described herein provide a method of determining a critical indicator, as shown in the watchlist.
  • historical data may be applied in an artificial intelligence machine learning model in order to determine the criteria having the highest correlation with respect to a change in the stock price.
  • some related art approaches may attempt to directly measure a gross number of users in order to determine revenue. However, such related art approaches may not be accurate, because a doubling of the number of users does not necessarily correlate to a doubling of the amount of revenue. This may be because the new users may not have the same representation, use pattern or consumer preferences as the initial or earlier users.
  • GPS data is received as an input, and is also stabilized in the automated tagging system, so that the data can be representative.
  • the data must be stabilized not just to show the overall number of users, but to show the amount of time that a user spends in an online app; if new users spend more or less time in the app than the current users, this may be reflective of a different user behavior with respect to revenue, purchases, advertisements or the like, and accounting for this can make the data more accurate.
  • the number of users may increase, although those new users may not be representative of the earlier users.
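  • The stabilization idea above can be sketched by weighting the panel by engagement rather than counting raw users, so that an influx of new, low-engagement users does not inflate the revenue proxy; the field names and figures below are assumptions.

```python
def engagement_weighted_index(panel: list[dict]) -> float:
    """Average minutes in the app per panel user, rather than raw headcount."""
    total_minutes = sum(u["minutes_in_app"] for u in panel)
    return total_minutes / len(panel)

old_panel = [{"user_id": "u1", "minutes_in_app": 120}, {"user_id": "u2", "minutes_in_app": 90}]
new_panel = old_panel + [{"user_id": "u3", "minutes_in_app": 10}]  # new, low-engagement user
print(engagement_weighted_index(old_panel))  # 105.0
print(engagement_weighted_index(new_panel))  # ~73.3: headcount grew, but the engagement-weighted index fell
```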
  • between the announcement of a merger and the completion of the merger process, several months or years may pass, during which publicly available information may be incorporated into these example implementations to adjust the critical indicator, performance indications, and/or recommendations to calibrate for such information.
  • Legal or regulatory blockages, such as antitrust or export control, may stall or block the merger, such that the presence of certain terms in publicly available information associated with the merger can be used to adjust the forecasting and recommendation.
  • the predictive aspect of the tool may provide a forecast for performance after the merger, based on similar patterns that were used to train the model as to how to characterize, process and generate an output prediction for such models.
  • the change in pandemic status of the coronavirus from a more severe situation to a less severe situation may influence consumer preference to return to in person dining.
  • the inclusion of such information may be more sensitive for certain industries or companies.
  • the industries of travel, leisure, dining or others may be more sensitive as compared with other industries; the present example implementations are capable of performing data stabilization to account for such changes.
  • the data may lack the necessary accuracy to determine the critical indicator with a sufficiently high degree of confidence; in such cases, techniques such as clustering may be applied to better represent the user population.
  • FIG. 8 illustrates a process for generating and updating information to the watch list, in accordance with the example implementations.
  • the type of critical indicator as well as the score are continuously updated based on data inputs, on a real-time basis. For example, in the first row revenue is listed as a critical indicator type. However, if there are changes in the incoming data associated with the company, the example implementations may change the type of critical indicator from revenue to another critical indicator, such as same-store sales, customer base, or other critical indicator.
  • inputs are provided into the architecture described herein.
  • One pipeline may be associated with the signal for the revenues, and another may be associated with consumer spending, or indicators of company performance.
  • real-time data events from alternative data sources are provided.
  • Each of the pipelines is triggered by receiving new data that is relevant to the model.
  • the input of new data from alternate data sources is described as explained above.
  • These real-time data events are processed according to the example implementations with respect to the architecture to generate a signal based on the current data.
  • an updated signal is provided at 803 .
  • the new data that was received at 801 is provided to the model of the example implementations. Accordingly, a prediction is generated as described above, and is provided as an output signal.
  • a strength of the updated signal is evaluated.
  • the output signal which may be a prediction of the potential critical indicator
  • the updated signal may be a predicted revenue for a company, which is compared with a prediction generated by one or more analysts, or an analyst consensus, from available information.
  • a signal strength vector is generated, which is indicative of a relative degree of closeness between the updated signal and the benchmark signal.
  • the strength of the updated signal is classified. More specifically, the signal strength vector generated at 805 is classified into a recommendation.
  • the recommendation may be a one-dimensional actionable recommendation, such as hold, buy or sell.
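  • A minimal Python sketch of the flow described above, comparing the updated signal with a benchmark signal, forming a signal-strength value, and classifying it into a one-dimensional recommendation, is shown below; the thresholds and example figures are assumptions for illustration.

        def signal_strength(updated: float, benchmark: float) -> float:
            """Relative distance between the updated signal and the benchmark signal."""
            return (updated - benchmark) / abs(benchmark)

        def classify(strength: float) -> str:
            # Classify the strength into a one-dimensional actionable recommendation.
            if strength > 0.05:
                return "buy"
            if strength < -0.05:
                return "sell"
            return "hold"

        predicted_revenue = 1.062e9    # model's updated signal (803)
        analyst_consensus = 1.000e9    # benchmark signal
        s = signal_strength(predicted_revenue, analyst_consensus)   # strength evaluation (805)
        print(s, classify(s))  # 0.062 -> "buy"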
  • the classification associated with the strength of signal is used to generate any updates, which are subsequently provided and displayed to the user in the user experience.
  • the user interface such as the web interface, a mobile interface, or a pushed alert may be updated with the results of the signal strength classification, as well as the watchlist update.
  • the user may be provided with a change in the critical indicator type, as well as a change in the recommendation, depending on which of the pipelines has the strongest vector.
  • plural pipelines may be blended or weighted based on the signal strength, to provide the blended result, with the critical indicator being listed as the most heavily weighted indicator.
  • plural indicators may all be listed, along with plural outputs of the results, followed by a recommendation based on a matrix operation or multiplexing of the strength vector, weighted or unweighted, associated with each of the pipelines for each of the signals for each of the companies.
  • the user may be provided with real-time changes in critical indicator, performance and recommendation that are actionable.
  • the recommendations may also be provided in the detailed analysis of a company or in a chain of alerts, and may be used for the recommendation or for an actual action being taken automatically.
  • external data fetching may be performed by copying data from an external third party (e.g., vendor), and storing the data in a cloud storage container.
  • the data fetching process may be managed by a scheduling server, and/or a serverless compute service that executes operations to manage the external data storage and the associated compute resources.
  • the extraction, transformation and loading of data as described herein may be executed by a batch management processor or service.
  • Batch computing is the execution of a series of executable instructions (“jobs”) on one or more processors without manual intervention by a user, e.g., automatically.
  • Input parameters may be pre-defined through scripts, command-line arguments, control files, or job control language.
  • a batch job may be associated with completion of preceding jobs, or the availability of certain inputs. Thus, the sequencing and scheduling of multiple jobs is critical.
  • batch processing may be performed without interactive processing.
  • the batch management processor or service may permit a user to create a job queue and a job definition, and then to execute the job definition and review the results.
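  • As an illustration of the job sequencing and scheduling described above, the following minimal Python sketch runs a set of hypothetical, dependent ETL jobs in dependency order; the job names and the simple runner are assumptions for illustration and do not describe the batch service itself.

        # Each job runs only after the jobs it depends on have completed.
        from graphlib import TopologicalSorter

        jobs = {
            "fetch_external_data": set(),
            "normalize": {"fetch_external_data"},
            "deduplicate": {"normalize"},
            "classify": {"deduplicate"},
            "build_forecast": {"classify"},
        }

        def run(job_name: str) -> None:
            print(f"running {job_name}")  # placeholder for the real job body

        for job in TopologicalSorter(jobs).static_order():
            run(job)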
  • a batch cluster includes 256 CPUs, and an ETL-dedicated server having 64 cores and 312 GB of RAM. The number of running instances may be 1.
  • an API is provided for data access.
  • the REST API which conforms to a REST style architecture and allows for interaction with RESTful resources, may be executed on a service.
  • the service may include, but is not limited to, hardware such as 1 vCPU, 2 GB RAM, 10 GB SSD disk, and a minimum of two running instances.
  • the API may be exposed to the Internet via an application load balancer, which is elastic and permits configuration and routing of incoming end-user traffic to cloud-based applications, optionally distributing traffic across multiple targets in multiple availability zones.
  • the caching layer may be provided by a fast content delivery network (CDN) service, which may securely deliver the data described herein with low latency and high transfer speeds.
  • containers may be run without having to manage servers or clusters of instances, such that there is no need to provision, configure, or scale clusters on virtual machines to execute operations associated with containers.
  • any of the companies shown in the foregoing user experience of FIG. 7 may be selected for providing a further summary.
  • the fourth row is selected, and further information associated with the forecast is provided.
  • a recommendation 905 is provided, in this case “strong buy”, based on an increase in the revenue.
  • Further details 903 are provided to the user, such as the relative impact of revenue on the stock price (e.g., correlation between revenue and stock price), and the prediction accuracy for the stock associated with the example implementations.
  • Information on the forecast based on the analyst expectation is provided, in comparison to the forecast based on the real data measurement, as well as the score associated with the performance that is based on the real data measurement.
  • Charts 907 are provided showing a comparison of the revenue based on real data measurement with the expected revenues, which may be based on published information that was provided by the company. Additionally, information on the stock price is also provided.
  • FIG. 10 illustrates a process 1000 for generating an output that visualizes a difference between expected values and actual values according to the example implementations.
  • measured sales are provided.
  • the measured sales may include company sales that are calculated, such as in the second phase 303 as shown in FIG. 3 , and disclosed above.
  • expected sales are provided.
  • the expected sales may be based on analyst prediction, analyst consensus, industry publication, or other available information associated with the providing of expected sales information, as opposed to actual sales information.
  • the measured sales are aggregated, based on time series, into intervals that are comparable with the interval associated with the expected sales signal. For example, but not by way of limitation, if the expected sales signal has a known time interval such as hourly, daily, weekly, quarterly, etc., the measured sales are aggregated into comparable time periods.
  • a scaling operation is performed. More specifically, because the measured sales provide information from a sample of the population, the measured sales need to be scaled. For example, but not by way of limitation, the scaling may involve extrapolating from the sample to a representative population, and similarly extrapolating the aggregated sales of the sample to that representative population.
  • the periods associated with the scaled, aggregated measured sales from 1001 , 1005 and 1007 are aligned with the period of the expected sales of 1003 . Accordingly, the aggregated, scaled measured sales are joined with the expected sales, by time interval.
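  • The aggregation, scaling, and joining operations of 1005 - 1009 could be sketched as follows with pandas; the weekly interval, scale factor, and sample figures are assumptions for illustration only.

        import pandas as pd

        SCALE = 75.0  # hypothetical panel-to-population scaling factor

        measured = pd.DataFrame({
            "timestamp": pd.to_datetime(["2021-01-04", "2021-01-05", "2021-01-12", "2021-01-19"]),
            "panel_sales": [120.0, 80.0, 210.0, 190.0],
        })
        expected = pd.DataFrame({
            "week": pd.to_datetime(["2021-01-04", "2021-01-11", "2021-01-18"]),
            "expected_sales": [15000.0, 16000.0, 15500.0],
        })

        # 1005: aggregate measured sales to the weekly interval of the expected-sales signal.
        measured["week"] = measured["timestamp"].dt.to_period("W-SUN").dt.start_time
        weekly = measured.groupby("week", as_index=False)["panel_sales"].sum()

        # 1007: scale the panel sample up to a representative population.
        weekly["measured_sales"] = weekly["panel_sales"] * SCALE

        # 1009: align periods and join measured with expected sales by time interval.
        joined = expected.merge(weekly[["week", "measured_sales"]], on="week", how="left")
        print(joined)  # ready for the bar/line visualization of 1011-1017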
  • a visualization operation is performed. Accordingly, an output may be provided to a user in the form of a bar plot 1013 , a line plot 1015 or any other visualization technique 1017 , as would be understood by those skilled in the art.
  • Visualization techniques are not limited to a single approach, and different visualization techniques may be combined, appended, mixed or blended. Further, instead of a graphic visualization approach, a text output may be provided in the form of a narrative chart that simply displays the result.
  • FIG. 11 illustrates an example graphical visualization associated with the example implementations described above. More specifically, at 1100 , a chart is provided that displays sales revenue on a weekly basis over a 12-week period. At 1101 , the aggregated information of the measured sales, which has been aligned to a weekly period, is displayed. Additionally, at 1103 , the expected sales, provided on a weekly interval, are displayed. Accordingly, a user may be provided with a display that illustrates a difference between the measured sales as associated with the example implementations herein, and the expected sales, as provided by analysts, analyst consensus, etc.
  • the charts 907 include a revenue comparison.
  • the user can visualize, and understand by looking at the detailed information 903 , that the measured revenue is about 6.2% higher than the forecast based on the expectation by the analysts.
  • the example implementations provide a determination that the entity is over performing as compared with the analyst expectation.
  • the example implementations generate a recommendation of “strong buy” at 905 , in the manner explained above.
  • alerts are provided to the user.
  • the selected alerts are listed in rows, divided into the alerts specific to the user 1201 and an aggregated list of the most popular alerts 1213 across all of the users.
  • a column is provided for the name and stock symbol of the company, as well as the type 1203 of information that is the basis for the alert, the condition 1205 that triggers the alert, and the frequency 1207 of the alert to be provided.
  • the user is provided with a tool 1209 to set the alert to active or inactive, a tool 1211 to edit the alert, and a tool to delete the alert.
  • the alert may be based on the performance being greater than 5%.
  • plural alerts are provided to be associated with a single company, in this case the price moving above 5% and the score having a value of “over performing”.
  • a time-based alert may be provided, such as the user being provided with an alert 10 days before an earnings call, as shown with respect to the third row. Accordingly, the user is provided with a chain of alerts that may be triggered by one or more conditions associated with a company.
  • users may be able to view the most popular alerts 1213 that are being generated across the user base, as well as the number of users 1215 that are generating and using those alerts. Detailed information such as type, condition and frequency of each of those popular alerts is provided. The user is provided with an option to add those alerts to their own user alert chain as shown by the “+” symbol 1217 .
  • FIG. 13 illustrates operations 1300 associated with the processing of the chain of alerts.
  • a type, condition, performance, price and score can be processed as data, to determine whether to provide the user with an alert. Further, the user may view and consider the alerts being used by the overall community of users, to benefit from alerts generated by others. According to the example implementations, the user may subscribe to such a rule, and thus be alerted in response to changes in the signals described above.
  • a signal update is performed. More specifically, as explained above with respect to FIG. 8 at 803 , signal updating is performed.
  • output of the watchlist may be provided, including but not limited to the signal strength vector and the signal strength classification.
  • the output of 1301 , such as the absolute values of the signal or the relative values of the signal as compared with the previous signal update, is applied to the deterministic rules. For example, but not by way of limitation, if a revenue has increased by 2% as compared with the previous signal update, and the user has set a rule to provide an alert when the revenue has increased by 2% or more for a given company, the rule may be triggered. Similar or other deterministic rules, as disclosed above with respect to FIG. 12 , are processed in real time as the updated information of 1301 is provided. Accordingly, the probabilistic output of the watchlist is applied to the deterministic rule base of the user, and an alert is either triggered or not triggered with each real-time update.
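  • A minimal Python sketch of applying a user's deterministic rule base to a real-time signal update is shown below; the rule structure, field names, and figures are assumptions for illustration.

        def evaluate_rules(rules, company, metric, new_value, previous_value):
            """rules: list of dicts like {'company': 'XYZ', 'metric': 'revenue', 'min_change': 0.02}."""
            change = (new_value - previous_value) / abs(previous_value)
            # Return every rule that this signal update triggers.
            return [r for r in rules
                    if r["company"] == company and r["metric"] == metric and change >= r["min_change"]]

        user_rules = [{"company": "XYZ", "metric": "revenue", "min_change": 0.02}]
        fired = evaluate_rules(user_rules, "XYZ", "revenue", new_value=1.02e9, previous_value=1.00e9)
        print(bool(fired))  # True: the 2%-revenue-increase rule is triggered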
  • a notification generation operation is performed. More specifically, based on the user's rule base, a custom notification payload is generated for the user. For example, but not by way of limitation, if a user has determined that, for a revenue that has increased by 2% or more compared to the previous signal update, the user should receive an alert to sell a stock, such a notification payload is generated.
  • the notification is distributed to the user.
  • the distribution channel may be set to one or more modes, including but not limited to mobile application notification, email, short message service, or other communications means. Accordingly, a notification is pushed to a user in the manner desired by the user, containing the recommendation associated with the rule, which is triggered based on the real-time signal update provided to the rule base.
  • example implementations may provide a recommendation, in real time, based on the real-time measurement of data collection, price checking and calculation. In addition, the example implementations are not limited to recommendations.
  • the foregoing example implementations may be integrated with approaches that provide for automatic trading, such that the user need not provide input for and execution of a decision.
  • FIG. 14 illustrates a process 1400 associated with decision execution according to the example implementations. More specifically, instead of the user receiving and reviewing the signal classification and signal strength vector to receive an alert, an automatic system is provided that applies artificial intelligence techniques such as machine learning in a neural network to convert the signal classification and signal strength vector into a decision signal that can be submitted for execution of the decision. For example but not by way of limitation, the signal may be converted to a trading order for a brokerage service.
  • measured data is received from a plurality of alternate data sources, and processed on a real-time, automatic basis to provide real-time updates for the critical indicators.
  • the updates are input into a trading model. For example, but not by way of limitation, the trading model may include an artificial intelligence approach, such as machine learning in a neural network, to provide trading decision-making.
  • the trading model may be a neural network that is trained on historical stock prices and a history of signal updates associated with the measured signal according to the example implementations.
  • the input to the trading model is the signal data, and the current state of the user account. For example, if the user has open or pending orders associated with a purchase or sale, the model takes this information into consideration.
  • the trading model 1403 provides a decision or an action, instead of an alert or a recommendation as is done in other example implementations.
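  • An illustrative Python sketch of step 1403 follows: a trading "model" that turns the signal update and the current account state into an action rather than an alert. In the disclosure this may be a trained neural network; a rule-based placeholder and the example account fields are assumptions used only to make the inputs and output concrete.

        def trading_decision(signal_strength: float, account: dict) -> str:
            """account: e.g., {'position': 100, 'pending_orders': ['BUY 50 XYZ']}"""
            if account["pending_orders"]:
                return "WAIT"              # do not stack orders on top of pending ones
            if signal_strength > 0.05:
                return "BUY"
            if signal_strength < -0.05 and account["position"] > 0:
                return "SELL"
            return "HOLD"

        print(trading_decision(0.062, {"position": 0, "pending_orders": []}))  # BUY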
  • an order is created.
  • the order may be created by submitting an automated request to a seller, including but not limited to a brokerage service, and optionally confirming the order with the user.
  • this may optionally be provided by a notification distribution channel, as explained above with respect to FIG. 13 .
  • the system performs an operation to determine whether user review is required. If no user review is required, the order is executed at 1409 .
  • the order created at 1405 may be executed with a seller, such as a brokerage service.
  • if user review is required, the confirmation request is sent at 1411 , as explained above, through a notification distribution channel. If the user confirms the order at 1411 , the order is executed at 1409 . If the user does not confirm the order at 1411 , the order is considered to be rejected, and is cancelled at 1413 .
  • an operation is performed to update the user portfolio, so as to indicate that the order has been executed.
  • the user may be provided with a report via the communication or distribution channels explained above, to confirm that the order was executed, to provide an update of the portfolio, and/or to remind the user of any pending or open orders.
  • the foregoing example implementation of the automatic trading aspects is performed continuously and in real time.
  • plural orders may be simultaneously executed, or may be in different stages of execution.
  • the user may decide to execute some orders, reject other orders, and keep still other orders pending.
  • when confirmation is not required by the user, automatic execution may continue without providing the user with any review prior to the execution; the notification of pending orders or the current portfolio may be provided immediately after the execution.
  • even in an implementation where review is not required by the user, review may nonetheless be required under certain conditions, such as when the size of an order exceeds a prescribed individual or periodic purchase amount.
  • the foregoing example implementations automatically perform a determination of the critical indicator type, provide a calculation of performance, and generate a score, for each of the listed companies.
  • determination of the leading indicator and comparison of performance are provided on a real-time basis.
  • a rule-based, deterministic approach is blended with a probabilistic approach, such as the artificial intelligence neural network approaches described herein.
  • the architecture may be executed on a big data server, such as in a server farm or in the cloud. Accordingly, the signals are processed as they are received, in real-time, in contrast to related art approaches, which perform batch processing. Further, cloud computing may also be used.
  • the signal processing associated with the example implementations may be performed on a standalone machine. Because the present example implementations are able to provide real-time information, a user may be able to receive the recommendation and execute a decision in a timely manner that provides for a significant impact on the return on investment. In contrast, related art approaches do not take into consideration the processing of the real-time measured data, and instead only use expected data.
  • FIGS. 15-16 illustrate such example implementations associated with a mobile device.
  • the foregoing example implementations may also be used to provide users with a discount for goods and/or services associated with a company.
  • the discount may take the form of a rebate, a reward, a price reduction or other form of benefit or compensation to a user.
  • the discount may be provided to a user based on the amount or number of transactions by the user on the account of the company shown in the example user experience, for which the example implementations are performing the foregoing operations on the foregoing structures.
  • a feedback loop may be provided. More specifically, a user may provide input on the outcomes and recommendations of the model, which may be used to calibrate the model. For example, but not by way of limitation, if, based on the critical parameter or indicator, combined with a performance, a recommendation has been set with which the user disagrees, the user may provide a suggestion for feedback into the system. The user may suggest that the critical indicator is not appropriate, and instead suggest another critical indicator, as well as being able to suggest that the recommendation is not a desirable recommendation. According to one example implementation, a user may indicate that an appropriate critical indicator of social media or an online service may not be user engagement or number of users alone, but may instead be related to online advertising, clicks or some other parameter. This feedback may be used by the system to retrain or recalibrate the forecasting tools, so as to adjust according to consumer preferences and demands.
  • an automated trading tool may be provided that implements certain recommendations under certain circumstances.
  • the instruction from the user in the user preferences may indicate that a particular stock should be bought or sold when one or more of the conditions in the alert chain have been met.
  • the user may create a completely different and separate alert chain for automatically trading, as well as for providing alerts.
  • sales and purchases may be executed by the user automatically, thus avoiding any delay, and immediately providing such instruction.
  • Such a system may be valuable during certain use cases.
  • the automated trading tool could be valuable when the user is traveling or on vacation, during peak periods of activity or intense financial news, such that the user cannot quickly or in a timely manner act on alerts or recommendations, or simply for user convenience.
  • other publicly available data associated with a company may also be used.
  • public facing announcements, social media posts, public presentations, or other information may be sensed or detected for leaders of a company associated with one or more social media accounts, industry events, news releases or publications, or other information. This information may be combined into the sentiment analysis.
  • the example implementations may have various advantages and/or benefits.
  • the example implementations may provide a way of determining performance automatically and in real time.
  • Related art approaches may conduct manual research and generate performance indications manually over a period of many hours or many days, and an investment advisor may manually determine a score.
  • the related art approaches do not have any way of taking disparate data from different data sources, performing operations on the data such as normalization, de-duplication, classification and optionally others, applying the refined data to a forecasting tool, and generating a forecast or recommendation based on a determination that a particular indicator is critical, all automatically and all in real time.
  • a panel may be selected according to the following selection process.
  • An optimal panel to be used for forecast may be generated. More specifically, the selection process includes performing a filter operation on the candidate users for the panel.
  • the filter operation may be performed by the application of a filter, so as to generate a panel meeting one or more criteria.
  • the criteria may include, but are not limited to, the panel including users that are representative of a population (e.g., US population) for their geolocation, with a substantially stable number of purchases from a starting date to an ending date (e.g., 2011 to the current date), with a consistent number of transactions.
  • outliers and duplicates may also be removed.
  • a selection process 1700 may be performed. Initially, the pool of candidate users is a fragmented panel consisting of 60 million users at 1701 . After a first filter process performs the filtering operation such that the pool of candidate users is representative of the population for their geolocation, the filtered pool is narrowed to 15 million users at 1703 . Subsequently, another filter operation is performed to confirm a stable number of purchases over a time interval, further narrowing the pool of candidate users to 4 million users at 1705 . At 1707 , outliers and duplicates may be removed, and a filter operation may be performed for a consistent number of transactions, to produce an optimal panel of 1.5 million users.
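  • The selection process 1700 could be sketched as a sequence of filters over a candidate-user table, as in the following Python example; the column names, thresholds, and sample rows are assumptions for illustration only.

        import pandas as pd

        # Hypothetical candidate pool with the attributes used by the filters.
        candidates = pd.DataFrame({
            "user_id":            [1, 1, 2, 3, 4],
            "geo_weight":         [1.0, 1.0, 0.5, 1.1, 0.9],   # 1.0 = matches the census share
            "purchase_stability": [0.95, 0.95, 0.97, 0.40, 0.92],
            "monthly_txn_count":  [40, 40, 35, 20, 2000],
        })

        def select_panel(users: pd.DataFrame) -> pd.DataFrame:
            users = users[users["geo_weight"].between(0.8, 1.2)]        # 1701 -> 1703: representative of geolocation
            users = users[users["purchase_stability"] >= 0.9]           # 1703 -> 1705: stable number of purchases
            users = users.drop_duplicates(subset="user_id")             # 1705 -> 1707: remove duplicates/outliers
            users = users[users["monthly_txn_count"].between(5, 500)]   # consistent number of transactions
            return users

        print(select_panel(candidates)["user_id"].tolist())  # [1]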
  • FIG. 18 illustrates a data pipeline according to an example implementation.
  • data normalization is performed. Data that is received from different institutions may have different structures. In order to properly use the data from the different institutions, the data must be normalized with a fixed format. More specifically, the data is transformed from a text format into a distributed database that is designed to operate on top of the transactions.
  • a data cleaning operation is performed. More specifically, the data that is not required or desired for the data processing pipeline 1800 is removed. For example but not by way of limitation, duplicated transactions, duplicated accounts and duplicated users are identified and removed. Further, incomplete data, such as incomplete transactions with missing data, are also removed from the data. Accordingly, and as explained above with respect to panel selection, the base of users that follows the selection process and filter operation is maintained in the user database, and the outliers and duplicates are removed.
  • a classification operation 1805 is performed.
  • each of the transactions in the database is classified.
  • the classification is performed by analyzing the description of the transaction, and associating a correct merchant name with the description.
  • the quality of the association directly and critically influences the quality of the correlations and predictions that will be described further below. More specifically, to reach a desired accuracy, such as close to 100%, automated machine learning algorithms are combined with manual human controls.
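  • The classification of transaction descriptions to merchants could be sketched as follows in Python; here simple keyword rules stand in for the combination of machine-learning classifiers and manual human controls described above, and the patterns and merchant names are illustrative only.

        import re

        MERCHANT_PATTERNS = {
            "Starbucks": re.compile(r"\bSTARBUCKS\b", re.I),
            "McDonald's": re.compile(r"\bMCDONALD'?S\b", re.I),
            "Amazon": re.compile(r"\b(AMZN|AMAZON)\b", re.I),
        }

        def tag_transaction(description: str) -> str:
            # Associate a merchant name with the transaction description.
            for merchant, pattern in MERCHANT_PATTERNS.items():
                if pattern.search(description):
                    return merchant
            return "NEEDS_MANUAL_REVIEW"   # routed to human control to keep accuracy near 100%

        print(tag_transaction("POS PURCHASE STARBUCKS #1234 SEATTLE WA"))  # Starbucks
        print(tag_transaction("ACH PAYMENT UNKNOWN VENDOR 42"))            # NEEDS_MANUAL_REVIEW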
  • an example extract is provided.
  • an average monthly volume of processing may be more than 200 million transactions; each of those transactions must be associated with a correct company.
  • FIG. 19 illustrates the example extract. As can be seen in the example extract 1900 , the name of the merchant is found in each of the transactions.
  • a modeling operation 1807 is performed. More specifically, the modeling operation 1807 generates forecasts. As inputs, the modeling operation 1807 applies data output from the classification 1805 , as well as third-party input, for example Bloomberg data 1809 . In the modeling operation 1807 , many forecasts are combined, to obtain an optimal result. The modeling operation 1807 is specialized to a category associated with the company. Further, the modeling operation 1807 compensates for various bias factors, such as seasonality and the like.
  • FIG. 20 illustrates an example implementation 2000 associated with the modeling operation 1807 . More specifically, a plurality of sets of panels 2001 , subject to the foregoing operations of the data pipeline 1800 , are provided to the plurality of corresponding sets of forecasts 2003 ; outputs of the sets of forecasts 2003 are provided to plural assemblers 2005 , to generate a final forecast 2007 .
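  • A minimal Python sketch of the structure in FIG. 20 is shown below: each panel feeds a set of forecasts, an assembler combines each panel's forecasts, and a final forecast blends the assembler outputs. The simple averaging and sample numbers are assumptions for illustration; the disclosure leaves the combination method open.

        from statistics import mean

        def assemble(forecasts: list[float]) -> float:
            # One assembler 2005 per panel: combine that panel's forecasts.
            return mean(forecasts)

        def final_forecast(panel_forecasts: list[list[float]]) -> float:
            # Blend the assembler outputs into the final forecast 2007.
            return mean(assemble(f) for f in panel_forecasts)

        panels = [
            [101.0, 99.5, 100.8],   # forecasts 2003 built from panel A
            [98.0, 102.5],          # forecasts built from panel B
        ]
        print(final_forecast(panels))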
  • different categories of companies may require different algorithms to generate the forecasts.
  • the algorithms must incorporate the different behavior of consumers for the different structures of revenue within a company.
  • Some of the categories may include, but are not limited to, fully owned restaurants, franchise restaurants, supermarket chains, insurance, and other companies.
  • the modeling operation 1807 also provides for bias correction.
  • the panel data may include historical data of several years, from thousands of users that shared their data in exchange for a token or reward. Such an approach to obtaining the data allowed for a more complete understanding of the bias impacted by the panels that are not randomized, and algorithms for the correction of the bias.
  • FIG. 21 illustrates a user experience 2100 associated with obtaining the information.
  • a user is provided with an input screen to input data and enter a referral code, as well as a number of points that may be associated with completing the survey.
  • the completion of the survey by the user results in the number of points being increased, and an option for selection of a bank where the points may be deposited, as well as a privacy statement.
  • To perform bias correction, historical data points are analyzed to determine how the bias behaves across time. According to one example implementation, 9 algorithms were created for adjusting the bias in the data. Then, those 9 algorithms were combined with three other algorithms that are used for tickers, and that are less impacted by the bias.
  • an output of the modeling is provided to make predictions. More specifically, the data is aggregated and inserted into a database. The data in the database can be accessed and used by other roles, such as traders, and further retrieved for validation, back testing, etc.
  • the data pipeline 1800 includes anomaly detection 1813 - 1819 . More specifically, an output of each of the elements of the data pipeline is subject to anomaly detection, to identify anomalies that may compromise final predictions. As an example of the anomaly detection 1813 - 1819 , dataflow anomalies in the pipeline chain may examine technical anomalies as well as data anomalies.
  • anomalies in the behavior of a company for which forecasting is being performed may be analyzed and provided, including, but not limited to the following:
  • a different duration of a quarter such as changing from 90 days to 97 days
  • representativeness is considered. More specifically, according to a specific example, in 2019 the US population was about 330 million individuals, with the average family size being 3.14 members, such that there are roughly 105 million families, with some studies indicating the number to be as high as 128 million families. In the example implementations, the ratio between the number of US households and the best panel is roughly 70 to 85. Further, the ratio between the declared revenues of companies and the total amount of purchases in the panel is between about 70 and 90. This ratio is consistent with the expected value, and is consistent across different companies, such that the proportion of consumers is properly maintained in the panels according to example implementations.
  • FIGS. 22A and 22B illustrate various examples of comparison with data.
  • FIG. 22A includes franchising examples
  • FIG. 22B includes examples of companies with a history of acquisitions, or anomalies.
  • FIG. 23 illustrates a comparison between results according to the example implementations and the related art approaches. As can be seen in the column indicated as “DF” for the example implementations, and “1010 Data” for the related art approaches, a substantial difference in the forecasting results shows substantially improved performance for the example implementations.
  • FIG. 24 illustrates the technical basis according to statistical methods for the determination of the panel size, and the measurement of the real statistical error.
  • the Error % in the forecast is proportional to the sample standard deviation of the purchases, and inversely proportional to the square root of the number of purchases and to the average purchase amount.
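  • Read literally, the stated proportionality corresponds to the standard relative error of a sample mean; one way to write it (an interpretation of the prose above, not a formula reproduced from the disclosure) is, in LaTeX:

        \text{Error}\,\% \;\propto\; \frac{s}{\bar{x}\,\sqrt{n}}

    where s is the sample standard deviation of the purchase amounts, \bar{x} is the average purchase amount, and n is the number of purchases in the panel.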
  • FIG. 25 shows an example environment suitable for some example implementations.
  • Environment 2500 includes devices 2510 - 2555 , and each is communicatively connected to at least one other device via, for example, network 2560 (e.g., by wired and/or wireless connections). Some devices may be communicatively connected to one or more storage devices 2540 and 2545 .
  • An example of one or more of devices 2510 - 2555 may be computing device 2605 described below with respect to FIG. 26 .
  • Devices 2510 - 2555 may include, but are not limited to, a computer 2510 (e.g., a laptop computing device) having a monitor, a mobile device 2515 (e.g., smartphone or tablet), a television 2520 , a device associated with a vehicle 2525 , a server computer 2530 , computing devices 2535 and 2550 , storage devices 2540 and 2545 , and smart watch or other smart device 2555 .
  • devices 2510 - 2525 and 2555 may be considered user devices associated with the users of the enterprise.
  • Devices 2530 - 2550 may be devices associated with service providers (e.g., used by the external host to provide services as described above and with respect to the collecting and storing data).
  • FIG. 26 shows an example computing environment with an example computing device suitable for implementing at least one example embodiment.
  • Computing device 2605 in computing environment 2600 can include one or more processing units, cores, or processors 2610 , memory 2615 (e.g., RAM, ROM, and/or the like), internal storage 2620 (e.g., magnetic, optical, solid state storage, and/or organic), and I/O interface 2625 , all of which can be coupled on a communication mechanism or bus 2630 for communicating information.
  • Processors 2610 can be general purpose processors (CPUs) and/or special purpose processors (e.g., digital signal processors (DSPs), graphics processing units (GPUs), and others).
  • computing environment 2600 may include one or more devices used as analog-to-digital converters, digital-to-analog converters, and/or radio frequency handlers.
  • Computing device 2605 can be communicatively coupled to input/user interface 2635 and output device/interface 2640 .
  • Either one or both of input/user interface 2635 and output device/interface 2640 can be a wired or wireless interface and can be detachable.
  • Input/user interface 2635 may include any device, component, sensor, or interface, physical or virtual, which can be used to provide input (e.g., keyboard, a pointing/cursor control, microphone, camera, Braille, motion sensor, optical reader, and/or the like).
  • Output device/interface 2640 may include a display, monitor, printer, speaker, braille, or the like.
  • input/user interface 2635 and output device/interface 2640 can be embedded with or physically coupled to computing device 2605 (e.g., a mobile computing device with buttons or touch-screen input/user interface and an output or printing display, or a television).
  • Computing device 2605 can be communicatively coupled to external storage 2645 and network 2650 for communicating with any number of networked components, devices, and systems, including one or more computing devices of the same or different configuration.
  • Computing device 2605 or any connected computing device can be functioning as, providing services of, or referred to as a server, client, thin server, general machine, special-purpose machine, or another label.
  • I/O interface 2625 can include, but is not limited to, wired and/or wireless interfaces using any communication or I/O protocols or standards (e.g., Ethernet, 802.11x, Universal System Bus, WiMax, modem, a cellular network protocol, and the like) for communicating information to and/or from at least all the connected components, devices, and network in computing environment 2600 .
  • Network 2650 can be any network or combination of networks (e.g., the Internet, local area network, wide area network, a telephonic network, a cellular network, satellite network, and the like).
  • Computing device 2605 can use and/or communicate using computer-usable or computer-readable media, including transitory media and non-transitory media.
  • Transitory media include transmission media (e.g., metal cables, fiber optics), signals, carrier waves, and the like.
  • Non-transitory media include magnetic media (e.g., disks and tapes), optical media (e.g., CD ROM, digital video disks, Blu-ray disks), solid state media (e.g., RAM, ROM, flash memory, solid-state storage), and other non-volatile storage or memory.
  • Computing device 2605 can be used to implement techniques, methods, applications, processes, or computer-executable instructions to implement at least one embodiment (e.g., a described embodiment).
  • Computer-executable instructions can be retrieved from transitory media, and stored on and retrieved from non-transitory media.
  • the executable instructions can be originated from one or more of any programming, scripting, and machine languages (e.g., C, C++, C#, Java, Visual Basic, Python, Perl, JavaScript, and others).
  • Processor(s) 2610 can execute under any operating system (OS) (not shown), in a native or virtual environment.
  • one or more applications can be deployed that include logic unit 2655 , application programming interface (API) unit 2660 , input unit 2665 , output unit 2670 , service processing unit 2690 , and inter-unit communication mechanism 2695 for the different units to communicate with each other, with the OS, and with other applications (not shown).
  • alternate data processing unit 2675 , tagging unit 2680 , and modeling/forecasting unit 2685 may implement one or more processes described above.
  • the described units and elements can be varied in design, function, configuration, or implementation and are not limited to the descriptions provided.
  • API unit 2660 when information or an execution instruction is received by API unit 2660 , it may be communicated to one or more other units (e.g., logic unit 2655 , input unit 2665 , output unit 2670 , service processing unit 2690 ).
  • input unit 2665 may use API unit 2660 to connect with other data sources so that the service processing unit 2690 can process the information.
  • Service processing unit 2690 performs the filtering of panelists, the filtering and cleaning/normalizing of data, and generation of the results, as explained above.
  • logic unit 2655 may be configured to control the information flow among the units and direct the services provided by API unit 2660 , input unit 2665 , output unit 2670 , alternate data processing unit 2675 , tagging unit 2680 , and modeling/forecasting unit 2685 in order to implement an embodiment described above.
  • the flow of one or more processes or implementations may be controlled by logic unit 2655 alone or in conjunction with API unit 2660 .

Abstract

A system for processing data to generate an output includes an automated tagging system that receives data from a plurality of alternate data providers, each of the plurality of data providers having different types of data; a company financial data unit that provides published information comprising annual reports, press releases, information from the social media of spokespersons or executives, and published pricing information; a modelling system that receives the standardized data of the automated tagging system and the company financial data, the modelling system including one or more handle generators and forecast builders that are applied to the artificial intelligence-based system that employs the neural networks to generate, as an output, a forecast; and a revenue prediction unit that receives the output as the forecast, comprising one or more revenue predictions, to generate a final output as a revenue prediction.

Description

    PRIORITY CLAIM TO RELATED APPLICATIONS
  • This application claims priority under 35 USC § 119(e) to U.S. Provisional Application Nos. 63/021,550 and 63/171,967, filed on May 7, 2020 and Apr. 7, 2021, respectively, the contents of which are incorporated herein by reference in their entirety.
  • BACKGROUND Field
  • The present disclosure relates to systems and methods for measuring data (e.g., revenue) and determining present and future trends, and providing a recommendation based on the measured data and determined trends.
  • Related Art
  • In related art systems, data analysts typically study and analyze trends in economic data manually. For example, these analysts may study data in order to determine when to buy or sell a stock manually, or what the unemployment rate looks like at any given period.
  • However, the data used to determine these metrics may be unreliable due to the lack of availability of such proprietary information. For example, data which should not necessarily be correlated with a particular economic metric may not have been filtered out when analyzing the data, thereby introducing inaccuracies within the analysis.
  • Thus, there is a need for reliable data, and reliable methods for analyzing the data to determine present and future trends in this data.
  • SUMMARY OF THE DISCLOSURE
  • Aspects of the present application may include a server apparatus. The server apparatus may include means for storing collected data and historical data, and means for executing operations to achieve the functions associated with the features disclosed in the detailed description.
  • According to an aspect, a system is provided for processing data to generate an output, the system comprising: an automated tagging system that receives data from a plurality of alternate data providers, each of the plurality of data providers having different types of data; wherein the automated tagging system standardizes the different types of data received from the different data providers by performing filtering, de-duplication, normalization, and classification; wherein the automated tagging system performs updating and feedback by providing an artificial intelligence system that employs neural networks that use back propagation for periodic updates to a training phase; a company financial data unit that provides published information comprising annual reports, press releases, information from the social media of spokespersons or executives, and published pricing information; a modelling system that receives the standardized data of the automated tagging system and the company financial data, the modelling system including one or more handle generators and forecast builders that are applied to the artificial intelligence-based system that employs the neural networks to generate, as an output, a forecast; a revenue prediction unit that receives the output as the forecast, comprising one or more revenue predictions, to generate a final output as a revenue prediction.
  • According to the above aspect, the different types of data comprise financial transaction information, location information, or consumer behavior information.
  • Further, the input to the modelling system includes proprietary data associated with the merger and acquisition information of the company, and a fiscal quarters calendar of the company.
  • Yet further, the final output comprises a recommendation to buy, hold or sell, as a transaction, a score, or an execution instruction associated with the transaction without involving the user.
  • Additionally, a rule-based or other deterministic approach is combined with the neural network.
  • According to another aspect, a system is provided for processing data to generate an output, the system comprising: a plurality of inputs, each of the inputs being received from a data source, and each of the inputs being of a different type; a big data system configured to receive each of the plurality of inputs, the big data system including, a plurality of adapters corresponding to the plurality of inputs, each of the adapters configured to normalize, deduplicate and classify the data received from the plurality of inputs, to generate outputs, and a modeling system that receives the generated outputs, and provides the generated outputs to a plurality of multi-panel generators, multi-forecast builders and forecaster modules, to generate a revenue prediction; and an access point that provides the revenue prediction in the format of a file, an API, a console or a custom format.
  • Further according to this aspect, data miners and analysts supervise and update the plurality of adapters.
  • Still further, the outputs of the plurality of adapters are combined with external data comprising financial reports or other publicly available information associated with historical or present characteristics of an entity.
  • According to yet another aspect, a computer-implemented method of processing data is provided, the method comprising: a first phase associated with data processing, the first phase comprising, receiving input data from a plurality of sources of different types, normalizing the received data by mapping the received input data from the organic formats in which it was received to a standardized data format that provides consistency across the organic formats by accounting for the differences in the input data from the plurality of sources of different types, and deduplication, to generate normalized data; applying tagging rules to the normalized data, the tagging rules comprising rules that are specific to a credit card or debit card, geo-fencing rules for GPS data, and rules associated with browser history or application usage, to generate tagged data; a second phase associated with development of a panel and calibration, the second phase comprising, performing panelization by establishing a sample of users as a panel based on one or more of input data churn rate, user transaction patterns, census data balancing, or another rule that associates a characteristic of user transaction behavior with a transaction; grouping the panels by symbol to associate one or more brands of a company with the tagged data, wherein the grouping is performed in real time to incorporate mergers, acquisitions, spinoffs, bankruptcies, rebranding, listing or delisting associated with the company associated with the symbol; and applying one or more corrections to the grouped panels by applying one or more respective patterns associated with a financial institution to calibrate the grouping, wherein the respective patterns comprise weekend postings of information, posting delays typically associated with a financial institution, and pending but not yet posted transactions, and removing anomalies associated with anomalous transactions or anomalous users; a third phase associated with creating and applying a prediction model, the third phase comprising, generating a prediction model, wherein training data comprises historical data associated with the company including financial parameters of the company, stock price, and historical measurements, and generating a forecast by using the generated prediction model and features associated with a current time period, the forecast comprising a prediction of a future stock price.
  • Further, the input data comprises timeseries data from multiple vendors, including credit card transactions, debit card transactions or other electronic purchase transactions.
  • Still further, the tagging comprises, for a series of financial transactions, labeling each of the financial transactions by applying the tagging rules to the series of financial transactions, wherein the tagging rules include an inclusive filter or an exclusive filter, and further comprising applying natural language processing based on neural networks.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Exemplary implementation(s) of the present invention will be described in detail based on the following figures, wherein:
  • FIG. 1 illustrates functions of an architecture according to an example implementation of the present application.
  • FIG. 2 illustrates a structure of an architecture according to an example implementation of the present application.
  • FIG. 3 illustrates a process associated with an architecture according to an example implementation of the present application.
  • FIG. 4 illustrates example alternative data types according to an example implementation of the present application.
  • FIG. 5 illustrates example alternative dataset processing according to an example implementation of the present application.
  • FIG. 6 illustrates example automatic tagging according to an example implementation of the present application.
  • FIG. 7 illustrates user experiences according to an example implementation of the present application for a watchlist.
  • FIG. 8 illustrates processes associated with generation of the user experiences according to an example implementation of the present application for the watchlist.
  • FIG. 9 illustrates user experiences according to an example implementation of the present application for a forecast analysis.
  • FIG. 10 illustrates processes associated with generation of the user experiences according to an example implementation of the present application for the forecast analysis.
  • FIG. 11 illustrates user experiences according to an example implementation of the present application for the forecast analysis.
  • FIG. 12 illustrates user experiences according to an example implementation of the present application for an alert chain.
  • FIG. 13 illustrates processes associated with generation of the user experiences according to an example implementation of the present application for the alert chain.
  • FIG. 14 illustrates processes associated with generation of the user experiences according to an example implementation of the present application for automated trading.
• FIGS. 15-16 illustrate processes associated with generation of the mobile user experiences according to an example implementation of the present application for automated trading.
  • FIGS. 17-21 illustrate various example implementations.
  • FIGS. 22A-22B illustrate various example implementations.
  • FIGS. 23-24 illustrate various example implementations.
  • FIG. 25 illustrates an example environment according to an example implementation of the present application.
  • FIG. 26 illustrates an example computing environment with an example computer device suitable for use in some example implementations.
  • DETAILED DESCRIPTION
  • The following detailed description provides further details of the figures and example implementations of the present application. Reference numerals and descriptions of redundant elements between figures are omitted for clarity. Terms used throughout the description are provided as examples and are not intended to be limiting. For example, the use of the term “automatic” may involve fully automatic or semi-automatic implementations involving user or operator control over certain aspects of the implementation, depending on the desired implementation of one of ordinary skill in the art practicing implementations of the present application.
  • With a more complex and dynamic economy, there is a need for accurate data reflecting exactly what is going on in the economy at any given point in time. More specifically, in order to accurately determine the current state of the economy, and to then be able to determine present and future trends in the economy, accurate data is required so that the recommendation itself is accurate.
• For example, when considering purchasing a particular stock, a purchaser such as a trader may consider corporate information related to the company associated with the stock. This corporate information may include, at a very high level, how much revenue a company is taking in, and how much money the company lost for a particular period of time (e.g., quarter, month, year).
• However, this general, broad data is often not an accurate depiction of how a company is actually behaving. For example, revenue for a clothing company may appear to be high overall, but when the data is broken down, the revenue may be coming primarily from online sales. Typically, stores with higher revenue from online sales end up closing several brick and mortar stores, because those stores are no longer profitable. Thus, a trader who is considering buying that clothing company's stock may not want to purchase that stock if they know that those stores will likely close at a future point in time.
  • Unfortunately, without accurate data, a trader has little way of knowing how a particular trend will look in the future. Therefore, by collecting data and cleaning and normalizing that data, this cleaned, normalized data may be compared to historical data to determine how a particular trend will look in the future. Similarly, a trader may also not be able to assess present information, to make a decision or recommendation. According to the present example implementations, the cleaned, normalized data may be present data that is compared to historical data, to provide an analysis of current trends.
• While the above example implementation describes determining trends related to the stock market, other metrics may be determined (e.g., obtained deterministically), including a regional, national, or universal unemployment rate, public and non-public company revenues and market shares, consumer behavior across several companies, a basket of stocks (e.g., more than one particular stock such as an entire stock portfolio), electronic indices, restaurant indices, how particular sectors in the workforce are performing, inflation, and trends for mutual funds. In particular, any data or information which may have an economic impact may be measured.
  • Aspects of the example implementation are directed to apparatuses, methods and systems for receiving a plurality of data inputs from different data providers having different types of data. The different types of data are received in a big data system, which standardizes the data by performing normalization, de-duplication, and classification, and optionally other processes.
• The standardized data is provided to a multi-panel generator that builds a panel, builds a forecast model, and generates a forecast. The forecast is output to an access point, where a user may access the forecast. The user experience may involve receiving the result of the forecast as a recommendation on a watchlist showing multiple entities, a detailed performance and recommendation report for one of the multiple entities, and/or an alert chain that provides the user with an alert that may be triggered by one or more conditions.
  • In the foregoing example implementation, the data is processed continuously and in real time, such that the user is automatically provided with real-time updates to the recommendations, watchlist and/or alert chain. The user may have the opportunity to execute a transaction, either manually or automatically, as well as to provide real-time feedback on the system.
  • To implement the foregoing example implementations, an architecture is provided, as explained in greater detail below. Further, the user experience associated with the architecture is also described in greater detail.
  • Architecture
• As shown in FIGS. 1-3, the example implementations described herein relate to an architecture, including system elements and associated functions, as well as operations.
  • FIG. 1 illustrates a schematic description of the architecture according to an example implementation. At 101, data is received from a plurality of alternate data providers. The details of the plurality of alternate data providers will be described below in greater detail. For example, but not by way of limitation, some of the data providers may include financial transaction information providers, location information such as GPS data, consumer behavior information such as social media or online publication information, or other data.
• At 103, an automated tagging system is provided that receives the data from the plurality of data providers, and performs tagging, or labeling, of the data. More specifically, the data may be standardized across the different types of data that were received from the different data providers. The standardization processes may include, but are not limited to, filtering, de-duplication, normalization, and classification or labeling. Further details of the automated tagging system are provided below.
• At 105, an updating and feedback process is provided. More specifically, manual or automatic processes may be provided for the automatic tagging system to be updated, and for checking and auditing of the process. Optionally, the input to the updating and supervising process may be provided to a manual source, such as a data miner or an analyst. Alternatively, in some example implementations an artificial intelligence system that employs machine learning, such as neural networks, may use back propagation techniques for periodic updates to the training phase.
  • At 107, financial data associated with a company is provided. More specifically, published information such as annual reports, press releases, information from the social media of spokespersons or executives, published pricing information, and other information as would be understood by those skilled in the art is provided.
• At 109, the standardized data of the automated tagging system 103 and the company financial data 107 are provided to a modeling system 109. The modeling system 109 includes one or more panel generators and forecast builders, which are applied to a model. The model may include, but is not limited to, an artificial intelligence-based system that applies machine learning, in the form of neural networks, to receive the standardized data of the automated tagging system 103 and the company financial data 107, apply them to the neural network, and generate, as an output, a forecast.
  • And as noted at 111, an input to the modeling system may include, but is not limited to, proprietary data associated with the merger and acquisition information of the company, and the fiscal quarters calendar of the company.
  • At 113, an output of the forecast is provided as one or more revenue predictions. As explained above, the company financial data that is based on the standardized data, as well as the proprietary data, is fed into the machine learning model to generate the revenue predictions.
  • At 115, a final output is provided in the form of a recommendation. For example, but not by way of limitation, the final output may be a recommendation to buy, hold or sell, as a transaction. Alternatively the final output may be a score. Still alternatively, the final output may be the execution of the transaction itself, without involving the user. Optionally, a rule-based or other deterministic approach may be employed, in combination with the probabilistic approach of the machine learning, as disclosed above.
• FIG. 2 illustrates an architecture 200 according to the example implementations. Inputs 201 include data providers 203-209 provided to a big data system 211, which is in turn associated with an access point 213.
• More specifically, as shown in 200, data is received as inputs 201 from a plurality of N data providers 203-209. Where the term "data source" is used herein, the data is acquired from one or more data providers, such as the N data providers 203-209, as well as any other data providers as would be understood by those skilled in the art.
• In the big data system 211, for each of the data providers 203-209, N corresponding adapters 215-221 are provided. The adapters 215-221 may normalize 223, deduplicate 225 and classify 227 the data, as described herein.
  • As explained above, data is provided by the data providers 203-209, and processed by the data adapters 215-221. The output of the data adapters 215-221 is provided to an automatic tagging system, such that resulting output data has been normalized, de-duplicated and classified, as disclosed above. These aspects of the example implementations are automated; however, data miners and analysts may optionally supervise the process, and continue to update the process. The resulting output data may be combined with company financial data, such as financial reports or other publicly available information associated with characteristics of the company, either historical or present.
  • An output of the automatic tagging system and the company financial data is provided to a modeling system. The modeling system may include plural multi-panel generators 229, 231, 233, multi-forecast builders 235, 237, 239, and forecast modules 241, 243, 245. The modeling system generates, as its output forecast, a revenue prediction, based on financial data, proprietary data and machine learning. More specifically, the financial data and proprietary data includes the above described inputs of the automated tagging system and the company financial data. For example but not by way of limitation, the data may include financial reports, merger and acquisition information, and quarterly fiscal performance.
• The output is the forecast, which may be provided in the format of a file, an API, a console or a custom format. Accordingly, the output is provided to a user at the user access point. Example user experiences associated with the access point are described in greater detail herein.
  • FIG. 3 illustrates a process 300 associated with the foregoing structures and functions according to the example implementations. More specifically, the process is divided into a first phase 301 associated with data processing, a second phase 303 associated with development of the panel and calibration, and a third phase 305, associated with creating and applying a prediction model, and generating a forecast.
  • With respect to the first phase 301, at 307 input data is received. As noted above, and as explained herein, the input data is from a plurality of sources of different types. For example but not by way of limitation, the input data may include timeseries data from multiple vendors, such as credit card transactions, debit card transactions or other electronic purchase transactions.
• At 309, the data is normalized. More specifically, the input data of operation 307 is mapped from the organic vendor data format in which it was received, to an internal data format that provides consistency across vendors. For example but not by way of limitation, the normalization involves accounting for the differences in the different alternate data sets, and standardizing that data to a common standard. Additional aspects may include de-duplication or other processes to standardize the data.
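• By way of a hedged illustration only (the vendor formats, field names and helper functions below are hypothetical and are not drawn from the present disclosure), the normalization and de-duplication of operation 309 might be sketched as per-vendor adapters that map each organic record onto one internal schema:

    # Minimal sketch, assuming two hypothetical vendor formats; field names are illustrative.
    from datetime import datetime

    def adapt_vendor_a(record):
        # Vendor A posts epoch seconds and amounts in cents.
        return {
            "timestamp": datetime.utcfromtimestamp(record["ts"]).isoformat(),
            "amount_usd": record["amount_cents"] / 100.0,
            "description": record["merchant"].strip().upper(),
            "source": "vendor_a",
        }

    def adapt_vendor_b(record):
        # Vendor B posts ISO dates and signed dollar amounts.
        return {
            "timestamp": record["date"] + "T00:00:00",
            "amount_usd": abs(float(record["usd"])),
            "description": record["desc"].strip().upper(),
            "source": "vendor_b",
        }

    def deduplicate(records):
        # Keep the first record seen for each (date, amount, description) key.
        seen, unique = set(), []
        for r in records:
            key = (r["timestamp"][:10], r["amount_usd"], r["description"])
            if key not in seen:
                seen.add(key)
                unique.append(r)
        return unique

    raw_a = [{"ts": 1714521600, "amount_cents": 1299, "merchant": " coffee co "}]
    raw_b = [{"date": "2024-05-01", "usd": "-12.99", "desc": "COFFEE CO"}]
    normalized = deduplicate([adapt_vendor_a(r) for r in raw_a] + [adapt_vendor_b(r) for r in raw_b])
    print(normalized)  # one standardized record; the vendor_b duplicate is dropped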
  • At 311, brands associated with purchases are tagged. The tagging may involve the application of tagging rules on the normalized data. The rules may include, but are not limited to, rules that are specific to a credit card or debit card, geo-fencing rules for GPS data, rules associated with browser history or application usage, or other rules as would be understood by those skilled in the art. For example, but not by way of limitation, for a series of financial transactions, each of the financial transactions may be labeled, or tagged. To perform the tagging, one or more tagging rules 313 may be applied. The tagging rules may be considered to be an inclusive filter, or an exclusive filter. Further, in addition to a rule-based approach such as a filter, a natural language processing approach may be taken that incorporates artificial intelligence, such as machine learning models that involve neural networks.
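• A minimal sketch of such rule-based tagging, assuming hypothetical brand names and patterns that are not taken from the disclosure, might apply inclusive filters to attach a brand label and exclusive filters to suppress transactions that should not be counted, leaving unmatched descriptions for the natural language processing classifier:

    import re

    # Inclusive rules attach a brand label; exclusive rules suppress false matches.
    INCLUSIVE_RULES = [
        (re.compile(r"\bACME COFFEE\b"), "ACME_COFFEE"),
        (re.compile(r"\bACME\b"), "ACME_STORES"),
    ]
    EXCLUSIVE_RULES = [
        re.compile(r"\bREFUND\b"),     # ignore refunds when measuring sales
        re.compile(r"\bGIFT CARD\b"),  # gift-card loads are not store revenue
    ]

    def tag_transaction(description: str):
        text = description.upper()
        if any(rule.search(text) for rule in EXCLUSIVE_RULES):
            return None
        for pattern, brand in INCLUSIVE_RULES:
            if pattern.search(text):
                return brand
        return None  # left for the NLP / neural-network classifier described above

    print(tag_transaction("ACME COFFEE #1234 SEATTLE WA"))  # ACME_COFFEE
    print(tag_transaction("ACME GIFT CARD RELOAD"))         # None (excluded)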
• Once the input data has been normalized and tagged, at the second phase 303, a panelization operation 315 is performed. According to the example implementations, a sample of users is established as a panel. The users are chosen to match criteria associated with the panel. For example, but not by way of limitation, the panel may be created in a manner that is associated with the input data churn rate. Other examples of panelization rules are shown at 317, and those rules may be based on user transaction patterns, census data balancing, or other rules that associate a characteristic of user transaction behavior with a transaction.
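• As a hedged sketch of the panelization rules at 317 (the thresholds and record layout are illustrative assumptions), a panel might be limited to users whose transaction streams are long-lived and free of large gaps, approximating a low churn rate:

    # Keep only users whose data stream is stable enough to act as panelists.
    def build_panel(users, min_months_active=12, max_gap_days=45):
        panel = []
        for user in users:
            if user["months_active"] >= min_months_active and user["longest_gap_days"] <= max_gap_days:
                panel.append(user["user_id"])
        return panel

    users = [
        {"user_id": "u1", "months_active": 24, "longest_gap_days": 20},
        {"user_id": "u2", "months_active": 6,  "longest_gap_days": 10},  # too new
        {"user_id": "u3", "months_active": 18, "longest_gap_days": 90},  # churned mid-sample
    ]
    print(build_panel(users))  # ['u1']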
  • Once the panels have been generated at 315, a grouping process is performed at 319. More specifically, the panels are grouped by symbol. Brands associated with the data in the tagging process are assigned to symbols associated with a company. It is noted that this data may change over time, due to mergers, acquisitions, spinoffs, bankruptcies, rebranding, listing, delisting or other financial events. Thus, the events are assigned in real time to be included at the time of assigning the brand to the symbol at 319. The symbol may be chosen from a database, such as those shown at 321.
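• The brand-to-symbol grouping at 319 could be sketched, under the assumption of a hypothetical mapping table that records effective date ranges for corporate actions, as follows; the tickers and dates shown are invented for illustration:

    from datetime import date

    # Each mapping carries an effective date range so that mergers, spinoffs
    # or delistings re-route a brand to the right ticker over time.
    BRAND_SYMBOL_HISTORY = {
        "OLD_BRAND": [
            {"symbol": "OLDCO", "start": date(2010, 1, 1), "end": date(2019, 6, 30)},
            {"symbol": "NEWCO", "start": date(2019, 7, 1), "end": None},  # acquired
        ],
    }

    def symbol_for(brand: str, on: date):
        for entry in BRAND_SYMBOL_HISTORY.get(brand, []):
            if entry["start"] <= on and (entry["end"] is None or on <= entry["end"]):
                return entry["symbol"]
        return None

    print(symbol_for("OLD_BRAND", date(2018, 3, 1)))  # OLDCO
    print(symbol_for("OLD_BRAND", date(2021, 3, 1)))  # NEWCO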
• At 323, one or more corrections are applied to the grouped panels. For example but not by way of limitation, as shown in 325, patterns that are specifically associated with a financial institution are applied to calibrate the grouping. Examples of such patterns may include weekend postings of information, posting delays typically associated with a financial institution, as well as pending but not yet posted transactions. Additionally, as shown at 327, anomalies are removed. Examples of anomalies that are removed may include, but are not limited to, anomalous transactions or anomalous users.
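• A minimal sketch of the corrections at 325 and the anomaly removal at 327, assuming a simple weekend roll-back and a median-absolute-deviation outlier rule (both of which are illustrative choices rather than the disclosed calibration), might look like the following:

    import statistics
    from datetime import date, timedelta

    def correct_posting_date(posted: date) -> date:
        # Many institutions post weekend activity on the posting date rather
        # than the purchase date; roll Saturday/Sunday back to Friday.
        while posted.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
            posted -= timedelta(days=1)
        return posted

    def remove_anomalous(amounts, k=10.0):
        # Drop amounts far from the median, measured in median absolute deviations.
        med = statistics.median(amounts)
        mad = statistics.median(abs(a - med) for a in amounts)
        if mad == 0:
            return list(amounts)
        return [a for a in amounts if abs(a - med) <= k * mad]

    print(correct_posting_date(date(2024, 5, 5)))            # 2024-05-03 (Friday)
    print(remove_anomalous([12.0, 15.0, 9.0, 14.0, 900.0]))  # drops the 900.0 outlier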
• Once the panels have been fully generated in the second phase 303, the process proceeds to the third phase 305 as follows. At 329 a prediction model is generated. For example but not by way of limitation, in the training phase, historical data associated with a company, such as the fundamentals of the company and the stock price, as well as historical measurements that may have been previously provided by the example implementations, are used as training data. Accordingly, the prediction model is trained based on this training data, as shown at 331.
• At 333, a forecast is generated. Using the prediction model generated at 329, features associated with the open quarter 335 are applied, to derive a prediction of a future stock price or a future company fundamental metric. While the features are disclosed as being directed to an open quarter, example implementations are not limited thereto, and other open periods may be substituted therefor.
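• The training and forecasting of the third phase could be sketched, purely for illustration and on synthetic stand-in data, with a small neural-network regressor; the disclosure does not prescribe scikit-learn, the network shape, or these feature counts:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Synthetic stand-ins: 20 historical quarters, 4 panel-derived features
    # (e.g., tagged card spend) plus 2 published financial features.
    X_train = np.hstack([rng.normal(size=(20, 4)), rng.normal(size=(20, 2))])
    y_train = X_train @ rng.normal(size=6) + rng.normal(scale=0.1, size=20)  # "revenue"

    model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
    model.fit(X_train, y_train)

    # Features observed so far for the open (current) quarter yield the forecast.
    x_open = rng.normal(size=(1, 6))
    print(model.predict(x_open))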
• The foregoing example implementations of the architecture may be run on the hardware as explained below with respect to FIGS. 26 and 27. Further, the hardware provides for continuous large-scale data inputs, such that the model is continuously receiving data and being updated automatically and in real-time. Thus, the processing capability of the hardware must be sufficient to permit such processing. For example but not by way of limitation, a GPU (graphical processing unit) may be used for the artificial intelligence processing. Alternatively, or in combination with the GPU, an NPU (neural processing unit) may be used. One or both of these units may be used in a processor that is located remotely, such as a distributed server system, for a cloud computing system.
  • For example, but not by way of limitation, external data fetching according to the example implementations described herein may be performed by copying data from an external third party (e.g., vendor), and storing the data in a cloud storage container. The data fetching process may be managed by a scheduling server, and/or a serverless compute service that executes operations to manage the external data storage and the associated compute resources.
• Further, the extraction, transformation and loading of data as described herein may be executed by a batch management processor or service. Batch computing is the execution of a series of executable instructions ("jobs") on one or more processors without manual intervention by a user, e.g., automatically. Input parameters may be pre-defined through scripts, command-line arguments, control files, or job control language. A batch job may be associated with completion of preceding jobs, or the availability of certain inputs. Thus, the sequencing and scheduling of multiple jobs is critical. Optionally, batch processing may be performed without interactive processing. For example, the batch management processor or service may permit a user to create a job queue and job definition, and then to execute the job definition and review the results. According to an example implementation, a batch cluster includes 256 CPUs, and an ETL-dedicated server having 64 cores and 312 GB of RAM. The number of running instances may be 1. The foregoing ETL infrastructure may also be applied to the process of insight extraction.
  • Further, an API is provided for data access. For example but not by way of limitation, the REST API, which conforms to a REST style architecture and allows for interaction with RESTful resources, may be executed on a service. The service may include, but is not limited to, hardware such as 1 vCPU, 2GB RAM, 10GB SSD disk, and a minimum of two running instances. The API may be exposed to the Internet via an application load balancer, which is elastic and permits configuration and routing of an incoming end-user to applications based in the cloud, optionally pushing traffic across multiple targets in multiple availability zones. The caching layer may be provided by a fast content delivery network (CDN) service, which may securely deliver the data described herein with low latency and high transfer speeds. According to the example implementations, containers may be run without having to manage servers or clusters of instances, such that there is no need to provision, configure, or scale clusters on virtual machines to execute operations associated with containers.
  • Alternative Datasets
• As explained above, the architecture is configured to receive a plurality of alternative data sets. As shown in FIG. 4, examples of the alternative data inputs 400 may include, but are not limited to, credit card transactions and debit card transactions 401, mobile device usage information 403, geolocation data 405, social data and sentiment data 407, and web traffic and Internet data 409.
• The above-disclosed hardware implementations may be used to process the alternative datasets of FIG. 4, as would be understood by those skilled in the art. More specifically, for example, but not by way of limitation, external data fetching according to the example implementations described herein may be performed by copying data from an external third party (e.g., vendor), and storing the data in a cloud storage container. The data fetching process may be managed by a scheduling server, and/or a serverless compute service that executes operations to manage the external data storage and the associated compute resources.
• Further, the extraction, transformation and loading of data as described herein may be executed by a batch management processor or service. Batch computing is the execution of a series of executable instructions ("jobs") on one or more processors without manual intervention by a user, e.g., automatically. Input parameters may be pre-defined through scripts, command-line arguments, control files, or job control language. A batch job may be associated with completion of preceding jobs, or the availability of certain inputs. Thus, the sequencing and scheduling of multiple jobs is critical. Optionally, batch processing may be performed without interactive processing. For example, the batch management processor or service may permit a user to create a job queue and job definition, and then to execute the job definition and review the results. According to an example implementation, a batch cluster includes 256 CPUs, and an ETL-dedicated server having 64 cores and 312 GB of RAM. The number of running instances may be 1. The foregoing ETL infrastructure may also be applied to the process of insight extraction. Further, an API is provided for data access. For example but not by way of limitation, the REST API, which conforms to a REST style architecture and allows for interaction with RESTful resources, may be executed on a service. The service may include, but is not limited to, hardware such as 1 vCPU, 2 GB RAM, 10 GB SSD disk, and a minimum of two running instances. The API may be exposed to the Internet via an application load balancer, which is elastic and permits configuration and routing of an incoming end-user to applications based in the cloud, optionally pushing traffic across multiple targets in multiple availability zones. The caching layer may be provided by a fast content delivery network (CDN) service, which may securely deliver the data described herein with low latency and high transfer speeds. According to the example implementations, containers may be run without having to manage servers or clusters of instances, such that there is no need to provision, configure, or scale clusters on virtual machines to execute operations associated with containers.
• FIG. 5 illustrates the processing of the alternative data sets according to the example implementations. More specifically, at 500, the receiving and processing of the alternative data sets is disclosed. The alternative data sets 501 may include, but are not limited to, data having alternative types. For example but not by way of limitation, GPS data 503, data associated with financial transactions 505, vaccination data 507, satellite image data 509, app usage data 511, and browsing history information 513 are all types of data that might be a part of the alternative data sets 501. However, the alternative data sets 501 are not limited to the foregoing examples, and other data sets may also be included as would be understood by those skilled in the art.
• At 515, the multiple sources of data have an ETL (extract, transform, load) operation performed, to extract, transform and load the data. Accordingly, the features associated with the data types 1 through m at 515 are extracted into corresponding features 1 through m, as shown at 517.
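• A hedged sketch of extracting one such feature (the record layout and the weekly granularity are assumptions made only for illustration) is the transformation of raw tagged transactions into per-period spend totals:

    from collections import defaultdict
    from datetime import date

    def weekly_spend_feature(transactions):
        # Transform raw tagged transactions into a {(year, iso_week): total_spend} feature.
        feature = defaultdict(float)
        for txn in transactions:
            year, week, _ = txn["date"].isocalendar()
            feature[(year, week)] += txn["amount_usd"]
        return dict(feature)

    transactions = [
        {"date": date(2024, 5, 1), "amount_usd": 12.99, "brand": "ACME_COFFEE"},
        {"date": date(2024, 5, 2), "amount_usd": 8.50, "brand": "ACME_COFFEE"},
        {"date": date(2024, 5, 9), "amount_usd": 20.00, "brand": "ACME_COFFEE"},
    ]
    print(weekly_spend_feature(transactions))  # {(2024, 18): 21.49, (2024, 19): 20.0}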
  • At 519, the process is subjected to a modeling operation. The modeling operation includes, as shown at 521, selection of the model, selection of the features, performing the training phase on the model, and performing model testing. The modeling operation 519 may be performed on an artificial intelligence system that uses neural networks and machine learning.
• At 523, a validation step is performed, also known as backtesting. For example, the historical data associated with stock prices, company fundamentals, and historical decisions or events, as disclosed at 525, are applied to the model that was generated. A determination is made, based on the validation of 523, whether the application of the historical information successfully validates the model. If the model was not successfully validated, or in other words the backtesting results were not found to be acceptable, the process returns to 519, and the modeling is again performed.
  • On the other hand, if the validation at operation 523 was successful, the operation proceeds. More specifically the operation proceeds to the modeling operation 527 and the forecasting operation 529, as discussed above.
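• The backtesting loop at 523 could be sketched as a walk-forward validation on synthetic data, where the model is trained only on past periods and its predictions are scored against the known outcomes; the linear model and the 10% error threshold are illustrative assumptions, not the disclosed criteria:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    X = rng.normal(size=(16, 3))               # 16 historical quarters, 3 features
    y = X @ np.array([2.0, -1.0, 0.5]) + 10.0  # historical outcome (e.g., revenue)

    errors = []
    for t in range(8, 16):                     # start once enough history exists
        model = LinearRegression().fit(X[:t], y[:t])
        pred = model.predict(X[t:t + 1])[0]
        errors.append(abs(pred - y[t]) / abs(y[t]))

    mape = float(np.mean(errors))
    if mape <= 0.10:
        print(f"backtest passed (MAPE={mape:.3f}); proceed to forecasting")
    else:
        print(f"backtest failed (MAPE={mape:.3f}); return to the modeling step")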
• The above-disclosed hardware implementations may be used to process the alternative datasets of FIG. 5, as would be understood by those skilled in the art. More specifically, for example, but not by way of limitation, external data fetching according to the example implementations described herein may be performed by copying data from an external third party (e.g., vendor), and storing the data in a cloud storage container. The data fetching process may be managed by a scheduling server, and/or a serverless compute service that executes operations to manage the external data storage and the associated compute resources. Further, the extraction, transformation and loading of data as described herein may be executed by a batch management processor or service. Batch computing is the execution of a series of executable instructions ("jobs") on one or more processors without manual intervention by a user, e.g., automatically. Input parameters may be pre-defined through scripts, command-line arguments, control files, or job control language. A batch job may be associated with completion of preceding jobs, or the availability of certain inputs. Thus, the sequencing and scheduling of multiple jobs is critical. Optionally, batch processing may be performed without interactive processing. For example, the batch management processor or service may permit a user to create a job queue and job definition, and then to execute the job definition and review the results. According to an example implementation, a batch cluster includes 256 CPUs, and an ETL-dedicated server having 64 cores and 312 GB of RAM. The number of running instances may be 1. The foregoing ETL infrastructure may also be applied to the process of insight extraction. Further, an API is provided for data access. For example but not by way of limitation, the REST API, which conforms to a REST style architecture and allows for interaction with RESTful resources, may be executed on a service. The service may include, but is not limited to, hardware such as 1 vCPU, 2 GB RAM, 10 GB SSD disk, and a minimum of two running instances. The API may be exposed to the Internet via an application load balancer, which is elastic and permits configuration and routing of an incoming end-user to applications based in the cloud, optionally pushing traffic across multiple targets in multiple availability zones. The caching layer may be provided by a fast content delivery network (CDN) service, which may securely deliver the data described herein with low latency and high transfer speeds. According to the example implementations, containers may be run without having to manage servers or clusters of instances, such that there is no need to provision, configure, or scale clusters on virtual machines to execute operations associated with containers.
  • Automatic Tagging
• Once the alternative data sets received at 101 have been processed as explained above, automatic tagging is performed on the data by the automated tagging system 103. FIG. 6 illustrates automatic tagging 600 in accordance with the example implementations.
  • At 601, the transactions are received and the alternate data sets are processed as explained above. At 603, normalization is performed. More specifically, features are extracted from the transactions that are relevant for the brand classifier.
• At 605, brand classification is performed. More specifically, the extracted features of the normalization 603 are applied to a classification layered model. A rule-based approach is applied that performs the extraction in a deterministic manner, and the rule-based approach is mixed with an artificial intelligence approach, such as machine learning based on neural networks, that is probabilistic in nature. Accordingly, the brand classification 605 is performed in a mixed deterministic and probabilistic model.
  • At 607 a verification step is performed to verify that the classifier is accurate. More specifically, a sample of the classified data is verified, to confirm that the labeling was correctly applied with respect to the brand. If necessary, the classifier is retrained. Optionally, this verification and retraining operation at 607 may be performed iteratively, until the brand classification has been verified to a threshold confidence level.
• At 609, the brands are classified with respect to companies. For example, the companies may be private companies, public companies, or other traded organizations having similar features to public or private companies. In a manner that is similar to operations 605 and 607 above, a mixture of rule-based, deterministic operations and probabilistic, artificial intelligence operations such as machine learning and neural networks, are employed so that the brands are classified to companies. Also similar to operation 607, at 611, the company classifier is verified to ensure that sampled data has been accurately labeled with respect to the classification of the company. If necessary, retraining, and optionally iterative retraining, may be performed until the company classification has been verified to a threshold confidence level.
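• A minimal sketch of the verify-and-retrain loops at 607 and 611, assuming a toy classifier, a hand-verified sample, and a hypothetical accuracy target, might be structured as follows:

    import random

    def verify_classifier(classify, verified_sample, target_accuracy=0.95, max_rounds=5, retrain=None):
        # Sample classified records, compare against verified labels,
        # and retrain until the target accuracy is reached or rounds run out.
        for _ in range(max_rounds):
            sample = random.sample(verified_sample, k=min(100, len(verified_sample)))
            correct = sum(1 for text, label in sample if classify(text) == label)
            accuracy = correct / len(sample)
            if accuracy >= target_accuracy:
                return accuracy  # classifier verified to the threshold confidence level
            if retrain is not None:
                classify = retrain(sample)  # retrain on the verified sample
        return accuracy

    # Toy stand-in classifier and verified data for illustration only.
    classify = lambda text: "ACME" if "ACME" in text else "OTHER"
    verified = [("ACME COFFEE #12", "ACME"), ("BIG FIVE SPORTING", "OTHER")] * 60
    print(verify_classifier(classify, verified))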
  • At 613, once the brand classification 605 and the company classification 609 have been verified, the tagged transaction is considered to have been generated at 613. For example but not by way of limitation, the labeling is performed in a manner such that the data has been automatically labeled. An output of this process may be used in the panel generation, forecast building, and forecast.
• The example implementations may have various advantages and benefits over the related art approaches. For example, but not by way of limitation, the related art approaches may suffer from problems or disadvantages, such as incorrect tagging of names that are common (e.g., DENNYS or SPRINT), names that are short (e.g., AMC or BOOT), names that are specific (e.g., TACO or TARGET), and names that are similar (e.g., FIVE BELOW or BIG FIVE). Further, entities may be omitted if they do not follow a clear standard. Examples of the lack of use of a clear standard resulting in company omissions include the use of abbreviations instead of the full name, a change in the name over time, a slight difference in names due to a difference in the timing of the acquisition of the store, or typographical errors in the name of the store. Other examples of related art errors include the assignment of the transactions to the wrong ticker (company indicia), even if the transactions clearly are associated with a different company.
• The above-disclosed hardware implementations may be used to process the automatic tagging of FIG. 6, as would be understood by those skilled in the art. More specifically, for example, but not by way of limitation, external data fetching according to the example implementations described herein may be performed by copying data from an external third party (e.g., vendor), and storing the data in a cloud storage container. The data fetching process may be managed by a scheduling server, and/or a serverless compute service that executes operations to manage the external data storage and the associated compute resources. Further, the extraction, transformation and loading of data as described herein may be executed by a batch management processor or service. Batch computing is the execution of a series of executable instructions ("jobs") on one or more processors without manual intervention by a user, e.g., automatically. Input parameters may be pre-defined through scripts, command-line arguments, control files, or job control language. A batch job may be associated with completion of preceding jobs, or the availability of certain inputs. Thus, the sequencing and scheduling of multiple jobs is critical. Optionally, batch processing may be performed without interactive processing. For example, the batch management processor or service may permit a user to create a job queue and job definition, and then to execute the job definition and review the results. According to an example implementation, a batch cluster includes 256 CPUs, and an ETL-dedicated server having 64 cores and 312 GB of RAM. The number of running instances may be 1. The foregoing ETL infrastructure may also be applied to the process of insight extraction. Further, an API is provided for data access. For example but not by way of limitation, the REST API, which conforms to a REST style architecture and allows for interaction with RESTful resources, may be executed on a service. The service may include, but is not limited to, hardware such as 1 vCPU, 2 GB RAM, 10 GB SSD disk, and a minimum of two running instances. The API may be exposed to the Internet via an application load balancer, which is elastic and permits configuration and routing of an incoming end-user to applications based in the cloud, optionally pushing traffic across multiple targets in multiple availability zones. The caching layer may be provided by a fast content delivery network (CDN) service, which may securely deliver the data described herein with low latency and high transfer speeds. According to the example implementations, containers may be run without having to manage servers or clusters of instances, such that there is no need to provision, configure, or scale clusters on virtual machines to execute operations associated with containers.
  • User Experience
• The outputs of the example implementations described herein may be provided in a user experience, in a manner that provides for the user to visualize information generated by the architecture. For example but not by way of limitation, a user may be provided with a watchlist that is generated to provide the user with decision-support information associated with the outputs of the architecture, such as a prediction, forecast, recommendation or the like. Additionally, detailed analysis of an entry on the watchlist may be provided, along with a specific recommendation, and detailed metrics regarding the basis of the recommendation, optionally compared with a recommendation provided by an external benchmark. Further, the example implementations described herein may be used to provide a chain of alerts, or optionally, a decision support or automated decision tool, which combines a deterministic rules-based approach with the above-described aspects, including, but not limited to, the artificial intelligence approaches.
• Data processing associated with user experiences may be processed using the hardware disclosed herein. More specifically, for example, but not by way of limitation, external data fetching according to the example implementations described herein may be performed by copying data from an external third party (e.g., vendor), and storing the data in a cloud storage container. The data fetching process may be managed by a scheduling server, and/or a serverless compute service that executes operations to manage the external data storage and the associated compute resources. Further, the extraction, transformation and loading of data as described herein may be executed by a batch management processor or service. Batch computing is the execution of a series of executable instructions ("jobs") on one or more processors without manual intervention by a user, e.g., automatically. Input parameters may be pre-defined through scripts, command-line arguments, control files, or job control language. A batch job may be associated with completion of preceding jobs, or the availability of certain inputs. Thus, the sequencing and scheduling of multiple jobs is critical. Optionally, batch processing may be performed without interactive processing. For example, the batch management processor or service may permit a user to create a job queue and job definition, and then to execute the job definition and review the results. According to an example implementation, a batch cluster includes 256 CPUs, and an ETL-dedicated server having 64 cores and 312 GB of RAM. The number of running instances may be 1. The foregoing ETL infrastructure may also be applied to the process of insight extraction. Further, an API is provided for data access. For example but not by way of limitation, the REST API, which conforms to a REST style architecture and allows for interaction with RESTful resources, may be executed on a service. The service may include, but is not limited to, hardware such as 1 vCPU, 2 GB RAM, 10 GB SSD disk, and a minimum of two running instances. The API may be exposed to the Internet via an application load balancer, which is elastic and permits configuration and routing of an incoming end-user to applications based in the cloud, optionally pushing traffic across multiple targets in multiple availability zones. The caching layer may be provided by a fast content delivery network (CDN) service, which may securely deliver the data described herein with low latency and high transfer speeds. According to the example implementations, containers may be run without having to manage servers or clusters of instances, such that there is no need to provision, configure, or scale clusters on virtual machines to execute operations associated with containers.
  • Watchlist
  • FIG. 7 illustrates a user experience associated with the example implementations. As shown at 700 the user experience provides a watchlist of companies, which may be selected by a user, for example, or suggested to the user based on preferences associated with the user. For example, but not by way of limitation, the watch list includes in a first column 701 a name of the company, including the company name and trading symbol. In another column 703, a stock price is provided, along with information on the performance, such as change in share price as total amount and percentage.
• In yet another column 705, a type of critical data associated with the performance that is being used as an input in the example implementations is identified. As shown in FIG. 7, the first row is associated with a first company, which may be a large retail company having a physical and online presence for retail sales, for which the revenues are a critical indicator of performance. For the second row, which is an online subscription company, the number of users associated with the product is a critical indicator of performance. For a third row, which is a retail restaurant that users may visit, same-store visits is provided as a critical indicator of performance. For a fourth row, which is a large automotive manufacturer with international presence, US revenues is provided as a critical indicator of performance. For the fifth row, which is a social media company, engagement of users (e.g., time spent on the platform) is provided as a critical indicator of performance.
• In still another column 707, a performance associated with the critical indicator of performance is shown. As shown in FIG. 7, for the first row the critical indicator of revenues is showing a performance increase of 0.4%; for the second row the number of users is showing an increase of 3%; for the third row, same-store sales is showing a decrease of 10%; for the fourth row, the US revenues are showing an increase of 6.2%; and for the last row, user engagement is showing an increase of 3%. Accordingly, the example user experience provides a user with information on a critical indicator of performance, as well as the actual performance based on the input data as explained above.
  • According to the present example implementations, based on the critical indicator and the performance, the system generates a score in yet another column 709. The score provides an indication for the user of the performance of the company, which the user can apply as a form of a recommendation. For example, but not by way of limitation, in the case of the first row in FIG. 7, a growth of 0.4% revenue is associated with a score of neutral, whereas in the second row, an increase in the number of users of 3% is associated with over performing. In the third row, a 10% decrease in same store sales is associated with an underperforming score. In the fourth row, an increase of 6.2% in US revenues is associated with an over performing score. In the last row, an increase of 3% engagement is associated with an over performing score.
  • Accordingly, the user experience according to the example implementations provides the user with information on the critical indicator of performance, which is determined by the system based on the type of company and the available data among the plurality of the data streams. Further, the actual performance of the critical indicator for each of the companies is provided, along with a determination of the score. For example, but not by way of limitation, the information of the first row, such as revenue information may be generated based on the input credit card information that is received as a data source, as explained elsewhere in this disclosure. That information may be used to determine a revenue associated with the company, and may be used to calculate the performance.
  • While the score shown in the drawings is a single word, such as neutral, over performing or underperforming, other scores may be substituted without departing from the inventive scope. For example, but not by way of limitation, a numerical score, such as a performance rating from 1 to 10, may be provided. This information and score is generated based on the input data that is provided, as well as observations of what is characterized as an appropriate score relative to that company. Thus, each company may have a different score determination based on company attributes such as industry or company size, to determine the amount of variation in performance that is necessary to provide the score.
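• One hedged way to sketch such company-specific score determination (the company types and threshold values below are invented for illustration) is a per-company threshold table applied to the change in the critical indicator:

    # Per-company thresholds: a company may need a larger or smaller move in its
    # critical indicator before it is scored as over- or underperforming.
    SCORE_THRESHOLDS = {
        "large_retailer":   {"over": 2.0, "under": -2.0},  # percent change
        "subscription_app": {"over": 2.5, "under": -1.0},
    }

    def score(company_type: str, indicator_change_pct: float) -> str:
        limits = SCORE_THRESHOLDS[company_type]
        if indicator_change_pct >= limits["over"]:
            return "overperforming"
        if indicator_change_pct <= limits["under"]:
            return "underperforming"
        return "neutral"

    print(score("large_retailer", 0.4))    # neutral
    print(score("subscription_app", 3.0))  # overperforming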
• The example implementations described herein provide a method of determining a critical indicator, as shown in the watchlist. For example, but not by way of limitation, historical data may be applied in an artificial intelligence machine learning model in order to determine the criteria having the highest correlation with respect to a change in the stock price. For example, but not by way of limitation, some related art approaches may attempt to directly measure a gross number of users in order to determine revenue. However, such related art approaches can be inaccurate, because a doubling of the number of users does not necessarily correlate to a doubling of the amount of revenue. The reason this may be true is because the new users may not have the same representation, use pattern or consumer preferences as the initial or earlier users.
  • Similarly, the geographical location of use, users or other information is crucial to determining whether the data is representative of the users, and can correctly determine the critical indicator. As explained above with respect to the example implementations, GPS data is received as an input, and is also stabilized in the automated tagging system, so that the data can be representative. As another example, the data must be stabilized not just to show the overall number of users, but to show the amount of time that a user spends in an online app, because if the new user spends more time or less time in the app than the current users, this may be reflective of a different user behavior with respect to revenue, purchase, advertisements or the like, and can make the data more accurate.
• In a similar manner, when a company acquires a target, the number of users may increase, although those new users may not be representative of the earlier users. During the time between the announcement of the merger and the completion of the merger process, several months or years may pass, during which publicly available information may be incorporated into these example implementations and thus adjust the critical indicator, performance indications, and/or recommendations to calibrate for such information. Legal or regulatory blockages, such as antitrust or export control, may stall or block the merger, such that the presence of certain terms in publicly available information associated with the merger can be used to adjust the forecasting and recommendation. Further, the predictive aspect of the tool may provide a forecast for performance after the merger, based on similar patterns that were used to train the model as to how to characterize, process and generate an output prediction for such models.
• As another example, for a stock associated with a restaurant, the change in pandemic status of the coronavirus from a more severe situation to a less severe situation may influence consumer preference to return to in-person dining. The inclusion of such information may be more sensitive for certain industries or companies. In the case of coronavirus, the industries of travel, leisure, dining or others may be more sensitive as compared with other industries; the present example implementations are capable of performing data stabilization to account for such changes. Without the foregoing approaches of the example implementations, which perform normalization, de-duplication, classification and data stabilization, the data lacks the necessary accuracy to determine the critical indicator with a sufficiently high degree of confidence (e.g., clustering to better represent the user population).
  • FIG. 8 illustrates a process for generating and updating information to the watch list, in accordance with the example implementations. As it relates to the user experience described above, the type of critical indicator as well as the score are continuously updated based on data inputs, on a real-time basis. For example, in the first row revenue is listed as a critical indicator type. However, if there are changes in the incoming data associated with the company, the example implementations may change the type of critical indicator from revenue to another critical indicator, such as same-store sales, customer base, or other critical indicator.
• More specifically, as shown at 800, inputs are provided into the architecture described herein. For example, but not by way of limitation, there is a separate pipeline for each type of signal associated with a critical indicator, for each company. Thus, in the above described example, for the first company, there may be multiple pipelines, one for each of the candidate critical indicators. One pipeline may be associated with the signal for the revenues, and another may be associated with consumer spending, or other indicators of company performance.
  • More specifically, at 801 real-time data events from alternative data sources, as explained above are provided. Each of the pipelines is triggered by receiving new data that is relevant to the model. The input of new data from alternate data sources is described as explained above. These real-time data events are processed according to the example implementations with respect to the architecture to generate a signal based on the current data. Thus, an updated signal is provided at 803. More specifically, the new data that was received at 801 is provided to the model of the example implementations. Accordingly, a prediction is generated as described above, and is provided as an output signal.
  • At 805, a strength of the updated signal is evaluated. For example but not by way of limitation, the output signal, which may be a prediction of the potential critical indicator, is compared with benchmark consensus signals. According to one example implementation, the updated signal may be a predicted revenue for a company, which is compared with a prediction generated by one or more analysts, or an analyst consensus, from available information. Based on the comparison between the updated signal and the benchmark signal, a signal strength vector is generated, which is indicative of a relative degree of closeness between the updated signal and the benchmark signal.
  • At 807, the strength of the updated signal is classified. More specifically, the signal strength vector generated at 805 is classified into a recommendation. For example but not by way of limitation, the recommendation may be a one-dimensional actionable recommendation, such as hold, buy or sell.
  • At 809, the classification associated with the strength of signal is used to generate any updates, which are subsequently provided and displayed to the user in the user experience. More specifically, the user interface, such as the web interface, a mobile interface, or a pushed alert may be updated with the results of the signal strength classification, as well as the watchlist update. Thus, according to an example implementation, the user may be provided with a change in the critical indicator type, as well as a change in the recommendation, depending on which of the pipelines has the strongest vector. Optionally, plural pipelines may be blended or weighted based on the signal strength, to provide the blended result, with the critical indicator being listed as the most heavily weighted indicator. According to another example implementation, plural indicators may all be listed, along with plural outputs of the results, followed by a recommendation based on a matrix operation or multiplexing of the strength vector, weighted or unweighted, associated with each of the pipelines for each of the signals for each of the companies.
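• Operations 805 through 809 could be sketched as follows, with the benchmark values, the ±5% thresholds, and the strongest-pipeline selection rule all being illustrative assumptions rather than the disclosed logic:

    def signal_strength(updated: float, benchmark: float) -> float:
        # Signed relative difference between the model signal and the benchmark
        # (e.g., analyst consensus); larger magnitude means a stronger signal.
        return (updated - benchmark) / abs(benchmark)

    def classify(strength: float) -> str:
        # One-dimensional actionable recommendation derived from the strength vector.
        if strength > 0.05:
            return "buy"
        if strength < -0.05:
            return "sell"
        return "hold"

    pipelines = {
        "revenue":    signal_strength(updated=10.6e9, benchmark=10.0e9),
        "same_store": signal_strength(updated=0.97, benchmark=1.00),
    }
    strongest = max(pipelines, key=lambda name: abs(pipelines[name]))
    print(strongest, classify(pipelines[strongest]))  # the indicator shown on the watchlist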
  • According to this example implementation, the user may be provided with real-time changes in critical indicator, performance and recommendation that are actionable. As explained below, the recommendations may also be provided in the detailed analysis of a company, or a chain of alerts and may be used for the recommendation or actual action automatically being taken.
• The above-disclosed hardware implementations may be used to process the watchlist of FIGS. 7-8, as would be understood by those skilled in the art. More specifically, for example, but not by way of limitation, external data fetching according to the example implementations described herein may be performed by copying data from an external third party (e.g., vendor), and storing the data in a cloud storage container. The data fetching process may be managed by a scheduling server, and/or a serverless compute service that executes operations to manage the external data storage and the associated compute resources. Further, the extraction, transformation and loading of data as described herein may be executed by a batch management processor or service. Batch computing is the execution of a series of executable instructions ("jobs") on one or more processors without manual intervention by a user, e.g., automatically. Input parameters may be pre-defined through scripts, command-line arguments, control files, or job control language. A batch job may be associated with completion of preceding jobs, or the availability of certain inputs. Thus, the sequencing and scheduling of multiple jobs is critical. Optionally, batch processing may be performed without interactive processing. For example, the batch management processor or service may permit a user to create a job queue and job definition, and then to execute the job definition and review the results. According to an example implementation, a batch cluster includes 256 CPUs, and an ETL-dedicated server having 64 cores and 312 GB of RAM. The number of running instances may be 1. The foregoing ETL infrastructure may also be applied to the process of insight extraction. Further, an API is provided for data access. For example but not by way of limitation, the REST API, which conforms to a REST style architecture and allows for interaction with RESTful resources, may be executed on a service. The service may include, but is not limited to, hardware such as 1 vCPU, 2 GB RAM, 10 GB SSD disk, and a minimum of two running instances. The API may be exposed to the Internet via an application load balancer, which is elastic and permits configuration and routing of an incoming end-user to applications based in the cloud, optionally pushing traffic across multiple targets in multiple availability zones. The caching layer may be provided by a fast content delivery network (CDN) service, which may securely deliver the data described herein with low latency and high transfer speeds. According to the example implementations, containers may be run without having to manage servers or clusters of instances, such that there is no need to provision, configure, or scale clusters on virtual machines to execute operations associated with containers.
  • Performance and Recommendation
  • As shown in FIG. 9, according to another example user experience 900, any of the companies shown in the foregoing user experience of FIG. 7 may be selected to provide a further summary. Here, the fourth row is selected, and further information associated with the forecast is provided. A recommendation 905 is provided, in this case "strong buy," based on the measured revenue relative to the forecast expectation.
  • Further details 903 are provided to the user, such as the relative impact of revenue on the stock price (e.g., correlation between revenue and stock price), and the prediction accuracy for the stock associated with the example implementations. Information on the forecast based on the analyst expectation is provided, in comparison to the forecast based on the real data measurement, as well as the score associated with the performance that is based on the real data measurement. Charts 907 are provided showing the revenue based on real data measurement, as compared with the expected revenues, which may be based on published information provided by the company. Additionally, information on the stock price is also provided.
  • FIG. 10 illustrates a process 1000 for generating an output that visualizes a difference between expected values and actual values according to the example implementations.
  • At 1001, measured sales are provided. For example but not by way of limitation, as disclosed above, the measured sales may include company sales that are calculated, such as in the second phase 303 shown in FIG. 3 and disclosed above.
  • At 1003, expected sales are provided. The expected sales may be based on analyst prediction, analyst consensus, industry publication, or other available information associated with the providing of expected sales information, as opposed to actual sales information.
  • At 1005, an aggregation operation is performed on the measured sales obtained at 1001. More specifically, the measured sales are aggregated, based on the time series, into intervals that are comparable with the interval associated with the expected sales signal. For example but not by way of limitation, if the expected sales signal has a known time interval such as daily, weekly, hourly, quarterly, etc., the measured sales are aggregated into comparable time periods.
  • At 1007, a scaling operation is performed. More specifically, because the measured sales provide information from a sample of the population, the measured sales need to be scaled. For example but not by way of limitation, the scaling may involve identifying the representative population associated with the sample, and extrapolating the aggregated sales of the sample to that representative population.
  • At 1009, the periods associated with the scaled, aggregated measured sales from 1001, 1005 and 1007 are aligned with the periods of the expected sales of 1003. Accordingly, the aggregated, scaled measured sales are joined with the expected sales, by time interval.
  • At 1011, a visualization operation is performed. Accordingly, an output may be provided to a user in the form of a bar plot 1013, a line plot 1015, or any other visualization technique 1017, as would be understood by those skilled in the art. Visualization techniques are not limited to a single approach, and different visualization techniques may be combined, appended, mixed or blended. Further, instead of a graphical visualization approach, a text output may be provided in the form of a narrative that simply describes the result.
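  • For example but not by way of limitation, the aggregation (1005), scaling (1007), alignment and joining (1009), and visualization (1011) operations may be sketched as follows using the pandas library. The column names, the weekly interval, the sample values and the scaling ratio are illustrative assumptions only, and the plotting step assumes the matplotlib library is available.

```python
# Illustrative sketch of steps 1005-1011: aggregate measured panel sales to the
# interval of the expected sales signal, scale to the population, join, and plot.
import pandas as pd

measured = pd.DataFrame(
    {"sales": [120.0, 95.0, 130.0]},
    index=pd.to_datetime(["2021-01-04", "2021-01-05", "2021-01-12"]),
)
expected = pd.DataFrame(
    {"expected_sales": [9_000.0, 9_500.0]},
    index=pd.to_datetime(["2021-01-04", "2021-01-11"]),  # weekly expected signal
)

# 1005: aggregate measured sales into the same (weekly) interval as the expected signal.
weekly = measured.resample("W-MON", label="left", closed="left").sum()

# 1007: scale the panel sample up to the representative population (ratio is assumed).
PANEL_TO_POPULATION = 70.0
weekly["scaled_sales"] = weekly["sales"] * PANEL_TO_POPULATION

# 1009: join the aggregated, scaled measured sales with the expected sales by interval.
joined = weekly.join(expected, how="inner")

# 1011/1013: visualize, e.g., as a bar plot.
joined[["scaled_sales", "expected_sales"]].plot(kind="bar")
```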
  • FIG. 11 illustrates an example graphical visualization associated with the example implementations described above. More specifically, at 1100, a chart is provided that displays sales revenue on a weekly basis over a 12-week period. At 1101, the aggregated information of the measured sales, which has been aligned to a weekly period, is displayed. Additionally, at 1103, the expected sales, provided on a weekly interval, are displayed. Accordingly, a user may be provided with a display that illustrates a difference between the measured sales associated with the example implementations herein, and the expected sales, as provided by analysts, analyst consensus, etc.
  • Returning to FIG. 9, the charts 907 include a revenue comparison. Thus, the user can visualize, and understand by looking at the detailed information at 903, that the measured revenue is about 6.2% higher than the forecast based on the expectation by the analysts. Thus, the example implementations provide a determination that the entity is overperforming as compared with the analyst expectation. As a result, the example implementations generate a recommendation of "strong buy" at 905, in the manner explained above.
  • The above-disclosed hardware implementations may be used to process the evaluation and recommendation operations associated with FIGS. 9-10, as would be understood by those skilled in the art. More specifically, the external data fetching, batch ETL processing, insight extraction, REST API, load balancing, caching and containerization arrangements described above with respect to FIGS. 7-8 may likewise be employed for these operations.
  • Alert Chain
  • As shown in FIG. 12, according to yet another user experience 1200 associated with the example implementations, alerts are provided to the user. The selected alerts are listed in rows, divided into the alerts specific to the user 1201 and an aggregated list of the most popular alerts 1213 across all of the users. As previously disclosed, a column is provided for the name and stock symbol of the company, as well as the type 1203 of information that is the basis for the alert, the condition 1205 that triggers the alert, and the frequency 1207 of the alert to be provided. The user is provided with a tool 1209 to set the alert to active or inactive, a tool 1211 to edit the alert, and a tool to delete the alert.
  • For example but not by way of limitation, as shown in the first row, the alert may be based on the performance being greater than 5%. As shown in the second row, plural alerts may be associated with a single company, in this case the price moving above 5% and the score having a value of "over performing". Similarly, a time-based alert may be provided, such as the user being provided with an alert 10 days before an earnings call, as shown with respect to the third row. Accordingly, the user is provided with a chain of alerts that may be triggered by one or more conditions associated with a company.
  • As also shown in FIG. 12, users may be able to view the most popular alerts 1213 that are being generated across the user base, as well as the number of users 1215 that are generating and using those alerts. Detailed information such as the type, condition and frequency of each of those popular alerts is provided. The user is provided with an option to add those alerts to their own user alert chain, as shown by the "+" symbol 1217.
  • FIG. 13 illustrates operations 1300 associated with the processing of the chain of alerts. For example, as explained above, a type, condition, performance, price and score can be processed as data, to determine whether to provide the user with an alert. Further, the user may view and consider the alerts being used by the overall community of users, to benefit from alerts generated by others. According to the example implementations, the user may subscribe to such a rule, and thus be alerted in response to changes in the signals described above.
  • At 1301, a signal update is performed. More specifically, as explained above with respect to FIG. 8 at 803, signal updating is performed. For example but not by way of limitation, an output of the watchlist may be provided, including but not limited to the signal strength vector and the signal strength classification.
  • At 1303, the output of 1301, such as the absolute values of the signal, or the relative values of the signal as compared with the previous signal update, is applied to the deterministic rules. For example but not by way of limitation, if revenue has increased by 2% as compared with the previous signal update, and the user has set a rule to provide an alert when the revenue has increased by 2% or more for a given company, the rule may be triggered. Similar or other deterministic rules, as disclosed above with respect to FIG. 12, are processed in real time as the updated information of 1301 is provided. Accordingly, the probabilistic output of the watchlist is applied to the deterministic rule base of the user, and an alert is either triggered or not triggered with each real-time update.
  • At 1305, a notification generation operation is performed. More specifically, based on the user's rule base, a custom notification payload is generated for the user. For example but not by way of limitation, if a user has determined that, for a revenue that has increased by 2% or more compared to the previous signal update, the user should receive an alert to sell a stock, such a notification payload is generated.
  • At 1307, the notification is distributed to the user. The distribution channel may be set to one or more modes, including but not limited to mobile application notification, email, short message service (SMS), or other communications means. Accordingly, a notification is pushed to a user in the desired manner of the user, containing the recommendation associated with the rule, which is triggered based on the real-time signal update provided to the rule base.
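  • For example but not by way of limitation, the application of a user's deterministic rule base to a real-time signal update (1301-1307) may be sketched as follows; the field names, tickers and notification mechanism are illustrative assumptions only.

```python
# Illustrative sketch: apply deterministic user rules to a signal update and
# generate notification payloads for any rules that are triggered.
def check_rules(update, rules):
    """update: the latest signal values for a company; rules: the user's rule base."""
    alerts = []
    for rule in rules:
        if rule["ticker"] == update["ticker"] and rule["condition"](update):
            alerts.append({"ticker": update["ticker"], "message": rule["message"]})
    return alerts

rules = [
    {"ticker": "XYZ",
     "condition": lambda u: u["revenue_change_pct"] >= 2.0,
     "message": "Revenue up 2% or more since the previous signal update"},
    {"ticker": "XYZ",
     "condition": lambda u: u.get("score") == "over performing",
     "message": "Score changed to over performing"},
]

update = {"ticker": "XYZ", "revenue_change_pct": 2.3, "score": "over performing"}
for alert in check_rules(update, rules):
    print("push notification:", alert)   # 1305-1307: generate and distribute the payload
```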
  • While some example implementations may provide a recommendation, in real time, based on the real-time data collection, price checking and calculation, the example implementations are not limited to recommendations. The foregoing example implementations may be integrated with approaches that provide for automatic trading, such that the user need not provide input for the execution of a decision.
  • FIG. 14 illustrates a process 1400 associated with decision execution according to the example implementations. More specifically, instead of the user receiving and reviewing the signal classification and signal strength vector to receive an alert, an automatic system is provided that applies artificial intelligence techniques such as machine learning in a neural network to convert the signal classification and signal strength vector into a decision signal that can be submitted for execution of the decision. For example but not by way of limitation, the signal may be converted to a trading order for a brokerage service.
  • At 1401, in accordance with the foregoing example implementations, measured data is received from a plurality of alternate data sources, and processed on a real-time, automatic basis to provide real-time updates for the critical indicators.
  • At 1403, the updates are input into a trading model. For example but not by way of limitation, the trading model may include an artificial intelligence approach such as machine learning in a neural network to provide trading decision-making. According to an example implementation, the trading model may be a neural network that is trained on historical stock prices and a history of signal updates associated with the measured signal according to the example implementations. The input to the trading model is the signal data, and the current state of the user account. For example, if the user has open or pending orders associated with a purchase or sale, the model takes this information into consideration. As an output, the trading model 1403 provides a decision or an action, instead of an alert or a recommendation as is done in other example implementations.
  • At 1405, an order is created. For example, the order may be created by submitting an automated request to a seller, including but not limited to a brokerage service, and optionally confirming the order with the user. For the user confirmation, this may optionally be provided by a notification distribution channel, as explained above with respect to FIG. 13.
  • At 1407, the system performs an operation to determine whether user review is required. If no user review is required, the order is executed at 1409. For example, the order created at 1405 may be executed with a seller, such as a brokerage service. Alternatively, if at 1407 it is determined that user review is required, a confirmation request is sent at 1411, as explained above, through a notification distribution channel. If the user confirms the order at 1411, the order is executed at 1409. If the user does not confirm the order at 1411, the order is considered to be rejected, and is canceled at 1413.
  • At 1415, an operation is performed to update the user portfolio, so as to indicate that the order has been executed. Optionally, the user may be provided with a report via the communication or distribution channels explained above, to confirm that the order was executed, to provide an update of the portfolio, and/or to remind the user of any pending or open orders.
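  • For example but not by way of limitation, the decision-execution flow of FIG. 14 may be sketched as follows. The decision function is a stand-in for the trained trading model at 1403, and the broker interface, thresholds and order fields are illustrative assumptions only.

```python
# Illustrative sketch of the flow 1403-1415: decide, optionally confirm with the
# user, execute the order with a brokerage service, and update the portfolio.
def decide(signal_update, account_state):
    """Stand-in for the trading model at 1403: returns an order or None."""
    if signal_update["signal_strength"] > 0.5 and not account_state["pending_orders"]:
        return {"side": "buy", "symbol": signal_update["symbol"], "qty": 10}
    return None

def execute_flow(signal_update, account_state, requires_review, confirm, broker):
    order = decide(signal_update, account_state)
    if order is None:
        return "no action"
    if requires_review and not confirm(order):   # 1407/1411: user confirmation
        return "rejected and canceled"           # 1413
    broker.submit(order)                         # 1409: execute with the seller
    account_state["portfolio"].append(order)     # 1415: update the user portfolio
    return "executed"

class StubBroker:
    def submit(self, order):
        print("order submitted:", order)

state = {"pending_orders": [], "portfolio": []}
result = execute_flow({"symbol": "XYZ", "signal_strength": 0.8}, state,
                      requires_review=True, confirm=lambda o: True, broker=StubBroker())
```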
  • The above-disclosed hardware implementations may be used to process the alert chain disclosed in FIGS. 12-14, as would be understood by those skilled in the art. More specifically, the external data fetching, batch ETL processing, insight extraction, REST API, load balancing, caching and containerization arrangements described above with respect to FIGS. 7-8 may likewise be employed for these operations.
  • The foregoing example implementation of the automatic trading aspects is performed continuously and in real time. Thus, plural orders may be simultaneously executed, or may be in different stages of execution. To the extent that confirmation by the user is requested, the user may decide to execute some orders, reject other orders, and keep still other orders pending. To the extent that confirmation is not required by the user, automatic execution may continue without providing the user with any review prior to the execution, other than a notification of the pending order or the current portfolio immediately after the execution. According to another example implementation, even when review is not generally required by the user, review may nonetheless be required under certain conditions, for example when the size of an order exceeds a prescribed individual or periodic purchase amount.
  • The foregoing example implementations automatically perform a determination of the critical indicator type, provide a calculation of performance, and generate a score, for each of the listed companies. For the foregoing example implementation of the user experience, determination of the leading indicator, and comparison of performance, is provided on a real-time basis. As explained above, a rule-based, deterministic approach is blended with a probabilistic approach, such as the artificial intelligence neural network approaches described herein. The architecture may be executed on a big data server, such as in a server farm or in the cloud. Accordingly, the signals are processed as they are received, in real time, in contrast to related art approaches, which perform batch processing. Further, cloud computing may also be used. To the extent that the necessary processing power is available, the signal processing associated with the example implementations may be performed on a standalone machine. Because the present example implementations are able to provide real-time information, a user may be able to receive the recommendation and execute a decision in a timely manner that provides for a significant impact on the return on investment. In contrast, related art approaches do not take into consideration the processing of the real-time measured data, and instead use only expected data information.
  • While the foregoing example implementations are shown on a screen of a display such as a laptop or desktop computer, the example implementations are not limited thereto in terms of the user experience. For example but not by way of limitation, a user experience associated with a mobile device may be substituted for, or used in combination with, the foregoing example user experiences. FIGS. 15-16 illustrate such example implementations associated with a mobile device.
  • The foregoing example implementations may also be used to provide users with a discount for goods and/or services associated with a company. The discount may take the form of a rebate, a reward, a price reduction or other form of benefit or compensation to a user. For example but not by way of limitation, the discount may be provided to a user based on the amount or number of transactions by the user on account of the company shown in the example user experience, for which the example implementations are performing the foregoing operations on the foregoing structures.
  • Optionally, according to the example implementations, a feedback loop may be provided. More specifically, a user may provide input on the outcomes and recommendations of the model, which may be used to calibrate the model. For example, but not by way of limitation, if, based on the critical parameter or indicator, combined with a performance, a recommendation has been set with which the user disagrees, the user may provide a suggestion for feedback into the system. The user may suggest that the critical indicator is not appropriate, and instead suggest another critical indicator, as well as being able to suggest that the recommendation is not a desirable recommendation. According to one example implementation, a user may indicate that an appropriate critical indicator of social media or an online service may not be user engagement or number of users alone, but may instead be related to online advertising, clicks or some other parameter. This feedback may be used by the system to retrain or recalibrate the forecasting tools, so as to adjust according to consumer preferences and demands.
  • Optionally, an automated trading tool may be provided that implements certain recommendations under certain circumstances. For example, but not by way of limitation, for items on the watchlist that are also on the alert chain, the instruction from the user in the user preferences may indicate that a particular stock should be bought or sold when one or more of the conditions in the alert chain have been met. Optionally, the user may create a completely different and separate alert chain for automatic trading, as well as for providing alerts. Thus, sales and purchases may be executed automatically on behalf of the user, avoiding any delay in acting on such an instruction. Such a system may be valuable during certain use cases. For example, but not by way of limitation, the automated trading tool could be valuable when the user is traveling or on vacation, or during peak periods of activity or intense financial news, such that the user cannot quickly or in a timely manner act on alerts or recommendations, or simply for user convenience.
  • Also optionally, in addition to the company financial data being used as explained above, other publicly available data associated with a company may also be used. For example, but not by way of limitation, public-facing announcements, social media posts, public presentations, or other information may be sensed or detected for leaders of a company associated with one or more social media accounts, industry events, news releases or publications, or other information. This information may be combined into the sentiment analysis.
  • The foregoing example implementations may have various advantages and/or benefits. For example, but not by way of limitation, the example implementations may provide a way of determining performance automatically and in real time. Related art approaches may conduct manual research and generate a performance indication manually over a period of many hours or many days, and an investment advisor may manually determine a score. However, the related art approaches do not have any way of taking disparate data from different data sources, performing operations on the data such as normalization, de-duplication, classification and optionally others, applying the refined data to a forecasting tool, and generating a forecast or recommendation based on a determination that a particular indicator is critical, all automatically and all in real time.
  • This distinction is crucial, because the information and recommendations must be provided in real time in order for a user to be able to effectively make decisions and execute the decisions in a timely manner, before the forecast has again changed due to the passage of time, the dispersion of information, new information or other events. The real-time, continuous determination, recalculation and re-forecasting provides information and recommendations that are precise, accurate and available for decision support to the user.
  • Data Cleaning and Pipelining
  • According to the foregoing example implementations, users are sourced to create a panel. As provided herein, a panel may be selected according to the following selection process.
  • An optimal panel to be used for forecast may be generated. More specifically, the selection process includes performing a filter operation on the candidate users for the panel. For example, but not by way of limitation, the filter operation may be performed by the application of a filter, so as to generate a panel meeting one or more criteria. According to the present example implementation, the criteria may include, but are not limited to, the panel including users that are representative of a population (e.g., US population) for their geolocation, with a substantially stable number of purchases from a starting date to an ending date (e.g., 2011 to the current date), with a consistent number of transactions. Optionally, outliers and duplicates may also be removed.
  • As shown in FIG. 17, a selection process 1700 may be performed. Initially, the pool of candidate users is a fragmented panel consisting of 60 million users at 1701. After a first filter process performs the filtering operation such that the pool of candidate users is representative of the population for their geolocation, the filtered pool is narrowed to 15 million users at 1703. Subsequently, another filter operation is performed to confirm a stable number of purchases over a time interval, further narrowing the pool of candidate users to 4 million users at 1705. At 1707, outliers and duplicates may be removed, and a filter operation may be performed for a consistent number of transactions, to produce an optimal panel of 1.5 million users.
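  • For example but not by way of limitation, the filter operations of the selection process 1700 may be sketched as follows; the column names and thresholds are illustrative assumptions, and actual filters would be considerably more involved.

```python
# Illustrative sketch of the panel selection filters of FIG. 17.
import pandas as pd

def select_panel(users: pd.DataFrame) -> pd.DataFrame:
    # 1703: keep users representative of the population for their geolocation.
    representative = users[users["geo_weight_ok"]]
    # 1705: keep users with a substantially stable number of purchases over time.
    stable = representative[representative["purchase_count_cv"] < 0.5]
    # 1707: remove duplicates/outliers and require a consistent number of transactions.
    deduped = stable.drop_duplicates(subset="user_id")
    return deduped[deduped["monthly_txn_count"].between(5, 500)]

users = pd.DataFrame({
    "user_id": [1, 1, 2, 3],
    "geo_weight_ok": [True, True, True, False],
    "purchase_count_cv": [0.2, 0.2, 0.8, 0.1],
    "monthly_txn_count": [40, 40, 12, 300],
})
panel = select_panel(users)   # only user 1 survives all three filters
```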
  • FIG. 18 illustrates a data pipeline according to an example implementation. At 1801, data normalization is performed. Data that is received from different institutions may have different structures. In order to properly use the data from the different institutions, the data must be normalized into a fixed format. More specifically, the data is transformed from a text format into a distributed database that is designed to operate on top of the transactions.
  • At 1803, a data cleaning operation is performed. More specifically, the data that is not required or desired for the data processing pipeline 1800 is removed. For example but not by way of limitation, duplicated transactions, duplicated accounts and duplicated users are identified and removed. Further, incomplete data, such as incomplete transactions with missing data, are also removed from the data. Accordingly, and as explained above with respect to panel selection, the base of users that follows the selection process and filter operation is maintained in the user database, and the outliers and duplicates are removed.
  • After the data cleaning 1803 has been completed, a classification operation 1805 is performed. In this operation, each of the transactions in the database is classified. The classification is performed by analyzing the description of the transaction, and associating a correct merchant name with the description. In this operation, the quality of the association directly and critically influences the quality of the correlations and predictions that will be described further below. More specifically, to reach a desired accuracy, such as close to 100%, automated machine learning algorithms are combined with manual human controls.
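  • For example but not by way of limitation, the association of a merchant name with a transaction description may be sketched as follows, with a simple pattern match standing in for the combination of machine learning algorithms and manual controls described above; the merchant names and patterns are hypothetical.

```python
# Illustrative sketch of classification 1805: associate a company/merchant with
# each transaction description; unmatched descriptions would be routed to the
# machine learning models and/or manual review.
import re

MERCHANT_PATTERNS = {
    "Acme Coffee Co.": re.compile(r"\bACME\s*COFFEE\b", re.IGNORECASE),
    "Example Air":     re.compile(r"\bEXAMPLE\s*AIR\b", re.IGNORECASE),
}

def classify_transaction(description: str) -> str:
    for merchant, pattern in MERCHANT_PATTERNS.items():
        if pattern.search(description):
            return merchant
    return "UNCLASSIFIED"

print(classify_transaction("POS DEBIT ACME COFFEE #1123 NEW YORK NY"))
```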
  • As an example of the foregoing classification operation 1805, an example extract is provided. According to one example implementation, an average monthly volume of processing may be more than 200 million transactions; each of those transactions must be associated with a correct company.
  • FIG. 19 illustrates the example extract. As can be seen in the example extract 1900, the name of the merchant is found in each of the transactions.
  • Once the cleaned data has been classified as explained above, a modeling operation 1807 is performed. More specifically, the modeling operation 1807 generates forecasts. As inputs, the modeling operation 1807 applies data output from the classification 1805, as well as third-party input, for example Bloomberg data 1809. In the modeling operation 1807, many forecasts are combined to obtain an optimal result. The modeling operation 1807 is specialized to a category associated with the company. Further, the modeling operation 1807 compensates for various bias factors, such as seasonality and the like.
  • FIG. 20 illustrates an example implementation 2000 associated with the modeling operation 1807. More specifically, a plurality of sets of panels 2001, subject to the foregoing operations of the data pipeline 1800, are provided to the plurality of corresponding sets of forecasts 2003; outputs of the sets of forecasts 2003 are provided to plural assemblers 2005, to generate a final forecast 2007.
  • In the modeling operation 1807, different categories of companies may require different algorithms to generate the forecasts. For example, but not by way of limitation, the algorithms must incorporate the different behavior of consumers for the different structures of revenue within a company. Some of the categories may include, but are not limited to, fully owned restaurants, franchise restaurants, supermarket chains, insurance, and other companies.
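  • For example but not by way of limitation, the structure of FIG. 20, in which per-panel forecasts are combined by assemblers into a final forecast, may be sketched as follows. The per-panel forecast function and the fixed assembler weights are simplified stand-ins for the category-specific models and assemblers described above.

```python
# Illustrative sketch of panels (2001) -> forecasts (2003) -> assembler (2005) ->
# final forecast (2007); values and weights are hypothetical.
def panel_forecast(panel_sales, growth=1.02):
    return sum(panel_sales) * growth   # stand-in for a category-specific model

def assemble(forecasts, weights):
    total = sum(weights)
    return sum(f * w for f, w in zip(forecasts, weights)) / total

panels = [[100.0, 110.0, 120.0], [95.0, 105.0, 118.0], [102.0, 111.0, 117.0]]
forecasts = [panel_forecast(p) for p in panels]                 # 2003
final_forecast = assemble(forecasts, weights=[0.5, 0.3, 0.2])   # 2005 -> 2007
```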
  • The modeling operation 1807 also provides for bias correction. For example but not by way of limitation, the panel data may include historical data of several years, from thousands of users that shared their data in exchange for a token or reward. Such an approach to obtaining the data allowed for a more complete understanding of the bias introduced by panels that are not randomized, and for algorithms for the correction of the bias.
  • FIG. 21 illustrates a user experience 2100 associated with obtaining the information. At 2101, a user is provided with an input screen to input data and enter a referral code, as well as a number of points that may be associated with completing the survey. At 2103, the completion of the survey by the user results in the number of points being increased, and an option for selection of a bank where the points may be deposited, as well as a privacy statement.
  • To perform bias correction, historical data points are analyzed, to determine how the bias behaves across time. According to one example implementation, 9 algorithms were created for adjusting the bias in the data. Then, those 9 algorithms were combined with three other algorithms that are used for tickers, and that are less impacted by the bias.
  • At 1811, an output of the modeling is provided to make predictions. More specifically, the data is aggregated and inserted into a database. The data in the database can be accessed and used by other roles, such as traders, and further retrieved for validation, back testing, etc.
  • Additionally, the data pipeline 1800 includes anomaly detection 1813-1819. More specifically, an output of each of the elements of the data pipeline is subject to anomaly detection, to identify anomalies that may compromise the final predictions. As an example of the anomaly detection 1813-1819, dataflow anomaly detection in the pipeline chain may examine technical anomalies as well as data anomalies; a simplified sketch of such a check is provided after the list below.
  • For example, but not by way of limitation, anomalies in the behavior of a company for which forecasting is being performed may be analyzed and provided, including, but not limited to the following:
  • 1. Acquisitions or divestitures, sourced from third-party news sources
  • 2. Changes in requirements that would result in different company behavior, such as accounting standards, sourced from, for example, but not by way of limitation, SEC (Securities and Exchange Commission) files.
  • 3. Change in a ratio between franchisees and stores owned individually
  • 4. A different duration of a quarter, such as changing from 90 days to 97 days
  • 5. Special sales promotions or other promotional activities for weeks, which may vary from quarter to quarter
  • 6. Releases of new products by companies.
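  • For example but not by way of limitation, a dataflow anomaly check on an intermediate pipeline output may be sketched as follows; the z-score heuristic, threshold and sample volumes are illustrative assumptions and do not represent the actual detection logic.

```python
# Illustrative sketch: flag values of a pipeline output (e.g., daily classified
# transaction volume) that deviate strongly from the mean.
import statistics

def detect_anomalies(values, threshold=2.0):
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values) or 1.0
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

daily_volumes = [6.4e6, 6.6e6, 6.5e6, 6.7e6, 0.9e6, 6.5e6]  # day 4 looks anomalous
print(detect_anomalies(daily_volumes))  # -> [4]
```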
  • According to an example implementation, representativeness is considered. More specifically, according to a specific example, in 2019 the US population was about 330 million individuals, with the average family size being 3.14 members, such that there are roughly 105 million families, with some studies indicating the number to be as high as 128 million families. In the example implementations, the ratio between the number of US households and the best panel is roughly 70 to 85. Further, the ratio between the declared revenues of companies and the total amount of purchases in the panel is between about 70 and 90. This ratio is consistent with the expected value, and is consistent across different companies, such that the proportion of consumers is properly maintained in the panels according to the example implementations.
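  • For example but not by way of limitation, the representativeness ratio described above may be verified with the following worked computation, using the approximate figures from the text.

```python
# Worked example: ~330 million individuals / 3.14 per family ~= 105 million households;
# dividing by a 1.5 million user panel gives a ratio of roughly 70, within the stated 70-85 range.
us_population = 330_000_000
avg_family_size = 3.14
households = us_population / avg_family_size
panel_size = 1_500_000
ratio = households / panel_size
print(round(households / 1e6), round(ratio))   # -> 105 70
```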
  • FIGS. 22A and 22B illustrate various examples of comparison with data. FIG. 22A includes franchising examples, and FIG. 22B includes examples of companies with a history of acquisitions, or anomalies.
  • FIG. 23 illustrates a comparison between results according to the example implementations and the related art approaches. As can be seen in the column indicated as “DF” for the example implementations, and “1010 Data” for the related art approaches, a substantial difference in the forecasting results shows substantially improved performance for the example implementations.
  • FIG. 24 illustrates the technical basis according to statistical methods for the determination of the panel size, and the measurement of the real statistical error.
  • From the central limit theorem, the theoretical % error for the forecast is given by
  • E% ∝ s / (x̄ · √N)
  • That is, the percentage error in the forecast is proportional to the sample standard deviation of the purchases, and inversely proportional to the square root of the number of purchases and to the average purchase amount.
  • That means that the panel size is adequate to generate accurate predictions.
  • The real statistical error is even smaller, because we use the previously released revenues to correct and adjust the data.
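  • For example but not by way of limitation, the error bound above may be evaluated with the following worked computation; the number of purchases, the sample standard deviation and the average purchase amount are illustrative assumptions, not measured figures.

```python
# Worked example of the theoretical percentage error E% ~ s / (x_bar * sqrt(N)).
import math

N = 200_000_000   # number of purchases in the panel for a period
s = 45.0          # sample standard deviation of purchase amounts (USD)
x_bar = 30.0      # average purchase amount (USD)

error_pct = s / (x_bar * math.sqrt(N)) * 100
print(f"theoretical error ~ {error_pct:.4f}%")   # a small fraction of a percent
```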
  • The above-disclosed hardware implementations may be used to process the operations of FIGS. 18-24, as would be understood by those skilled in the art. More specifically, the external data fetching, batch ETL processing, insight extraction, REST API, load balancing, caching and containerization arrangements described above with respect to FIGS. 7-8 may likewise be employed for these operations.
  • Example Environment
  • FIG. 25 shows an example environment suitable for some example implementations. Environment 2500 includes devices 2510-2555, and each is communicatively connected to at least one other device via, for example, network 2560 (e.g., by wired and/or wireless connections). Some devices may be communicatively connected to one or more storage devices 2540 and 2545.
  • An example of one or more of devices 2510-2555 may be computing device 2605 described below with respect to FIG. 26. Devices 2510-2555 may include, but are not limited to, a computer 2510 (e.g., a laptop computing device) having a monitor, a mobile device 2515 (e.g., smartphone or tablet), a television 2520, a device associated with a vehicle 2525, a server computer 2530, computing devices 2535 and 2550, storage devices 2540 and 2545, and a smart watch or other smart device 2555.
  • In some implementations, devices 2510-2525 and 2555 may be considered user devices associated with the users of the enterprise. Devices 2530-2550 may be devices associated with service providers (e.g., used by the external host to provide services as described above and with respect to the collecting and storing data).
  • The above-disclosed hardware implementations may be used in the environment of FIG. 25, as would be understood by those skilled in the art. More specifically, the external data fetching, batch ETL processing, insight extraction, REST API, load balancing, caching and containerization arrangements described above with respect to FIGS. 7-8 may likewise be employed in this environment.
  • Example Computing Environment
  • FIG. 26 shows an example computing environment with an example computing device suitable for implementing at least one example embodiment. Computing device 2605 in computing environment 2600 can include one or more processing units, cores, or processors 2610, memory 2615 (e.g., RAM, ROM, and/or the like), internal storage 2620 (e.g., magnetic, optical, solid state storage, and/or organic), and I/O interface 2625, all of which can be coupled on a communication mechanism or bus 2630 for communicating information. Processors 2610 can be general purpose processors (CPUs) and/or special purpose processors (e.g., digital signal processors (DSPs), graphics processing units (GPUs), and others).
  • In some example embodiments, computing environment 2600 may include one or more devices used as analog-to-digital converters, digital-to-analog converters, and/or radio frequency handlers.
  • Computing device 2605 can be communicatively coupled to input/user interface 2635 and output device/interface 2640. Either one or both of input/user interface 2635 and output device/interface 2640 can be a wired or wireless interface and can be detachable. Input/user interface 2635 may include any device, component, sensor, or interface, physical or virtual, which can be used to provide input (e.g., keyboard, a pointing/cursor control, microphone, camera, Braille input, motion sensor, optical reader, and/or the like). Output device/interface 2640 may include a display, monitor, printer, speaker, Braille output, or the like. In some example embodiments, input/user interface 2635 and output device/interface 2640 can be embedded with or physically coupled to computing device 2605 (e.g., a mobile computing device with buttons or touch-screen input/user interface and an output or printing display, or a television).
  • Computing device 2605 can be communicatively coupled to external storage 2645 and network 2650 for communicating with any number of networked components, devices, and systems, including one or more computing devices of the same or different configuration. Computing device 2605 or any connected computing device can be functioning as, providing services of, or referred to as a server, client, thin server, general machine, special-purpose machine, or another label.
  • I/O interface 2625 can include, but is not limited to, wired and/or wireless interfaces using any communication or I/O protocols or standards (e.g., Ethernet, 802.11x, Universal Serial Bus, WiMax, modem, a cellular network protocol, and the like) for communicating information to and/or from at least all the connected components, devices, and network in computing environment 2600. Network 2650 can be any network or combination of networks (e.g., the Internet, local area network, wide area network, a telephonic network, a cellular network, satellite network, and the like).
  • Computing device 2605 can use and/or communicate using computer-usable or computer-readable media, including transitory media and non-transitory media. Transitory media include transmission media (e.g., metal cables, fiber optics), signals, carrier waves, and the like. Non-transitory media include magnetic media (e.g., disks and tapes), optical media (e.g., CD ROM, digital video disks, Blu-ray disks), solid state media (e.g., RAM, ROM, flash memory, solid-state storage), and other non-volatile storage or memory.
  • Computing device 2605 can be used to implement techniques, methods, applications, processes, or computer-executable instructions to implement at least one embodiment (e.g., a described embodiment). Computer-executable instructions can be retrieved from transitory media, and stored on and retrieved from non-transitory media. The executable instructions can be originated from one or more of any programming, scripting, and machine languages (e.g., C, C++, C#, Java, Visual Basic, Python, Perl, JavaScript, and others).
  • Processor(s) 2610 can execute under any operating system (OS) (not shown), in a native or virtual environment. To implement a described embodiment, one or more applications can be deployed that include logic unit 2655, application programming interface (API) unit 2660, input unit 2665, output unit 2670, service processing unit 2690, and inter-unit communication mechanism 2695 for the different units to communicate with each other, with the OS, and with other applications (not shown). For example, alternate data processing unit 2675, tagging unit 2680, and modeling/forecasting unit 2685 may implement one or more processes described above. The described units and elements can be varied in design, function, configuration, or implementation and are not limited to the descriptions provided.
  • In some example embodiments, when information or an execution instruction is received by API unit 2660, it may be communicated to one or more other units (e.g., logic unit 2655, input unit 2665, output unit 2670, service processing unit 2690). For example, input unit 2665 may use API unit 2660 to connect with other data sources so that the service processing unit 2690 can process the information. Service processing unit 2690 performs the filtering of panelists, the filtering and cleaning/normalizing of data, and generation of the results, as explained above.
  • In some examples, logic unit 2655 may be configured to control the information flow among the units and direct the services provided by API unit 2660, input unit 2665, output unit 2670, alternate data processing unit 2675, tagging unit 2680, and modeling/forecasting unit 2685 in order to implement an embodiment described above. For example, the flow of one or more processes or implementations may be controlled by logic unit 2655 alone or in conjunction with API unit 2660.
  • Although a few example implementations have been shown and described, these example implementations are provided to convey the subject matter described herein to people who are familiar with this field. It should be understood that the subject matter described herein may be implemented in various forms without being limited to the described example implementations. The subject matter described herein can be practiced without those specifically defined or described matters or with other or different elements or matters not described. It will be appreciated by those familiar with this field that changes may be made in these example implementations without departing from the subject matter described herein as defined in the appended claims and their equivalents.

Claims (22)

1. A system for processing data to generate an output, the system comprising:
memory that stores one or more software modules; and
at least one hardware processor that executes the one or more software modules to receive data from a plurality of data providers, wherein the data comprise a plurality of consumer transactions,
standardize the data received from the plurality of data providers into a common format,
for each of the plurality of consumer transactions in the data, automatically classify that consumer transaction into a brand using first artificial intelligence that employs a neural network trained using back propagation, and tag that consumer transaction with the brand into which that consumer transaction was classified,
receive published information for at least one company associated with one or more brands with which the plurality of consumer transactions have been tagged, and
generate a forecast of performance of the at least one company based on the published information and the plurality of consumer transactions that have been tagged with the one or more brands associated with the at least one company using second artificial intelligence that employs a neural network, and
generate a final output based on the forecast of performance, wherein the final output comprises a recommendation to buy, hold, or sell a stock of the at least one company.
2.-3. (canceled)
4. (canceled)
5. The system of claim 1, wherein a rule-based or other deterministic approach is used to extract features used by each neural network.
6. The system of claim 1, wherein the at least one hardware processor executes the one or more software modules to, for each of the plurality of data providers, apply one of a plurality of adapters to the data received from that data provider to normalize, deduplicate, and classify the data received from that data provider.
7.-11. (canceled)
12. The system of claim 1, wherein the plurality of consumer transactions comprise purchase transactions.
13. The system of claim 12, wherein the purchase transactions comprise a time series of one or both of debit card transactions and credit card transactions.
14. The system of claim 1, wherein the plurality of consumer transactions comprises consumer engagements with an application.
15. The system of claim 1, wherein the final output comprises an order to buy or sell a stock of the at least one company.
16. The system of claim 1, wherein the final output comprises a prediction of a future stock price of the at least one company.
17. The system of claim 1, wherein the final output comprises a prediction of a value of at least one metric of the at least one company.
18. The system of claim 17, wherein the at least one metric comprises revenue.
19. The system of claim 17, wherein the at least one metric comprises a number of users.
20. The system of claim 17, wherein the at least one metric comprises a time spent with an application.
21. The system of claim 17, wherein the at least one metric comprises a number of store visits.
22. The system of claim 1, wherein the final output comprises an indication of whether the at least one company is overperforming, underperforming, or neutral.
23. The system of claim 1, wherein the final output comprises a signal strength vector derived based on a comparison of the forecast of performance to a benchmark.
24. The system of claim 1, wherein generating the forecast of performance comprises:
generating at least one panel based on a subset of the plurality of consumer transactions that satisfies one or more criteria; and
applying the neural network of the second artificial intelligence to the at least one panel.
25. The system of claim 1, wherein generating the forecast of performance comprises:
generating a plurality of panels, wherein each of the plurality of panels is based on a subset of the plurality of consumer transactions that satisfies one or more criteria;
using the second artificial intelligence to generate a forecast of an indicator of performance for each of the plurality of panels; and
assembling the forecasts of the indicators of performance for the plurality of panels into the forecast of performance based on weightings associated with the indicators of performance.
26. A method comprising using at least one hardware processor to:
receive data from a plurality of data providers, wherein the data comprise a plurality of consumer transactions;
standardize the data received from the plurality of data providers into a common format;
for each of the plurality of consumer transactions in the data, automatically classify that consumer transaction into a brand using first artificial intelligence that employs a neural network trained using back propagation, and tag that consumer transaction with the brand into which that consumer transaction was classified;
receive published information for at least one company associated with one or more brands with which the plurality of consumer transactions have been tagged; and
generate a forecast of performance of the at least one company based on the published information and the plurality of consumer transactions that have been tagged with the one or more brands associated with the at least one company using second artificial intelligence that employs a neural network; and
generate a final output based on the forecast of performance, wherein the final output comprises a recommendation to buy, hold, or sell a stock of the at least one company.
27. A non-transitory computer-readable medium having instructions stored thereon, wherein the instructions, when executed by a processor, cause the processor to:
receive data from a plurality of data providers, wherein the data comprise a plurality of consumer transactions;
standardize the data received from the plurality of data providers into a common format;
for each of the plurality of consumer transactions in the data, automatically classify that consumer transaction into a brand using first artificial intelligence that employs a neural network trained using back propagation, and tag that consumer transaction with the brand into which that consumer transaction was classified;
receive published information for at least one company associated with one or more brands with which the plurality of consumer transactions have been tagged; and
generate a forecast of performance of the at least one company based on the published information and the plurality of consumer transactions that have been tagged with the one or more brands associated with the at least one company using second artificial intelligence that employs a neural network; and
generate a final output based on the forecast of performance, wherein the final output comprises a recommendation to buy, hold, or sell a stock of the at least one company.
US17/313,958 2020-05-07 2021-05-06 Architecture for data processing and user experience to provide decision support Abandoned US20210350426A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/313,958 US20210350426A1 (en) 2020-05-07 2021-05-06 Architecture for data processing and user experience to provide decision support
US17/726,357 US20230070176A1 (en) 2020-05-07 2022-04-21 Architecture for data processing and user experience to provide decision support

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US202063021550P 2020-05-07 2020-05-07
US202163171967P 2021-04-07 2021-04-07
US17/313,958 US20210350426A1 (en) 2020-05-07 2021-05-06 Architecture for data processing and user experience to provide decision support

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/726,357 Continuation US20230070176A1 (en) 2020-05-07 2022-04-21 Architecture for data processing and user experience to provide decision support

Publications (1)

Publication Number Publication Date
US20210350426A1 true US20210350426A1 (en) 2021-11-11

Family

ID=78412860

Family Applications (8)

Application Number Title Priority Date Filing Date
US17/313,958 Abandoned US20210350426A1 (en) 2020-05-07 2021-05-06 Architecture for data processing and user experience to provide decision support
US17/315,122 Active US11205186B2 (en) 2020-05-07 2021-05-07 Artificial intelligence for automated stock orders based on standardized data and company financial data
US17/350,985 Active US11392858B2 (en) 2020-05-07 2021-06-17 Method and system of generating a chain of alerts based on a plurality of critical indicators and auto-executing stock orders
US17/351,021 Abandoned US20210350281A1 (en) 2020-05-07 2021-06-17 Method and system for applying a predictive model to generate a watchlist
US17/350,660 Active US11416779B2 (en) 2020-05-07 2021-06-17 Processing data inputs from alternative sources using a neural network to generate a predictive panel model for user stock recommendation transactions
US17/525,675 Pending US20220121992A1 (en) 2020-05-07 2021-11-12 Artificial intelligence for automated stock orders based on standardized data and company financial data
US17/726,357 Pending US20230070176A1 (en) 2020-05-07 2022-04-21 Architecture for data processing and user experience to provide decision support
US17/873,024 Pending US20230237372A1 (en) 2020-05-07 2022-07-25 Processing data inputs from alternative sources to generate a predictive signal

Family Applications After (7)

Application Number Title Priority Date Filing Date
US17/315,122 Active US11205186B2 (en) 2020-05-07 2021-05-07 Artificial intelligence for automated stock orders based on standardized data and company financial data
US17/350,985 Active US11392858B2 (en) 2020-05-07 2021-06-17 Method and system of generating a chain of alerts based on a plurality of critical indicators and auto-executing stock orders
US17/351,021 Abandoned US20210350281A1 (en) 2020-05-07 2021-06-17 Method and system for applying a predictive model to generate a watchlist
US17/350,660 Active US11416779B2 (en) 2020-05-07 2021-06-17 Processing data inputs from alternative sources using a neural network to generate a predictive panel model for user stock recommendation transactions
US17/525,675 Pending US20220121992A1 (en) 2020-05-07 2021-11-12 Artificial intelligence for automated stock orders based on standardized data and company financial data
US17/726,357 Pending US20230070176A1 (en) 2020-05-07 2022-04-21 Architecture for data processing and user experience to provide decision support
US17/873,024 Pending US20230237372A1 (en) 2020-05-07 2022-07-25 Processing data inputs from alternative sources to generate a predictive signal

Country Status (1)

Country Link
US (8) US20210350426A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220121992A1 (en) * 2020-05-07 2022-04-21 Nowcasting.ai, Inc. Artificial intelligence for automated stock orders based on standardized data and company financial data
US20230237044A1 (en) * 2022-01-24 2023-07-27 Dell Products L.P. Evaluation framework for anomaly detection using aggregated time-series signals
US11893008B1 (en) * 2022-07-14 2024-02-06 Fractal Analytics Private Limited System and method for automated data harmonization

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4268172A1 (en) * 2020-11-24 2023-11-01 VFD Saas Technology, Ltd. Artificial intelligence financial analysis and reporting platform
USD993273S1 (en) * 2021-08-24 2023-07-25 Nowcasting.ai, Inc. Display screen or portion thereof with graphical user interface for revenue details
USD993274S1 (en) * 2021-08-24 2023-07-25 Nowcasting.ai, Inc. Display screen or portion thereof with graphical user interface for alerts
USD993970S1 (en) * 2021-08-24 2023-08-01 Nowcasting.ai, Inc. Display screen or portion thereof with graphical user interface for a watchlist
US20230120747A1 (en) * 2021-10-20 2023-04-20 EMC IP Holding Company LLC Grey market orders detection
US20240046144A1 (en) * 2022-08-02 2024-02-08 Thoughtspot, Inc. Insight Mining Using Machine Learning

Family Cites Families (165)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5313560A (en) * 1990-05-11 1994-05-17 Hitachi, Ltd. Method for determining a supplemental transaction changing a decided transaction to satisfy a target
JPH04264957A (en) 1991-02-20 1992-09-21 Toshiba Corp Security sales decision making supporting device
CA2119921C (en) * 1994-03-23 2009-09-29 Sydney H. Belzberg Computerized stock exchange trading system
US5761442A (en) 1994-08-31 1998-06-02 Advanced Investment Technology, Inc. Predictive neural network means and method for selecting a portfolio of securities wherein each network has been trained using data relating to a corresponding security
US20080071588A1 (en) 1997-12-10 2008-03-20 Eder Jeff S Method of and system for analyzing, modeling and valuing elements of a business enterprise
US7539637B2 (en) * 1998-04-24 2009-05-26 Starmine Corporation Security analyst estimates performance viewing system and method
US6219694B1 (en) * 1998-05-29 2001-04-17 Research In Motion Limited System and method for pushing information from a host system to a mobile data communication device having a shared electronic address
US6249768B1 (en) 1998-10-29 2001-06-19 International Business Machines Corporation Strategic capability networks
US7571139B1 (en) 1999-02-19 2009-08-04 Giordano Joseph A System and method for processing financial transactions
US7818232B1 (en) 1999-02-23 2010-10-19 Microsoft Corporation System and method for providing automated investment alerts from multiple data sources
IL144999A0 (en) * 1999-02-24 2002-06-30 Cha Min Ho Automatic ordering method and system for trading of stock, bond, item, future index, option, index, current and so on
US6941287B1 (en) 1999-04-30 2005-09-06 E. I. Du Pont De Nemours And Company Distributed hierarchical evolutionary modeling and visualization of empirical data
US7966234B1 (en) * 1999-05-17 2011-06-21 Jpmorgan Chase Bank, N.A. Structured finance performance analytics system
AUPQ059399A0 (en) 1999-05-27 1999-06-17 Jupiter International (Australia) Pty Ltd Method and data process system for analysing and timing buy/sell decisions for a tradeable asset investment or security
US20030055768A1 (en) * 1999-07-02 2003-03-20 Anaya Ana Gabriela Alert delivery and delivery performance in a monitoring system
US20030040955A1 (en) * 1999-07-02 2003-02-27 The Nasdaq Stock Market, Inc., A Delaware Corporation Market monitoring architecture for detecting alert conditions
AU6118800A (en) 1999-07-23 2001-02-13 Netfolio, Inc. System and method for selecting and purchasing stocks via a global computer network
US6418419B1 (en) 1999-07-23 2002-07-09 5Th Market, Inc. Automated system for conditional order transactions in securities or other items in commerce
US6484151B1 (en) * 1999-07-23 2002-11-19 Netfolio, Inc. System and method for selecting and purchasing stocks via a global computer network
US6493681B1 (en) * 1999-08-11 2002-12-10 Proxytrader, Inc. Method and system for visual analysis of investment strategies
US7249080B1 (en) * 1999-10-25 2007-07-24 Upstream Technologies Llc Investment advice systems and methods
US7107232B2 (en) * 2000-02-16 2006-09-12 Morris Robert A Method and system for facilitating a sale
US20030018550A1 (en) * 2000-02-22 2003-01-23 Rotman Frank Lewis Methods and systems for providing transaction data
US6772132B1 (en) 2000-03-02 2004-08-03 Trading Technologies International, Inc. Click based trading with intuitive grid display of market depth
US7624172B1 (en) * 2000-03-17 2009-11-24 Aol Llc State change alerts mechanism
US9246975B2 (en) 2000-03-17 2016-01-26 Facebook, Inc. State change alerts mechanism
US20020007335A1 (en) * 2000-03-22 2002-01-17 Millard Jeffrey Robert Method and system for a network-based securities marketplace
WO2001084450A1 (en) * 2000-05-04 2001-11-08 American International Group, Inc. Method and system for initiating and clearing trades
US8010438B2 (en) * 2000-06-01 2011-08-30 Pipeline Financial Group, Inc. Method for directing and executing certified trading interests
US7212997B1 (en) * 2000-06-09 2007-05-01 Ari Pine System and method for analyzing financial market data
US7516097B2 (en) 2000-08-04 2009-04-07 Bgc Partners, Inc. Systems and methods for anonymous electronic trading
US7962398B1 (en) 2000-09-15 2011-06-14 Charles Schwab & Co. Method and system for executing trades in a user preferred security
US7392212B2 (en) * 2000-09-28 2008-06-24 Jpmorgan Chase Bank, N.A. User-interactive financial vehicle performance prediction, trading and training system and methods
US8301535B1 (en) * 2000-09-29 2012-10-30 Power Financial Group, Inc. System and method for analyzing and searching financial instrument data
US6604104B1 (en) 2000-10-02 2003-08-05 Sbi Scient Inc. System and process for managing data within an operational data store
US20020128954A1 (en) * 2000-10-24 2002-09-12 Regulus Integrated Solutions, Llc Electronic trade confirmation system and method
US7827087B2 (en) * 2001-04-24 2010-11-02 Goldman Sachs & Co. Automated securities trade execution system and method
US6636860B2 (en) * 2001-04-26 2003-10-21 International Business Machines Corporation Method and system for data mining automation in domain-specific analytic applications
US20020174081A1 (en) * 2001-05-01 2002-11-21 Louis Charbonneau System and method for valuation of companies
US20030004853A1 (en) * 2001-06-28 2003-01-02 Pranil Ram Graphical front end system for real time security trading
US7664695B2 (en) * 2001-07-24 2010-02-16 Stephen Cutler Securities market and market maker activity tracking system and method
US20030033179A1 (en) 2001-08-09 2003-02-13 Katz Steven Bruce Method for generating customized alerts related to the procurement, sourcing, strategic sourcing and/or sale of one or more items by an enterprise
US20030074297A1 (en) * 2001-10-04 2003-04-17 Philip Carragher Financial platform
JP2003187052A (en) 2001-10-09 2003-07-04 Kunio Ito Enterprise value evaluating system
AU2002349906A1 (en) * 2001-10-24 2003-05-06 Theodore C. Lee Automated financial market information and trading system
US7467108B2 (en) * 2002-01-18 2008-12-16 Ron Papka System and method for predicting security price movements using financial news
US20030167224A1 (en) * 2002-02-22 2003-09-04 Periwal Vijay K. Sequential execution system of trading orders
JP4034110B2 (en) * 2002-04-24 2008-01-16 富士通株式会社 Automatic product order processing system
WO2003107121A2 (en) 2002-06-18 2003-12-24 Tradegraph, Llc System and method for analyzing and displaying security trade transactions
US7433714B2 (en) * 2003-06-30 2008-10-07 Microsoft Corporation Alert mechanism interface
US7739182B2 (en) 2003-07-03 2010-06-15 Makor Issues And Rights Ltd. Machine learning automatic order transmission system for sending self-optimized trading signals
US7617136B1 (en) 2003-07-15 2009-11-10 Teradata Us, Inc. System and method for capturing, storing and analyzing revenue management information for the travel and transportation industries
US8396792B1 (en) 2003-09-10 2013-03-12 Propay Usa, Inc. Dynamically specifying a merchant identifier in an electronic financial transaction
US20050091146A1 (en) * 2003-10-23 2005-04-28 Robert Levinson System and method for predicting stock prices
US7966246B2 (en) 2003-10-23 2011-06-21 Alphacet, Inc. User interface for correlation of analysis systems
US7860774B1 (en) 2003-10-31 2010-12-28 Charles Schwab & Co., Inc. System and method for providing financial advice for an investment portfolio
US7734528B1 (en) * 2003-12-12 2010-06-08 Trading Technologies International, Inc. System and method for event-based trading
US20080288889A1 (en) 2004-02-20 2008-11-20 Herbert Dennis Hunt Data visualization application
US8200567B2 (en) * 2004-04-23 2012-06-12 Access Data Corporation Method of computerized monitoring of investment trading and associated system
US7739183B2 (en) 2004-06-03 2010-06-15 Voudrie Jeffrey D Real-time client portfolio management system
WO2005124632A2 (en) * 2004-06-08 2005-12-29 Rosenthal Collins, Group, Llc Method and system for providing electronic information for multi-market electronic trading
US7555257B2 (en) * 2004-07-30 2009-06-30 Microsoft Corporation Stock channel and news channel
US7698170B1 (en) 2004-08-05 2010-04-13 Versata Development Group, Inc. Retail recommendation domain model
US20060047590A1 (en) * 2004-08-26 2006-03-02 Timothy Anderson Real-time risk management trading system for professional equity traders with adaptive contingency notification
US7620586B2 (en) * 2004-09-08 2009-11-17 Rosenthal Collins Group, Llc Method and system for providing automatic execution of trading strategies for electronic trading
US7877309B2 (en) * 2004-10-18 2011-01-25 Starmine Corporation System and method for analyzing analyst recommendations on a single stock basis
US8380594B2 (en) * 2004-10-22 2013-02-19 Itg Software Solutions, Inc. Methods and systems for using multiple data sets to analyze performance metrics of targeted companies
US7519564B2 (en) * 2004-11-16 2009-04-14 Microsoft Corporation Building and using predictive models of current and future surprises
US20060117303A1 (en) * 2004-11-24 2006-06-01 Gizinski Gerard H Method of simplifying & automating enhanced optimized decision making under uncertainty
WO2006063016A2 (en) * 2004-12-09 2006-06-15 Rosenthal Collins Group, Llc Method and system for providing configurable features for graphical user interfaces for electronic trading
US20060224400A1 (en) * 2005-04-01 2006-10-05 Microsoft Corporation Business event notifications on aggregated thresholds
WO2006119272A2 (en) * 2005-05-04 2006-11-09 Rosenthal Collins Group, Llc Method and system for providing automatic execution of black box strategies for electronic trading
US7496531B1 (en) 2005-05-31 2009-02-24 Managed Etfs Llc Methods, systems, and computer program products for trading financial instruments on an exchange
US20070162365A1 (en) * 2005-07-27 2007-07-12 Weinreb Earl J Securities aid
JP2007041869A (en) 2005-08-03 2007-02-15 Digital Garage Inc Investment support system and method
US7734533B2 (en) * 2005-11-13 2010-06-08 Rosenthal Collins Group, Llc Method and system for electronic trading via a yield curve
US7657849B2 (en) * 2005-12-23 2010-02-02 Apple Inc. Unlocking a device by performing gestures on an unlock image
US8095416B2 (en) * 2006-03-21 2012-01-10 International Business Machines Corporation Method, system, and computer program product for the dynamic generation of business intelligence alert triggers
US20070282729A1 (en) 2006-05-01 2007-12-06 Carpenter Steven A Consolidation, sharing and analysis of investment information
US20080177670A1 (en) 2007-01-18 2008-07-24 William Joseph Reid Statistical system to trade selected capital markets
US8838495B2 (en) * 2007-06-01 2014-09-16 Ften, Inc. Method and system for monitoring market data to identify user defined market conditions
US8738487B1 (en) * 2007-08-14 2014-05-27 Two Sigma Investments, LLC Apparatus and method for processing data
US8026794B1 (en) * 2007-10-30 2011-09-27 United Services Automobile Association Systems and methods to deliver information to a member
US20100049664A1 (en) * 2008-08-21 2010-02-25 Yi-Hsuan Kuo Method and system for user-defined alerting of securities information
US20100100470A1 (en) 2008-10-16 2010-04-22 Bank Of America Corporation Financial planning tool
US20150220951A1 (en) * 2009-01-21 2015-08-06 Truaxis, Inc. Method and system for inferring an individual cardholder's demographic data from shopping behavior and external survey data using a bayesian network
US20130325681A1 (en) 2009-01-21 2013-12-05 Truaxis, Inc. System and method of classifying financial transactions by usage patterns of a user
KR20110001815A (en) * 2009-06-30 2011-01-06 에이제이아이 인코포레이티드 디/비/에이 주닥 Method and system for virtual stock trading
US9652803B2 (en) * 2009-10-20 2017-05-16 Trading Technologies International, Inc. Virtualizing for user-defined algorithm electronic trading
GB201000091D0 (en) * 2010-01-05 2010-02-17 Mura Michael E Numerical Modelling Apparatus for Pricing,Trading and Risk Assessment
US8352354B2 (en) * 2010-02-23 2013-01-08 Jpmorgan Chase Bank, N.A. System and method for optimizing order execution
US9501795B1 (en) * 2010-08-23 2016-11-22 Seth Gregory Friedman Validating an electronic order transmitted over a network between a client server and an exchange server with a hardware device
US8751436B2 (en) 2010-11-17 2014-06-10 Bank Of America Corporation Analyzing data quality
US11055754B1 (en) * 2011-01-04 2021-07-06 The Pnc Financial Services Group, Inc. Alert event platform
US8606681B2 (en) * 2011-03-04 2013-12-10 Ultratick, Inc. Predicting the performance of a financial instrument
US20130073413A1 (en) * 2011-09-21 2013-03-21 International Business Machines Corporation Automated Bidding Patience Tool
US9076181B2 (en) * 2011-09-21 2015-07-07 International Business Machines Corporation Auction overbidding vigilance tool
US20130117198A1 (en) * 2011-11-04 2013-05-09 Diane M. Hogan Intellectual property method for dollar cost averaging investment plan process for ongoing stock investing
US10031932B2 (en) * 2011-11-25 2018-07-24 International Business Machines Corporation Extending tags for information resources
US11257161B2 (en) * 2011-11-30 2022-02-22 Refinitiv Us Organization Llc Methods and systems for predicting market behavior based on news and sentiment analysis
KR101310356B1 (en) * 2012-02-06 2013-10-14 구민수 Method and apparatus for transaction of securities
US9202227B2 (en) 2012-02-07 2015-12-01 6 Sense Insights, Inc. Sales prediction systems and methods
US10032180B1 (en) * 2012-10-04 2018-07-24 Groupon, Inc. Method, apparatus, and computer program product for forecasting demand using real time demand
US20140172751A1 (en) 2012-12-15 2014-06-19 Greenwood Research, Llc Method, system and software for social-financial investment risk avoidance, opportunity identification, and data visualization
US9798788B1 (en) 2012-12-27 2017-10-24 EMC IP Holding Company LLC Holistic methodology for big data analytics
US9420857B2 (en) * 2013-03-04 2016-08-23 Hello Inc. Wearable device with interior frame
US20140278884A1 (en) 2013-03-13 2014-09-18 Fifth Third Bancorp Financial Product Management and Bundling System
US20140279701A1 (en) * 2013-03-15 2014-09-18 Adviceware Asset allocation based system for individual investor portfolio selection
US10445828B2 (en) * 2013-03-20 2019-10-15 Sagar Dinesh Chheda Method and system for generating stock price alerts based on real-time market data
US9218574B2 (en) 2013-05-29 2015-12-22 Purepredictive, Inc. User interface for machine learning
WO2014205543A1 (en) * 2013-06-24 2014-12-31 Joseph Schmitt System and method for automated trading of financial interests
SG10201403898TA (en) * 2013-07-05 2015-02-27 Barrett Carter Keith Computer-implemented intelligence tool
US9747642B1 (en) 2013-08-12 2017-08-29 Financial Realizer, LLC Automated method of identifying stock indexes which are historically high or low relative to a plurality of macroeconomic indicators
US20150058195A1 (en) 2013-08-21 2015-02-26 Miami International Securities Exchange, LLC System and method for monitoring an equity rights transaction for strategic investors in a securities exchange
US20150120384A1 (en) 2013-10-25 2015-04-30 Mastercard International Incorporated Systems and methods for credit card demand forecasting using regional purchase behavior
US20150120849A1 (en) * 2013-10-30 2015-04-30 Qwasi, Inc. Systems and methods for push notification management
US20150220937A1 (en) 2014-01-31 2015-08-06 Mastercard International Incorporated Systems and methods for appending payment network data to non-payment network transaction based datasets through inferred match modeling
US10269077B2 (en) * 2014-06-09 2019-04-23 Visa International Service Association Systems and methods to detect changes in merchant identification information
US20160217366A1 (en) * 2015-01-23 2016-07-28 Jianjun Li Portfolio Optimization Using Neural Networks
US20160225017A1 (en) 2015-01-30 2016-08-04 LinkedIn Corporation Size of prize predictive model
US20160292612A1 (en) * 2015-03-31 2016-10-06 Voya Services Company Forecast tool for financial service providers
US20180129961A1 (en) * 2015-05-12 2018-05-10 New York University System, method and computer-accessible medium for making a prediction from market data
US10380690B2 (en) * 2015-05-21 2019-08-13 Chicago Mercantile Exchange Inc. Dataset cleansing
US11004071B2 (en) 2015-09-09 2021-05-11 Pay with Privacy, Inc. Systems and methods for automatically securing and validating multi-server electronic communications over a plurality of networks
SG10201508083XA (en) 2015-09-29 2017-04-27 Mastercard International Inc Methods and apparatus for estimating potential demand at a prospective merchant location
CA3026275A1 (en) * 2015-10-08 2017-04-13 10353744 Canada Ltd. Investment management proposal system
WO2017105882A1 (en) 2015-12-17 2017-06-22 SpringAhead, Inc. Dynamic data normalization and duplicate analysis
JP6715048B2 (en) * 2016-03-23 2020-07-01 Nomura Research Institute, Ltd. Goal achievement portfolio generation device, program and method
US10956823B2 (en) 2016-04-08 2021-03-23 Cognizant Technology Solutions U.S. Corporation Distributed rule-based probabilistic time-series classifier
US10963799B1 (en) * 2016-05-05 2021-03-30 Wells Fargo Bank, N.A. Predictive data analysis of stocks
US20190347733A1 (en) * 2016-06-21 2019-11-14 Sony Corporation Information processing apparatus, information processing method, and program
US20170372420A1 (en) 2016-06-28 2017-12-28 Newport Exchange Holdings, Inc. Computer based system and methodology for identifying trading opportunities associated with optionable instruments
US10497060B2 (en) * 2016-10-25 2019-12-03 Seyedhooman Khatami Systems and methods for intelligent market trading
EP3533023A1 (en) * 2016-10-25 2019-09-04 Wealth Wizards Limited Regulatory compliance system and method
US20190385237A1 (en) * 2016-11-30 2019-12-19 Planswell Holdings Inc. Technologies for automating adaptive financial plans
US10592912B1 (en) * 2016-12-06 2020-03-17 Xignite, Inc. Methods and systems for taking an electronic communication action in response to detecting a market condition
WO2018207259A1 (en) 2017-05-09 2018-11-15 NEC Corporation Information processing system, information processing device, prediction model extraction method, and prediction model extraction program
US10956986B1 (en) 2017-09-27 2021-03-23 Intuit Inc. System and method for automatic assistance of transaction sorting for use with a transaction management service
US10135936B1 (en) 2017-10-13 2018-11-20 Capital One Services, Llc Systems and methods for web analytics testing and web development
US11509634B2 (en) * 2017-10-27 2022-11-22 Brightplan Llc Secure messaging systems and methods
US10360633B2 (en) * 2017-10-27 2019-07-23 Brightplan Llc Secure messaging systems, methods, and automation
US10360631B1 (en) 2018-02-14 2019-07-23 Capital One Services, Llc Utilizing artificial intelligence to make a prediction about an entity based on user sentiment and transaction history
US20190287044A1 (en) * 2018-03-19 2019-09-19 Adorant Group LLC Extensible, adaptive, intelligent data collaboration platform
CN110809778A (en) * 2018-03-30 2020-02-18 加藤宽之 Stock price prediction support system and method
US11200581B2 (en) 2018-05-10 2021-12-14 Hubspot, Inc. Multi-client service system platform
US10565229B2 (en) 2018-05-24 2020-02-18 People.ai, Inc. Systems and methods for matching electronic activities directly to record objects of systems of record
US20190370731A1 (en) 2018-05-29 2019-12-05 International Business Machines Corporation Method to analyze perishable food stock prediction
US11361244B2 (en) 2018-06-08 2022-06-14 Microsoft Technology Licensing, Llc Time-factored performance prediction
US20200104931A1 (en) * 2018-09-28 2020-04-02 Strike Derivatives Inc. Electronic trade processing system and method
US11373199B2 (en) 2018-10-26 2022-06-28 Target Brands, Inc. Method and system for generating ensemble demand forecasts
US11915179B2 (en) * 2019-02-14 2024-02-27 Talisai Inc. Artificial intelligence accountability platform and extensions
US11645522B2 (en) * 2019-03-05 2023-05-09 Dhruv Siddharth KRISHNAN Method and system using machine learning for prediction of stocks and/or other market instruments price volatility, movements and future pricing by applying random forest based techniques
US11698269B2 (en) 2019-03-24 2023-07-11 Apple Inc. Systems and methods for resolving points of interest on maps
JP2022532230A (en) 2019-05-14 2022-07-13 Exegy Incorporated Methods and systems for generating and delivering transaction signals from financial market data with low latency
US20210004716A1 (en) 2019-07-03 2021-01-07 Visa International Service Association Real-time global ai platform
US20210056636A1 (en) 2019-08-20 2021-02-25 Deep Forecast Inc Systems and methods for measurement of data to provide decision support
US20210090173A1 (en) * 2019-09-19 2021-03-25 Gerald C. Steffes Financial defensive condition alert, notification, and communication system and method
US11514523B2 (en) * 2019-10-28 2022-11-29 Fmr Llc AI-based real-time prediction engine apparatuses, methods and systems
US20210125207A1 (en) 2019-10-29 2021-04-29 Somnath Banerjee Multi-layered market forecast framework for hotel revenue management by continuously learning market dynamics
US20210133670A1 (en) 2019-11-05 2021-05-06 Strong Force Vcn Portfolio 2019, Llc Control tower and enterprise management platform with a machine learning/artificial intelligence managing sensor and the camera feeds into digital twin
US11625736B2 (en) 2019-12-02 2023-04-11 Oracle International Corporation Using machine learning to train and generate an insight engine for determining a predicted sales insight
US11328360B2 (en) * 2019-12-05 2022-05-10 UST Global Inc Systems and methods for automated trading
US20210350426A1 (en) * 2020-05-07 2021-11-11 Nowcasting.ai, Inc. Architecture for data processing and user experience to provide decision support
US11176495B1 (en) 2020-06-21 2021-11-16 Liquidity Capital M. C. Ltd. Machine learning model ensemble for computing likelihood of an entity failing to meet a target parameter
US11308556B2 (en) * 2020-08-14 2022-04-19 TradeVision2020, Inc. Presenting trading data

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220121992A1 (en) * 2020-05-07 2022-04-21 Nowcasting.ai, Inc. Artificial intelligence for automated stock orders based on standardized data and company financial data
US11416779B2 (en) * 2020-05-07 2022-08-16 Nowcasting.ai, Inc. Processing data inputs from alternative sources using a neural network to generate a predictive panel model for user stock recommendation transactions
US20230237044A1 (en) * 2022-01-24 2023-07-27 Dell Products L.P. Evaluation framework for anomaly detection using aggregated time-series signals
US11893008B1 (en) * 2022-07-14 2024-02-06 Fractal Analytics Private Limited System and method for automated data harmonization

Also Published As

Publication number Publication date
US20230237372A1 (en) 2023-07-27
US11392858B2 (en) 2022-07-19
US20210350394A1 (en) 2021-11-11
US20210350281A1 (en) 2021-11-11
US11416779B2 (en) 2022-08-16
US20210350460A1 (en) 2021-11-11
US20220121992A1 (en) 2022-04-21
US11205186B2 (en) 2021-12-21
US20210350465A1 (en) 2021-11-11
US20230070176A1 (en) 2023-03-09

Similar Documents

Publication Publication Date Title
US20210350426A1 (en) Architecture for data processing and user experience to provide decision support
US20220343432A1 (en) Machine learning architecture for risk modelling and analytics
Alexander Bayesian methods for measuring operational risk
US10963427B2 (en) Data conversion and distribution systems
US20220108238A1 (en) Systems and methods for predicting operational events
US20210056636A1 (en) Systems and methods for measurement of data to provide decision support
US20220108402A1 (en) Systems and methods for predicting operational events
Alcaide et al. Modelling IT brand values supplied by consultancy service companies: Empirical evidence for differences
JP7053077B1 (en) Methods and systems to support single-user action decision making
US20220108241A1 (en) Systems and methods for predicting operational events
US20220108240A1 (en) Systems and methods for predicting operational events
Ekster et al. Alternative Data in Investment Management: Usage, Challenges, and Valuation
CA3081254C (en) Data conversion and distribution systems
US20210350259A1 (en) Data processing systems and methods to provide decision support
US20240144654A1 (en) System and method for automated construction of data sets for retraining a machine learning model
WO2024000152A1 (en) A system and a method for analysing a market of exchangeable assets
Boehrns Accounting implications derived from consumer big data
US11295397B1 (en) Systems, methods, and computer program products for matching service consumers and providers
Galbraith Real-Time Transaction Data for Nowcasting and Short-Term Economic Forecasting
Veldkamp Information Asymmetry in Public Funding: Implications for Innovation Stimulation by the Dutch Government

Legal Events

Date Code Title Description
AS Assignment

Owner name: NOWCASTING.AI, INC., NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SCAVO, DAMIAN ARIEL;REEL/FRAME:057175/0589

Effective date: 20210503

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION