SFT-106-A-PCT SOFTWARE-DEFINED VEHICLE CROSS-REFERENCE TO RELATED APPLICATIONS [00001] This application claims priority to U.S. provisional patent application 63/461,807, filed 25 April 2023. This application claims priority to U.S. provisional patent application 63/466,116, filed 12 May 2023. This application claims priority to U.S. provisional patent application 63/532,415, filed 13 August 2023. This application claims priority to U.S. provisional patent application 63/535,744, filed 31 August 2023. This application claims priority to U.S. provisional patent application 63/610,876, filed 15 December 2023. This application claims priority to U.S. provisional patent application 63/621,536, filed 16 January 2024. This application claims priority to U.S. provisional patent application 63/625,609, filed 26 January 2024. Each patent application referenced above is hereby incorporated by reference as if fully set forth herein in its entirety. TECHNICAL FIELD [00002] The present disclosure relates to transportation and related methods and systems including an intelligent digital twin system that creates, manages, and provides digital twins for transportation systems using sensor data and other data, quantum computing methods and systems, including a set of quantum computing services, and biology-based systems and methods for communicating and/or handling data. BACKGROUND [00003] A digital twin is a digital informational construct about a machine, physical device, system, process, person, etc. Once created, the digital twin can be used to represent the machine in a digital representation of a real-world system. The digital twin is created such that it is identical in form and behavior of the corresponding machine. Additionally, the digital twin may mirror the status of the machine within a greater system. For example, sensors may be placed on the machine to capture real-time (or near real-time) data from the physical object to relay it back to a remote digital twin. [00004] Some digital twins may be used to simulate or otherwise mimic the operation of a machine or physical device within a virtual world. In doing so, the digital twins may display structural components of the machine, show steps in lifecycle and/or design, and be viewable via a user interface. [00005] The proliferation of sensor, network, and communication technologies in transportation systems generates vast amounts of data. This data can be useful in predicting the need for maintenance and for classifying potential issues in the transportation systems. There are, however,
SFT-106-A-PCT many unexplored uses for transportation system sensor data that can improve the operation and uptime of the transportation systems and provide transportation entities with agility in responding to conditions before the conditions can increase in severity. [00006] Acquiring large data sets from thousands, or potentially millions of devices (containing large numbers of sensors) distributed across multiple locations has become more typical. For example, there is a proliferation of Radio Frequency Identification (RFID) tags to individual goods in retail stores. The challenge is that this vast number of data streams overwhelms both the ability to transmit the data and the ability to create effective automated centralized decisions. There exists a need in the art for biology-based communications and data handling. [00007] Transportation enterprises that rely on subject matter experts may struggle to capture the knowledge of these subject matter experts when they move on to another enterprise or leave the workforce. There exists a need in the art to capture subject matter expertise and to use the captured subject matter expertise in guiding newer workers or mobile electronic transportation entities to perform transportation service-related tasks. SUMMARY [00008] Among other things, provided herein are methods, systems, components, processes, modules, blocks, circuits, sub-systems, articles, and other elements (collectively referred to in some cases as the “platform” or the “system,” which terms should be understood to encompass any of the above except where context indicates otherwise) that individually or collectively enable advances in transportation systems. [00009] In embodiments, a system for representing a set of operating states of a vehicle to a user of the vehicle includes a portion of the vehicle having a vehicle operating state; a digital twin system receiving vehicle parameter data from one or more inputs to determine the vehicle operating state; and an interface for the digital twin system to present the vehicle operating state to the user of the vehicle. [00010] In embodiments, the vehicle operating state is a vehicle maintenance state. In embodiments, the vehicle operating state is a vehicle energy utilization state. In embodiments, the vehicle operating state is a vehicle navigation state. In embodiments, the vehicle operating state is a vehicle component state. In embodiments, the vehicle operating state is a vehicle driver state. In embodiments, inputs for the digital twin system include at least one of an on-board diagnostic system, a telemetry system, a vehicle-located sensor, or a system external to the vehicle. [00011] In embodiments, the system includes an identity management system to manage a set of identities and roles of a user of the vehicle. In embodiments, the identity management system includes capabilities to view, modify and configure the digital twin system is based on an identity
SFT-106-A-PCT from the set of identities of the user of the vehicle. In embodiments, the digital twin system is populated via an API from an edge intelligence system of the vehicle that provides 5G connectivity to a system external to the vehicle. In embodiments, the digital twin system is populated via an API from an edge intelligence system of the vehicle that provides internal 5G connectivity to a set of sensors and data sources of the vehicle. In embodiments, the digital twin system is populated via an API from an edge intelligence system of the vehicle that provides 5G connectivity to an onboard artificial intelligence system. [00012] In embodiments, the digital twin system is automatically configured by an artificial intelligence system based on a training set of usage activity by a set of digital twin users. In embodiments, the digital twin system is automatically configured by an artificial intelligence system based on a training set of usage activity by a driver user. In embodiments, the digital twin system is automatically configured by an artificial intelligence system based on a training set of usage activity by a rider user. [00013] In embodiments, the system includes a first neural network to detect a detected satisfaction state of a rider user occupying the vehicle through analysis of data gathered from sensors deployed in the vehicle for gathering physiological conditions of the rider user; and a second neural network to optimize, for achieving a favorable satisfaction state of the rider user, an operational parameter of the vehicle in response to the detected satisfaction state of the rider user. [00014] In embodiments, the detected satisfaction state of the rider user is a detected emotional state of the rider user. In embodiments, the favorable satisfaction state of the rider user is a favorable emotional state of the rider user. In embodiments, the first neural network is a recurrent neural network and the second neural network is a radial basis function neural network. In embodiments, at least one of the neural networks is a hybrid neural network and includes a convolutional neural network. In embodiments, the second neural network optimizes the operational parameter based on a correlation between a vehicle operating state and a rider satisfaction state of the rider user. In embodiments, the second neural network optimizes the operational parameter in real time responsive to the detecting of the detected satisfaction state of the rider user by the first neural network. In embodiments, the first neural network comprises a plurality of connected nodes that form a directed cycle, the first neural network further facilitating bi-directional flow of data among the connected nodes. In embodiments, the operational parameter that is optimized affects at least one of: a route of the vehicle, in-vehicle audio contents, a speed of the vehicle, an acceleration of the vehicle, a deceleration of the vehicle, a proximity to objects along the route, and a proximity to other vehicles along the route. [00015] In embodiments, a method for representing a set of states of a vehicle to a user of the vehicle includes obtaining parameters data of one or more components of the vehicle from one or
SFT-106-A-PCT more inputs; updating a digital twin of the vehicle with the parameters data to generate one or more operating states of the vehicle; providing an interface to represent to the user of the vehicle the one or more operating states of the vehicle. [00016] In embodiments, the one or more vehicle operating states include one or more of a vehicle maintenance state, a vehicle energy utilization state, a vehicle navigation state, a vehicle component state, or a vehicle driver state. In embodiments, inputs for the digital twin system include one or more of an on-board diagnostic system, a telemetry system, a vehicle-located sensor, or a system external to the vehicle. [00017] In embodiments, the method includes managing a set of identities and roles of a user of the vehicle; and configuring the digital twin system based on an identity from the set of identities of the user of the vehicle. [00018] In embodiments, the method includes populating the digital twin system via an API from an edge intelligence system of the vehicle configured with one or more of 5G connectivity to a system external to the vehicle, internal 5G connectivity to a set of sensors and data sources of the vehicle, or 5G connectivity to an onboard artificial intelligence system. [00019] In embodiments, the method includes automatically configuring the digital twin system with an artificial intelligence system based on a training set of usage activity by a set of digital twin users. [00020] In embodiments, the method includes detecting a detected satisfaction state of a rider user occupying the vehicle through analysis, using a first neural network, of data gathered from sensors deployed in the vehicle for gathering physiological conditions of the rider user; and optimizing to achieve a favorable satisfaction state of the rider user an operational parameter of the vehicle in response to the detected satisfaction state of the rider user using a second neural network. [00021] In embodiments, the detected satisfaction state of the rider user is a detected emotional state of the rider user. In embodiments, the favorable satisfaction state of the rider user is a favorable emotional state of the rider user. [00022] In embodiments, the first neural network is a recurrent neural network and the second neural network is a radial basis function neural network. In embodiments, at least one of the neural networks is a hybrid neural network and includes a convolutional neural network. In embodiments, optimizing the operational parameter with the second neural network is based on a correlation between a vehicle operating state and a rider user satisfaction state of the rider user. In embodiments, optimizing the operational parameter with the second neural network is responsive to the detecting of the detected satisfaction state of the rider user by the first neural network. [00023] According to some embodiments of the present disclosure, methods and systems are provided herein for updating properties of digital twins of transportation entities and digital twins
SFT-106-A-PCT of transportation systems, such as, without limitation, based on the effect of collected vibration data on a set of digital twin dynamic models such that the digital twins provide a computer- generated representation of the transportation entity or system. [00024] According to some embodiments of the present disclosure, a method for updating one or more properties of one or more transportation system digital twins is disclosed. The method includes receiving a request to update one or more properties of one or more transportation system digital twins; retrieving the one or more transportation system digital twins required to fulfill the request from a digital twin datastore; retrieving one or more dynamic models required to fulfill the request from a dynamic model datastore; selecting data sources from a set of available data sources for one or more inputs for the one or more dynamic models; retrieving data from the selected data sources; running the one or more dynamic models using the retrieved data as input data to determine one or more output values; and updating the one or more properties of the one or more transportation system digital twins based on the one or more output values of the one or more dynamic models. [00025] In embodiments, the request is received from a client application that corresponds to a transportation system or one or more transportation entities within the transportation system. [00026] In embodiments, the request is received from a client application that supports a network connected sensor system. [00027] In embodiments, the request is received from a client application that supports a vibration sensor system. [00028] In embodiments, the one or more transportation system digital twins include one or more digital twins of transportation entities. [00029] In embodiments, the one or more dynamic models take data selected from the set of vibration, temperature, pressure, humidity, wind, rainfall, tide, storm surge, cloud cover, snowfall, visibility, radiation, audio, video, image, water level, quantum, flow rate, signal power, signal frequency, motion, displacement, velocity, acceleration, lighting level, financial, cost, stock market, news, social media, revenue, worker, maintenance, productivity, asset performance, worker performance, worker response time, analyte concentration, biological compound concentration, metal concentration, and organic compound concentration data. [00030] In embodiments, the selected data sources are selected from the group consisting of an analog vibration sensor, a digital vibration sensor, a fixed digital vibration sensor, a tri-axial vibration sensor, a single axis vibration sensor, an optical vibration sensor, a switch, a network connected device, and a machine vision system.
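By way of non-limiting illustration only, the following Python sketch shows one possible realization of the request-driven update flow described above (receive a request, retrieve the digital twins and dynamic models, select and read data sources, run the models, and write the outputs back onto the twins). The class names, the callable-based model registry, and the example vibration values are hypothetical simplifications introduced for the example and are not drawn from the disclosure.

```python
# Minimal sketch of the request-driven property-update flow described above.
# All names (datastores, models, sources) are hypothetical illustrations.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class UpdateRequest:
    twin_ids: List[str]      # transportation system digital twins to update
    properties: List[str]    # properties to refresh, e.g. "vibration_severity"


@dataclass
class DigitalTwin:
    twin_id: str
    twin_type: str
    properties: Dict[str, object] = field(default_factory=dict)


def update_twin_properties(
    request: UpdateRequest,
    twin_datastore: Dict[str, DigitalTwin],
    model_datastore: Dict[str, Callable[[Dict[str, float]], object]],
    data_sources: Dict[str, Callable[[], Dict[str, float]]],
) -> None:
    """Retrieve twins and dynamic models, pull input data, run the models, write back outputs."""
    for twin_id in request.twin_ids:
        twin = twin_datastore[twin_id]                 # retrieve the digital twin
        for prop in request.properties:
            model = model_datastore[prop]              # retrieve the dynamic model for this property
            source = data_sources[prop]                # select a data source feeding the model
            inputs = source()                          # retrieve input data (e.g., vibration readings)
            twin.properties[prop] = model(inputs)      # run the model and update the twin property


# Example usage with a toy vibration-severity model.
if __name__ == "__main__":
    twins = {"vehicle-1": DigitalTwin("vehicle-1", "vehicle")}
    models = {"vibration_severity": lambda d: round(d["rms_velocity_mm_s"], 2)}
    sources = {"vibration_severity": lambda: {"rms_velocity_mm_s": 3.14}}
    update_twin_properties(UpdateRequest(["vehicle-1"], ["vibration_severity"]), twins, models, sources)
    print(twins["vehicle-1"].properties)   # {'vibration_severity': 3.14}
```

In a fuller implementation, the twin and model datastores would be persistent services reached through the digital twin I/O system rather than in-memory dictionaries; the in-memory form is used here only to keep the sketch self-contained.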
SFT-106-A-PCT [00031] In embodiments, retrieving the one or more dynamic models includes identifying the one or more dynamic models based on the one or more properties indicated in the request and a respective type of the one or more transportation system digital twins. [00032] In embodiments, the one or more dynamic models are identified using a lookup table. [00033] In embodiments, a digital twin dynamic model system retrieves the data from the selected data sources via a digital twin I/O system. [00034] According to some embodiments of the present disclosure, a method for updating one or more bearing vibration fault level states of one or more transportation system digital twins is disclosed. The method includes receiving a request from a client application to update one or more bearing vibration fault level states of one or more transportation system digital twins; retrieving the one or more transportation system digital twins required to fulfill the request from a digital twin datastore; retrieving one or more dynamic models required to fulfill the request from a dynamic model datastore; selecting data sources from a set of available data sources for one or more inputs for the one or more dynamic models; retrieving data from the selected data sources; running the one or more dynamic models using the retrieved data as input data to calculate output values that represent the one or more bearing vibration fault level states; and updating the one or more bearing vibration fault level states of the one or more transportation system digital twins based on the output values of the one or more dynamic models. [00035] In embodiments, the one or more bearing vibration fault level states are selected from the group consisting of normal, suboptimal, critical, and alarm. [00036] In embodiments, the client application corresponds to a transportation system or one or more transportation entities within the transportation system. [00037] In embodiments, the client application supports a network connected sensor system. [00038] In embodiments, the client application supports a vibration sensor system. [00039] In embodiments, the one or more transportation system digital twins include one or more digital twins of transportation entities. [00040] In embodiments, the one or more dynamic models take data selected from the set of vibration, temperature, pressure, humidity, wind, rainfall, tide, storm surge, cloud cover, snowfall, visibility, radiation, audio, video, image, water level, quantum, flow rate, signal power, signal frequency, motion, displacement, velocity, acceleration, lighting level, financial, cost, stock market, news, social media, revenue, worker, maintenance, productivity, asset performance, worker performance, worker response time, analyte concentration, biological compound concentration, metal concentration, and organic compound concentration data. [00041] In embodiments, the selected data sources are selected from the group consisting of an analog vibration sensor, a digital vibration sensor, a fixed digital vibration sensor, a tri-axial
SFT-106-A-PCT vibration sensor, a single axis vibration sensor, an optical vibration sensor, a switch, a network connected device, and a machine vision system. [00042] In embodiments, retrieving the one or more dynamic models includes identifying the one or more dynamic models based on the request and a respective type of the one or more transportation system digital twins. [00043] In embodiments, the one or more dynamic models are identified using a lookup table. [00044] In embodiments, a digital twin dynamic model system retrieves the data from the selected data sources via a digital twin I/O system. [00045] According to some embodiments of the present disclosure, a method for updating one or more vibration severity unit values of one or more transportation system digital twins is disclosed. The method includes receiving a request from a client application to update one or more vibration severity unit values of one or more transportation system digital twins; retrieving the one or more transportation system digital twins required to fulfill the request from a digital twin datastore; retrieving one or more dynamic models required to fulfill the request from a dynamic model datastore; selecting data sources from a set of available data sources for one or more inputs for the one or more dynamic models; retrieving data from the selected data sources; running the one or more dynamic models using the retrieved data as the one or more inputs to calculate one or more output values that represent the one or more vibration severity unit values; and updating the one or more vibration severity unit values of the one or more transportation system digital twins based on the one or more output values of the one or more dynamic models. [00046] In embodiments, vibration severity units represent displacement. [00047] In embodiments, vibration severity units represent velocity. [00048] In embodiments, vibration severity units represent acceleration. [00049] In embodiments, the client application corresponds to a transportation system or one or more transportation entities within the transportation system. [00050] In embodiments, the client application supports a network connected sensor system. [00051] In embodiments, the client application supports a vibration sensor system. [00052] In embodiments, the one or more transportation system digital twins include one or more digital twins of transportation entities. [00053] In embodiments, the one or more dynamic models take data selected from the set of vibration, temperature, pressure, humidity, wind, rainfall, tide, storm surge, cloud cover, snowfall, visibility, radiation, audio, video, image, water level, quantum, flow rate, signal power, signal frequency, motion, displacement, velocity, acceleration, lighting level, financial, cost, stock market, news, social media, revenue, worker, maintenance, productivity, asset performance,
SFT-106-A-PCT worker performance, worker response time, analyte concentration, biological compound concentration, metal concentration, and organic compound concentration data. [00054] In embodiments, the selected data sources are selected from the group consisting of an analog vibration sensor, a digital vibration sensor, a fixed digital vibration sensor, a tri-axial vibration sensor, a single axis vibration sensor, an optical vibration sensor, a switch, a network connected device, and a machine vision system. [00055] In embodiments, retrieving the one or more dynamic models includes identifying the one or more dynamic models based on the request and a respective type of the one or more transportation system digital twins. [00056] In embodiments, the one or more dynamic models are identified using a lookup table. [00057] In embodiments, a digital twin dynamic model system retrieves the data from the selected data sources via a digital twin I/O system. [00058] According to some embodiments of the present disclosure, a method for updating one or more probability of failure values of one or more transportation system digital twins is disclosed. The method includes receiving a request from a client application to update one or more probability of failure values of one or more transportation system digital twins; retrieving the one or more transportation system digital twins to fulfill the request; retrieving one or more dynamic models to fulfill the request; selecting data sources from a set of available data sources for one or more inputs for the one or more dynamic models; retrieving data from the selected data sources; running the one or more dynamic models using the retrieved data as the one or more inputs to calculate one or more output values that represent the one or more probability of failure values; and updating the one or more probability of failure values of the one or more transportation system digital twins based on the one or more output values of the one or more dynamic models. [00059] In embodiments, the client application corresponds to a transportation system or one or more transportation entities within the transportation system. [00060] In embodiments, the client application supports a network connected sensor system. [00061] In embodiments, the client application supports a vibration sensor system. [00062] In embodiments, the one or more transportation system digital twins include one or more digital twins of transportation entities. [00063] In embodiments, the one or more dynamic models take data selected from the set of vibration, temperature, pressure, humidity, wind, rainfall, tide, storm surge, cloud cover, snowfall, visibility, radiation, audio, video, image, water level, quantum, flow rate, signal power, signal frequency, motion, displacement, velocity, acceleration, lighting level, financial, cost, stock market, news, social media, revenue, worker, maintenance, productivity, asset performance,
SFT-106-A-PCT worker performance, worker response time, analyte concentration, biological compound concentration, metal concentration, and organic compound concentration data. [00064] In embodiments, the selected data sources are selected from the group consisting of an analog vibration sensor, a digital vibration sensor, a fixed digital vibration sensor, a tri-axial vibration sensor, a single axis vibration sensor, an optical vibration sensor, a switch, a network connected device, and a machine vision system. [00065] In embodiments, retrieving the one or more dynamic models includes identifying the one or more dynamic models based on the request and a respective type of the one or more transportation system digital twins. [00066] In embodiments, the one or more dynamic models are identified using a lookup table. [00067] In embodiments, a digital twin dynamic model system retrieves the data from the selected data sources via a digital twin I/O system. [00068] According to some embodiments of the present disclosure, a method for updating one or more probability of downtime values of one or more transportation system digital twins is disclosed. The method includes receiving a request to update one or more probability of downtime values of one or more transportation system digital twins; retrieving the one or more transportation system digital twins to fulfill the request from a digital twin datastore; retrieving one or more dynamic models required to fulfill the request from a dynamic model datastore; selecting data sources from a set of available data sources for one or more inputs for the one or more dynamic models; retrieving data from the selected data sources; running the one or more dynamic models using the retrieved data as the one or more inputs to calculate one or more output values that represent the one or more probability of downtime values; and updating the one or more probability of downtime values for the one or more transportation system digital twins based on the one or more output values of the one or more dynamic models. [00069] In embodiments, the request is received from a client application that corresponds to a transportation system or one or more transportation entities within the transportation system. [00070] In embodiments, the request is received from a client application that supports a network connected sensor system. [00071] In embodiments, the request is received from a client application that supports a vibration sensor system. [00072] In embodiments, the one or more transportation system digital twins include one or more digital twins of transportation entities. [00073] In embodiments, the one or more dynamic models take data selected from the set of vibration, temperature, pressure, humidity, wind, rainfall, tide, storm surge, cloud cover, snowfall, visibility, radiation, audio, video, image, water level, quantum, flow rate, signal power, signal
SFT-106-A-PCT frequency, motion, displacement, velocity, acceleration, lighting level, financial, cost, stock market, news, social media, revenue, worker, maintenance, productivity, asset performance, worker performance, worker response time, analyte concentration, biological compound concentration, metal concentration, and organic compound concentration data. [00074] In embodiments, the selected data sources are selected from the group consisting of an analog vibration sensor, a digital vibration sensor, a fixed digital vibration sensor, a tri-axial vibration sensor, a single axis vibration sensor, an optical vibration sensor, a switch, a network connected device, and a machine vision system. [00075] In embodiments, retrieving the one or more dynamic models includes identifying the one or more dynamic models based on the request and a respective type of the one or more transportation system digital twins. [00076] In embodiments, the one or more dynamic models are identified using a lookup table. [00077] In embodiments, a digital twin dynamic model system retrieves the data from the selected data sources via a digital twin I/O system. [00078] According to some embodiments of the present disclosure, a method for updating one or more probability of shutdown values of one or more transportation system digital twins having a set of transportation entities is disclosed. The method includes receiving a request from a client application to update one or more probability of shutdown values for the set of transportation entities within one or more transportation system digital twins; retrieving the one or more transportation system digital twins to fulfill the request from a digital twin datastore; retrieving one or more dynamic models to fulfill the request from a dynamic model datastore; selecting data sources from a set of available data sources for one or more inputs for the one or more dynamic models; retrieving data from the selected data sources; running the one or more dynamic models using the retrieved data as the one or more inputs to calculate one or more output values that represent the one or more probability of shutdown values; and updating the one or more probability of shutdown values for the set of transportation entities within the one or more transportation system digital twins based on the one or more output values of the one or more dynamic models. [00079] In embodiments, the client application corresponds to a transportation system or one or more transportation entities within the transportation system. [00080] In embodiments, the client application supports a network connected sensor system. [00081] In embodiments, the client application supports a vibration sensor system. [00082] In embodiments, the one or more transportation system digital twins include one or more digital twins of transportation entities. [00083] In embodiments, the set of transportation entities includes a refueling center or a vehicle charging center.
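Several of the preceding embodiments identify the required dynamic models using a lookup table keyed on the requested value and the type of the transportation system digital twin. A minimal, purely illustrative sketch of that identification step follows; the table entries and model names are hypothetical and are not taken from the disclosure.

```python
# Illustrative sketch of identifying dynamic models via a lookup table keyed on
# (requested value, digital twin type). Table entries are hypothetical.

from typing import Dict, List, Tuple

# (requested value, twin type) -> names of dynamic models required to fulfill the request
MODEL_LOOKUP: Dict[Tuple[str, str], List[str]] = {
    ("probability_of_shutdown", "refueling_center"): ["pump_vibration_model", "flow_rate_model"],
    ("probability_of_shutdown", "vehicle_charging_center"): ["charger_thermal_model"],
    ("probability_of_downtime", "vehicle"): ["bearing_fault_model"],
}


def identify_dynamic_models(requested_value: str, twin_type: str) -> List[str]:
    """Return the dynamic models registered for this request and digital twin type."""
    try:
        return MODEL_LOOKUP[(requested_value, twin_type)]
    except KeyError:
        raise KeyError(
            f"No dynamic models registered for {requested_value!r} on twin type {twin_type!r}"
        )


print(identify_dynamic_models("probability_of_shutdown", "refueling_center"))
# ['pump_vibration_model', 'flow_rate_model']
```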
SFT-106-A-PCT [00084] In embodiments, the one or more dynamic models take data selected from the set of vibration, temperature, pressure, humidity, wind, rainfall, tide, storm surge, cloud cover, snowfall, visibility, radiation, audio, video, image, water level, quantum, flow rate, signal power, signal frequency, motion, displacement, velocity, acceleration, lighting level, financial, cost, stock market, news, social media, revenue, worker, maintenance, productivity, asset performance, worker performance, worker response time, analyte concentration, biological compound concentration, metal concentration, and organic compound concentration data. [00085] In embodiments, the selected data sources are selected from the group consisting of an analog vibration sensor, a digital vibration sensor, a fixed digital vibration sensor, a tri-axial vibration sensor, a single axis vibration sensor, an optical vibration sensor, a switch, a network connected device, and a machine vision system. [00086] In embodiments, retrieving the one or more dynamic models includes identifying the one or more dynamic models based on the request and a respective type of the one or more transportation system digital twins. [00087] In embodiments, the one or more dynamic models are identified using a lookup table. [00088] In embodiments, a digital twin dynamic model system retrieves the data from the selected data sources via a digital twin I/O system. [00089] According to some embodiments of the present disclosure, a method for updating one or more cost of downtime values of one or more transportation system digital twins is disclosed. The method includes receiving a request to update one or more cost of downtime values of one or more transportation system digital twins; retrieving the one or more transportation system digital twins to fulfill the request from a digital twin datastore; retrieving one or more dynamic models to fulfill the request from a dynamic model datastore; selecting data sources from a set of available data sources for one or more inputs for the one or more dynamic models; retrieving data from the selected data sources; running the one or more dynamic models using the retrieved data as the one or more inputs to calculate one or more output values that represent the one or more cost of downtime values; and updating the one or more cost of downtime values for the one or more transportation system digital twins based on the one or more output values of the one or more dynamic models. [00090] In embodiments, the cost of downtime value is selected from the set of cost of downtime per hour, cost of downtime per day, cost of downtime per week, cost of downtime per month, cost of downtime per quarter, and cost of downtime per year. [00091] In embodiments, the request is received from a client application that corresponds to a transportation system or one or more transportation entities within the transportation system.
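The cost-of-downtime embodiments described below contemplate values expressed per hour, day, week, month, quarter, or year. The sketch that follows, offered only as an illustration, expands a per-hour figure into those granularities; the calendar factors (30-day month, 91-day quarter) and the assumption that the per-period values scale linearly from a per-hour value are simplifications made for the example and are not specified in the disclosure.

```python
# Sketch of deriving per-period cost-of-downtime values from a per-hour figure.
# The calendar factors below are simplifying assumptions.

HOURS_PER = {
    "hour": 1,
    "day": 24,
    "week": 24 * 7,
    "month": 24 * 30,      # assumed 30-day month
    "quarter": 24 * 91,    # assumed 91-day quarter
    "year": 24 * 365,
}


def cost_of_downtime_values(cost_per_hour: float) -> dict:
    """Expand a per-hour cost of downtime into per-period values."""
    return {f"cost_of_downtime_per_{period}": cost_per_hour * hours
            for period, hours in HOURS_PER.items()}


print(cost_of_downtime_values(1_200.0)["cost_of_downtime_per_day"])   # 28800.0
```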
SFT-106-A-PCT [00092] In embodiments, the request is received from a client application that supports a network connected sensor system. [00093] In embodiments, the request is received from a client application that supports a vibration sensor system. [00094] In embodiments, the one or more transportation system digital twins include one or more digital twins of transportation entities. [00095] In embodiments, the one or more dynamic models take data selected from the set of vibration, temperature, pressure, humidity, wind, rainfall, tide, storm surge, cloud cover, snowfall, visibility, radiation, audio, video, image, water level, quantum, flow rate, signal power, signal frequency, motion, displacement, velocity, acceleration, lighting level, financial, cost, stock market, news, social media, revenue, worker, maintenance, productivity, asset performance, worker performance, worker response time, analyte concentration, biological compound concentration, metal concentration, and organic compound concentration data. [00096] In embodiments, the selected data sources are selected from the group consisting of an analog vibration sensor, a digital vibration sensor, a fixed digital vibration sensor, a tri-axial vibration sensor, a single axis vibration sensor, an optical vibration sensor, a switch, a network connected device, and a machine vision system. [00097] In embodiments, retrieving the one or more dynamic models includes identifying the one or more dynamic models based on the request and a respective type of the one or more transportation system digital twins. [00098] In embodiments, the one or more dynamic models are identified using a lookup table. [00099] In embodiments, a digital twin dynamic model system retrieves the data from the selected data sources via a digital twin I/O system. [00100] According to some embodiments of the present disclosure, a method for updating one or more key performance indicator (KPI) values of one or more transportation system digital twins is disclosed. The method includes receiving a request to update one or more key performance indicator values of one or more transportation system digital twins; retrieving the one or more transportation system digital twins to fulfill the request from a digital twin datastore; retrieving one or more dynamic models to fulfill the request from a dynamic model datastore; selecting data sources from a set of available data sources for one or more inputs for the one or more dynamic models; retrieving data from the selected data sources; running the one or more dynamic models using the retrieved data as the one or more inputs to calculate one or more output values that represent the one or more key performance indicator values; and updating one or more key performance indicator values for the one or more transportation system digital twins based on the one or more output values of the one or more dynamic models.
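As a non-limiting illustration of the key performance indicator update described above, the following sketch computes a single KPI (availability, taken here as uptime divided by planned operating time) and writes it onto a transportation system digital twin. The KPI choice, the formula, and the class names are assumptions made for the example rather than requirements of the disclosure.

```python
# Minimal sketch of a dynamic model that computes a key performance indicator
# (availability = uptime / planned operating time) and writes it onto a
# transportation system digital twin. Names and the KPI choice are illustrative.

from dataclasses import dataclass, field
from typing import Dict


@dataclass
class TransportationSystemTwin:
    twin_id: str
    kpis: Dict[str, float] = field(default_factory=dict)


def availability_model(uptime_hours: float, planned_hours: float) -> float:
    """Availability KPI: fraction of planned operating time the system was running."""
    if planned_hours <= 0:
        raise ValueError("planned_hours must be positive")
    return uptime_hours / planned_hours


def update_kpi(twin: TransportationSystemTwin, name: str, value: float) -> None:
    twin.kpis[name] = value   # update the KPI value on the digital twin


system_twin = TransportationSystemTwin("fleet-depot-7")
update_kpi(system_twin, "availability",
           availability_model(uptime_hours=702.0, planned_hours=720.0))
print(system_twin.kpis)   # {'availability': 0.975}
```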
SFT-106-A-PCT [00101] In embodiments, the key performance indicator is selected from the set of uptime, capacity utilization, on standard operating efficiency, overall operating efficiency, overall equipment effectiveness, machine downtime, unscheduled downtime, machine set up time, on-time delivery, training hours, employee turnover, reportable health & safety incidents, revenue per employee, profit per employee, schedule attainment, planned maintenance percentage, and availability. [00102] In embodiments, the request is received from a client application that corresponds to a transportation system or one or more transportation entities within the transportation system. [00103] In embodiments, the request is received from a client application that supports a network connected sensor system. [00104] In embodiments, the request is received from a client application that supports a vibration sensor system. [00105] In embodiments, the one or more transportation system digital twins include one or more digital twins of transportation entities. [00106] In embodiments, the one or more dynamic models take data selected from the set of vibration, temperature, pressure, humidity, wind, rainfall, tide, storm surge, cloud cover, snowfall, visibility, radiation, audio, video, image, water level, quantum, flow rate, signal power, signal frequency, motion, displacement, velocity, acceleration, lighting level, financial, cost, stock market, news, social media, revenue, worker, maintenance, productivity, asset performance, worker performance, worker response time, analyte concentration, biological compound concentration, metal concentration, and organic compound concentration data. [00107] In embodiments, the selected data sources are selected from the group consisting of an analog vibration sensor, a digital vibration sensor, a fixed digital vibration sensor, a tri-axial vibration sensor, a single axis vibration sensor, an optical vibration sensor, a switch, a network connected device, and a machine vision system. [00108] In embodiments, retrieving the one or more dynamic models includes identifying the one or more dynamic models based on the request and a respective type of the one or more transportation system digital twins. [00109] In embodiments, the one or more dynamic models are identified using a lookup table. [00110] In embodiments, a digital twin dynamic model system retrieves the data from the selected data sources via a digital twin I/O system. [00111] According to some embodiments of the present disclosure, a method is disclosed. The method includes: receiving imported data from one or more data sources, the imported data corresponding to a transportation system; generating a digital twin of a transportation system representing the transportation system based on the imported data; identifying one or more transportation entities within the transportation system; generating a set of discrete digital twins
SFT-106-A-PCT representing the one or more transportation entities within the transportation system; embedding the set of discrete digital twins within the digital twin of the transportation system; establishing a connection with a sensor system of the transportation system; receiving real-time sensor data from one or more sensors of the sensor system via the connection; and updating at least one of the transportation system digital twin and the set of discrete digital twins based on the real-time sensor data. [00112] In embodiments, the connection with the sensor system is established via an application programming interface (API). [00113] In embodiments, the transportation system digital twin and the set of discrete digital twins are visual digital twins that are configured to be rendered in a visual manner. In some embodiments, the method further includes outputting the visual digital twins to a client application that displays the visual digital twins via a virtual reality headset. In some embodiments, the method further includes outputting the visual digital twins to a client application that displays the visual digital twins via a display device of a user device. In some embodiments, the method further includes outputting the visual digital twins to a client application that displays the visual digital twins in a display interface with information related to the digital twins overlaid on the visual digital twins or displayed within the display interface. In some embodiments, the method further includes outputting the visual digital twins to a client application that displays the visual digital twins via an augmented reality-enabled device. [00114] In some embodiments, the method further includes instantiating a graph database having a set of nodes connected by edges, wherein a first node of the set of nodes contains data defining the transportation system digital twin and one or more entity nodes respectively contain respective data defining a respective discrete digital twin of the set of discrete digital twins. In some embodiments, each edge represents a relationship between two respective digital twins. In some of these embodiments embedding a discrete digital twin includes connecting an entity node corresponding to a respective discrete digital twin to the first node with an edge representing a respective relationship between a respective transportation entity represented by the respective discrete digital twin and the transportation system. In some embodiments, each edge represents a spatial relationship between two respective digital twins. In some embodiments, each edge represents an operational relationship between two respective digital twins. In some embodiments, each edge stores metadata corresponding to the relationship between the two respective digital twins. In some embodiments, each entity node of the one or more entity nodes includes one or more properties of respective properties of the respective transportation entity represented by the entity node. In some embodiments, each entity node of the one or more entity nodes includes one or more behaviors of respective properties of the respective transportation entity represented by
SFT-106-A-PCT the entity node. In some embodiments, the transportation system node includes one or more properties of the transportation system. In some embodiments, the transportation system node includes one or more behaviors of the transportation system. [00115] In some embodiments, the method further includes executing a simulation based on the transportation system digital twin and the set of discrete digital twins. In some embodiments, the simulation simulates an operation of a machine that produces an output based on a set of inputs. In some embodiments, the simulation simulates vibrational patterns of a bearing in a machine of a transportation system. [00116] In embodiments, the one or more transportation entities are selected from a set of machine components, infrastructure components, equipment components, workpiece components, tool components, vessel components, vehicle components, chassis components, drivetrain components, electrical components, fluid handling components, mechanical components, power components, manufacturing components, energy production components, material extraction components, workers, robots, assembly lines, and vehicles. [00117] In embodiments, the transportation system includes one of a mobile factory, a mobile energy production facility, a mobile material extraction facility, a mining vehicle or device, a drilling/tunneling vehicle or device, a mobile food processing facility, a cargo vessel, a tanker vessel, and a mobile storage facility. [00118] In embodiments, the imported data includes a three-dimensional scan of the transportation system. [00119] In embodiments, the imported data includes a LIDAR scan of the transportation system. [00120] In embodiments, generating the digital twin of the transportation system includes generating a set of surfaces of the transportation system. [00121] In embodiments, generating the digital twin of the transportation system includes configuring a set of dimensions of the transportation system. [00122] In embodiments, generating the set of discrete digital twins includes importing a predefined digital twin of a transportation entity from a manufacturer of the transportation entity, wherein the predefined digital twin includes properties and behaviors of the transportation entity. [00123] In embodiments, generating the set of discrete digital twins includes classifying a transportation entity within the imported data of the transportation system and generating a discrete digital twin corresponding to the classified transportation entity. [00124] According to aspects of the present disclosure, a system for monitoring interaction within a transportation system includes a digital twin datastore and one or more processors. The digital twin datastore includes data collected by a set of proximity sensors disposed within a transportation system. The data includes location data indicating respective locations of a plurality of elements
SFT-106-A-PCT within the transportation system. The one or more processors are configured to maintain, via the digital twin datastore, a transportation system digital twin for the transportation system, receive signals indicating actuation of at least one proximity sensor within the set of proximity sensors by a real-world element from the plurality of elements, collect, in response to actuation of the set of proximity sensors, updated location data for the real-world element using the set of proximity sensors, and update the transportation system digital twin within the digital twin datastore to include the updated location data. [00125] In embodiments, each of the set of proximity sensors is configured to detect a device associated with a user. [00126] In embodiments, the device is a wearable device. [00127] In embodiments, the device is an RFID device. [00128] In embodiments, each element of the plurality of elements is a mobile element. [00129] In embodiments, each element of the plurality of elements is a respective worker. [00130] In embodiments, the plurality of elements includes mobile equipment elements and workers, mobile-equipment-position data is determined using data transmitted by the respective mobile equipment element, and worker-position data is determined using data obtained by the system. [00131] In embodiments, the worker-position data is determined using information transmitted from a device associated with respective workers. [00132] In embodiments, the actuation of the set of proximity sensors occurs in response to interaction between the respective worker and the set of proximity sensors. [00133] In embodiments, the actuation of the set of proximity sensors occurs in response to interaction between a worker and a respective at least one proximity-sensor digital twin corresponding to the set of proximity sensors. [00134] In embodiments, the one or more processors collect updated location data for the plurality of elements using the set of proximity sensors in response to the actuation of the set of proximity sensors. [00135] According to aspects of the present disclosure, a system for monitoring a transportation system having real-world elements disposed therein includes a digital twin datastore and one or more processors. The digital twin datastore includes a set of states stored therein. The set of states includes states for one or more of the real-world elements. Each state within the set of states is uniquely identifiable by a set of identifying criteria from a set of monitored attributes. The set of monitored attributes corresponds to signals received from a sensor array operatively coupled to the real-world elements. The one or more processors are configured to maintain, via the digital twin datastore, a transportation-system digital twin for the transportation system, receive, via the sensor
SFT-106-A-PCT array, signals for one or more attributes within the set of monitored attributes, determine a present state for one or more of the real-world elements in response to determining that the signals for the one or more attributes satisfy a respective set of identifying criteria, and update, in response to determining the present state, the transportation system digital twin to include the present state of the one or more of the real-world elements. The present state corresponds to the respective state within the set of states. [00136] In embodiments, a cognitive intelligence system stores the identifying criteria within the digital twin datastore. [00137] In embodiments, a cognitive intelligence system, in response to receiving the identifying criteria, updates triggering conditions for the set of monitored attributes to include an updated triggering condition. [00138] In embodiments, the updated triggering condition is reducing time intervals between receiving sensed attributes from the set of monitored attributes. [00139] In embodiments, the sensed attributes are the one or more attributes that satisfy the respective set of identifying criteria. [00140] In embodiments, the sensed attributes are all attributes corresponding to the respective real-world element. [00141] In embodiments, a cognitive intelligence system determines whether instructions exist for responding to the state and the cognitive intelligence system, in response to determining no instructions exist, determines instructions for responding to the state using a digital twin simulation system. [00142] In embodiments, the digital twin simulation system and the cognitive intelligence system repeatedly iterate simulated values and response actions until an associated cost function is minimized and the one or more processors are further configured to, in response to minimization of the associated cost function, store the response action that minimizes the associated cost function within the digital twin datastore. [00143] In embodiments, a cognitive intelligence system is configured to affect the response actions associated with the state. [00144] In embodiments, a cognitive intelligence system is configured to halt operation of one or more real-world elements that are identified by the response actions. [00145] In embodiments, a cognitive intelligence system is configured to determine resources for the transportation system identified by the response actions and alter the resources in response thereto. [00146] In embodiments, the resources include data transfer bandwidth and altering the resources includes establishing additional connections to thereby increase the data transfer bandwidth.
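For illustration only, the sketch below shows one way the state-identification logic of the aspect above could be expressed: each candidate state carries identifying criteria over the monitored attributes, and a state whose criteria the incoming signals satisfy is taken as the present state of the real-world element. The state names, thresholds, and the simple linear scan are hypothetical simplifications introduced for the example.

```python
# Sketch of state identification: each candidate state is identified by criteria
# over monitored attributes; when incoming signals satisfy a state's criteria,
# that state becomes the element's present state. Names and thresholds are hypothetical.

from typing import Callable, Dict, Optional

Attributes = Dict[str, float]

# identifying criteria: state name -> predicate over the monitored attributes
IDENTIFYING_CRITERIA: Dict[str, Callable[[Attributes], bool]] = {
    "overheating": lambda a: a.get("temperature_c", 0.0) > 95.0,
    "excessive_vibration": lambda a: a.get("rms_velocity_mm_s", 0.0) > 7.1,
    "normal": lambda a: (a.get("temperature_c", 0.0) <= 95.0
                         and a.get("rms_velocity_mm_s", 0.0) <= 7.1),
}


def determine_present_state(signals: Attributes) -> Optional[str]:
    """Return the first state whose identifying criteria the received signals satisfy."""
    for state, criteria in IDENTIFYING_CRITERIA.items():
        if criteria(signals):
            return state
    return None   # no known state matched; the digital twin is left unchanged


twin_state = {"element": "axle-bearing-3", "state": None}
twin_state["state"] = determine_present_state({"temperature_c": 101.2, "rms_velocity_mm_s": 4.0})
print(twin_state)   # {'element': 'axle-bearing-3', 'state': 'overheating'}
```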
SFT-106-A-PCT [00147] According to aspects of the present disclosure, a system for monitoring navigational route data through a transportation system has real-world elements disposed therein includes a digital twin datastore and one or more processors. The digital twin datastore includes a transportation system digital twin corresponding to the transportation system and a worker digital twin corresponding to a respective worker of a set of workers within the transportation system. The one or more processors are configured to maintain, via the digital twin datastore, the transportation system digital twin to include contemporaneous positions for the set of workers within the transportation system, monitor movement of each worker in the set of workers via a sensor array, determine, in response to detecting movement of the respective worker, navigational route data for the respective worker, update the transportation system digital twin to include indicia of the navigational route data for the respective worker, and move the worker digital twin along a route of the navigational route data. [00148] In embodiments, the one or more processors are further configured to update, in response to representing movement of the respective worker, determine navigational route data for remaining workers in the set of workers. [00149] In embodiments, the navigational route data includes a route for collecting vibration measurements from one or more machines in the transportation system. [00150] In embodiments, the navigational route data automatically transmitted to the system by one or more individual-associated devices. [00151] In embodiments, the individual-associated device is a mobile device that has cellular data capabilities. [00152] In embodiments, the individual-associated device is a wearable device associated with the worker. [00153] In embodiments, the navigational route data is determined via environment-associated sensors. [00154] In embodiments, the navigational route data is determined using historical routing data stored in the digital twin datastore. [00155] In embodiments, the historical routing data was obtained using the respective worker. [00156] In embodiments, the historical routing data was obtained using another worker. [00157] In embodiments, the historical routing data is associated with a current task of the worker. [00158] In embodiments, the digital twin datastore includes a transportation system digital twin. [00159] In embodiments, the one or more processors are further configured to determine existence of a conflict between the navigational route data and the transportation system digital twin, alter, in response to determining accuracy of the transportation system digital twin via the sensor array, the navigational route data for the worker, and update, in response to determining inaccuracy of
SFT-106-A-PCT the transportation system digital twin via the sensor array, the transportation system digital twin to thereby resolve the conflict. [00160] In embodiments, the transportation system digital twin is updated using collected data transmitted from the worker. [00161] In embodiments, the collected data includes proximity sensor data, image data, or combinations thereof. [00162] According to aspects of the present disclosure, a system for monitoring navigational route data includes a digital twin datastore and one or more processors. The digital twin datastore stores a transportation system digital twin with real-world-element digital twins embedded therein. The transportation system digital twin provides a digital twin of a transportation system. Each real- world-element digital twin provides an other digital twin for corresponding real-world elements within the transportation system. The corresponding real-world-elements include a set of workers. The one or more processors are configured to monitor movement of each worker in the set of workers, determine navigational route data for at least one worker in the set of workers, and represent the movement of the at least one worker by movement of associated digital twins using the navigational route data. [00163] In embodiments, the one or more processors are further configured to update, in response to representing movement of the at least one worker, determine navigational route data for remaining workers in the set of workers. [00164] In embodiments, the navigational route data includes a route for collecting vibration measurements from one or more machines in the transportation system. [00165] In embodiments, the navigational route data automatically transmitted to the system by one or more individual-associated devices. [00166] In embodiments, the individual-associated device is a mobile device that has cellular data capabilities. [00167] In embodiments, the individual-associated device is a wearable device associated with the worker. [00168] In embodiments, the navigational route data is determined via environment-associated sensors. [00169] In embodiments, the navigational route data is determined using historical routing data stored in the digital twin datastore. [00170] In embodiments, the historical route data was obtained using the respective worker. [00171] In embodiments, the historical route data was obtained using another worker. [00172] In embodiments, the historical route data is associated with a current task of the worker. [00173] In embodiments, the digital twin datastore includes a transportation system digital twin.
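A minimal, purely illustrative sketch of moving a worker digital twin along a route of the navigational route data, as contemplated in the aspects above, follows; the waypoint representation, coordinates, and class names are assumptions made for the example.

```python
# Sketch of moving a worker digital twin along a route of the navigational route data.
# Coordinates, waypoints, and class names are illustrative.

from dataclasses import dataclass
from typing import List, Tuple

Waypoint = Tuple[float, float]   # (x, y) position within the transportation system


@dataclass
class WorkerDigitalTwin:
    worker_id: str
    position: Waypoint


def move_along_route(twin: WorkerDigitalTwin, route: List[Waypoint]) -> List[Waypoint]:
    """Step the worker digital twin through each waypoint, returning the visited trace."""
    trace = [twin.position]
    for waypoint in route:
        twin.position = waypoint   # update the contemporaneous position held by the twin
        trace.append(waypoint)
    return trace


worker_twin = WorkerDigitalTwin("worker-12", position=(0.0, 0.0))
vibration_collection_route = [(5.0, 0.0), (5.0, 8.0), (12.0, 8.0)]   # e.g., machines to measure
print(move_along_route(worker_twin, vibration_collection_route)[-1])  # (12.0, 8.0)
```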
SFT-106-A-PCT [00174] In embodiments, the one or more processors are further configured to determine existence of a conflict between the navigational route data and the transportation system digital twin, alter, in response to determining accuracy of the transportation system digital twin via a sensor array, the navigational route data for the worker, and update, in response to determining inaccuracy of the transportation system digital twin via the sensor array, the transportation system digital twin to thereby resolve the conflict. [00175] In embodiments, the transportation system digital twin is updated using collected data transmitted from the worker. [00176] In embodiments, the collected data includes proximity sensor data, image data, or combinations thereof. [00177] According to aspects of the present disclosure, a system for representing workpiece objects in a digital twin includes a digital twin datastore and one or more processors. The digital twin datastore stores a transportation-system digital twin with real-world-element digital twins embedded therein. The transportation system digital twin provides a digital twin of a transportation system. Each real-world-element digital twin providing an other digital twin for corresponding real-world elements within the transportation system. The corresponding real-world-elements including a workpiece and a worker. The one or more processors are configured to simulate, using a digital twin simulation system, a set of physical interactions to be performed on the workpiece by the worker. The simulation includes obtaining the set of physical interactions, determining an expected duration for performance of each physical interaction within the set of physical interactions based on historical data of the worker, and storing, within the digital twin datastore, workpiece digital twins corresponding to performance of the set of physical interactions on the workpiece. [00178] In embodiments, the historical data is obtained from user-input data. [00179] In embodiments, the historical data is obtained from a sensor array within the transportation system. [00180] In embodiments, the historical data is obtained from a wearable device worn by the worker. [00181] In embodiments, each datum of the historical data includes indicia of a first time and a second time, and the first time is a time of performance for the physical interaction. [00182] In embodiments, the second time is a time for beginning an expected break time of the worker. [00183] In embodiments, the historical data further includes indicia of a duration for the expected break time. [00184] In embodiments, the second time is a time for ending an expected break time of the worker.
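By way of non-limiting example of the workpiece simulation described above, the sketch below estimates the expected duration of each physical interaction from a worker's historical data, here as a simple mean of past performance times; the sample records and the mean estimator are illustrative assumptions only and do not reflect the break-time adjustments contemplated in the surrounding embodiments.

```python
# Sketch of estimating expected interaction durations from a worker's historical data.
# The sample records and the simple mean estimator are illustrative assumptions.

from statistics import mean
from typing import Dict, List

# interaction name -> historically observed durations for this worker, in minutes
HISTORICAL_DURATIONS: Dict[str, List[float]] = {
    "mount_workpiece": [4.0, 5.5, 4.5],
    "torque_fasteners": [11.0, 9.5, 10.5],
}


def expected_durations(interactions: List[str],
                       history: Dict[str, List[float]]) -> Dict[str, float]:
    """Expected duration per interaction: the mean of the worker's past performances."""
    return {name: mean(history[name]) for name in interactions}


plan = expected_durations(["mount_workpiece", "torque_fasteners"], HISTORICAL_DURATIONS)
print(plan)                 # {'mount_workpiece': 4.666..., 'torque_fasteners': 10.333...}
print(sum(plan.values()))   # total simulated time on the workpiece, in minutes
```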
SFT-106-A-PCT [00185] In embodiments, the historical data further includes indicia of a duration for the expected break time. [00186] In embodiments, the second time is a time for ending an unexpected break time of the worker. [00187] In embodiments, the historical data further includes indicia of a duration for the unexpected break time. [00188] In embodiments, each datum of the historical data includes indicia of consecutive interactions of the worker with a plurality of other workpieces prior to performing the set of physical interactions with the workpiece. [00189] In embodiments, each datum of the historical data includes indicia of consecutive days the worker was present within the transportation system. [00190] In embodiments, each datum of the historical data includes indicia of an age of the worker. [00191] In embodiments, the historical data further includes indicia of a first duration for an expected break time of the worker and a second duration for an unexpected break time of the worker, each datum of the historical data includes indicia of a plurality of times, indicia of consecutive interactions of the worker with a plurality of other workpieces prior to performing the set of physical interactions with the workpiece and indicia of consecutive days the worker was present within the transportation system, or indicia of an age of the worker. The plurality of times includes a first time, a second time, a third time, and a fourth time. The first time is a time of performance for the physical interaction, the second time is a time for beginning the expected break time, the third time is a time for ending the expected break time, and the fourth time is a time for ending the unexpected break time. [00192] In embodiments, the workpiece digital twins are a first workpiece digital twin corresponding to the workpiece prior to performance of the physical interaction and a second workpiece digital twin corresponding to the workpiece after performance of the set of physical interactions. [00193] In embodiments, the workpiece digital twins are a plurality of workpiece digital twins, each of the plurality of workpiece digital twins corresponds to the workpiece after performance of a respective one of the set of physical interactions. [00194] According to aspects of the present disclosure, a system for inducing an experience via a wearable device includes a digital twin datastore and one or more processors. The digital twin datastore stores a transportation-system digital twin with real-world-element digital twins embedded therein. The transportation system digital twin provides a digital twin of a transportation system. Each real-world-element digital twin providing an other digital twin for corresponding real-world elements within the transportation system. The corresponding real-world-elements
SFT-106-A-PCT including a wearable device worn by a wearer within the transportation system. The one or more processors are configured to embed a set of control instructions for a wearable device within the digital twins and induce, in response to an interaction between the wearable device and each respective one of the digital twins, an experience for the wearer of the wearable device. [00195] In embodiments, the wearable device is configured to output video, audio, haptic feedback, or combinations thereof to induce the experience for the wearer. [00196] In embodiments, the experience is a virtual reality experience. [00197] In embodiments, the wearable device includes an image capture device and the interaction includes the wearable device capturing an image of the digital twin. [00198] In embodiments, the wearable device includes a display device and the experience includes display of information related to the respective digital twin. [00199] In embodiments, the information displayed includes financial data associated with the digital twin. [00200] In embodiments, the information displayed includes a profit or loss associated with operation of the digital twin. [00201] In embodiments, the information displayed includes information related to an occluded element that is at least partially occluded by a foreground element. [00202] In embodiments, the information displayed includes an operating parameter for the occluded element. [00203] In embodiments, the information displayed further includes a comparison to a design parameter corresponding to the operating parameter displayed. [00204] In embodiments, the comparison includes altering display of the operating parameter to change a color, size, or display period for the operating parameter. [00205] In embodiments, the information includes a virtual model of the occluded element overlaid on the occluded element and visible with the foreground element. [00206] In embodiments, the information includes indicia for removable elements that are configured to provide access to the occluded element. Each indicium is displayed proximate to the respective removable element. [00207] In embodiments, the indicia are sequentially displayed such that a first indicium corresponding to a first removable element is displayed, and a second indicium corresponding to a second removable element is displayed in response to a worker removing the first removable element. [00208] According to aspects of the present disclosure, a system for embedding device output in a transportation system digital twin includes a digital twin datastore and one or more processors. The digital twin datastore stores a transportation system digital twin having real-world-element digital
SFT-106-A-PCT twins embedded therein. The transportation system digital twin provides a digital twin of a transportation system. Each real-world-element digital twin providing an other digital twin for corresponding real-world elements within the transportation system. The real-world elements include a simultaneous location and mapping sensor. The one or more processors are configured to obtain location information from the simultaneous location and mapping sensor, determine that the simultaneous location and mapping sensor is disposed within the transportation system, collect mapping information, pathing information, or a combination thereof from the simultaneous location and mapping sensor, and update the transportation system digital twin using the mapping information, the pathing information, or the combination thereof. The collection is in response to determining the simultaneous location and mapping sensor is within the transportation system. [00209] In embodiments, the one or more processors are further configured to detect objects within the mapping information and, for each detected object within the mapping information, determine whether the detected object corresponds to an existing real-world-element digital twin, add, in response to determining that the detected object does not correspond to an existing real-world- element digital twin, a detected-object digital twin to the real-world-element digital twins within the digital twin datastore using a digital twin management system, and update, in response to determining that the detected object corresponds to an existing real-world-element digital twin, the real-world-element digital twin to include new information detected by the simultaneous location and mapping sensor. [00210] In embodiments, the simultaneous location and mapping sensor is configured to produce the mapping information using a sub-optimal mapping algorithm. [00211] In embodiments, the sub-optimal mapping algorithm produces bounded-region representations for elements within the transportation system. [00212] In embodiments, the one or more processors are further configured to obtain objects detected by the sub-optimal mapping algorithm, determine whether the detected object corresponds to an existing real-world-element digital twin, and update, in response to determining the detected object corresponds to the existing real-world-element digital twin, the mapping information to include dimensional information for the real-world-element digital twin. [00213] In embodiments, the updated mapping information is provided to the simultaneous location and mapping sensor to thereby optimize navigation through the transportation system. [00214] In embodiments, the one or more processors are further configured to request, in response to determining the detected object does not correspond to an existing real-world-element digital twin, updated data for the detected object from the simultaneous location and mapping sensor that is configured to produce a refined map of the detected object.
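A minimal sketch of the reconciliation described in paragraphs [00208] and [00209], assuming a dictionary-backed datastore and a hypothetical DigitalTwinManagementSystem class: detected objects from the simultaneous location and mapping sensor either update an existing real-world-element digital twin or are added as a detected-object digital twin.

class DigitalTwinManagementSystem:
    # Minimal stand-in for the digital twin management system of paragraph [00209].
    def __init__(self, datastore):
        self.datastore = datastore  # dict: element_id -> twin record

    def add_detected_object(self, obj):
        self.datastore[obj["id"]] = {"geometry": obj["geometry"], "source": "slam"}

    def update_existing(self, obj):
        self.datastore[obj["id"]].update(geometry=obj["geometry"], source="slam")

def embed_slam_output(datastore, sensor_in_system, mapping_info):
    # Collect mapping information only when the SLAM sensor is determined to be
    # inside the transportation system, then reconcile detected objects with twins.
    if not sensor_in_system:
        return datastore
    manager = DigitalTwinManagementSystem(datastore)
    for obj in mapping_info["detected_objects"]:
        if obj["id"] in datastore:
            manager.update_existing(obj)      # refresh with newly detected information
        else:
            manager.add_detected_object(obj)  # embed a detected-object digital twin
    return datastore

if __name__ == "__main__":
    twins = {"dock-3": {"geometry": [(0, 0), (4, 0), (4, 2), (0, 2)]}}
    mapping = {"detected_objects": [
        {"id": "dock-3", "geometry": [(0, 0), (4, 0), (4, 2.1), (0, 2.1)]},
        {"id": "pallet-9", "geometry": [(6, 1), (7, 1), (7, 2), (6, 2)]},
    ]}
    print(embed_slam_output(twins, sensor_in_system=True, mapping_info=mapping))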
SFT-106-A-PCT [00215] In embodiments, the simultaneous location and mapping sensor provides the updated data using a second algorithm. The second algorithm is configured to increase resolution of the detected object. [00216] In embodiments, the simultaneous location and mapping sensor, in response to receiving the request, captures the updated data for the real-world element corresponding to the detected object. [00217] In embodiments, the simultaneous location and mapping sensor is within an autonomous vehicle navigating the transportation system. [00218] In embodiments, navigation of the autonomous vehicle includes use of digital twins received from the digital twin datastore. [00219] According to aspects of the present disclosure, a system for embedding device output in a transportation system digital twin includes a digital twin datastore and one or more processors. The digital twin datastore stores a transportation-system digital twin having real-world-element digital twins embedded therein. The transportation system digital twin provides a digital twin of a transportation system. Each real-world-element digital twin providing an other digital twin for corresponding real-world elements within the transportation system. The real-world elements including a light detection and ranging sensor. The one or more processors are configured to obtain output from the light detection and ranging sensor and embed the output of the light detection and ranging sensor into the transportation system digital twin to define external features of at least one of the real-world elements within the transportation system. [00220] In embodiments, the one or more processors are further configured to analyze the output to determine a plurality of detected objects within the output of the light detection and ranging sensor. Each of the plurality of detected objects is a closed shape. [00221] In embodiments, the one or more processors are further configured to compare the plurality of detected objects to the real-world-element digital twins within the digital twin datastore and, for each of the plurality of detected objects, update, in response to determining the detected object corresponds to one or more of the real-world-element digital twins, the respective real- world-element digital twin within the digital twin datastore, and add, in response to determining the detected object does not correspond to the real-world-element digital twins, a new real-world- element digital twin to the digital twin datastore. [00222] In embodiments, the output from the light detection and ranging sensor is received in a first resolution and the one or more processors are further configured to compare the plurality of detected objects to the real-world-element digital twins within the digital twin datastore and, for each of the plurality of detected objects that does not correspond to a real-world-element digital
SFT-106-A-PCT twin, direct the light detection and ranging sensor to increase scan resolution to a second resolution and perform a scan of the detected object using the second resolution. [00223] In embodiments, the scan is at least 5 times the resolution of the first resolution. [00224] In embodiments, the scan is at least 10 times the resolution of the first resolution. [00225] In embodiments, the output from the light detection and ranging sensor is received in a first resolution and the one or more processors are further configured to compare the plurality of detected objects to the real-world-element digital twins within the digital twin datastore and, for each of the plurality of detected objects, update, in response to determining the detected object corresponds to one or more of the real-world-element digital twins, the respective real-world- element digital twin within the digital twin datastore. In response to determining the detected object does not correspond to the real-world-element digital twins, the system is further configured to direct the light detection and ranging sensor to increase scan resolution to a second resolution, perform a scan of the detected object using the second resolution, and add a new real-world- element digital twin for the detected object to the digital twin datastore. [00226] According to aspects of the present disclosure, a system for embedding device output in a transportation system digital twin includes a digital twin datastore and one or more processors. The digital twin datastore includes a transportation-system digital twin providing a digital twin of a transportation system. The transportation system includes real-world elements disposed therein. The real-world elements include a plurality of wearable devices. The transportation system digital twin includes a plurality of real-world-element digital twins embedded therein. Each real-world- element digital twin corresponds to a respective at least one of the real-world elements. The one or more processors are configured to, for each of the plurality of wearable devices, obtain output from the wearable device, and update, in response to detecting a triggering condition, the transportation system digital twin using the output from the wearable device. [00227] In embodiments, the triggering condition is receipt of the output from the wearable device. [00228] In embodiments, the triggering condition is a determination that the output from the wearable device is different from a previously stored output from the wearable device. [00229] In embodiments, the triggering condition is a determination that received output from another wearable device within the plurality of wearable devices is different from a previously stored output from the other wearable device. [00230] In embodiments, the triggering condition includes a mismatch between the output from the wearable device and contemporaneous output from another of the wearable devices. [00231] In embodiments, the triggering condition includes a mismatch between the output from the wearable device and a simulated value for the wearable device.
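The adaptive-resolution behavior of paragraphs [00222] through [00225] might be organized as below: detected closed shapes that match an existing twin update it, while unmatched shapes trigger a rescan at a second resolution (here ten times the first) before a new twin is added. The rescan callable and shape representation are assumptions.

def reconcile_lidar_output(datastore, detected_objects, rescan, first_resolution=1.0,
                           resolution_factor=10):
    # detected_objects: closed shapes extracted from the light detection and ranging output.
    # rescan: callable(object_id, resolution) returning a refined shape (hypothetical).
    for obj in detected_objects:
        if obj["id"] in datastore:
            # Matching twin found: update it with the newly observed external features.
            datastore[obj["id"]]["shape"] = obj["shape"]
        else:
            # No matching twin: direct the sensor to a second, higher resolution
            # and add a new twin from the refined scan.
            second_resolution = first_resolution * resolution_factor
            refined = rescan(obj["id"], second_resolution)
            datastore[obj["id"]] = {"shape": refined, "scan_resolution": second_resolution}
    return datastore

if __name__ == "__main__":
    def fake_rescan(object_id, resolution):
        # Placeholder for the second-resolution scan of paragraph [00225].
        return {"vertices": 128, "resolution": resolution}

    twins = {"conveyor-1": {"shape": {"vertices": 16, "resolution": 1.0}}}
    detections = [{"id": "conveyor-1", "shape": {"vertices": 16, "resolution": 1.0}},
                  {"id": "cart-4", "shape": {"vertices": 12, "resolution": 1.0}}]
    print(reconcile_lidar_output(twins, detections, fake_rescan))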
SFT-106-A-PCT [00232] In embodiments, the triggering condition includes user interaction with a digital twin corresponding to the wearable device. [00233] In embodiments, the one or more processors are further configured to detect objects within mapping information received from a simultaneous location and mapping sensor. For each detected object within the mapping information, the system is further configured to determine whether the detected object corresponds to an existing real-world-element digital twin, add, in response to determining that the detected object does not correspond to an existing real-world-element digital twin, a detected-object digital twin to the real-world-element digital twins within the digital twin datastore using a digital twin management system, and update, in response to determining that the detected object corresponds to an existing real-world-element digital twin, the real-world-element digital twin to include new information detected by the simultaneous location and mapping sensor. [00234] In embodiments, a simultaneous location and mapping sensor is configured to produce mapping information using a sub-optimal mapping algorithm. [00235] In embodiments, the sub-optimal mapping algorithm produces bounded-region representations for elements within the transportation system. [00236] In embodiments, the one or more processors are further configured to obtain objects detected by the sub-optimal mapping algorithm, determine whether the detected object corresponds to an existing real-world-element digital twin, and update, in response to determining the detected object corresponds to the existing real-world-element digital twin, the mapping information to include dimensional information from the real-world-element digital twin. [00237] In embodiments, the updated mapping information is provided to the simultaneous location and mapping sensor to thereby optimize navigation through the transportation system. [00238] In embodiments, the one or more processors are further configured to request, in response to determining the detected object does not correspond to an existing real-world-element digital twin, updated data for the detected object from the simultaneous location and mapping sensor that is configured to produce a refined map of the detected object. [00239] In embodiments, the simultaneous location and mapping sensor provides the updated data using a second algorithm. The second algorithm is configured to increase resolution of the detected object. [00240] In embodiments, the simultaneous location and mapping sensor, in response to receiving the request, captures the updated data for the real-world element corresponding to the detected object. [00241] In embodiments, the simultaneous location and mapping sensor is within an autonomous vehicle navigating the transportation system.
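One way to express the triggering conditions of paragraphs [00227] through [00232] is as a small predicate evaluated per wearable device, as in the sketch below; the argument names and the tolerance value are illustrative only.

def should_update_twin(output, stored_output=None, peer_output=None,
                       simulated_value=None, user_interacted=False, tolerance=0.05):
    # Returns (update?, reason) for one wearable device, mirroring the triggering
    # conditions of paragraphs [00227]-[00232]. Thresholds are illustrative only.
    if user_interacted:
        return True, "user interaction with the wearable's digital twin"
    if stored_output is not None and output != stored_output:
        return True, "output differs from previously stored output"
    if peer_output is not None and abs(output - peer_output) > tolerance:
        return True, "mismatch with contemporaneous output from another wearable"
    if simulated_value is not None and abs(output - simulated_value) > tolerance:
        return True, "mismatch with simulated value for the wearable"
    if stored_output is None:
        return True, "first receipt of output from the wearable"
    return False, "no triggering condition met"

if __name__ == "__main__":
    print(should_update_twin(output=98.7, stored_output=98.7, simulated_value=99.2))
    print(should_update_twin(output=98.7, stored_output=97.9))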
SFT-106-A-PCT [00242] In embodiments, navigation of the autonomous vehicle includes use of real-world-element digital twins received from the digital twin datastore. [00243] According to aspects of the present disclosure, a system for representing attributes in a transportation system digital twin includes a digital twin datastore and one or more processors. The digital twin datastore stores a transportation-system digital twin including real-world-element digital twins embedded therein. The transportation system digital twin corresponds to a transportation system. Each real-world-element digital twin provides a digital twin of a respective real-world element that is disposed within the transportation system. The real-world-element digital twins include mobile-element digital twins. Each mobile-element digital twin provides a digital twin of a respective mobile element within the real-world elements. The one or more processors are configured to, for each mobile element, determine, in response to occurrence of a triggering condition, a position of the mobile element, and update, in response to determining the position of the mobile element, the mobile-element digital twin corresponding to the mobile element to reflect the position of the mobile element. [00244] In embodiments, the mobile elements are workers within the transportation system. [00245] In embodiments, the mobile elements are vehicles within the transportation system. [00246] In embodiments, the triggering condition is expiration of a dynamically determined time interval. [00247] In embodiments, the dynamically determined time interval is increased in response to determining a single mobile element within the transportation system. [00248] In embodiments, the dynamically determined time interval is increased in response to determining occurrence of a predetermined period of reduced environmental activity. [00249] In embodiments, the dynamically determined time interval is decreased in response to determining abnormal activity within the transportation system. [00250] In embodiments, the dynamically determined time interval is a first time interval, and the dynamically determined time interval is decreased to a second time interval in response to determining movement of the mobile element. [00251] In embodiments, the dynamically determined time interval is increased from the second time interval to the first time interval in response to determining nonmovement of the mobile element for at least a third time interval. [00252] In embodiments, the triggering condition is expiration of a time interval. The time interval is calculated based on a probability that the mobile element has moved. [00253] In embodiments, the triggering condition is proximity of the mobile element to another of the mobile elements.
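The dynamically determined time interval of paragraphs [00246] through [00251] could be realized with a simple interval-update rule such as the following sketch; the interval constants are illustrative defaults, not values taken from the disclosure.

def next_polling_interval(current_interval, *, moving, abnormal_activity,
                          single_mobile_element, reduced_activity_period,
                          base_interval=30.0, fast_interval=5.0, slow_interval=120.0):
    # Returns the next interval (seconds) before re-determining a mobile element's
    # position, following the triggering logic of paragraphs [00246]-[00251].
    if abnormal_activity:
        return fast_interval                      # decrease on abnormal activity
    if moving:
        return fast_interval                      # decrease while the element moves
    if single_mobile_element or reduced_activity_period:
        return min(current_interval * 2, slow_interval)  # increase when little is changing
    return base_interval

if __name__ == "__main__":
    interval = 30.0
    for step in range(4):
        interval = next_polling_interval(interval, moving=False, abnormal_activity=False,
                                         single_mobile_element=True, reduced_activity_period=False)
        print(f"step {step}: poll again in {interval:.0f}s")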
SFT-106-A-PCT [00254] In embodiments, the triggering condition is based on density of movable elements within the transportation system. [00255] In embodiments, the path information is obtained from a navigation module of the mobile element. [00256] In embodiments, the one or more processors are further configured to obtain the path information including detecting, using a plurality of sensors within the transportation system, movement of the mobile element, obtaining a destination for the mobile element, calculating, using the plurality of sensors within the transportation system, an optimized path for the mobile element, and instructing the mobile element to navigate the optimized path. [00257] In embodiments, the optimized path includes using path information for other mobile elements within the real-world elements. [00258] In embodiments, the optimized path minimizes interactions between mobile elements and humans within the transportation system. [00259] In embodiments, the mobile elements include autonomous vehicles and non-autonomous vehicles, and the optimized path reduces interactions of the autonomous vehicles with the non-autonomous vehicles. [00260] In embodiments, the traffic modeling includes use of a particle traffic model, a trigger-response mobile-element-following traffic model, a macroscopic traffic model, a microscopic traffic model, a submicroscopic traffic model, a mesoscopic traffic model, or a combination thereof. [00261] According to aspects of the present disclosure, a system for representing design specification information includes a digital twin datastore and one or more processors. The digital twin datastore stores a transportation-system digital twin including real-world-element digital twins embedded therein. The transportation system digital twin corresponds to a transportation system. Each real-world-element digital twin provides a digital twin of a respective real-world element that is disposed within the transportation system. The one or more processors are configured to, for each of the real-world elements, determine a design specification for the real-world element, associate the design specification with the real-world-element digital twin, and display the design specification to a user in response to the user interacting with the real-world-element digital twin. [00262] In embodiments, the user interacting with the real-world-element digital twin includes the user selecting the real-world-element digital twin. [00263] In embodiments, the user interacting with the real-world-element digital twin includes the user directing an image capture device toward the real-world-element digital twin. [00264] In embodiments, the image capture device is a wearable device.
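The optimized path of paragraphs [00256] through [00258], which minimizes interactions between mobile elements and humans, can be approximated by a weighted shortest-path search in which cells occupied by workers carry an added traversal cost. The grid model and penalty value below are assumptions.

import heapq

def optimized_path(grid_size, start, goal, human_cells, human_penalty=10.0):
    # Dijkstra over a 4-connected grid; cells occupied by humans cost extra,
    # so the returned route tends to avoid human-mobile-element interactions.
    rows, cols = grid_size
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in visited:
            continue
        visited.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in visited:
                step = 1.0 + (human_penalty if (nr, nc) in human_cells else 0.0)
                heapq.heappush(frontier, (cost + step, (nr, nc), path + [(nr, nc)]))
    return None

if __name__ == "__main__":
    humans = {(1, 1), (1, 2), (2, 1)}  # worker positions taken from the digital twin datastore
    print(optimized_path((4, 4), start=(0, 0), goal=(3, 3), human_cells=humans))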
SFT-106-A-PCT [00265] In embodiments, the real-world element digital twin is a transportation-system digital twin. [00266] In embodiments, the design specification is stored in the digital twin datastore in response to input of the user. [00267] In embodiments, the design specification is determined using a digital twin simulation system. [00268] In embodiments, the one or more processors are further configured to, for each of the real-world elements, detect, using a sensor within the transportation system, one or more contemporaneous operating parameters, compare the one or more contemporaneous operating parameters to the design specification, and automatically display the design specification, the one or more contemporaneous operating parameters, or a combination thereof in response to a mismatch between the one or more contemporaneous operating parameters and the design specification. The one or more contemporaneous operating parameters correspond to the design specification of the real-world element. [00269] In embodiments, display of the design specification includes indicia of contemporaneous operating parameters. [00270] In embodiments, display of the design specification includes source indicia for the specification information. [00271] In embodiments, the source indicia inform the user that the design specification was determined via use of a digital twin simulation system. A more complete understanding of the disclosure will be appreciated from the description and accompanying drawings and the claims, which follow. [00272] According to aspects of the present disclosure, a method is provided for configuring role-based digital twins, comprising: receiving, by a processing system having one or more processors, an organizational definition of an enterprise, wherein the organizational definition defines a set of roles within the enterprise; generating, by the processing system, an organizational digital twin of the enterprise based on the organizational definition, wherein the organizational digital twin is a digital representation of an organizational structure of the enterprise; determining, by the processing system, a set of relationships between different roles within the set of roles based on the organizational definition; determining, by the processing system, a set of settings for a role from the set of roles based on the determined set of relationships; linking an identity of a respective individual to the role; determining, by the processing system, a configuration of a presentation layer of a role-based digital twin corresponding to the role based on the settings of the role that is linked to the identity, wherein the configuration of the presentation layer defines a set of states that is depicted in the role-based digital twin associated with the role; determining, by the processing
SFT-106-A-PCT system, a set of data sources that provide data corresponding to the set of states, wherein each data source provides one or more respective types of data; and configuring one or more data structures that are received from the one or more data sources, wherein the one or more data structures are configured to provide data used to populate one or more of the set of states in the role-based digital twin. [00273] In embodiments, an organizational definition may further identify a set of physical assets of the enterprise. [00274] In embodiments, determining a set of relationships may include parsing the organizational definition to identify a reporting structure and one or more business units of the enterprise. [00275] In embodiments, a set of relationships may be inferred from a reporting structure and a business unit. [00276] In embodiments, a set of identities may be linked to a set of roles, wherein each identity corresponds to a respective role from the set of roles. [00277] In embodiments, an organizational structure may include hierarchical components, which may be embodied in a graph data structure. [00278] In embodiments, a set of settings for a set of roles may include role-based preference settings. [00279] In embodiments, a role-based preference setting may be configured based on a set of role-specific templates. [00280] In embodiments, a set of templates may include at least one of a CEO template, a COO template, a CFO template, a counsel template, a board member template, a CTO template, a chief marketing officer template, an information technology manager template, a chief information officer template, a chief data officer template, an investor template, a customer template, a vendor template, a supplier template, an engineering manager template, a project manager template, an operations manager template, a sales manager template, a salesperson template, a service manager template, a maintenance operator template, and a business development template. [00281] In embodiments, a set of settings for the set of roles may include role-based taxonomy settings. [00282] In embodiments, a taxonomy setting may identify a taxonomy that is used to characterize data that is presented in a role-based digital twin, such that the data is presented in a taxonomy that is linked to the role corresponding to the role-based digital twin. [00283] In embodiments, a set of taxonomies includes at least one of a CEO taxonomy, a COO taxonomy, a CFO taxonomy, a counsel taxonomy, a board member taxonomy, a CTO taxonomy, a chief marketing officer taxonomy, an information technology manager taxonomy, a chief information officer taxonomy, a chief data officer taxonomy, an investor taxonomy, a customer
SFT-106-A-PCT taxonomy, a vendor taxonomy, a supplier taxonomy, an engineering manager taxonomy, a project manager taxonomy, an operations manager taxonomy, a sales manager taxonomy, a salesperson taxonomy, a service manager taxonomy, a maintenance operator taxonomy, and a business development taxonomy. [00284] In embodiments, at least one role of the set of roles may be selected from among a CEO role, a COO role, a CFO role, a counsel role, a board member role, a CTO role, an information technology manager role, a chief information officer role, a chief data officer role, a human resources manager role, an investor role, an engineering manager role, an accountant role, an auditor role, a resource planning role, a public relations manager role, a project manager role, an operations manager role, a research and development role, an engineer role, including but not limited to mechanical engineer, electrical engineer, semiconductor engineer, chemical engineer, computer science engineer, data science engineer, network engineer, or some other type of engineer, and a business development role. [00285] In embodiments, at least one role may be selected from among a factory manager role, a factory operations role, a factory worker role, a power plant manager role, a power plant operations role, a power plant worker role, an equipment service role, and an equipment maintenance operator role. [00286] In embodiments, at least one role may be selected from among a chief marketing officer role, a product development role, a supply chain manager role, a product design role, a marketing analyst role, a product manager role, a competitive analyst role, a customer service representative role, a procurement operator, an inbound logistics operator, an outbound logistics operator, a customer role, a supplier role, a vendor role, a demand management role, a marketing manager role, a sales manager role, a service manager role, a demand forecasting role, a retail manager role, a warehouse manager role, a salesperson role, and a distribution center manager role. [00287] According to aspects of the present disclosure, a method is provided for configuring a digital twin of a workforce, comprising: representing an enterprise organizational structure in a digital twin of an enterprise; parsing the structure to infer relationships among a set of roles within the organizational structure, the relationships and the roles defining a workforce of the enterprise; and configuring the presentation layer of a digital twin to represent the enterprise as a set of workforces having a set of attributes and relationships. [00288] In embodiments, a digital twin may integrate with an enterprise resource planning system that operates on a data structure representing a set of roles in the enterprise, such that changes in the enterprise resource planning system are automatically reflected in the digital twin. [00289] In embodiments, an organizational structure may include hierarchical components. [00290] In embodiments, hierarchical components may be embodied in a graph data structure.
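As a sketch of the organizational digital twin of paragraphs [00272] through [00277] and the graph data structure of paragraph [00290], an organizational definition might be parsed into a role graph with role-based presentation settings attached, as below; the definition format and template contents are hypothetical.

# A minimal sketch of an organizational digital twin held as a graph: nodes are roles,
# edges are reporting relationships inferred from an organizational definition.
org_definition = {
    "roles": ["CEO", "COO", "CFO", "operations manager", "maintenance operator"],
    "reports_to": {
        "COO": "CEO",
        "CFO": "CEO",
        "operations manager": "COO",
        "maintenance operator": "operations manager",
    },
}

role_templates = {  # role-based preference settings keyed by role (illustrative)
    "CEO": {"states": ["enterprise P&L", "strategic risks"]},
    "operations manager": {"states": ["line throughput", "open work orders"]},
}

def build_org_graph(definition):
    # Graph as adjacency lists: role -> direct reports.
    graph = {role: [] for role in definition["roles"]}
    for role, manager in definition["reports_to"].items():
        graph[manager].append(role)
    return graph

def presentation_layer(role, templates):
    # States depicted in the role-based digital twin for the given role.
    return templates.get(role, {"states": ["default operational summary"]})

if __name__ == "__main__":
    graph = build_org_graph(org_definition)
    print("direct reports of COO:", graph["COO"])
    print("CEO presentation layer:", presentation_layer("CEO", role_templates))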
SFT-106-A-PCT [00291] In embodiments, a workforce may be a factory operations workforce, a plant operations workforce, a resource extraction operations workforce, or some other type of workforce. [00292] In embodiments, at least one workforce role may be selected from among a CEO role, a COO role, a CFO role, a counsel role, a board member role, a CTO role, an information technology manager role, a chief information officer role, a chief data officer role, an investor role, an engineering manager role, a project manager role, an operations manager role, and a business development role. [00293] In embodiments, a digital twin may represent a recommendation for training for the workforce, a recommendation for augmentation of the workforce, a recommendation for configuration of a set of operations involving the workforce, a recommendation for configuration of the workforce, or some other kind of recommendation. [00294] In embodiments, a quantum computing system may provide a framework for providing a set of quantum computing services to one or more quantum computing clients within a transportation system. In some embodiments, the quantum computing system framework may be at least partially replicated in respective quantum computing clients. In embodiments, an individual client may include some or all of the capabilities of the quantum computing system, whereby the quantum computing system is adapted for the specific functions performed by the subsystems of the quantum computing client. Additionally, or alternatively, in some embodiments, the quantum computing system may be implemented as a set of microservices, such that different quantum computing clients may leverage the quantum computing system via one or more APIs exposed to the quantum computing clients within a transportation system. In these embodiments, the quantum computing system may be configured to perform various types of quantum computing services that may be adapted for different quantum computing clients within a transportation system. In either of these configurations, a quantum computing client may provide a request to the quantum computing system, whereby the request is to perform a specific task (e.g., an optimization). In response, the quantum computing system may execute the requested task and return a response to the quantum computing client within the transportation system. [00295] In embodiments, the transportation system may include a thalamus service and a set of input sensors streaming data from various sources across the system with centrally-managed data sources. The thalamus service may filter the streamed data into a control system such that the control system is never overwhelmed by the total volume of information. In embodiments, the thalamus service may provide an information suppression mechanism for information flows within the transportation system. This mechanism monitors all data streams and strips away irrelevant data streams by ensuring that the maximum data flows from all input sensors are always constrained.
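A minimal sketch of the information suppression mechanism of paragraph [00295], assuming a per-cycle item budget and control-system-supplied priority weights (for example, elevating flame sensors during a known fire event): streams are admitted in priority order until the cap is reached, and the remainder are suppressed. The cap, weights, and sensor identifiers are illustrative assumptions.

def thalamus_filter(streams, priorities, max_total_items=100):
    # streams: dict of sensor_id -> list of readings arriving this cycle.
    # priorities: dict of sensor_id -> priority weight set by the control system.
    # Returns a reduced set of readings whose total volume never exceeds the cap,
    # so the downstream control system is not overwhelmed.
    ordered = sorted(streams, key=lambda s: priorities.get(s, 0), reverse=True)
    budget, admitted = max_total_items, {}
    for sensor_id in ordered:
        if budget <= 0:
            break  # remaining (lowest-priority) streams are suppressed entirely
        take = min(len(streams[sensor_id]), budget)
        admitted[sensor_id] = streams[sensor_id][:take]
        budget -= take
    return admitted

if __name__ == "__main__":
    incoming = {"flame-07": list(range(60)), "hvac-02": list(range(80)), "rfid-44": list(range(90))}
    weights = {"flame-07": 10, "hvac-02": 2, "rfid-44": 1}  # control-system prioritization
    out = thalamus_filter(incoming, weights)
    print({k: len(v) for k, v in out.items()})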
SFT-106-A-PCT [00296] The thalamus service may be a gateway for all communication that responds to the prioritization of the control system. The control system may decide to change the prioritization of the data streamed from the thalamus service, for example, during a known fire in an isolated area, and the event may direct the thalamus service to continue to provide flame sensor information despite the fact that the majority of this data is not unusual. The thalamus service may be an integral part of the overall system communication framework within the transportation system. [00297] In embodiments, the thalamus service may include an intake management system. The intake management system may be configured to receive and process multiple large datasets by converting them into data streams that are sized and organized for subsequent use by a central control system operating within one or more transportation systems. For example, a robot may include vision and sensing systems that are used by its central control system to identify and move through an environment in real time. The intake management system may facilitate robot decision-making by parsing, filtering, classifying, or otherwise reducing the size and increasing the utility of multiple large datasets that would otherwise overwhelm the central control system. In embodiments, the intake management system may include an intake controller that works with an intelligence service to evaluate incoming data and take actions based on evaluation results. Evaluations and actions may include specific instruction sets received by the thalamus service, for example the use of a set of specific compression and prioritization tools stipulated within a “Networking” library module. In another example, thalamus service inputs may direct the use of specific filtering and suppression techniques. In a third example, thalamus service inputs may stipulate data filtering associated with an area of interest such as a certain type of financial transaction. The intake management system is also configured to recognize and manage datasets that are in a vectorized format such as PCMP, where they may be passed directly to central control, or alternatively deconstructed and processed separately. The intake management system may include a learning module that receives data from external sources that enables improvement and creation of application and data management library modules. In some cases, the intake management system may request external data to augment existing datasets. [00298] In embodiments, the transportation system may include a dual process artificial neural network (DPANN) system. The DPANN system may include an artificial neural network (ANN) having behaviors and operational processes (such as decision-making) that are products of a training system and a retraining system. The training system may be configured to perform automatic, trained execution of ANN operations. The retraining system performs effortful, analytical, intentional retraining of the ANN, such as based on one or more relevant aspects of the ANN, such as memory, one or more input data sets (including time information with respect to elements in such data sets), one or more goals or objectives (including ones that may vary
SFT-106-A-PCT dynamically, such as periodically and/or based on contextual changes, such as ones relating to the usage context of the ANN), and/or others. In cases involving memory-based retraining, the memory may include original/historical training data and refined training data. The DPANN system may include a dual process learning function (DPLF) configured to manage and perform an ongoing data retention process. The DPLF (including, where applicable, a memory management process) facilitates retraining and refining of behavior of the ANN. The DPLF provides a framework by which the ANN creates outputs such as predictions, classifications, recommendations, conclusions and/or other outputs based on historic inputs, new inputs, and new outputs (including outputs configured for specific use cases, including ones determined by parameters of the context of utilization (which may include performance parameters such as latency parameters, accuracy parameters, consistency parameters, bandwidth utilization parameters, processing capacity utilization parameters, prioritization parameters, energy utilization parameters, and many others)).
Summary of Transportation systems with quantum and/or biological systems
[00299] In embodiments, a transportation system includes: a first data system configured to, receive a plurality of data values of a data stream, generate a predictive model for predicting future data values of the data stream based on the received plurality of data values, wherein generating the predictive model comprises determining a plurality of model parameters, and transmit the plurality of model parameters; and a second data system configured to, receive the plurality of model parameters transmitted by the first data system, parameterize a predictive model using the plurality of model parameters, predict a future data value of the data stream using the parameterized predictive model, and adjust an operating state of a transportation system based on the future data value. In embodiments, adjusting the operating state of the transportation system includes, predicting, by the second data system, an effect of the operating state on the transportation system through an analysis of the social media-sourced data, and adjusting, by the second data system, at least one operating state of the transportation system responsive to the predicted effect thereon. In embodiments, adjusting the operating state of the transportation system includes, classifying, using a first neural network, social media data sourced from a plurality of social media sources as affecting the transportation system, predicting, using a second neural network, at least one operating objective of the transportation system based on the classified social media data, and adjusting, using a third neural network, the operating state of the transportation system to achieve the at least one operating objective of the transportation system. In embodiments, receiving the plurality of data values includes gathering social media-sourced data about a plurality of individuals, the data being sourced from a plurality of social media sources. In embodiments, the plurality of data values is received from one or more security cameras, and the data stream includes motion vectors extracted from video data captured by the security cameras.
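The two-system arrangement of paragraph [00299], in which a first data system transmits only model parameters and a second data system parameterizes its own predictive model to forecast future data values and adjust an operating state, is sketched below using an ordinary least-squares linear trend as the predictive model; the model family, threshold, and adjustment action are assumptions, not fixed by the disclosure.

def fit_linear_trend(values):
    # First data system: fit y = a + b*t by ordinary least squares and
    # return only the model parameters for transmission.
    n = len(values)
    t_mean = (n - 1) / 2
    y_mean = sum(values) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(values))
    den = sum((t - t_mean) ** 2 for t in range(n))
    b = num / den
    a = y_mean - b * t_mean
    return {"intercept": a, "slope": b, "n": n}

def predict_next(params, steps_ahead=1):
    # Second data system: parameterize its own copy of the model and predict
    # a future data value of the stream.
    t = params["n"] - 1 + steps_ahead
    return params["intercept"] + params["slope"] * t

def adjust_operating_state(predicted_value, threshold=50.0):
    # Illustrative adjustment: throttle intake when the predicted value exceeds a threshold.
    return "throttle" if predicted_value > threshold else "nominal"

if __name__ == "__main__":
    stream = [31.0, 34.5, 38.2, 41.9, 45.1]       # observed data values
    params = fit_linear_trend(stream)             # transmitted instead of the raw data
    forecast = predict_next(params, steps_ahead=2)
    print(round(forecast, 1), adjust_operating_state(forecast))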
SFT-106-A-PCT [00300] In embodiments, a method for prioritizing predictive model data streams includes: receiving, by the first device, social media data sourced from a plurality of social media sources as affecting a transportation system; classifying, by the first device, the social media data based on a set of model parameters for each of a plurality of predictive models, wherein each predictive model is trained to predict future data values of the transportation system; selecting, by the first device and from the classified social media data, at least one predictive model data stream; parameterizing, by the first device, a predictive model using the set of model parameters included in the selected at least one predictive model stream; and predicting, by the first device, at least one future data value of the transportation system using the parameterized predictive model. In embodiments, selecting the at least one predictive model data stream includes assigning, by the first device, priorities to each of a plurality of predictive model data streams included in the social media data, and selecting the at least one predictive model stream is based on the priorities assigned to each of the plurality of predictive model data streams. In embodiments, the selected at least one predictive model data stream is associated with a highest priority among the plurality of predictive model data streams. In embodiments, the selecting comprises suppressing at least one of the predictive model data streams that were not selected based on the priority assigned to each of the at least one non-selected predictive model data streams. Some embodiments further include adjusting an operating state of the transportation system based on the future data value of the transportation system. [00301] In embodiments, a system for transportation includes: a hybrid neural network including, a first neural network configured to process a plurality of data values of a data stream to determine an emotional state of a rider of a vehicle, wherein the plurality of data values include sensor data collected from one or more sensors and associated with the rider of a vehicle, a second neural network configured to generate a predictive model for predicting a future emotional state of the rider of the vehicle based on the received plurality of data values, and a third neural network configured to adjust at least one operating parameter of the vehicle based on an output of the predictive model. In embodiments, generating the predictive model comprises determining a plurality of model parameters of the predictive model, and the third neural network adjusts the at least one operating parameter of the vehicle based on the plurality of model parameters determined by the predictive model. In embodiments, the predictive model includes a behavior analysis model, and the predicted future emotional state of the rider is based on a predicted behavior of the rider in response to the at least one operating parameter of the vehicle. In embodiments, the hybrid neural network is further configured to, receive additional data values of the data stream, and refine the predictive model based on the additional data values, wherein refining the predictive model adjusts one or more model parameters of the predictive model. In embodiments, the data stream includes
SFT-106-A-PCT a video stream received from a camera associated with the vehicle, and the plurality of data values includes one or more vectors extracted from the video stream received from the camera. [00302] In embodiments, a system for transportation includes: an expert system to select a configuration for a vehicle, wherein the configuration includes at least one parameter selected from the group consisting of a vehicle parameter, a user experience parameter, and combinations thereof, and the expert system includes, a first data system configured to, receive a plurality of data values of a data stream, wherein the data values comprise sensor data collected from one or more sensor devices, generate a predictive model for predicting the at least one parameter based on the received plurality of data values, wherein generating the predictive model includes determining a plurality of model parameters, and transmit the plurality of model parameters, and a second data system configured to, receive the plurality of model parameters, parameterize a predictive model based on the plurality of model parameters, and select the at least one parameter based on the parameterized predictive model. In embodiments, the predictive model includes a behavior analysis model, and the at least one parameter is based on a predicted behavior of a rider of the vehicle. In embodiments, the predictive model includes a classification model, and the at least one parameter includes a predicted future state of the vehicle based on classified data received from one or more sensor devices associated with the vehicle. In embodiments, the data stream includes a video stream received from a camera associated with the vehicle, and the plurality of data values includes one or more motion vectors extracted from the video stream received from the camera. In embodiments, the expert system includes, a first neural network configured to classify a state of the vehicle through analysis of information about the vehicle captured by an Internet-of-things device during operation of the vehicle, and a second neural network configured to optimize the at least one parameter of the vehicle based on the classified state of the vehicle, information about a state of a rider occupying the vehicle, and information that correlates vehicle operation with an effect on rider state. [00303] In embodiments, a system for transportation includes: a quantum-enabled risk identification module configured to identify a risk associated with a vehicle, and a vehicle parameter selection module configured to adjust at least one vehicle parameter of the vehicle based on the risk to improve a margin of safety of the vehicle. In embodiments, the quantum-enabled risk identification module
SFT-106-A-PCT is configured to perform one or more of, identifying a risk associated with an operating state of the vehicle, assessing an impact of the at least one vehicle parameter on the margin of safety of the vehicle, determining a current risk profile associated with the vehicle, determining a potential risk profile associated with the vehicle based on an adjustment of one or more operating parameters of the vehicle, or determining a probability of a risk associated with the vehicle and one or more predicted events. In embodiments, the quantum-enabled risk identification module is configured to improve the margin of safety by determining a risk type of a risk associated with the vehicle based on a set of risk types. In embodiments, the quantum-enabled risk identification module is further configured to, predict one or more events associated with the vehicle, determine an impact of the predicted one or more events on the margin of safety associated with the vehicle, and determine an adjustment of the at least one vehicle parameter of the vehicle that improves the margin of safety of the vehicle based on the one or more predicted events. In embodiments, the quantum-enabled risk identification module is further configured to generate a classical prediction engine that identifies the risk associated with the vehicle, wherein the vehicle parameter selection module is based on an output of the classical prediction engine. [00304] In embodiments, a method of vehicle routing includes: adjusting a quantum continual learning system based on an expression received from a user; determining, by the quantum continual learning system, a routing preference for a route of a vehicle; determining at least one vehicle-routing parameter used to route vehicles to reflect the routing preference; and adjusting, by a vehicle routing system, the route of the vehicle based on the at least one determined vehicle routing parameter. Some embodiments include: presenting, in a game-based interface, a vehicle route preference-affecting game activity; and receiving, through the game-based interface, a response of the user to the presented game activity, wherein the adjusting of the quantum continual learning system is based on the response of the user to the presented game activity. In embodiments, the quantum continual learning system is further configured to receive a continuous stream of realtime data, and the routing preference is determined by the quantum continual learning system based on the continuous stream of realtime data. In embodiments, the adjusting of the quantum continual learning system includes continuously training the quantum continual learning system based on realtime data, the realtime data including the expression received from the user. In embodiments, the vehicle is included in a set of vehicles, and adjusting the route of the vehicle includes adjusting a routing parameter of at least one other vehicle of the set of vehicles based on the at least one determined vehicle routing parameter. [00305] In embodiments, a system includes: an artificial intelligence system including a quantum annealing module, the artificial intelligence system configured to, receive, from a plurality of rechargeable vehicles within a target geographic region, an operational status of each rechargeable vehicle; predict a near-term need for recharging each rechargeable vehicle based on the operational status of each rechargeable vehicle; and determine, by the quantum annealing module, at least one parameter of a recharging plan for a recharging infrastructure based on the predicted near-term need for recharging each rechargeable vehicle. In embodiments, determining the at least one parameter of the recharging plan further comprises, determining, by the quantum annealing module, a set of candidate state changes associated with each candidate parameter of a set of
SFT-106-A-PCT candidate parameters for the recharging plan for the recharging infrastructure, and applying, by the quantum annealing module, a quantum annealing selection to the set of candidate parameters to determine the at least one parameter of the recharging plan based on the set of candidate state changes determined by the quantum annealing module. In embodiments, determining the at least one parameter of the recharging plan further comprises, setting, by the quantum annealing module, an initial weight of a state of the recharging infrastructure associated with each candidate parameter of a set of candidate parameters for the recharging plan, and evolving, by the quantum annealing module, the initial weight of each state to an adjusted weight based on a time-dependent equation, wherein the at least one parameter is determined based on the adjusted weight of the state associated with each candidate parameter of the set of candidate parameters. In embodiments, the artificial intelligence system is further configured to receive capacity information associated with the recharging infrastructure, and the determining of the at least one parameter is based on the capacity information associated with the recharging infrastructure. In embodiments, the artificial intelligence system further comprises a hybrid neural network including, a first portion of the hybrid neural network configured to operate on a first portion of the operational status of each rechargeable vehicle, wherein the first portion is associated with a route plan of the rechargeable vehicle, and a second portion of the hybrid neural network configured to operate on a second portion of the operational status of each rechargeable vehicle, wherein the second portion is associated with a recharging range of each rechargeable vehicle. [00306] In embodiments, a system for transportation includes: a cognitive system including a quantum annealing module, the cognitive system configured to, determine, by the quantum annealing module, at least one parameter of a reward to be made available to a rider of the vehicle in response to the rider undertaking a predetermined action while in the vehicle, and provide the reward to the rider in response to a performance of the predetermined action by the rider. In embodiments, the at least one parameter is based on at least one input received from the rider by a rider interface, and the cognitive system is configured to present an offer of the reward to the rider by the rider interface. In embodiments, the predetermined action includes a selection by the rider of a route of the vehicle, and a parameter of the reward to be made available to the rider is based on a routing preference of the rider. In embodiments, determining the at least one parameter of the reward includes, determining, by the quantum annealing module, an effect on a set of vehicles of each of a set of predetermined actions that could be undertaken by the rider, and determining, by the quantum annealing module, the parameter of the reward based on a quantum annealing selection among the set of predetermined actions. In embodiments, determining the at least one parameter of the reward includes determining, by the quantum annealing module, a set of candidate state changes associated with a routing of a set of vehicles based on the predetermined action of
SFT-106-A-PCT the rider, wherein the at least one parameter of the reward is based on the set of candidate state changes associated with the routing of the set of vehicles. [00307] In embodiments, a system for transportation includes: a data capture module configured to capture a data set associated with an interaction between a rider within a vehicle and a user interface of the vehicle; and a dual purpose artificial neural network that is configured to, train based on the data set to perform actions on behalf of a rider within a vehicle, retrain based on a dual process learning function applied to the data set to adjust the actions performed on behalf of the rider, and update the data set in response to the retraining of the dual purpose artificial neural network. In embodiments, retraining the dual purpose artificial neural network further comprises, identifying a poor performance of the dual purpose artificial neural network on a classification task, updating the data set to include at least one additional data sample that is associated with the classification task, and retraining the dual purpose artificial neural network based on the data set including the at least one additional data sample that is associated with the classification task. In embodiments, retraining the dual purpose artificial neural network further comprises, updating the data set to include at least one additional data sample that is based on an additional action to be performed on behalf of the rider, and retraining the dual purpose artificial neural network based on the data set including the at least one additional data sample that is associated with the additional action. In embodiments, retraining the dual purpose artificial neural network further comprises, identifying a novel problem for which the dual purpose artificial neural network is not currently trained to perform actions on behalf of the rider, updating the data set to include at least one additional data sample that is associated with the novel problem, and retraining the dual purpose artificial neural network based on the data set including the at least one additional data sample that is associated with the novel problem. In embodiments, the data capture module includes a robotic process automation module that is configured to capture the data set associated with an action performed by the rider and associated with the vehicle, and the dual purpose artificial neural network is configured to train based on the data set to perform the action instead of the rider performing the action. [00308] In embodiments, a system for transportation includes: an interface configured to configure a set of expert systems to provide one or more outputs associated with a set of parameters, wherein the parameters are selected from a group including at least one vehicle parameter, at least one fleet parameter, or at least one user experience parameter; and a dual purpose artificial neural network that is configured to, train based on the data set to select the one or more parameters for the set of expert systems, retrain based on a dual process learning function applied to the data set to adjust the one or more parameters selected for the set of expert systems, and update the data set in response to the retraining of the dual purpose artificial neural network. In embodiments,
SFT-106-A-PCT retraining the dual purpose artificial neural network further comprises, identifying a poor selection of the expert systems based on a first set of one or more parameters selected by the dual purpose artificial neural network, updating the data set to include at least one additional data sample that is associated with the one or more parameters, and retraining the dual purpose artificial neural network based on the data set including the at least one additional data sample that is associated with the one or more parameters. In embodiments, retraining the dual purpose artificial neural network further comprises, updating the data set to include at least one additional data sample that is based on an additional output of the set of expert systems in response to the one or more parameters, and retraining the dual purpose artificial neural network based on the data set including the at least one additional data sample that is associated with the additional output. In embodiments, retraining the dual purpose artificial neural network further comprises, identifying a novel problem for which the set of expert systems is not currently trained to provide one or more outputs, updating the data set to include at least one additional data sample that is associated with the novel problem, and retraining the dual purpose artificial neural network based on the data set including the at least one additional data sample that is associated with the novel problem. In embodiments, the dual purpose learning function is further configured to receive at least one additional data sample associated with a new input value, consolidate the at least one additional data sample with at least one data sample of the data set, and update the data set based on the consolidating of the at least one additional data sample and the at least one data sample of the data set. [00309] In some aspects, the techniques described herein relate to a software-defined vehicle for mitigating rider seat fatigue, the software-defined vehicle including: a plurality of seat sensors configured to detect and generate sensor data indicating physical parameters indicative of rider fatigue; a generative artificial intelligence (AI) engine configured to analyze the sensor data and generate personalized seat adjustment profiles to mitigate detected rider fatigue; and a vehicle control unit (VCU) communicatively coupled to the plurality of seat sensors and the generative AI engine, the VCU configured to implement the personalized seat adjustment profiles in the software-defined vehicle. [00310] In some aspects, the techniques described herein relate to a software-defined vehicle, wherein the generative AI engine fuses the sensor data with rider preference data to create a comprehensive model of rider comfort and predict optimal seat adjustments. [00311] In some aspects, the techniques described herein relate to a software-defined vehicle, wherein the generative AI engine employs machine learning algorithms to identify patterns of rider discomfort and dynamically suggest changes to seating ergonomics.
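By way of non-limiting illustration only, the following minimal Python sketch shows one way the seat fatigue mitigation flow of paragraphs [00309]-[00311] could be approximated: a heuristic fatigue score derived from seat pressure samples is fused with stored rider preferences to produce an adjustment profile that a vehicle control unit could then apply. The names SeatAdjustmentProfile, estimate_fatigue, and generate_profile, the scoring heuristic, and all numeric values are hypothetical assumptions introduced for illustration and are not defined elsewhere in this disclosure.

```python
# Illustrative sketch only: the data model, heuristic, and thresholds are hypothetical.
from dataclasses import dataclass
from statistics import pstdev
from typing import Dict, List


@dataclass
class SeatAdjustmentProfile:
    """A personalized set of seat adjustments expressed as deltas from the current position."""
    lumbar_support: float   # millimeters of additional lumbar extension
    recline_angle: float    # degrees of additional recline
    cushion_tilt: float     # degrees of cushion tilt change


def estimate_fatigue(pressure_samples: List[List[float]]) -> float:
    """Heuristic fatigue score in [0, 1]: prolonged static pressure
    (low variation of each pressure cell over time) is treated as a fatigue indicator."""
    per_cell_variation = [pstdev(cell_history) for cell_history in zip(*pressure_samples)]
    mean_variation = sum(per_cell_variation) / len(per_cell_variation)
    # Less movement -> higher fatigue score, clamped to [0, 1].
    return max(0.0, min(1.0, 1.0 - mean_variation / 5.0))


def generate_profile(fatigue: float, preferences: Dict[str, float]) -> SeatAdjustmentProfile:
    """Blend the detected fatigue level with stored rider preferences,
    loosely corresponding to the sensor/preference fusion of paragraph [00310]."""
    scale = fatigue * preferences.get("adjustment_sensitivity", 1.0)
    return SeatAdjustmentProfile(
        lumbar_support=4.0 * scale,
        recline_angle=2.0 * scale,
        cushion_tilt=1.5 * scale,
    )


if __name__ == "__main__":
    # Four snapshots of a six-cell pressure map (arbitrary demo values, nearly static posture).
    samples = [[30, 32, 28, 31, 29, 30]] * 4
    fatigue = estimate_fatigue(samples)
    profile = generate_profile(fatigue, {"adjustment_sensitivity": 0.8})
    print(f"fatigue={fatigue:.2f}", profile)  # the profile would be forwarded to the VCU
```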
SFT-106-A-PCT [00312] In some aspects, the techniques described herein relate to a software-defined vehicle, wherein the generative AI engine generates real-time recommendations for micro-adjustments to seat positions to redistribute pressure and improve circulation for a rider. [00313] In some aspects, the techniques described herein relate to a software-defined vehicle, further including an encryption module to secure a transmission of sensor data from the plurality of seat sensors to the VCU and the generative AI engine. [00314] In some aspects, the techniques described herein relate to a software-defined vehicle, wherein the VCU includes a secure access control system that restricts modification of the generative AI engine to authorized personnel only. [00315] In some aspects, the techniques described herein relate to a software-defined vehicle, wherein the generative AI engine is configured to detect and respond to cybersecurity threats by initiating protective protocols to safeguard rider data. [00316] In some aspects, the techniques described herein relate to a software-defined vehicle, further including a digital twin of a seating system of the software-defined vehicle, which the generative AI engine uses to simulate and evaluate an effectiveness of fatigue mitigation strategies. [00317] In some aspects, the techniques described herein relate to a software-defined vehicle, wherein the digital twin is configured to update in real-time with sensor data to reflect a current state of the seating system and rider fatigue levels. [00318] In some aspects, the techniques described herein relate to a software-defined vehicle, wherein the digital twin is configured for virtual testing of potential new seat materials and designs for fatigue reduction before physical implementation. [00319] In some aspects, the techniques described herein relate to a software-defined vehicle, further including a user interface that displays seat adjustment recommendations from the generative AI engine and allows a rider to provide feedback. [00320] In some aspects, the techniques described herein relate to a software-defined vehicle, wherein the user interface includes a haptic feedback mechanism to alert the rider of a need for a change in seating position to mitigate fatigue. [00321] In some aspects, the techniques described herein relate to a software-defined vehicle, wherein the user interface is integrated with a mobile application that tracks seating patterns and provides personalized fatigue mitigation advice based on output from the generative AI engine. Software defined vehicles with rider seat fatigue mitigation enhanced by generative AI. [00322] In some aspects, the techniques described herein relate to a software-defined vehicle configured to mitigate brain atrophy, the software-defined vehicle including: a processor; a memory storing instructions that, when executed by the processor, cause the software-defined
SFT-106-A-PCT vehicle to: monitor interactions of a driver with vehicle controls and navigation systems; analyze driving patterns to identify routine behaviors as identified routine behaviors; generate cognitive challenges based on the identified routine behaviors to engage cognitive functions of the driver; and adapt an operation of the software-defined vehicle to present the cognitive challenges to the driver during operation of the software-defined vehicle. [00323] In some aspects, the techniques described herein relate to a software-defined vehicle, wherein the cognitive challenges include route deviation prompts that encourage the driver to navigate without step-by-step navigation assistance for familiar routes. [00324] In some aspects, the techniques described herein relate to a software-defined vehicle, wherein the route deviation prompts are generated in response to real-time driving conditions to encourage the driver to adapt to changing conditions and engage in problem-solving activities. [00325] In some aspects, the techniques described herein relate to a software-defined vehicle, wherein the cognitive challenges are generated by a generative AI configured to create tasks that stimulate memory, spatial awareness, and executive functioning based on information about the driver. [00326] In some aspects, the techniques described herein relate to a software-defined vehicle, wherein the generative AI is further configured to adjust a complexity of the cognitive challenges based on a performance and an interaction with the cognitive challenges by the driver. [00327] In some aspects, the techniques described herein relate to a software-defined vehicle, further including a gamification module that assigns points and rewards to the driver for successfully completing the cognitive challenges. [00328] In some aspects, the techniques described herein relate to a software-defined vehicle, wherein the gamification module includes a leaderboard feature that compares a performance of the driver with historical performance data or peer performance data to foster a competitive environment for cognitive engagement. [00329] In some aspects, the techniques described herein relate to a software-defined vehicle, wherein the rewards include at least one of virtual badges, unlocking new vehicle features, or personalized messages of encouragement. [00330] In some aspects, the techniques described herein relate to a software-defined vehicle, further including a generative AI engine configured to work in conjunction with the gamification module to dynamically create the cognitive challenges with personalization based on preferences and past interactions of the driver with the gamification module. [00331] In some aspects, the techniques described herein relate to a software-defined vehicle, wherein the generative AI engine uses driver feedback from the gamification module to refine
SFT-106-A-PCT and optimize the cognitive challenges for enhanced engagement and effectiveness in mitigating brain atrophy. [00332] In some aspects, the techniques described herein relate to a software-defined vehicle, further including a vehicle user interface configured to provide at least one of auditory, visual, or haptic feedback to the driver based on the cognitive challenges to utilize multiple sensory modalities to enhance cognitive stimulation. [00333] In some aspects, the techniques described herein relate to a software-defined vehicle, further including an emergency intervention protocol that is activated in response to detecting a lack of driver response to the cognitive challenges, indicating potential acute cognitive impairment. [00334] In some aspects, the techniques described herein relate to a software-defined vehicle, wherein the software-defined vehicle is further configured to encourage breaks for physical activity during long journeys. [00335] In some aspects, the techniques described herein relate to a software-defined vehicle, further including a generative AI engine configured to suggest exercises tailored to physical capabilities and preferences of the driver. [00336] In some aspects, the techniques described herein relate to a software-defined vehicle, further including a natural language processing module that allows the driver to interact with the cognitive challenges using voice commands to facilitate hands-free engagement and reduced driver distraction. [00337] In some aspects, the techniques described herein relate to a software-defined vehicle, further including a generative AI engine, and wherein the software-defined vehicle is further configured to adjust environmental settings within the software-defined vehicle to create an optimal environment for cognitive function as determined by the generative AI engine. [00338] In some aspects, the techniques described herein relate to a software-defined vehicle, wherein the generative AI engine is configured to determine the optimal environment based, at least in part, on a time of day and a driver state. [00339] In some aspects, the techniques described herein relate to a software-defined vehicle, wherein the software-defined vehicle is configured to adjust at least one of lighting or temperature as the environmental settings to create the optimal environment. Software defined vehicles with brain atrophy mitigation features. [00340] In some aspects, the techniques described herein relate to a vehicle maintenance system including: a data processing unit configured to generate an analysis of environmental data, user behavioral data, and vehicle diagnostic data; an emotional state detection module configured to determine patterns of emotional states of a user based on the analysis; a scheduling module
SFT-106-A-PCT configured to: generate maintenance recommendations for a vehicle with whose maintenance a user is associated; deduce an optimal time within a maintenance window of the maintenance recommendations to recommend the maintenance recommendations to the user to result in a most favorable predicted emotional state of the user; and generate a maintenance reminder timing schedule for the user based on the patterns of emotional states of the user and on the optimal time; and a communication module configured to transmit the maintenance recommendations to the user according to the maintenance reminder timing schedule. [00341] In some aspects, the techniques described herein relate to a vehicle maintenance system, wherein the environmental data includes weather patterns, daylight hours, and seasonal changes, and the emotional state detection module is further configured to infer mood states of the user associated with different seasons. [00342] In some aspects, the techniques described herein relate to a vehicle maintenance system, wherein the user behavioral data includes at least one of historical maintenance records, social media activity, or vehicle usage patterns, and the emotional state detection module is further configured to detect periods when the user is more likely to engage in maintenance activities. [00343] In some aspects, the techniques described herein relate to a vehicle maintenance system, wherein the scheduling module utilizes generative AI algorithms to predict the optimal time. [00344] In some aspects, the techniques described herein relate to a vehicle maintenance system, wherein the communication module uses generative AI to create personalized maintenance notifications that the scheduling module predicts will resonate with a current mood of the user during a current season. [00345] In some aspects, the techniques described herein relate to a vehicle maintenance system, further including a user feedback interface configured to receive user responses to maintenance notifications, wherein the scheduling module adjusts future maintenance recommendations based on the user responses. [00346] In some aspects, the techniques described herein relate to a vehicle maintenance system, wherein the scheduling module is further configured to delay maintenance recommendations during seasons when the emotional state detection module determines a lower interest in maintenance activities for the user. Vehicle maintenance systems for improved emotional status of users. [00347] In some aspects, the techniques described herein relate to a refueling planning system for vehicles, the refueling planning system including: an emotional state system configured to predict user emotional state changes in response to refueling decisions for a vehicle and a user; a fuel status system configured to identify a fuel status of the vehicle and to predict refueling requirements of the vehicle for a trip; and a refueling recommendation engine configured to
SFT-106-A-PCT generate a refueling plan to achieve the refueling requirements with favorable emotional state changes, wherein the refueling recommendation engine is configured to consider combustion fuel refilling and electrical energy storage refueling in the refueling plan. [00348] In some aspects, the techniques described herein relate to a refueling planning system, wherein the refueling recommendation engine is further configured to prioritize charging a battery of a hybrid vehicle over filling a gas tank based on user environmental preferences indicating a user desire for considering environmental factors. [00349] In some aspects, the techniques described herein relate to a refueling planning system, wherein the refueling recommendation engine is further configured to adjust prioritization between charging and gas filling based on a comparison of an environmental impact of each option and user historical preference data. [00350] In some aspects, the techniques described herein relate to a refueling planning system, wherein the refueling recommendation engine includes a generative AI module configured to simulate potential refueling and charging scenarios to generate the refueling plan. [00351] In some aspects, the techniques described herein relate to a refueling planning system, wherein the generative AI module is further configured to generate personalized notifications and suggestions to the user to enhance an emotional benefit of the refueling plan. [00352] In some aspects, the techniques described herein relate to a refueling planning system, further including a digital twin that simulates vehicle operation and predicts future refueling needs. [00353] In some aspects, the techniques described herein relate to a refueling planning system, wherein the digital twin includes a model of user emotional responses to various refueling and charging scenarios, and wherein the refueling recommendation engine uses the model to optimize the refueling plan. [00354] In some aspects, the techniques described herein relate to a refueling planning system, further including a user interface configured to visually represent an impact of refueling and charging options on a user emotional state through graphical elements. [00355] In some aspects, the techniques described herein relate to a refueling planning system, wherein the user interface includes interactive elements that allow the user to provide real-time feedback on the user emotional state, and wherein the refueling planning system uses the real- time feedback to refine the refueling plan. [00356] In some aspects, the techniques described herein relate to a refueling planning system, wherein the refueling planning system is further configured to analyze social media data to identify refueling locations associated with positive emotional feedback from users.
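By way of non-limiting illustration only, the following minimal Python sketch shows one way a refueling recommendation engine such as that of paragraphs [00347]-[00351] could rank candidate stops by combining a predicted emotional impact (here a crude heuristic over detour time, amenities, and energy-type preference) with the predicted refueling requirement for a trip. The RefuelingOption fields, the scoring weights, and the single-stop plan are hypothetical simplifications introduced for illustration, not an implementation prescribed by this disclosure.

```python
# Illustrative sketch only: candidate data, weights, and the heuristic are hypothetical.
from dataclasses import dataclass
from typing import List


@dataclass
class RefuelingOption:
    name: str
    energy_type: str        # "electric" or "gasoline"
    detour_minutes: float   # extra travel time to reach the stop
    amenity_score: float    # 0..1, e.g., derived from reviews or social media signals


def predict_emotional_impact(option: RefuelingOption, prefers_electric: bool) -> float:
    """Crude stand-in for the emotional state system: favors short detours,
    good amenities, and the user's preferred energy type."""
    score = option.amenity_score - 0.02 * option.detour_minutes
    if prefers_electric and option.energy_type == "electric":
        score += 0.3
    return score


def recommend_plan(options: List[RefuelingOption],
                   range_deficit_km: float,
                   prefers_electric: bool) -> List[RefuelingOption]:
    """Return refueling stops ordered by predicted emotional impact, but only
    when a stop is actually required for the trip (range_deficit_km > 0)."""
    if range_deficit_km <= 0:
        return []  # the trip is reachable on the current fuel/charge state
    ranked = sorted(options,
                    key=lambda o: predict_emotional_impact(o, prefers_electric),
                    reverse=True)
    return ranked[:1]  # simplest possible plan: a single best stop


if __name__ == "__main__":
    candidates = [
        RefuelingOption("Fast charger at cafe", "electric", 6.0, 0.9),
        RefuelingOption("Highway gas station", "gasoline", 2.0, 0.4),
    ]
    plan = recommend_plan(candidates, range_deficit_km=40.0, prefers_electric=True)
    print([stop.name for stop in plan])
```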
SFT-106-A-PCT [00357] In some aspects, the techniques described herein relate to a refueling planning system, wherein the refueling planning system prioritizes charging locations for a hybrid vehicle based on social media indicators that suggest an improved emotional state for users who prioritize environmental benefits over convenience. [00358] In some aspects, the techniques described herein relate to a refueling planning system, wherein the refueling planning system adjusts the refueling plan to include locations that, according to social media data, offer amenities that contribute to a user emotional well-being. [00359] In some aspects, the techniques described herein relate to a transportation system, including: a first data system configured to: receive a plurality of data values of a data stream, generate a predictive model for predicting future data values of the data stream based on the received plurality of data values, wherein generating the predictive model includes determining a plurality of model parameters, and transmit the plurality of model parameters; and a second data system configured to: receive the plurality of model parameters transmitted by the first data system, parameterize the predictive model using the plurality of model parameters, predict a future data value of the data stream using the parameterized predictive model, and adjust an operating state of the transportation system based on the future data value. [00360] In some aspects, the techniques described herein relate to a transportation system, wherein adjusting the operating state of the transportation system includes: predicting, by the second data system, an effect of the operating state on the transportation system through an analysis of social media-sourced data, and adjusting, by the second data system, at least one operating state of the transportation system responsive to the predicted effect thereon. [00361] In some aspects, the techniques described herein relate to a transportation system, wherein adjusting the operating state of the transportation system includes: classifying, using a first neural network, social media data sourced from a plurality of social media sources as affecting the transportation system, predicting, using a second neural network, at least one operating objective of the transportation system based on the classified social media data, and adjusting, using a third neural network, the operating state of the transportation system to achieve the at least one operating objective of the transportation system. [00362] In some aspects, the techniques described herein relate to a transportation system, wherein receiving the plurality of data values includes gathering social media-sourced data about a plurality of individuals, the data being sourced from a plurality of social media sources. [00363] In some aspects, the techniques described herein relate to a transportation system, wherein the plurality of data values is received from one or more security cameras, and the data stream includes motion vectors extracted from video data captured by the security cameras.
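By way of non-limiting illustration only, the following minimal Python sketch shows one way the two-data-system arrangement of paragraph [00359] could operate: the first data system fits a simple AR(1) model to the data stream and transmits only its two coefficients, and the second data system re-parameterizes the same model to forecast the next value and decide whether an operating state should be adjusted. The AR(1) model choice, the function names, and the example data are hypothetical assumptions; the disclosure does not limit the predictive model to this form.

```python
# Illustrative sketch only: a toy stand-in for the first/second data systems of [00359],
# using an AR(1) model so that only two coefficients need to be transmitted.
from typing import List, Tuple


def fit_ar1(values: List[float]) -> Tuple[float, float]:
    """First data system: fit x[t+1] ~= a * x[t] + b by least squares and
    return the model parameters (a, b) for transmission."""
    xs, ys = values[:-1], values[1:]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs) or 1e-9
    a = cov / var
    b = mean_y - a * mean_x
    return a, b


def predict_next(params: Tuple[float, float], last_value: float) -> float:
    """Second data system: parameterize the same model with the received
    coefficients and predict the next data value of the stream."""
    a, b = params
    return a * last_value + b


if __name__ == "__main__":
    stream = [10.0, 10.5, 11.1, 11.6, 12.2, 12.7]   # e.g., traffic density samples
    params = fit_ar1(stream)                         # transmitted instead of the raw data
    forecast = predict_next(params, stream[-1])
    # A real deployment would adjust an operating state (e.g., routing or signal
    # timing) when the forecast crosses a threshold; here the forecast is only reported.
    print(f"params={params}, next value forecast={forecast:.2f}")
```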
SFT-106-A-PCT [00364] In some aspects, the techniques described herein relate to a method for prioritizing predictive model data streams, the method including: receiving, by a first device, social media data sourced from a plurality of social media sources as affecting a transportation system; classifying, by the first device, the social media data based on a set of model parameters for each of a plurality of predictive models, wherein each predictive model is trained to predict future data values of the transportation system; selecting, by the first device and from the classified social media data, at least one predictive model data stream; parameterizing, by the first device, a predictive model using the set of model parameters included in the selected at least one predictive model stream; and predicting, by the first device, at least one future data value of the transportation system using the parameterized predictive model. [00365] In some aspects, the techniques described herein relate to a method, wherein selecting the at least one predictive model data stream includes assigning, by the first device, priorities to each of a plurality of predictive model data streams included in the social media data, and selecting the at least one predictive model data stream is based on the priorities assigned to each of the plurality of predictive model data streams. [00366] In some aspects, the techniques described herein relate to a method, wherein the selected at least one predictive model data stream is associated with a highest priority among the plurality of predictive model data streams. [00367] In some aspects, the techniques described herein relate to a method, wherein the selecting includes suppressing at least one of the predictive model data streams that were not selected based on the priority assigned to each of the predictive model data streams that were not selected. [00368] In some aspects, the techniques described herein relate to a method, further including adjusting an operating state of the transportation system based on the future data value of the transportation system. [00369] In some aspects, the techniques described herein relate to a system for transportation, including: a hybrid neural network including: a first neural network configured to process a plurality of data values of a data stream to determine an emotional state of a rider of a vehicle, wherein the plurality of data values include sensor data collected from one or more sensors and associated with the rider of the vehicle, a second neural network configured to generate a predictive model for predicting a future emotional state of the rider of the vehicle based on the plurality of data values, and a third neural network configured to adjust at least one operating parameter of the vehicle based on an output of the predictive model. [00370] In some aspects, the techniques described herein relate to a system, wherein generating the predictive model includes determining a plurality of model parameters of the predictive
SFT-106-A-PCT model, and the third neural network adjusts the at least one operating parameter of the vehicle based on the plurality of model parameters determined by the predictive model. [00371] In some aspects, the techniques described herein relate to a system, wherein the predictive model includes a behavior analysis model, and the predicted future emotional state of the rider is based on a predicted behavior of the rider in response to the at least one operating parameter of the vehicle. [00372] In some aspects, the techniques described herein relate to a system, wherein the hybrid neural network is further configured to: receive additional data values of the data stream, and refine the predictive model based on the additional data values, wherein refining the predictive model adjusts one or more model parameters of the predictive model. [00373] In some aspects, the techniques described herein relate to a system, wherein the data stream includes a video stream received from a camera associated with the vehicle, and the plurality of data values includes one or more vectors extracted from the video stream received from the camera. [00374] In some aspects, the techniques described herein relate to a system for transportation, including: an expert system to select a configuration for a vehicle, wherein the configuration includes at least one parameter selected from the group consisting of a vehicle parameter, a user experience parameter, and combinations thereof, and the expert system includes, a first data system configured to: receive a plurality of data values of a data stream, wherein the data values include sensor data collected from one or more sensor devices, generate a predictive model for predicting the at least one parameter based on the received plurality of data values, wherein generating the predictive model includes determining a plurality of model parameters, and transmit the plurality of model parameters; and a second data system configured to: receive the plurality of model parameters, parameterize a predictive model based on the plurality of model parameters, and select the at least one parameter based on the parameterized predictive model. [00375] In some aspects, the techniques described herein relate to a system, wherein the predictive model includes a behavior analysis model, and the at least one parameter is based on a predicted behavior of a rider of the vehicle. [00376] In some aspects, the techniques described herein relate to a system, wherein the predictive model includes a classification model, and the at least one parameter includes a predicted future state of the vehicle based on classified data received from one or more sensor devices associated with the vehicle. [00377] In some aspects, the techniques described herein relate to a system, wherein the data stream includes a video stream received from a camera associated with the vehicle, and the
SFT-106-A-PCT plurality of data values includes one or more motion vectors extracted from the video stream received from the camera. [00378] In some aspects, the techniques described herein relate to a system, wherein the expert system includes: a first neural network configured to classify a state of the vehicle through analysis of information about the vehicle captured by an Internet-of-things device during operation of the vehicle, and a second neural network configured to optimize the at least one parameter of the vehicle based on the classified state of the vehicle, information about a state of a rider occupying the vehicle, and information that correlates vehicle operation with an effect on rider state. [00379] In some aspects, the techniques described herein relate to a system for transportation, including: a quantum-enabled risk identification module configured to identify a risk associated with a vehicle, and a vehicle parameter selection module configured to adjust at least one vehicle parameter of the vehicle based on the risk to improve a margin of safety of the vehicle. [00380] In some aspects, the techniques described herein relate to a system, wherein the quantum-enabled risk identification module is configured to perform one or more of: identifying a risk associated with an operating state of the vehicle, assessing an impact of the at least one vehicle parameter on the margin of safety of the vehicle, determining a current risk profile associated with the vehicle, determining a potential risk profile associated with the vehicle based on an adjustment of one or more operating parameters of the vehicle, or determining a probability of a risk associated with the vehicle and one or more predicted events. [00381] In some aspects, the techniques described herein relate to a system, wherein the quantum-enabled risk identification module is configured to improve the margin of safety by determining a risk type of a risk associated with the vehicle based on a set of risk types. [00382] In some aspects, the techniques described herein relate to a system, wherein the quantum-enabled risk identification module is further configured to: predict one or more events associated with the vehicle, determine an impact of the predicted one or more events on the margin of safety associated with the vehicle, and determine an adjustment of the at least one vehicle parameter of the vehicle that improves the margin of safety of the vehicle based on the one or more predicted events. [00383] In some aspects, the techniques described herein relate to a system, wherein the quantum-enabled risk identification module is further configured to generate a classical prediction engine that identifies the risk associated with the vehicle, wherein the vehicle parameter selection module is based on an output of the classical prediction engine. [00384] In some aspects, the techniques described herein relate to a method of vehicle routing including: adjusting a quantum continual learning system based on an expression received from a
SFT-106-A-PCT user; determining, by the quantum continual learning system, a routing preference for a route of a vehicle; determining at least one vehicle-routing parameter used to route vehicles to reflect the routing preference; and adjusting, by a vehicle routing system, the route of the vehicle based on the at least one determined vehicle routing parameter. [00385] In some aspects, the techniques described herein relate to a method, further including: presenting, in a game-based interface, a vehicle route preference-affecting game activity; and receiving, through the game-based interface, a response of the user to the presented game activity, wherein the adjusting of the quantum continual learning system is based on the response of the user to the presented game activity. [00386] In some aspects, the techniques described herein relate to a method, wherein the quantum continual learning system is further configured to receive a continuous stream of realtime data, and the routing preference is determined by the quantum continual learning system based on the continuous stream of realtime data. [00387] In some aspects, the techniques described herein relate to a method, wherein the adjusting of the quantum continual learning system includes continuously training the quantum continual learning system based on realtime data, the realtime data including the expression received from the user. [00388] In some aspects, the techniques described herein relate to a method, wherein the vehicle is included in a set of vehicles, and adjusting the route of the vehicle includes adjusting a routing parameter of at least one other vehicle of the set of vehicles based on the at least one determined vehicle routing parameter. [00389] In some aspects, the techniques described herein relate to a system, including: an artificial intelligence system including a quantum annealing module, the artificial intelligence system configured to: receive, from a plurality of rechargeable vehicles within a target geographic region, an operational status of each rechargeable vehicle; predict a near-term need for recharging each rechargeable vehicle based on the operational status of each rechargeable vehicle; and determine, by the quantum annealing module, at least one parameter of a recharging plan for a recharging infrastructure based on the predicted near-term need for recharging each rechargeable vehicle. [00390] In some aspects, the techniques described herein relate to a system, wherein determining the at least one parameter of the recharging plan further includes, determining, by the quantum annealing module, a set of candidate state changes associated with each candidate parameter of a set of candidate parameters for the recharging plan for the recharging infrastructure, and applying, by the quantum annealing module, a quantum annealing selection to the set of
SFT-106-A-PCT candidate parameters to determine the at least one parameter of the recharging plan based on the set of candidate state changes determined by the quantum annealing module. [00391] In some aspects, the techniques described herein relate to a system, wherein determining the at least one parameter of the recharging plan further includes, setting, by the quantum annealing module, an initial weight of a state of the recharging infrastructure associated with each candidate parameter of a set of candidate parameters for the recharging plan, and evolving, by the quantum annealing module, the initial weight of each state to an adjusted weight based on a time-dependent equation, wherein the at least one parameter is determined based on the adjusted weight of the state associated with each candidate parameter of the set of candidate parameters. [00392] In some aspects, the techniques described herein relate to a system, wherein the artificial intelligence system is further configured to receive capacity information associated with the recharging infrastructure, and the determining of the at least one parameter is based on the capacity information associated with the recharging infrastructure. [00393] In some aspects, the techniques described herein relate to a system, wherein the artificial intelligence system further includes a hybrid neural network including: a first portion of the hybrid neural network configured to operate on a first portion of the operational status of each rechargeable vehicle, wherein the first portion is associated with a route plan of the rechargeable vehicle, and a second portion of the hybrid neural network configured to operate on a second portion of the operational status of each rechargeable vehicle, wherein the second portion is associated with a recharging range of each rechargeable vehicle. [00394] In some aspects, the techniques described herein relate to a system for transportation, including: a cognitive system including a quantum annealing module, the cognitive system configured to: determine, by the quantum annealing module, at least one parameter of a reward to be made available to a rider of a vehicle in response to the rider undertaking a predetermined action while in the vehicle, and provide the reward to the rider in response to a performance of the predetermined action by the rider. [00395] In some aspects, the techniques described herein relate to a system, wherein the at least one parameter is based on at least one input received from the rider by a rider interface, and the cognitive system is configured to present an offer of the reward to the rider by the rider interface. [00396] In some aspects, the techniques described herein relate to a system, wherein the predetermined action includes a selection by the rider of a route of the vehicle, and a parameter of the reward to be made available to the rider is based on a routing preference of the rider. [00397] In some aspects, the techniques described herein relate to a system, wherein determining the at least one parameter of the reward includes: determining, by the quantum annealing module,
SFT-106-A-PCT an effect on a set of vehicles of each of a set of predetermined actions that could be undertaken by the rider, and determining, by the quantum annealing module, the parameter of the reward based on a quantum annealing selection among the set of predetermined actions. [00398] In some aspects, the techniques described herein relate to a system, wherein determining the at least one parameter of the reward includes determining, by the quantum annealing module, a set of candidate state changes associated with a routing of a set of vehicles based on the predetermined action of the rider, wherein the at least one parameter of the reward is based on the set of candidate state changes associated with the routing of the set of vehicles. [00399] In some aspects, the techniques described herein relate to a system for transportation, including: a data capture module configured to capture a data set associated with an interaction between a rider within a vehicle and a user interface of the vehicle; and a dual purpose artificial neural network that is configured to: train based on the data set to perform actions on behalf of a rider within a vehicle, retrain based on a dual process learning function applied to the data set to adjust the actions performed on behalf of the rider, and update the data set in response to the retraining of the dual purpose artificial neural network. [00400] In some aspects, the techniques described herein relate to a system, wherein retraining the dual purpose artificial neural network further includes: identifying a poor performance of the dual purpose artificial neural network on a classification task, updating the data set to include at least one additional data sample that is associated with the classification task, and retraining the dual purpose artificial neural network based on the data set including the at least one additional data sample that is associated with the classification task. [00401] In some aspects, the techniques described herein relate to a system, wherein retraining the dual purpose artificial neural network further includes: updating the data set to include at least one additional data sample that is based on an additional action to be performed on behalf of the rider, and retraining the dual purpose artificial neural network based on the data set including the at least one additional data sample that is associated with the additional action. [00402] In some aspects, the techniques described herein relate to a system, wherein retraining the dual purpose artificial neural network further includes: identifying a novel problem for which the dual purpose artificial neural network is not currently trained to perform actions on behalf of the rider, updating the data set to include at least one additional data sample that is associated with the novel problem, and retraining the dual purpose artificial neural network based on the data set including the at least one additional data sample that is associated with the novel problem. [00403] In some aspects, the techniques described herein relate to a system, wherein the data capture module includes a robotic process automation module that is configured to capture the
SFT-106-A-PCT data set associated with an action performed by the rider and associated with the vehicle, and the dual purpose artificial neural network is configured to train based on the data set to perform the action instead of the rider performing the action. [00404] In some aspects, the techniques described herein relate to a system for transportation, including: an interface configured to configure a set of expert systems to provide one or more outputs associated with a set of parameters, wherein the parameters are selected from a group including at least one vehicle parameter, at least one fleet parameter, or at least one user experience parameter; and a dual purpose artificial neural network that is configured to: train based on a data set to select the one or more parameters for the set of expert systems, retrain based on a dual process learning function applied to the data set to adjust the one or more parameters selected for the set of expert systems, and update the data set in response to the retraining of the dual purpose artificial neural network. [00405] In some aspects, the techniques described herein relate to a system, wherein retraining the dual purpose artificial neural network further includes: identifying a poor selection of the expert systems based on a first set of one or more parameters selected by the dual purpose artificial neural network, updating the data set to include at least one additional data sample that is associated with the one or more parameters, and retraining the dual purpose artificial neural network based on the data set including the at least one additional data sample that is associated with the one or more parameters. [00406] In some aspects, the techniques described herein relate to a system, wherein retraining the dual purpose artificial neural network further includes: updating the data set to include at least one additional data sample that is based on an additional output of the set of expert systems in response to the one or more parameters, and retraining the dual purpose artificial neural network based on the data set including the at least one additional data sample that is associated with the additional output. [00407] In some aspects, the techniques described herein relate to a system, wherein retraining the dual purpose artificial neural network further includes: identifying a novel problem for which the set of expert systems is not currently trained to provide one or more outputs, updating the data set to include at least one additional data sample that is associated with the novel problem, and retraining the dual purpose artificial neural network based on the data set including the at least one additional data sample that is associated with the novel problem. [00408] In some aspects, the techniques described herein relate to a system, wherein the dual purpose learning function is further configured to: receive at least one additional data sample associated with a new input value, consolidate the at least one additional data sample with at least
SFT-106-A-PCT one data sample of the data set, and update the data set based on the consolidating of the at least one additional data sample and the at least one data sample of the data set. [00409] It is to be understood that any combination of features from the methods disclosed herein and/or from the systems disclosed herein may be used together, and/or that any features from any or all of these aspects may be combined with any of the features of the embodiments and/or examples disclosed herein to achieve the benefits as described in this disclosure. BRIEF DESCRIPTION OF THE FIGURES [00410] In the accompanying figures, like reference numerals refer to identical or functionally similar elements throughout the separate views. The figures, together with the detailed description below, are incorporated in and form part of the specification and serve to further illustrate various embodiments and to explain various principles and advantages, all in accordance with the systems and methods disclosed herein. [00411] Fig. 1 is a diagrammatic view that illustrates an architecture for a transportation system showing certain illustrative components and arrangements relating to various embodiments of the present disclosure. [00412] Fig. 2 is a diagrammatic view that illustrates use of a hybrid neural network to optimize a powertrain component of a vehicle relating to various embodiments of the present disclosure. [00413] Fig. 3 is a diagrammatic view that illustrates a set of states that may be provided as inputs to and/or be governed by an expert system/Artificial Intelligence (AI) system relating to various embodiments of the present disclosure. [00414] Fig. 4 is a diagrammatic view that illustrates a range of parameters that may be taken as inputs by an expert system or AI system, or component thereof, as described throughout this disclosure, or that may be provided as outputs from such a system and/or one or more sensors, cameras, or external systems relating to various embodiments of the present disclosure. [00415] Fig. 5 is a diagrammatic view that illustrates a set of vehicle user interfaces relating to various embodiments of the present disclosure. [00416] Fig. 6 is a diagrammatic view that illustrates a set of interfaces among transportation system components relating to various embodiments of the present disclosure. [00417] Fig. 7 is a diagrammatic view that illustrates a data processing system, which may process data from various sources relating to various embodiments of the present disclosure. [00418] Fig. 8 is a diagrammatic view that illustrates a set of algorithms that may be executed in connection with one or more of the many embodiments of transportation systems described throughout this disclosure relating to various embodiments of the present disclosure.
SFT-106-A-PCT [00419] Fig.9 is a diagrammatic view that illustrates systems described throughout this disclosure relating to various embodiments of the present disclosure. [00420] Fig.10 is a diagrammatic view that illustrates systems described throughout this disclosure relating to various embodiments of the present disclosure. [00421] Fig. 11 is a diagrammatic view that illustrates a method described throughout this disclosure relating to various embodiments of the present disclosure. [00422] Fig.12 is a diagrammatic view that illustrates systems described throughout this disclosure relating to various embodiments of the present disclosure. [00423] Fig. 13 is a diagrammatic view that illustrates a method described throughout this disclosure relating to various embodiments of the present disclosure. [00424] Fig.14 is a diagrammatic view that illustrates systems described throughout this disclosure relating to various embodiments of the present disclosure. [00425] Fig. 15 is a diagrammatic view that illustrates a method described throughout this disclosure relating to various embodiments of the present disclosure. [00426] Fig.16 is a diagrammatic view that illustrates systems described throughout this disclosure relating to various embodiments of the present disclosure. [00427] Fig. 17 is a diagrammatic view that illustrates a method described throughout this disclosure relating to various embodiments of the present disclosure. [00428] Fig.18 is a diagrammatic view that illustrates systems described throughout this disclosure relating to various embodiments of the present disclosure. [00429] Fig. 19 is a diagrammatic view that illustrates a method described throughout this disclosure relating to various embodiments of the present disclosure. [00430] Fig. 20 is a diagrammatic view that illustrates a method described throughout this disclosure relating to various embodiments of the present disclosure. [00431] Fig. 21 is a diagrammatic view that illustrates a method described throughout this disclosure relating to various embodiments of the present disclosure. [00432] Fig.22 is a diagrammatic view that illustrates systems described throughout this disclosure relating to various embodiments of the present disclosure. [00433] Fig. 23 is a diagrammatic view that illustrates a method described throughout this disclosure relating to various embodiments of the present disclosure. [00434] Fig. 24 is a diagrammatic view that illustrates a method described throughout this disclosure relating to various embodiments of the present disclosure. [00435] Fig.25 is a diagrammatic view that illustrates systems described throughout this disclosure relating to various embodiments of the present disclosure.
SFT-106-A-PCT [00436] Fig. 26 is a diagrammatic view that illustrates a method described throughout this disclosure relating to various embodiments of the present disclosure. [00437] Fig. 26A is a diagrammatic view that illustrates systems described throughout this disclosure relating to various embodiments of the present disclosure. [00438] Fig.27 is a diagrammatic view that illustrates systems described throughout this disclosure relating to various embodiments of the present disclosure. [00439] Fig. 28 is a diagrammatic view that illustrates a method described throughout this disclosure relating to various embodiments of the present disclosure. [00440] Fig.29 is a diagrammatic view that illustrates systems described throughout this disclosure relating to various embodiments of the present disclosure. [00441] Fig.30 is a diagrammatic view that illustrates systems described throughout this disclosure relating to various embodiments of the present disclosure. [00442] Fig.31 is a diagrammatic view that illustrates systems described throughout this disclosure relating to various embodiments of the present disclosure. [00443] Fig.32 is a diagrammatic view that illustrates systems described throughout this disclosure relating to various embodiments of the present disclosure. [00444] Fig. 33 is a diagrammatic view that illustrates a method described throughout this disclosure relating to various embodiments of the present disclosure. [00445] Fig.34 is a diagrammatic view that illustrates systems described throughout this disclosure relating to various embodiments of the present disclosure. [00446] Fig. 35 is a diagrammatic view that illustrates a method described throughout this disclosure relating to various embodiments of the present disclosure. [00447] Fig.36 is a diagrammatic view that illustrates systems described throughout this disclosure relating to various embodiments of the present disclosure. [00448] Fig.37 is a diagrammatic view that illustrates systems described throughout this disclosure relating to various embodiments of the present disclosure. [00449] Fig. 38 is a diagrammatic view that illustrates a method described throughout this disclosure relating to various embodiments of the present disclosure. [00450] Fig. 39 is a diagrammatic view that illustrates a method described throughout this disclosure relating to various embodiments of the present disclosure. [00451] Fig. 40 is a diagrammatic view that illustrates a method described throughout this disclosure relating to various embodiments of the present disclosure. [00452] Fig.41 is a diagrammatic view that illustrates systems described throughout this disclosure relating to various embodiments of the present disclosure.
SFT-106-A-PCT [00453] Fig. 42 is a diagrammatic view that illustrates a method described throughout this disclosure relating to various embodiments of the present disclosure. [00454] Fig. 43 is a diagrammatic view that illustrates a method described throughout this disclosure relating to various embodiments of the present disclosure. [00455] Fig.44 is a diagrammatic view that illustrates systems described throughout this disclosure relating to various embodiments of the present disclosure. [00456] Fig.45 is a diagrammatic view that illustrates systems and methods described throughout this disclosure relating to various embodiments of the present disclosure. [00457] Fig.46 is a diagrammatic view that illustrates systems and methods described throughout this disclosure relating to various embodiments of the present disclosure. [00458] Fig.47 is a diagrammatic view that illustrates systems and methods described throughout this disclosure relating to various embodiments of the present disclosure. [00459] Fig.48 is a diagrammatic view that illustrates systems described throughout this disclosure relating to various embodiments of the present disclosure. [00460] Fig. 49 is a diagrammatic view that illustrates a method described throughout this disclosure relating to various embodiments of the present disclosure. [00461] Fig. 50 is a diagrammatic view that illustrates a method described throughout this disclosure relating to various embodiments of the present disclosure. [00462] Fig.51 is a diagrammatic view that illustrates systems described throughout this disclosure relating to various embodiments of the present disclosure. [00463] Fig.52 is a diagrammatic view that illustrates systems described throughout this disclosure relating to various embodiments of the present disclosure. [00464] Fig.53 is a diagrammatic view that illustrates systems described throughout this disclosure relating to various embodiments of the present disclosure. [00465] Fig. 54 is a diagrammatic view that illustrates a method described throughout this disclosure relating to various embodiments of the present disclosure. [00466] Fig. 55 is a diagrammatic view that illustrates a method described throughout this disclosure relating to various embodiments of the present disclosure. [00467] Fig.56 is a diagrammatic view that illustrates systems described throughout this disclosure relating to various embodiments of the present disclosure. [00468] Fig.57 is a diagrammatic view that illustrates systems described throughout this disclosure relating to various embodiments of the present disclosure. [00469] Fig.58 is a diagrammatic view that illustrates systems described throughout this disclosure relating to various embodiments of the present disclosure.
SFT-106-A-PCT [00470] Fig. 59 is a diagrammatic view that illustrates an architecture for a transportation system including a digital twin system of a vehicle showing certain illustrative components and arrangements relating to various embodiments of the present disclosure. [00471] Fig.60 shows a schematic illustration of the digital twin system integrated with an identity and access management system in accordance with certain embodiments of the present disclosure. [00472] Fig.61 illustrates a schematic view of an interface of the digital twin system presented on the user device of a driver of the vehicle relating to various embodiments of the present disclosure. [00473] Fig.62 is a schematic diagram showing the interaction between the driver and the digital twin using one or more views and modes of the interface in accordance with an example embodiment of the present disclosure. [00474] Fig.63 illustrates a schematic view of an interface of the digital twin system presented on the user device of a manufacturer of the vehicle in accordance with various embodiments of the present disclosure. [00475] Fig.64 depicts a scenario in which the manufacturer uses the quality view of a digital twin interface to run simulations and generate what-if scenarios for quality testing a vehicle in accordance with an example embodiment of the present disclosure. [00476] Fig.65 illustrates a schematic view of an interface of the digital twin system presented on the user device of a dealer of the vehicle. [00477] Fig. 66 is a diagram illustrating the interaction between the dealer and the digital twin using one or more views with the goal of personalizing the experience of a customer purchasing a vehicle in accordance with an example embodiment. [00478] Fig. 67 is a diagram illustrating the service & maintenance view presented to a user of a vehicle including a driver, a manufacturer and a dealer of the vehicle in accordance with various embodiments of the present disclosure. [00479] Fig. 68 is a method used by the digital twin for detecting faults and predicting any future failures of the vehicle in accordance with an example embodiment. [00480] Fig. 69 is a diagrammatic view that illustrates the architecture of a vehicle with a digital twin system for performing predictive maintenance on a vehicle in accordance with an example embodiment of the present disclosure. [00481] Fig. 70 is a flow chart depicting a method for generating a digital twin of a vehicle in accordance with various embodiments of the disclosure. [00482] Fig.71 is a diagrammatic view that illustrates an alternate architecture for a transportation system comprising a vehicle and a digital twin system in accordance with various embodiments of the present disclosure.
SFT-106-A-PCT [00483] Fig.72 depicts a digital twin representing a combination of a set of states of both a vehicle and a driver of the vehicle in accordance with certain embodiments of the present disclosure. [00484] Fig.73 illustrates a schematic diagram depicting a scenario in which the integrated vehicle and driver digital twin may configure the vehicle experience in accordance with an example embodiment. [00485] Fig. 74 is a schematic illustrating an example of a portion of an information technology system for transportation artificial intelligence leveraging digital twins according to some embodiments of the present disclosure. [00486] Fig. 75 is a schematic illustrating examples of architecture of a digital twin system according to embodiments of the present disclosure. [00487] Fig. 76 is a schematic illustrating exemplary components of a digital twin management system according to embodiments of the present disclosure. [00488] Fig. 77 is a schematic illustrating examples of a digital twin I/O system that interfaces with an environment, the digital twin system, and/or components thereof to provide bi-directional transfer of data between coupled components according to embodiments of the present disclosure. [00489] Fig. 78 is a schematic illustrating an example set of identified states related to transportation systems that the digital twin system may identify and/or store for access by intelligent systems (e.g., a cognitive intelligence system) or users of the digital twin system according to embodiments of the present disclosure. [00490] Fig.79 is a schematic illustrating example embodiments of methods for updating a set of properties of a digital twin of the present disclosure on behalf of a client application and/or one or more embedded digital twins. [00491] Fig. 80 illustrates example embodiments of a display interface of the present disclosure that renders a digital twin of a dryer centrifuge with information relating to the dryer centrifuge. [00492] Fig.81 is a schematic illustrating an example embodiment of a method for updating a set of vibration fault level states of machine components such as bearings in the digital twin of a machine, on behalf of a client application. [00493] Fig.82 is a schematic illustrating an example embodiment of a method for updating a set of vibration severity unit values of machine components such as bearings in the digital twin of a machine on behalf of a client application. [00494] Fig.83 is a schematic illustrating an example embodiment of a method for updating a set of probability of failure values in the digital twins of machine components on behalf of a client application.
SFT-106-A-PCT [00495] Fig.84 is a schematic illustrating an example embodiment of a method for updating a set of probability of downtime values of machines in the digital twin of a transportation system on behalf of a client application. [00496] Fig. 85 is a schematic illustrating an example embodiment of a method for updating one or more probability of shutdown values of transportation entities in one or more transportation system digital twins. [00497] Fig.86 is a schematic illustrating an example embodiment of a method for updating a set of cost of downtime values of machines in the digital twin of a transportation system. [00498] Fig. 87 is a schematic illustrating an example embodiment of a method for updating one or more KPI values in a digital twin of a transportation system, on behalf of a client application. [00499] Fig. 88 is a schematic illustrating an example embodiment of a method of the present disclosure. [00500] Fig. 89 is a schematic illustrating examples of different types of enterprise digital twins, including executive digital twins, in relation to the data layer, processing layer, and application layer of an enterprise digital twin framework according to some embodiments of the present disclosure. [00501] Fig. 90 is a schematic illustrating an example of a method for configuring role-based digital twins according to some embodiments of the present disclosure. [00502] Fig. 91 is a schematic illustrating an example of a method for configuring a digital twin of a workforce according to some embodiments of the present disclosure. [00503] FIG. 92 is a schematic view of an exemplary embodiment of the quantum computing service according to some embodiments of the present disclosure. [00504] FIG. 93 illustrates quantum computing service request handling according to some embodiments of the present disclosure. [00505] FIG.94 is a diagrammatic view that illustrates embodiments of the biology-based system in accordance with the present disclosure. [00506] FIG.95 is a diagrammatic view of the thalamus service and how it coordinates within the modules in accordance with the present disclosure. [00507] FIG.96 is a diagrammatic view of the dual process artificial neural network system. [00508] Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of the many embodiments of the systems and methods disclosed herein. [00509] FIG.97 is a diagrammatic view of artificial intelligence capabilities, convergence technology stack capabilities and software-defined vehicle modules of a transportation system.
SFT-106-A-PCT [00510] FIG. 98 is a diagrammatic view of software defined vehicle modules of a transportation system. [00511] FIG. 99 depicts a block diagram of exemplary features, capabilities, and interfaces of a generative artificial intelligence platform of a transportation system. [00512] FIG. 100 is a diagrammatic view of data and visualization methods and systems of a transportation system. [00513] FIG. 101 is a diagrammatic view of data and visualization methods and systems of a transportation system. DETAILED DESCRIPTION [00514] The present disclosure will now be described in detail by describing various illustrative, non-limiting embodiments thereof with reference to the accompanying drawings and exhibits. The disclosure may, however, be embodied in many different forms and should not be construed as being limited to the illustrative embodiments set forth herein. Rather, the embodiments are provided so that this disclosure will be thorough and will fully convey the concept of the disclosure to those skilled in the art. The claims should be consulted to ascertain the true scope of the disclosure. [00515] Before describing in detail embodiments that are in accordance with the systems and methods disclosed herein, it should be observed that the embodiments reside primarily in combinations of method and/or system components. Accordingly, the system components and methods have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the systems and methods disclosed herein. [00516] All documents mentioned herein are hereby incorporated by reference in their entirety. References to items in the singular should be understood to include items in the plural, and vice versa, unless explicitly stated otherwise or clear from the context. Grammatical conjunctions are intended to express any and all disjunctive and conjunctive combinations of conjoined clauses, sentences, words, and the like, unless otherwise stated or clear from the context. Thus, the term “or” should generally be understood to mean “and/or” and so forth, except where the context clearly indicates otherwise. [00517] Recitation of ranges of values herein are not intended to be limiting, referring instead individually to any and all values falling within the range, unless otherwise indicated herein, and each separate value within such a range is incorporated into the specification as if it were individually recited herein. The words “about,” “approximately,” or the like, when accompanying a numerical value, are to be construed as indicating a deviation as would be appreciated by one
SFT-106-A-PCT skilled in the art to operate satisfactorily for an intended purpose. Ranges of values and/or numeric values are provided herein as examples only, and do not constitute a limitation on the scope of the described embodiments. The use of any and all examples, or exemplary language (“e.g.,” “such as,” or the like) provided herein, is intended merely to better illuminate the embodiments and does not pose a limitation on the scope of the embodiments or the claims. No language in the specification should be construed as indicating any unclaimed element as essential to the practice of the embodiments. [00518] In the following description, it is understood that terms such as “first,” “second,” “third,” “above,” “below,” and the like, are words of convenience and are not to be construed as implying a chronological order or otherwise limiting any corresponding element unless expressly stated otherwise. The term “set” should be understood to encompass a set with a single member or a plurality of members. [00519] Referring to Fig. 1, an architecture for a transportation system 111 is depicted, showing certain illustrative components and arrangements relating to certain embodiments described herein. The transportation system 111 may include one or more vehicles 110, which may include various mechanical, electrical, and software components and systems, such as a powertrain 113, a suspension system 117, a steering system, a braking system, a fuel system, a charging system, seats 128, a combustion engine, an electric vehicle drive train, a transmission 119, a gear set, and the like. The vehicle may have a vehicle user interface 123, which may include a set of interfaces that include a steering system, buttons, levers, touch screen interfaces, audio interfaces, and the like as described throughout this disclosure. The vehicle may have a set of sensors 125 (including cameras 127), such as for providing input to expert system/artificial intelligence features described throughout this disclosure, such as one or more neural networks (which may include hybrid neural networks 147 as described herein). Sensors 125 and/or external information may be used to inform the expert system/Artificial Intelligence (AI) system 136 and to indicate or track one or more vehicle states 144, such as vehicle operating states 345 (Fig. 3), user experience states 346 (Fig. 3), and others described herein, which also may be as inputs to or taken as outputs from a set of expert system/AI components. Routing information 143 may inform and take input from the expert system/AI system 136, including using in-vehicle navigation capabilities and external navigation capabilities, such as Global Position System (GPS), routing by triangulation (such as cell towers), peer-to-peer routing with other vehicles 121, and the like. A collaboration engine 129 may facilitate collaboration among vehicles and/or among users of vehicles, such as for managing collective experiences, managing fleets and the like. Vehicles 110 may be networked among each other in a peer-to-peer manner, such as using cognitive radio, cellular, wireless or other networking features. An AI system 136 or other expert systems may take as input a wide range of vehicle parameters
SFT-106-A-PCT 130, such as from onboard diagnostic systems, telemetry systems, and other software systems, as well as from vehicle-located sensors 125 and from external systems. In embodiments, the system may manage a set of feedback/rewards 148, incentives, or the like, such as to induce certain user behavior and/or to provide feedback to the AI system 136, such as for learning on a set of outcomes to accomplish a given task or objective. The expert system or AI system 136 may inform, use, manage, or take output from a set of algorithms 149, including a wide variety as described herein. In the example of the present disclosure depicted in Fig. 1, a data processing system 162, is connected to the hybrid neural network 147. The data processing system 162 may process data from various sources (see Fig. 7). In the example of the present disclosure depicted in Fig. 1, a system user interface 163, is connected to the hybrid neural network 147. See the disclosure, below, relating to Fig.6 for further disclosure relating to interfaces. Fig.1 shows that vehicle surroundings 164 may be part of the transportation system 111. Vehicle surroundings may include roadways, weather conditions, lighting conditions, etc. Fig. 1 shows that devices 165, for example, mobile phones and computer systems, navigation systems, etc., may be connected to various elements of the transportation system 111, and therefore may be part of the transportation system 111 of the present disclosure. [00520] Referring to Fig. 2, provided herein are transportation systems having a hybrid neural network 247 for optimizing a powertrain 213 of a vehicle, wherein at least two parts of the hybrid neural network 247 optimize distinct parts of the powertrain 213. An artificial intelligence system may control a powertrain component 215 based on an operational model (such as a physics model, an electrodynamic model, a hydrodynamic model, a chemical model, or the like for energy conversion, as well as a mechanical model for operation of various dynamically interacting system components). For example, the AI system may control a powertrain component 215 by manipulating a powertrain operating parameter 260 to achieve a powertrain state 261. The AI system may be trained to operate a powertrain component 215, such as by training on a data set of outcomes (e.g., fuel efficiency, safety, rider satisfaction, or the like) and/or by training on a data set of operator actions (e.g., driver actions sensed by a sensor set, camera or the like or by a vehicle information system). In embodiments, a hybrid approach may be used, where one neural network optimizes one part of a powertrain (e.g., for gear shifting operations), while another neural network optimizes another part (e.g., braking, clutch engagement, or energy discharge and recharging, among others). Any of the powertrain components described throughout this disclosure may be controlled by a set of control instructions that consist of output from at least one component of a hybrid neural network 247. [00521] Fig.3 illustrates a set of states that may be provided as inputs to and/or be governed by an expert system/AI system 336, as well as used in connection with various systems and components
SFT-106-A-PCT in various embodiments described herein. States 344 may include vehicle operating states 345, including vehicle configuration states, component states, diagnostic states, performance states, location states, maintenance states, and many others, as well as user experience states 346, such as experience-specific states, emotional states 366 for users, satisfaction states 367, location states, content/entertainment states and many others. [00522] Fig.4 illustrates a range of parameters 430 that may be taken as inputs by an expert system or AI system 136 (Fig. 1), or component thereof, as described throughout this disclosure, or that may be provided as outputs from such a system and/or one or more sensors 125 (Fig.1), cameras 127 (Fig.1), or external systems. Parameters 430 may include one or more goals 431 or objectives (such as ones that are to be optimized by an expert system/AI system, such as by iteration and/or machine learning), such as a performance goal 433, such as relating to fuel efficiency, trip time, satisfaction, financial efficiency, safety, or the like. Parameters 430 may include market feedback parameters 435, such as relating to pricing, availability, location, or the like of goods, services, fuel, electricity, advertising, content, or the like. Parameters 430 may include rider state parameters 437, such as parameters relating to comfort 439, emotional state, satisfaction, goals, type of trip, fatigue and the like. Parameters 430 may include parameters of various transportation-relevant profiles, such as traffic profiles 440 (location, direction, density and patterns in time, among many others), road profiles 441 (elevation, curvature, direction, road surface conditions and many others), user profiles, and many others. Parameters 430 may include routing parameters 442, such as current vehicle locations, destinations, waypoints, points of interest, type of trip, goal for trip, required arrival time, desired user experience, and many others. Parameters 430 may include satisfaction parameters 443, such as for riders (including drivers), fleet managers, advertisers, merchants, owners, operators, insurers, regulators and others. Parameters 430 may include operating parameters 444, including the wide variety described throughout this disclosure. [00523] Fig. 5 illustrates a set of vehicle user interfaces 523. Vehicle user interfaces 523 may include electromechanical interfaces 568, such as steering interfaces, braking interfaces, interfaces for seats, windows, moonroof, glove box and the like. Interfaces 523 may include various software interfaces (which may have touch screen, dials, knobs, buttons, icons or other features), such as a game interface 569, a navigation interface 570, an entertainment interface 571, a vehicle settings interface 572, a search interface 573, an ecommerce interface 574, and many others. Vehicle interfaces may be used to provide inputs to, and may be governed by, one or more AI systems/expert systems such as described in embodiments throughout this disclosure. [00524] Fig. 6 illustrates a set of interfaces among transportation system components, including interfaces within a host system (such as governing a vehicle or fleet of vehicles) and host interfaces 650 between a host system and one or more third parties and/or external systems. Interfaces include
SFT-106-A-PCT third party interfaces 655 and end user interfaces 651 for users of the host system, including the in-vehicle interfaces that may be used by riders as noted in connection with Fig.5, as well as user interfaces for others, such as fleet managers, insurers, regulators, police, advertisers, merchants, content providers, and many others. Interfaces may include merchant interfaces 652, such as by which merchants may provide advertisements, content relating to offerings, and one or more rewards, such as to induce routing or other behavior on the part of users. Interfaces may include machine interfaces 653, such as application programming interfaces (API) 654, networking interfaces, peer-to-peer interfaces, connectors, brokers, extract-transform-load (ETL) system, bridges, gateways, ports and the like. Interfaces may include one or more host interfaces by which a host may manage and/or configure one or more of the many embodiments described herein, such as configuring neural network components, setting weight for models, setting one or more goals or objectives, setting reward parameters 656, and many others. Interfaces may include expert system/AI system configuration interfaces 657, such as for selecting one or more models 658, selecting and configuring data sets 659 (such as sensor data, external data and other inputs described herein), AI selection 660 and AI configuration 661 (such as selection of neural network category, parameter weighting and the like), feedback selection 662 for an expert system/AI system, such as for learning, and supervision configuration 663, among many others. [00525] Fig. 7 illustrates a data processing system 758, which may process data from various sources, including social media data sources 769, weather data sources 770, road profile sources 771, traffic data sources 772, media data sources 773, sensors sets 774, and many others. The data processing system may be configured to extract data, transform data to a suitable format (such as for use by an interface system, an AI system/expert system, or other systems), load it to an appropriate location, normalize data, cleanse data, deduplicate data, store data (such as to enable queries) and perform a wide range of processing tasks as described throughout this disclosure. [00526] Fig. 8 illustrates a set of algorithms 849 that may be executed in connection with one or more of the many embodiments of transportation systems described throughout this disclosure. Algorithms 849 may take input from, provide output to, and be managed by a set of AI systems/expert systems, such as of the many types described herein. Algorithms 849 may include algorithms for providing or managing user satisfaction 874, one or more genetic algorithms 875, such as for seeking favorable states, parameters, or combinations of states/parameters in connection with optimization of one or more of the systems described herein. Algorithms 849 may include vehicle routing algorithms 876, including ones that are sensitive to various vehicle operating parameters, user experience parameters, or other states, parameters, profiles, or the like described herein, as well as to various goals or objectives. Algorithms 849 may include object detection algorithms 876. Algorithms 849 may include energy calculation algorithms 877, such as
SFT-106-A-PCT for calculating energy parameters, for optimizing fuel usage, electricity usage or the like, for optimizing refueling or recharging time, location, amount or the like. Algorithms may include prediction algorithms, such as for a traffic prediction algorithm 879, a transportation prediction algorithm 880, and algorithms for predicting other states or parameters of transportation systems as described throughout this disclosure. [00527] In various embodiments, transportation systems 111 as described herein may include vehicles (including fleets and other sets of vehicles), as well as various infrastructure systems. Infrastructure systems may include Internet of Things systems (such as using cameras and other sensors, such as disposed on or in roadways, on or in traffic lights, utility poles, toll booths, signs and other roadside devices and systems, on or in buildings, and the like), refueling and recharging systems (such as at service stations, charging locations and the like, and including wireless recharging systems that use wireless power transfer), and many others. [00528] Vehicle electrical, mechanical and/or powertrain components as described herein may include a wide range of systems, including transmission, gear system, clutch system, braking system, fuel system, lubrication system, steering system, suspension system, lighting system (including emergency lighting as well as interior and exterior lights), electrical system, and various subsystems and components thereof. [00529] Vehicle operating states and parameters may include route, purpose of trip, geolocation, orientation, vehicle range, powertrain parameters, current gear, speed/acceleration, suspension profile (including various parameters, such as for each wheel), charge state for electric and hybrid vehicles, fuel state for fueled vehicles, and many others as described throughout this disclosure. [00530] Rider and/or user experience states and parameters as described throughout this disclosure may include emotional states, comfort states, psychological states (e.g., anxiety, nervousness, relaxation or the like), awake/asleep states, and/or states related to satisfaction, alertness, health, wellness, one or more goals or objectives, and many others. User experience parameters as described herein may further include ones related to driving, braking, curve approach, seat positioning, window state, ventilation system, climate control, temperature, humidity, sound level, entertainment content type (e.g., news, music, sports, comedy, or the like), route selection (such as for POIs, scenic views, new sites and the like), and many others. [00531] In embodiments, a route may be ascribed various parameters of value, such as parameters of value that may be optimized to improve user experience or other factors, such as under control of an AI system/expert system. Parameters of value of a route may include speed, duration, on time arrival, length (e.g., in miles), goals (e.g., to see a Point of Interest (POI), to complete a task (e.g., complete a shopping list, complete a delivery schedule, complete a meeting, or the like), refueling or recharging parameters, game-based goals, and others. As one of many examples, a route may
SFT-106-A-PCT be attributed value, such as in a model and/or as an input or feedback to an AI system or expert system that is configured to optimize a route, for task completion. A user may, for example, indicate a goal to meet up with at least one of a set of friends during a weekend, such as by interacting with a user interface or menu that allows setting of objectives. A route may be configured (including with inputs that provide awareness of friend locations, such as by interacting with systems that include location information for other vehicles and/or awareness of social relationships, such as through social data feeds) to increase the likelihood of meeting up, such as by intersecting with predicted locations of friends (which may be predicted by a neural network or other AI system/expert system as described throughout this disclosure) and by providing in-vehicle messages (or messages to a mobile device) that indicates possible opportunities for meeting up. [00532] Market feedback factors may be used to optimize various elements of transportation systems as described throughout this disclosure, such as current and predicted pricing and/or cost (e.g., of fuel, electricity and the like, as well as of goods, services, content and the like that may be available along the route and/or in a vehicle), current and predicted capacity, supply and/or demand for one or more transportation related factors (such as fuel, electricity, charging capacity, maintenance, service, replacement parts, new or used vehicles, capacity to provide ride sharing, self-driving vehicle capacity or availability, and the like), and many others. [00533] An interface in or on a vehicle may include a negotiation system, such as a bidding system, a price-negotiating system, a reward-negotiating system, or the like. For example, a user may negotiate for a higher reward in exchange for agreeing to re-route to a merchant location, a user may name a price the user is willing to pay for fuel (which may be provided to nearby refueling stations that may offer to meet the price), or the like. Outputs from negotiation (such as agreed prices, trips and the like) may automatically result in reconfiguration of a route, such as one governed by an AI system/expert system. [00534] Rewards, such as provided by a merchant or a host, among others, as described herein may include one or more coupons, such as redeemable at a location, provision of higher priority (such as in collective routing of multiple vehicles), permission to use a “Fast Lane,” priority for charging or refueling capacity, among many others. Actions that can lead to rewards in a vehicle may include playing a game, downloading an app, driving to a location, taking a photograph of a location or object, visiting a website, viewing or listening to an advertisement, watching a video, and many others. [00535] In embodiments, an AI system/expert system may use or optimize one or more parameters for a charging plan, such as for charging a battery of an electric or hybrid vehicle. Charging plan parameters may include routing (such as to charging locations), amount of charge or fuel provided, duration of time for charging, battery state, battery charging profile, time required to charge, value
SFT-106-A-PCT of charging, indicators of value, market price, bids for charging, available supply capacity (such as within a geofence or within a range of a set of vehicles), demand (such as based on detected charge/refueling state, based on requested demand, or the like), supply, and others. A neural network or other systems (optionally a hybrid system as described herein), using a model or algorithm (such as a genetic algorithm) may be used (such as by being trained over a set of trials on outcomes, and/or using a training set of human created or human supervised inputs, or the like) may provide a favorable and/or optimized charging plan for a vehicle or a set of vehicles based on the parameters. Other inputs may include priority for certain vehicles (e.g., for emergency responders or for those who have been rewarded priority in connection with various embodiments described herein). [00536] In embodiments, a processor, as described herein, may comprise a neural processing chip, such as one employing a fabric, such as a LambdaFabric. Such a chip may have a plurality of cores, such as 256 cores, where each core is configured in a neuron-like arrangement with other cores on the same chip. Each core may comprise a micro-scale digital signal processor, and the fabric may enable the cores to readily connect to the other cores on the chip. In embodiments, the fabric may connect a large number of cores (e.g., more than 500,000 cores) and/or chips, thereby facilitating use in computational environments that require, for example, large scale neural networks, massively parallel computing, and large-scale, complex conditional logic. In embodiments, a low- latency fabric is used, such as one that has latency of 400 nanoseconds, 300 nanoseconds, 200 nanoseconds, 100 nanoseconds, or less from device-to-device, rack-to-rack, or the like. The chip may be a low power chip, such as one that can be powered by energy harvesting from the environment, from an inspection signal, from an onboard antenna, or the like. In embodiments, the cores may be configured to enable application of a set of sparse matrix heterogeneous machine learning algorithms. The chip may run an object-oriented programming language, such as C++, Java, or the like. In embodiments, a chip may be programmed to run each core with a different algorithm, thereby enabling heterogeneity in algorithms, such as to enable one or more of the hybrid neural network embodiments described throughout this disclosure. A chip can thereby take multiple inputs (e.g., one per core) from multiple data sources, undertake massively parallel processing using a large set of distinct algorithms, and provide a plurality of outputs (such as one per core or per set of cores). [00537] In embodiments, a chip may contain or enable a security fabric, such as a fabric for performing content inspection, packet inspection (such as against a black list, white list, or the like), and the like, in addition to undertaking processing tasks, such as for a neural network, hybrid AI solution, or the like.
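By way of a non-limiting illustration only, the charging plan optimization described above (e.g., selecting among charging locations based on routing, amount of charge, duration, market price, available supply capacity, and priority) may be sketched as a simple scoring function over candidate plans. The station names, field names, weights, and the weighted-sum scoring below are assumptions introduced solely for clarity; an embodiment may instead use a neural network, genetic algorithm, or other hybrid system as described herein, and such a scoring function could serve as a fitness function or training objective for those approaches.

```python
from dataclasses import dataclass

@dataclass
class ChargingOption:
    station_id: str
    detour_km: float          # extra distance to reach the charger
    price_per_kwh: float      # current market price (may reflect bids)
    max_power_kw: float       # available supply capacity at the station
    wait_minutes: float       # expected queue time given current demand

@dataclass
class VehicleState:
    battery_kwh: float
    capacity_kwh: float
    priority: float = 1.0     # e.g., boosted for emergency responders

def plan_score(vehicle: VehicleState, option: ChargingOption,
               target_soc: float = 0.8) -> float:
    """Score one candidate charging plan; higher is better.

    Lower cost, shorter total time, and shorter detour raise the score.
    The weights are illustrative only."""
    needed_kwh = max(0.0, target_soc * vehicle.capacity_kwh - vehicle.battery_kwh)
    charge_minutes = 60.0 * needed_kwh / option.max_power_kw
    total_minutes = option.wait_minutes + charge_minutes
    cost = needed_kwh * option.price_per_kwh
    return vehicle.priority * 100.0 - cost - 0.5 * total_minutes - 2.0 * option.detour_km

def choose_charging_plan(vehicle: VehicleState,
                         options: list[ChargingOption]) -> ChargingOption:
    """Select the highest-scoring candidate plan."""
    return max(options, key=lambda o: plan_score(vehicle, o))

if __name__ == "__main__":
    ev = VehicleState(battery_kwh=18.0, capacity_kwh=75.0)
    candidates = [
        ChargingOption("fast-charger-A", detour_km=2.0, price_per_kwh=0.42,
                       max_power_kw=150.0, wait_minutes=12.0),
        ChargingOption("slow-charger-B", detour_km=0.3, price_per_kwh=0.28,
                       max_power_kw=22.0, wait_minutes=0.0),
    ]
    print(choose_charging_plan(ev, candidates).station_id)
```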
SFT-106-A-PCT [00538] In embodiments, the platform described herein may include, integrate with, or connect with a system for robotic process automation (RPA), whereby an artificial intelligence/machine learning system may be trained on a training set of data that consists of tracking and recording sets of interactions of humans as the humans interact with a set of interfaces, such as graphical user interfaces (e.g., via interactions with mouse, trackpad, keyboard, touch screen, joystick, remote control devices); audio system interfaces (such as by microphones, smart speakers, voice response interfaces, intelligent agent interfaces (e.g., Siri and Alexa) and the like); human-machine interfaces (such as involving robotic systems, prosthetics, cybernetic systems, exoskeleton systems, wearables (including clothing, headgear, headphones, watches, wrist bands, glasses, arm bands, torso bands, belts, rings, necklaces and other accessories); physical or mechanical interfaces (e.g., buttons, dials, toggles, knobs, touch screens, levers, handles, steering systems, wheels, and many others); optical interfaces (including ones triggered by eye tracking, facial recognition, gesture recognition, emotion recognition, and the like); sensor-enabled interfaces (such as ones involving cameras, EEG or other electrical signal sensing (such as for brain-computer interfaces), magnetic sensing, accelerometers, galvanic skin response sensors, optical sensors, IR sensors, LIDAR and other sensor sets that are capable of recognizing thoughts, gestures (facial, hand, posture, or other), utterances, and the like, and others. In addition to tracking and recording human interactions, the RPA system may also track and record a set of states, actions, events and results that occur by, within, from or about the systems and processes with which the humans are engaging. For example, the RPA system may record mouse clicks on a frame of video that appears within a process by which a human review the video, such as where the human highlights points of interest within the video, tags objects in the video, captures parameters (such as sizes, dimensions, or the like), or otherwise operates on the video within a graphical user interface. The RPA system may also record system or process states and events, such as recording what elements were the subject of interaction, what the state of a system was before, during and after interaction, and what outputs were provided by the system or what results were achieved. Through a large training set of observation of human interactions and system states, events, and outcomes, the RPA system may learn to interact with the system in a fashion that mimics that of the human. Learning may be reinforced by training and supervision, such as by having a human correct the RPA system as it attempts in a set of trials to undertake the action that the human would have undertaken (e.g., tagging the right object, labeling an item correctly, selecting the correct button to trigger a next step in a process, or the like), such that over a set of trials the RPA system becomes increasingly effective at replicating the action the human would have taken. Learning may include deep learning, such as by reinforcing learning based on outcomes, such as successful outcomes (such as based on successful process completion, financial yield, and many other outcome measures
SFT-106-A-PCT described throughout this disclosure). In embodiments, an RPA system may be seeded during a learning phase with a set of expert human interactions, such that the RPA system begins to be able to replicate expert interaction with a system. For example, an expert driver's interactions with a robotic system, such as a remote-controlled vehicle or a UAV, may be recorded along with information about the vehicles state (e.g., the surrounding environment, navigation parameters, and purpose), such that the RPA system may learn to drive the vehicle in a way that reflects the same choices as an expert driver. After being taught to replicate the skills or expertise of an expert human, the RPA system may be transitioned to a deep learning mode, where the system further improves based on a set of outcomes, such as by being configured to attempt some level of variation in approach (e.g., trying different navigation paths to optimize time of arrival, or trying different approaches to deceleration and acceleration in curves) and tracking outcomes (with feedback), such that the RPA system can learn, by variation/experimentation (which may be randomized, rule- based, or the like, such as using genetic programming techniques, random-walk techniques, random forest techniques, and others) and selection, to exceed the expertise of the human expert. Thus, the RPA system learns from a human expert, acquires expertise in interacting with a system or process, facilitates automation of the process (such as by taking over some of the more repetitive tasks, including ones that require consistent execution of acquired skills), and provides a very effective seed for artificial intelligence, such as by providing a seed model or system that can be improved by machine learning with feedback on outcomes of a system or process. [00539] RPA systems may have particular value in situations where human expertise or knowledge is acquired with training and experience, as well as in situations where the human brain and sensory systems are particularly adapted and evolved to solve problems that are computationally difficult or highly complex. Thus, in embodiments, RPA systems may be used to learn to undertake, among other things: visual pattern recognition tasks with respect to the various systems, processes, workflows and environments described herein (such as recognizing the meaning of dynamic interactions of objects or entities within a video stream (e.g., to understand what is taking place as humans and objects interact in a video); recognition of the significance of visual patterns (e.g., recognizing objects, structures, defects and conditions in a photograph or radiography image); tagging of relevant objects within a visual pattern (e.g., tagging or labeling objects by type, category, or specific identity (such as person recognition); indication of metrics in a visual pattern (such as dimensions of objects indicated by clicking on dimensions in an x-ray or the like); labeling activities in a visual pattern by category (e.g., what work process is being done); recognizing a pattern that is displayed as a signal (e.g., a wave or similar pattern in a frequency domain, time domain, or other signal processing representation); anticipate a n future state based on a current state (e.g., anticipating motion of a flying or rolling object, anticipating a next action by a human
SFT-106-A-PCT in a process, anticipating a next step by a machine, anticipating a reaction by a person to an event, and many others); recognize and predicting emotional states and reactions (such as based on facial expression, posture, body language or the like); apply a heuristic to achieve a favorable state without deterministic calculation (e.g., selecting a favorable strategy in sport or game, selecting a business strategy, selecting a negotiating strategy, setting a price for a product, developing a message to promote a product or idea, generating creative content, recognizing a favorable style or fashion, and many others); any many others. In embodiments, an RPA system may automate workflows that involve visual inspection of people, systems, and objects (including internal components), workflows that involve performing software tasks, such as involving sequential interactions with a series of screens in a software interface, workflows that involve remote control of robots and other systems and devices, workflows that involve content creation (such as selecting, editing and sequencing content), workflows that involve financial decision-making and negotiation (such as setting prices and other terms and conditions of financial and other transactions), workflows that involve decision-making (such as selecting an optimal configuration for a system or sub-system, selecting an optimal path or sequence of actions in a workflow, process or other activity that involves dynamic decision-making), and many others. [00540] In embodiments, an RPA system may use a set of IoT devices and systems (such as cameras and sensors), to track and record human actions and interactions with respect to various interfaces and systems in an environment. The RPA system may also use data from onboard sensors, telemetry, and event recording systems, such as telemetry systems on vehicles and event logs on computers). The RPA system may thus generate and/or receive a large data set (optionally distributed) for an environment (such as any of the environments described throughout this disclosure) including data recording the various entities (human and non-human), systems, processes, applications (e.g., software applications used to enable workflows), states, events, and outcomes, which can be used to train the RPA system (or a set of RPA systems dedicated to automating various processes and workflows) to accomplish processes and workflows in a way that reflects and mimics accumulated human expertise, and that eventually improves on the results of that human expertise by further machine learning. [00541] Referring to Fig.9, in embodiments provided herein are transportation systems 911 having an artificial intelligence system 936 that uses at least one genetic algorithm 975 to explore a set of possible vehicle operating states 945 to determine at least one optimized operating state. In embodiments, the genetic algorithm 975 takes inputs relating to at least one vehicle performance parameter 982 and at least one rider state 937. [00542] An aspect provided herein includes a system for transportation 911, comprising: a vehicle 910 having a vehicle operating state 945; an artificial intelligence system 936 to execute a genetic
SFT-106-A-PCT algorithm 975 to generate mutations from an initial vehicle operating state to determine at least one optimized vehicle operating state. In embodiments, the vehicle operating state 945 includes a set of vehicle parameter values 984. In embodiments, the genetic algorithm 975 is to: vary the set of vehicle parameter values 984 for a set of corresponding time periods such that the vehicle 910 operates according to the set of vehicle parameter values 984 during the corresponding time periods; evaluate the vehicle operating state 945 for each of the corresponding time periods according to a set of measures 983 to generate evaluations; and select, for future operation of the vehicle 910, an optimized set of vehicle parameter values based on the evaluations. [00543] In embodiments, the vehicle operating state 945 includes the rider state 937 of a rider of the vehicle. In embodiments, the at least one optimized vehicle operating state includes an optimized state of the rider. In embodiments, the genetic algorithm 975 is to optimize the state of the rider. In embodiments, the evaluating according to the set of measures 983 is to determine the state of the rider corresponding to the vehicle parameter values 984. [00544] In embodiments, the vehicle operating state 945 includes a state of the rider of the vehicle. In embodiments, the set of vehicle parameter values 984 includes a set of vehicle performance control values. In embodiments, the at least one optimized vehicle operating state includes an optimized state of performance of the vehicle. In embodiments, the genetic algorithm 975 is to optimize the state of the rider and the state of performance of the vehicle. In embodiments, the evaluating according to the set of measures 983 is to determine the state of the rider and the state of performance of the vehicle corresponding to the vehicle performance control values. [00545] In embodiments, the set of vehicle parameter values 984 includes a set of vehicle performance control values. In embodiments, the at least one optimized vehicle operating state includes an optimized state of performance of the vehicle. In embodiments, the genetic algorithm 975 is to optimize the state of performance of the vehicle. In embodiments, the evaluating according to the set of measures 983 is to determine the state of performance of the vehicle corresponding to the vehicle performance control values. [00546] In embodiments, the set of vehicle parameter values 984 includes a rider-occupied parameter value. In embodiments, the rider-occupied parameter value affirms a presence of a rider in the vehicle 910. In embodiments, the vehicle operating state 945 includes the rider state 937 of a rider of the vehicle. In embodiments, the at least one optimized vehicle operating state includes an optimized state of the rider. In embodiments, the genetic algorithm 975 is to optimize the state of the rider. In embodiments, the evaluating according to the set of measures 983 is to determine the state of the rider corresponding to the vehicle parameter values 984. In embodiments, the state of the rider includes a rider satisfaction parameter. In embodiments, the state of the rider includes an input representative of the rider. In embodiments, the input representative of the rider is selected
SFT-106-A-PCT from the group consisting of: a rider state parameter, a rider comfort parameter, a rider emotional state parameter, a rider satisfaction parameter, a rider goals parameter, a classification of the trip, and combinations thereof. [00547] In embodiments, the set of vehicle parameter values 984 includes a set of vehicle performance control values. In embodiments, the at least one optimized vehicle operating state includes an optimized state of performance of the vehicle. In embodiments, the genetic algorithm 975 is to optimize the state of the rider and the state of performance of the vehicle. In embodiments, the evaluating according to the set of measures 983 is to determine the state of the rider and the state of performance of the vehicle corresponding to the vehicle performance control values. In embodiments, the set of vehicle parameter values 984 includes a set of vehicle performance control values. In embodiments, the at least one optimized vehicle operating state includes an optimized state of performance of the vehicle. In embodiments, the genetic algorithm 975 is to optimize the state of performance of the vehicle. In embodiments, the evaluating according to the set of measures 983 is to determine the state of performance of the vehicle corresponding to the vehicle performance control values. [00548] In embodiments, the set of vehicle performance control values are selected from the group consisting of: a fuel efficiency; a trip duration; a vehicle wear; a vehicle make; a vehicle model; a vehicle energy consumption profiles; a fuel capacity; a real-time fuel level; a charge capacity; a recharging capability; a regenerative braking state; and combinations thereof. In embodiments, at least a portion of the set of vehicle performance control values is sourced from at least one of an on-board diagnostic system, a telemetry system, a software system, a vehicle-located sensor, and a system external to the vehicle 910. In embodiments, the set of measures 983 relates to a set of vehicle operating criteria. In embodiments, the set of measures 983 relates to a set of rider satisfaction criteria. In embodiments, the set of measures 983 relates to a combination of vehicle operating criteria and rider satisfaction criteria. In embodiments, each evaluation uses feedback indicative of an effect on at least one of a state of performance of the vehicle and a state of the rider. [00549] An aspect provided herein includes a system for transportation 911, comprising: an artificial intelligence system 936 to process inputs representative of a state of a vehicle and inputs representative of a rider state 937 of a rider occupying the vehicle during the state of the vehicle with the genetic algorithm 975 to optimize a set of vehicle parameters that affects the state of the vehicle or the rider state 937. In embodiments, the genetic algorithm 975 is to perform a series of evaluations using variations of the inputs. In embodiments, each evaluation in the series of evaluations uses feedback indicative of an effect on at least one of a vehicle operating state 945 and the rider state 937. In embodiments, the inputs representative of the rider state 937 indicate
that the rider is absent from the vehicle 910. In embodiments, the state of the vehicle includes the vehicle operating state 945. In embodiments, a vehicle parameter in the set of vehicle parameters includes a vehicle performance parameter 982. In embodiments, the genetic algorithm 975 is to optimize the set of vehicle parameters for the state of the rider. [00550] In embodiments, optimizing the set of vehicle parameters is responsive to an identifying, by the genetic algorithm 975, of at least one vehicle parameter that produces a favorable rider state. In embodiments, the genetic algorithm 975 is to optimize the set of vehicle parameters for vehicle performance. In embodiments, the genetic algorithm 975 is to optimize the set of vehicle parameters for the state of the rider and is to optimize the set of vehicle parameters for vehicle performance. In embodiments, optimizing the set of vehicle parameters is responsive to the genetic algorithm 975 identifying at least one of a favorable vehicle operating state and favorable vehicle performance that maintains the rider state 937. In embodiments, the artificial intelligence system 936 further includes a neural network selected from a plurality of different neural networks. In embodiments, the selection of the neural network involves the genetic algorithm 975. In embodiments, the selection of the neural network is based on a structured competition among the plurality of different neural networks. In embodiments, the genetic algorithm 975 facilitates training a neural network to process interactions among a plurality of vehicle operating systems and riders to produce the optimized set of vehicle parameters. [00551] In embodiments, a set of inputs relating to at least one vehicle parameter is provided by at least one of an on-board diagnostic system, a telemetry system, a vehicle-located sensor, and a system external to the vehicle. In embodiments, the inputs representative of the rider state 937 comprise at least one of comfort, emotional state, satisfaction, goals, classification of trip, or fatigue. In embodiments, the inputs representative of the rider state 937 reflect a satisfaction parameter of at least one of a driver, a fleet manager, an advertiser, a merchant, an owner, an operator, an insurer, and a regulator. In embodiments, the inputs representative of the rider state 937 comprise inputs relating to a user that, when processed with a cognitive system, yield the rider state 937.
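By way of a non-limiting illustration only, the behavior of the genetic algorithm 975 recited above (varying a set of vehicle parameter values over corresponding time periods, evaluating each period according to a set of measures, and selecting an optimized set of values for future operation) may be sketched as follows. The parameter names, bounds, mock evaluation measures, and mutation scheme are assumptions introduced for clarity; in an embodiment the evaluation would instead use the measures 983 (e.g., vehicle operating criteria and rider satisfaction criteria) observed while the vehicle operates with the candidate values.

```python
import random

# Illustrative vehicle parameter values and bounds (names are hypothetical).
INITIAL_STATE = {"shift_rpm": 2600.0, "regen_braking": 0.4, "cabin_temp_c": 23.0}
BOUNDS = {"shift_rpm": (1800.0, 4000.0),
          "regen_braking": (0.0, 1.0),
          "cabin_temp_c": (18.0, 27.0)}

def evaluate(params: dict) -> float:
    """Stand-in for the set of measures: combines a mocked vehicle
    performance score and a mocked rider satisfaction score that would,
    in practice, be observed during the time period in which the vehicle
    operated with these parameter values."""
    performance = (1.0 - abs(params["shift_rpm"] - 2400.0) / 2200.0
                   + 0.3 * params["regen_braking"])
    satisfaction = 1.0 - abs(params["cabin_temp_c"] - 22.0) / 9.0
    return performance + satisfaction

def mutate(params: dict, scale: float = 0.1) -> dict:
    """Generate a mutation of the current vehicle operating state,
    clamped to the allowed parameter bounds."""
    child = {}
    for key, value in params.items():
        low, high = BOUNDS[key]
        child[key] = min(high, max(low, value + random.gauss(0.0, scale * (high - low))))
    return child

def optimize(initial: dict, generations: int = 50, population: int = 8) -> dict:
    """Iteratively mutate, evaluate, and select the best parameter set."""
    best, best_score = dict(initial), evaluate(initial)
    for _ in range(generations):
        # Each candidate corresponds to one evaluation time period.
        for cand in (mutate(best) for _ in range(population)):
            score = evaluate(cand)
            if score > best_score:
                best, best_score = cand, score
    return best

if __name__ == "__main__":
    random.seed(0)
    print(optimize(INITIAL_STATE))
```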
[00552] Referring to Fig. 10, in embodiments provided herein are transportation systems 1011 having a hybrid neural network 1047 for optimizing the operating state of a continuously variable powertrain 1013 of a vehicle 1010. In embodiments, at least one part of the hybrid neural network 1047 operates to classify a state of the vehicle 1010 and another part of the hybrid neural network 1047 operates to optimize at least one operating parameter 1060 of the transmission 1019. In embodiments, the vehicle 1010 may be a self-driving vehicle. In an example, the first portion 1085 of the hybrid neural network may classify the vehicle 1010 as operating in a high-traffic state (such as by use of LIDAR, RADAR, or the like that indicates the presence of other vehicles, or by taking input from a traffic monitoring system, or by detecting the presence of a high density of mobile devices, or the like) and a bad weather state (such as by taking inputs indicating wet roads (such as using vision-based systems), precipitation (such as determined by radar), presence of ice (such as by temperature sensing, vision-based sensing, or the like), hail (such as by impact detection, sound-sensing, or the like), lightning (such as by vision-based systems, sound-based systems, or the like), or the like). Once classified, another neural network 1086 (optionally of another type) may optimize the vehicle operating parameter based on the classified state, such as by putting the vehicle 1010 into a safe-driving mode (e.g., by providing forward-sensing alerts at greater distances and/or lower speeds than in good weather, by providing automated braking earlier and more aggressively than in good weather, and the like). [00553] An aspect provided herein includes a system for transportation 1011, comprising: a hybrid neural network 1047 for optimizing an operating state of a continuously variable powertrain 1013 of a vehicle 1010. In embodiments, a portion 1085 of the hybrid neural network 1047 is to operate to classify a state 1044 of the vehicle 1010 thereby generating a classified state of the vehicle, and another portion 1086 of the hybrid neural network 1047 is to operate to optimize at least one operating parameter 1060 of a transmission 1019 portion of the continuously variable powertrain 1013. [00554] In embodiments, the system for transportation 1011 further comprises: an artificial intelligence system 1036 operative on at least one processor 1088, the artificial intelligence system 1036 to operate the portion 1085 of the hybrid neural network 1047 to operate to classify the state of the vehicle and the artificial intelligence system 1036 to operate the other portion 1086 of the hybrid neural network 1047 to optimize the at least one operating parameter 1087 of the transmission 1019 portion of the continuously variable powertrain 1013 based on the classified state of the vehicle. In embodiments, the vehicle 1010 comprises a system for automating at least one control parameter of the vehicle. In embodiments, the vehicle 1010 is at least a semi-autonomous vehicle. In embodiments, the vehicle 1010 is to be automatically routed. In embodiments, the vehicle 1010 is a self-driving vehicle. In embodiments, the classified state of the vehicle is: a vehicle maintenance state; a vehicle health state; a vehicle operating state; a vehicle energy utilization state; a vehicle charging state; a vehicle satisfaction state; a vehicle component state; a vehicle sub-system state; a vehicle powertrain system state; a vehicle braking system state; a vehicle clutch system state; a vehicle lubrication system state; a vehicle transportation infrastructure system state; or a vehicle rider state. In embodiments, at least a portion of the hybrid neural network 1047 is a convolutional neural network.
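By way of a non-limiting illustration only, the two-portion arrangement recited above (a first portion classifying a state of the vehicle and a second portion optimizing a transmission operating parameter conditioned on the classified state) may be sketched as below. The tiny feed-forward networks, feature names, state labels, and the assumed ratio range are placeholders chosen for brevity; the weights shown are untrained, and an embodiment may use convolutional or other network types trained on outcomes as described herein.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x: np.ndarray, w: np.ndarray, b: np.ndarray) -> np.ndarray:
    """One fully connected layer with tanh activation."""
    return np.tanh(x @ w + b)

# First portion (illustrative): classifies the vehicle state from sensed
# features, e.g., [nearby_vehicle_density, wiper_duty_cycle, road_temp_band].
W1, b1 = rng.normal(size=(3, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 4)), np.zeros(4)
STATE_LABELS = ["normal", "high_traffic", "bad_weather", "high_traffic_bad_weather"]

def classify_state(features: np.ndarray) -> str:
    hidden = dense(features, W1, b1)
    logits = hidden @ W2 + b2
    return STATE_LABELS[int(np.argmax(logits))]

# Second portion (illustrative): maps the classified state plus features to a
# transmission operating parameter (here, a continuously variable ratio).
W3, b3 = rng.normal(size=(3 + 4, 8)), np.zeros(8)
W4, b4 = rng.normal(size=(8, 1)), np.zeros(1)

def optimize_ratio(features: np.ndarray, state: str) -> float:
    one_hot = np.eye(len(STATE_LABELS))[STATE_LABELS.index(state)]
    hidden = dense(np.concatenate([features, one_hot]), W3, b3)
    # Squash to an assumed ratio range of 0.5 .. 2.5.
    return float(0.5 + 2.0 / (1.0 + np.exp(-(hidden @ W4 + b4)[0])))

if __name__ == "__main__":
    sensed = np.array([0.9, 0.7, 0.1])   # dense traffic, heavy rain, near-freezing
    state = classify_state(sensed)
    print(state, optimize_ratio(sensed, state))
```

In practice, the two portions would be trained separately or jointly (for example, the classifier on labeled operating conditions and the optimizer on outcome measures such as fuel efficiency or rider satisfaction), consistent with the training approaches described throughout this disclosure.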
[00555] Fig. 11 illustrates a method 1100 for optimizing operation of a continuously variable vehicle powertrain of a vehicle in accordance with embodiments of the systems and methods disclosed herein. At 1102, the method includes executing a first network of a hybrid neural network on at least one processor, the first network classifying a plurality of operational states of the vehicle. In embodiments, at least a portion of the operational states is based on a state of the continuously variable powertrain of the vehicle. At 1104, the method includes executing a second network of the hybrid neural network on the at least one processor, the second network processing inputs that are descriptive of the vehicle and of at least one detected condition associated with an occupant of the vehicle for at least one of the plurality of classified operational states of the vehicle. In embodiments, processing the inputs by the second network causes optimization of at least one operating parameter of the continuously variable powertrain of the vehicle for a plurality of the operational states of the vehicle. [00556] Referring to Fig. 10 and Fig. 11 together, in embodiments, the vehicle comprises an artificial intelligence system 1036, the method further comprising automating at least one control parameter of the vehicle by the artificial intelligence system 1036. In embodiments, the vehicle 1010 is at least a semi-autonomous vehicle. In embodiments, the vehicle 1010 is to be automatically routed. In embodiments, the vehicle 1010 is a self-driving vehicle. In embodiments, the method further comprises optimizing, by the artificial intelligence system 1036, an operating state of the continuously variable powertrain 1013 of the vehicle based on the optimized at least one operating parameter 1060 of the continuously variable powertrain 1013 by adjusting at least one other operating parameter 1087 of a transmission 1019 portion of the continuously variable powertrain 1013. [00557] In embodiments, the method further comprises optimizing, by the artificial intelligence system 1036, the operating state of the continuously variable powertrain 1013 by processing social data from a plurality of social data sources. In embodiments, the method further comprises optimizing, by the artificial intelligence system 1036, the operating state of the continuously variable powertrain 1013 by processing data sourced from a stream of data from unstructured data sources. In embodiments, the method further comprises optimizing, by the artificial intelligence system 1036, the operating state of the continuously variable powertrain 1013 by processing data sourced from wearable devices. In embodiments, the method further comprises optimizing, by the artificial intelligence system 1036, the operating state of the continuously variable powertrain 1013 by processing data sourced from in-vehicle sensors. In embodiments, the method further comprises optimizing, by the artificial intelligence system 1036, the operating state of the continuously variable powertrain 1013 by processing data sourced from a rider helmet. [00558] In embodiments, the method further comprises optimizing, by the
SFT-106-A-PCT artificial intelligence system 1036, the operating state of the continuously variable powertrain 1013 by processing data sourced from a rider voice system. In embodiments, the method further comprises operating, by the artificial intelligence system 1036, a third network of the hybrid neural network 1047 to predict a state of the vehicle based at least in part on at least one of the classified plurality of operational states of the vehicle and at least one operating parameter of the transmission 1019. In embodiments, the first network of the hybrid neural network 1047 comprises a structure- adaptive network to adapt a structure of the first network responsive to a result of operating the first network of the hybrid neural network 1047. In embodiments, the first network of the hybrid neural network 1047 is to process a plurality of social data from social data sources to classify the plurality of operational states of the vehicle. [00559] In embodiments, at least a portion of the hybrid neural network 1047 is a convolutional neural network. In embodiments, at least one of the classified plurality of operational states of the vehicle is: a vehicle maintenance state; or a vehicle health state. In embodiments, at least one of the classified states of the vehicle is: a vehicle operating state; a vehicle energy utilization state; a vehicle charging state; a vehicle satisfaction state; a vehicle component state; a vehicle sub-system state; a vehicle powertrain system state; a vehicle braking system state; a vehicle clutch system state; a vehicle lubrication system state; or a vehicle transportation infrastructure system state. In embodiments, the at least one of classified states of the vehicle is a vehicle driver state. In embodiments, the at least one of classified states of the vehicle is a vehicle rider state. [00560] Referring to Fig. 12, in embodiments, provided herein are transportation systems 1211 having a cognitive system for routing at least one vehicle 1210 within a set of vehicles 1294 based on a routing parameter determined by facilitating negotiation among a designated set of vehicles. In embodiments, negotiation accepts inputs relating to the value attributed by at least one rider to at least one parameter 1230 of a route 1295. A user 1290 may express value by a user interface that rates one or more parameters (e.g., any of the parameters noted throughout), by behavior (e.g., undertaking behavior that reflects or indicates value ascribed to arriving on time, following a given route 1295, or the like), or by providing or offering value (e.g., offering currency, tokens, points, cryptocurrency, rewards, or the like). For example, a user 1290 may negotiate for a preferred route by offering tokens to the system that are awarded if the user 1290 arrives at a designated time, while others may offer to accept tokens in exchange for taking alternative routes (and thereby reducing congestion). Thus, an artificial intelligence system may optimize a combination of offers to provide rewards or to undertake behavior in response to rewards, such that the reward system optimizes a set of outcomes. Negotiation may include explicit negotiation, such as where a driver offers to reward drivers ahead of the driver on the road in exchange for their leaving the route temporarily as the driver passes.
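By way of a non-limiting illustration only, the token-based negotiation described above may be sketched as riders attaching token offers to route preferences while the routing system honors the highest offers up to a congestion limit and assigns the remaining riders an alternative route (for which they may instead receive tokens). The bid fields, capacity model, and greedy acceptance rule below are assumptions introduced for clarity; an artificial intelligence system may instead optimize the combination of offers, rewards, and routes as described above.

```python
from dataclasses import dataclass

@dataclass
class RouteBid:
    rider_id: str
    preferred_route: str     # e.g., "expressway" or "scenic_detour"
    tokens_offered: float    # value the rider attaches to the preference

def allocate_routes(bids: list[RouteBid],
                    capacity: dict[str, int]) -> dict[str, str]:
    """Greedy acceptance: the highest token offers are honored first until a
    route's congestion capacity is exhausted; remaining riders receive a
    fallback route (and, in a fuller system, could be compensated)."""
    remaining = dict(capacity)
    assignment: dict[str, str] = {}
    for bid in sorted(bids, key=lambda b: b.tokens_offered, reverse=True):
        if remaining.get(bid.preferred_route, 0) > 0:
            remaining[bid.preferred_route] -= 1
            assignment[bid.rider_id] = bid.preferred_route
        else:
            assignment[bid.rider_id] = "alternate_route"
    return assignment

if __name__ == "__main__":
    bids = [
        RouteBid("rider_a", "expressway", 12.0),
        RouteBid("rider_b", "expressway", 5.0),
        RouteBid("rider_c", "expressway", 8.0),
    ]
    # With capacity for two vehicles, rider_b is re-routed.
    print(allocate_routes(bids, {"expressway": 2}))
```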
SFT-106-A-PCT [00561] An aspect provided herein includes a system for transportation 1211, comprising: a cognitive system for routing at least one vehicle 1210 within a set of vehicles 1294 based on a routing parameter determined by facilitating a negotiation among a designated set of vehicles, wherein the negotiation accepts inputs relating to a value attributed by at least one user 1290 to at least one parameter of a route 1295. [00562] Fig.13 illustrates a method 1300 of negotiation-based vehicle routing in accordance with embodiments of the systems and methods disclosed herein. At 1302, the method includes facilitating a negotiation of a route-adjustment value for a plurality of parameters used by a vehicle routing system to route at least one vehicle in a set of vehicles. At 1304, the method includes determining a parameter in the plurality of parameters for optimizing at least one outcome based on the negotiation. [00563] Referring to Fig.12 and Fig.13, in embodiments, a user 1290 is an administrator for a set of roadways to be used by the at least one vehicle 1210 in the set of vehicles 1294. In embodiments, a user 1290 is an administrator for a fleet of vehicles including the set of vehicles 1294. In embodiments, the method further comprises offering a set of offered user-indicated values for the plurality of parameters 1230 to users 1290 with respect to the set of vehicles 1294. In embodiments, the route-adjustment value 1224 is based at least in part on the set of offered user-indicated values 1297. In embodiments, the route-adjustment value 1224 is further based on at least one user response to the offering. In embodiments, the route-adjustment value 1224 is based at least in part on the set of offered user-indicated values 1297 and at least one response thereto by at least one user of the set of vehicles 1294. In embodiments, the determined parameter facilitates adjusting a route 1295 of at least one of the vehicles 1210 in the set of vehicles 1294. In embodiments, adjusting the route includes prioritizing the determined parameter for use by the vehicle routing system. [00564] In embodiments, the facilitating negotiation includes facilitating negotiation of a price of a service. In embodiments, the facilitating negotiation includes facilitating negotiation of a price of fuel. In embodiments, the facilitating negotiation includes facilitating negotiation of a price of recharging. In embodiments, the facilitating negotiation includes facilitating negotiation of a reward for taking a routing action. [00565] An aspect provided herein includes a transportation system 1211 for negotiation-based vehicle routing comprising: a route adjustment negotiation system 1289 through which users 1290 in a set of users 1291 negotiate a route-adjustment value 1224 for at least one of a plurality of parameters 1230 used by a vehicle routing system 1292 to route at least one vehicle 1210 in a set of vehicles 1294; and a user route optimizing circuit 1293 to optimize a portion of a route 1295 of at least one user 1290 of the set of vehicles 1294 based on the route-adjustment value 1224 for the
SFT-106-A-PCT at least one of the plurality of parameters 1230. In embodiments, the route-adjustment value 1224 is based at least in part on user-indicated values 1297 and at least one negotiation response thereto by at least one user of the set of vehicles 1294. In embodiments, the transportation system 1211 further comprises a vehicle-based route negotiation interface through which user-indicated values 1297 for the plurality of parameters 1230 used by the vehicle routing system are captured. In embodiments, a user 1290 is a rider of the at least one vehicle 1210. In embodiments, a user 1290 is an administrator for a set of roadways to be used by the at least one vehicle 1210 in the set of vehicles 1294. [00566] In embodiments, a user 1290 is an administrator for a fleet of vehicles including the set of vehicles 1294. In embodiments, the at least one of the plurality of parameters 1230 facilitates adjusting a route 1295 of the at least one vehicle 1210. In embodiments, adjusting the route 1295 includes prioritizing a determined parameter for use by the vehicle routing system. In embodiments, at least one of the user-indicated values 1297 is attributed to at least one of the plurality of parameters 1230 through an interface to facilitate expression of rating one or more route parameters. In embodiments, the vehicle-based route negotiation interface facilitates expression of rating one or more route parameters. In embodiments, the user-indicated values 1297 are derived from a behavior of the user 1290. In embodiments, the vehicle-based route negotiation interface facilitates converting user behavior to the user-indicated values 1297. In embodiments, the user behavior reflects value ascribed to the at least one parameter used by the vehicle routing system to influence a route 1295 of at least one vehicle 1210 in the set of vehicles 1294. In embodiments, the user-indicated value indicated by at least one user 1290 correlates to an item of value provided by the user 1290. In embodiments, the item of value is provided by the user 1290 through an offering of the item of value in exchange for a result of routing based on the at least one parameter. In embodiments, the negotiating of the route-adjustment value 1224 includes offering an item of value to the users of the set of vehicles 1294. [00567] Referring to Fig. 14, in embodiments provided herein are transportation systems 1411 having a cognitive system for routing at least one vehicle 1410 within a set of vehicles 1494 based on a routing parameter determined by facilitating coordination among a designated set of vehicles 1498. In embodiments, the coordination is accomplished by taking at least one input from at least one game-based interface 1499 for riders of the vehicles. A game-based interface 1499 may include rewards for undertaking game-like actions (i.e., game activities 14101) that provide an ancillary benefit. For example, a rider in a vehicle 1410 may be rewarded for routing the vehicle 1410 to a point of interest off a highway (such as to collect a coin, to capture an item, or the like), while the rider’s departure clears space for other vehicles that are seeking to achieve other objectives, such as on-time arrival. For example, a game like Pokemon Go™ may be configured to indicate the
SFT-106-A-PCT presence of rare Pokemon™ creatures in locations that attract traffic away from congested locations. Others may provide rewards (e.g., currency, cryptocurrency or the like) that may be pooled to attract users 1490 away from congested roads. [00568] An aspect provided herein includes a system for transportation 1411, comprising: a cognitive system for routing at least one vehicle 1410 within a set of vehicles 1494 based on a set of routing parameters 1430 determined by facilitating coordination among a designated set of vehicles 1498, wherein the coordination is accomplished by taking at least one input from at least one game-based interface 1499 for a user 1490 of a vehicle 1410 in the designated set of vehicles 1498. [00569] In embodiments, the system for transportation further comprises: a vehicle routing system 1492 to route the at least one vehicle 1410 based on the set of routing parameters 1430; and the game-based interface 1499 through which the user 1490 indicates a routing preference 14100 for at least one vehicle 1410 within the set of vehicles 1494 to undertake a game activity 14101 offered in the game-based interface 1499; wherein the game-based interface 1499 is to induce the user 1490 to undertake a set of favorable routing choices based on the set of routing parameters 1430. As used herein, “to route” means to select a route 1495. [00570] In embodiments, the vehicle routing system 1492 accounts for the routing preference 14100 of the user 1490 when routing the at least one vehicle 1410 within the set of vehicles 1494. In embodiments, the game-based interface 1499 is disposed for in-vehicle use as indicated in Fig. 14 by the line extending from the Game-Based Interface into the box for Vehicle 1. In embodiments, the user 1490 is a rider of the at least one vehicle 1410. In embodiments, the user 1490 is an administrator for a set of roadways to be used by the at least one vehicle 1410 in the set of vehicles 1494. In embodiments, the user 1490 is an administrator for a fleet of vehicles including the set of vehicles 1494. In embodiments, the set of routing parameters 1430 includes at least one of traffic congestion, desired arrival times, preferred routes, fuel efficiency, pollution reduction, accident avoidance, avoiding bad weather, avoiding bad road conditions, reduced fuel consumption, reduced carbon footprint, reduced noise in a region, avoiding high-crime regions, collective satisfaction, maximum speed limit, avoidance of toll roads, avoidance of city roads, avoidance of undivided highways, avoidance of left turns, avoidance of driver-operated vehicles. In embodiments, the game activity 14101 offered in the game-based interface 1499 includes contests. In embodiments, the game activity 14101 offered in the game-based interface 1499 includes entertainment games. [00571] In embodiments, the game activity 14101 offered in the game-based interface 1499 includes competitive games. In embodiments, the game activity 14101 offered in the game-based interface 1499 includes strategy games. In embodiments, the game activity 14101 offered in the
SFT-106-A-PCT game-based interface 1499 includes scavenger hunts. In embodiments, the set of favorable routing choices is configured so that the vehicle routing system 1492 achieves a fuel efficiency objective. In embodiments, the set of favorable routing choices is configured so that the vehicle routing system 1492 achieves a reduced traffic objective. In embodiments, the set of favorable routing choices is configured so that the vehicle routing system 1492 achieves a reduced pollution objective. In embodiments, the set of favorable routing choices is configured so that the vehicle routing system 1492 achieves a reduced carbon footprint objective. [00572] In embodiments, the set of favorable routing choices is configured so that the vehicle routing system 1492 achieves a reduced noise in neighborhoods objective. In embodiments, the set of favorable routing choices is configured so that the vehicle routing system 1492 achieves a collective satisfaction objective. In embodiments, the set of favorable routing choices is configured so that the vehicle routing system 1492 achieves an avoiding accident scenes objective. In embodiments, the set of favorable routing choices is configured so that the vehicle routing system 1492 achieves an avoiding high-crime areas objective. In embodiments, the set of favorable routing choices is configured so that the vehicle routing system 1492 achieves a reduced traffic congestion objective. In embodiments, the set of favorable routing choices is configured so that the vehicle routing system 1492 achieves a bad weather avoidance objective. [00573] In embodiments, the set of favorable routing choices is configured so that the vehicle routing system 1492 achieves a maximum travel time objective. In embodiments, the set of favorable routing choices is configured so that the vehicle routing system 1492 achieves a maximum speed limit objective. In embodiments, the set of favorable routing choices is configured so that the vehicle routing system 1492 achieves an avoidance of toll road’s objective. In embodiments, the set of favorable routing choices is configured so that the vehicle routing system 1492 achieves an avoidance of city road’s objective. In embodiments, the set of favorable routing choices is configured so that the vehicle routing system 1492 achieves an avoidance of undivided highway’s objective. In embodiments, the set of favorable routing choices is configured so that the vehicle routing system 1492 achieves an avoidance of left turns objective. In embodiments, the set of favorable routing choices is configured so that the vehicle routing system 1492 achieves an avoidance of driver-operated vehicles objective. [00574] Fig.15 illustrates a method 1500 of game-based coordinated vehicle routing in accordance with embodiments of the systems and methods disclosed herein. At 1502, the method includes presenting, in a game-based interface, a vehicle route preference-affecting game activity. At 1504, the method includes receiving, through the game-based interface, a user response to the presented game activity. At 1506, the method includes adjusting a routing preference for the user responsive to the received response. At 1508, the method includes determining at least one vehicle-routing
SFT-106-A-PCT parameter used to route vehicles to reflect the adjusted routing preference for routing vehicles. At 1509, the method includes routing, with a vehicle routing system, vehicles in a set of vehicles responsive to the at least one determined vehicle routing parameter adjusted to reflect the adjusted routing preference, wherein routing of the vehicles includes adjusting the determined routing parameter for at least a plurality of vehicles in the set of vehicles. [00575] Referring to Fig.14 and Fig.15, in embodiments, the method further comprises indicating, by the game-based interface 1499, a reward value 14102 for accepting the game activity 14101. In embodiments, the game-based interface 1499 further comprises a routing preference negotiation system 1436 for a rider to negotiate the reward value 14102 for accepting the game activity 14101. In embodiments, the reward value 14102 is a result of pooling contributions of value from riders in the set of vehicles. In embodiments, at least one routing parameter 1430 used by the vehicle routing system 1492 to route the vehicles 1410 in the set of vehicles 1494 is associated with the game activity 14101 and a user acceptance of the game activity 14101 adjusts (e.g., by the routing adjustment value 1424) the at least one routing parameter 1430 to reflect the routing preference. In embodiments, the user response to the presented game activity 14101 is derived from a user interaction with the game-based interface 1499. In embodiments, the at least one routing parameter used by the vehicle routing system 1492 to route the vehicles 1410 in the set of vehicles 1494 includes at least one of: traffic congestion, desired arrival times, preferred routes, fuel efficiency, pollution reduction, accident avoidance, avoiding bad weather, avoiding bad road conditions, reduced fuel consumption, reduced carbon footprint, reduced noise in a region, avoiding high- crime regions, collective satisfaction, maximum speed limit, avoidance of toll roads, avoidance of city roads, avoidance of undivided highways, avoidance of left turns, and avoidance of driver- operated vehicles. [00576] In embodiments, the game activity 14101 presented in the game-based interface 1499 includes contests. In embodiments, the game activity 14101 presented in the game-based interface 1499 includes entertainment games. In embodiments, the game activity 14101 presented in the game-based interface 1496 includes competitive games. In embodiments, the game activity 14101 presented in the game-based interface 1499 includes strategy games. In embodiments, the game activity 14101 presented in the game-based interface 1499 includes scavenger hunts. In embodiments, the routing responsive to the at least one determined vehicle routing parameter 14103 achieves a fuel efficiency objective. In embodiments, the routing responsive to the at least one determined vehicle routing parameter 14103 achieves a reduced traffic objective. [00577] In embodiments, the routing responsive to the at least one determined vehicle routing parameter 14103 achieves a reduced pollution objective. In embodiments, the routing responsive to the at least one determined vehicle routing parameter 14103 achieves a reduced carbon footprint
SFT-106-A-PCT objective. In embodiments, the routing responsive to the at least one determined vehicle routing parameter 14103 achieves a reduced noise in neighborhoods objective. In embodiments, the routing responsive to the at least one determined vehicle routing parameter 14103 achieves a collective satisfaction objective. In embodiments, the routing responsive to the at least one determined vehicle routing parameter 14103 achieves an avoiding accident scene’s objective. In embodiments, the routing responsive to the at least one determined vehicle routing parameter 14103 achieves an avoiding high-crime areas objective. In embodiments, the routing responsive to the at least one determined vehicle routing parameter 14103 achieves a reduced traffic congestion objective. [00578] In embodiments, the routing responsive to the at least one determined vehicle routing parameter 14103 achieves a bad weather avoidance objective. In embodiments, the routing responsive to the at least one determined vehicle routing parameter 14103 achieves a maximum travel time objective. In embodiments, the routing responsive to the at least one determined vehicle routing parameter 14103 achieves a maximum speed limit objective. In embodiments, the routing responsive to the at least one determined vehicle routing parameter 14103 achieves an avoidance of toll road’s objective. In embodiments, the routing responsive to the at least one determined vehicle routing parameter 14103 achieves an avoidance of city road’s objective. In embodiments, the routing responsive to the at least one determined vehicle routing parameter 14103 achieves an avoidance of undivided highway’s objective. In embodiments, the routing responsive to the at least one determined vehicle routing parameter 14103 achieves an avoidance of left turns objective. In embodiments, the routing responsive to the at least one determined vehicle routing parameter 14103 achieves an avoidance of driver-operated vehicles objective. [00579] Referring to Fig. 16, in embodiments, provided herein are transportation systems 1611 having a cognitive system for routing at least one vehicle, wherein the routing is determined at least in part by processing at least one input from a rider interface wherein a rider can obtain a reward 16102 by undertaking an action while in the vehicle. In embodiments, the rider interface may display a set of available rewards for undertaking various actions, such that the rider may select (such as by interacting with a touch screen or audio interface), a set of rewards to pursue, such as by allowing a navigation system of the vehicle (or of a ride-share system of which the user 1690 has at least partial control) or a routing system 1692 of a self-driving vehicle to use the actions that result in rewards to govern routing. For example, selection of a reward for attending a site may result in sending a signal to a navigation or routing system 1692 to set an intermediate destination at the site. As another example, indicating a willingness to watch a piece of content may cause a routing system 1692 to select a route that permits adequate time to view or hear the content.
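By way of example and not limitation, the following Python listing is a minimal sketch of how an accepted reward might be translated into a routing effect such as an intermediate destination or a minimum trip duration that permits viewing a piece of content; the Reward and RoutePlan structures and the apply_reward function are hypothetical and are provided only to illustrate the signal flow described above.

    # Minimal sketch (hypothetical names): mapping an accepted reward to a routing effect.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Reward:
        reward_id: str
        site: Optional[tuple] = None           # (lat, lon) of a point of interest, if any
        min_duration_s: Optional[int] = None   # minimum trip time needed to consume content

    @dataclass
    class RoutePlan:
        waypoints: List[tuple] = field(default_factory=list)
        min_duration_s: int = 0

    def apply_reward(plan: RoutePlan, reward: Reward, accepted: bool) -> RoutePlan:
        """Adjust the route plan only if the rider accepted the offered reward."""
        if not accepted:
            return plan
        if reward.site is not None:
            plan.waypoints.append(reward.site)  # intermediate destination for the reward
        if reward.min_duration_s is not None:
            plan.min_duration_s = max(plan.min_duration_s, reward.min_duration_s)
        return plan

    plan = apply_reward(RoutePlan(), Reward("visit_poi", site=(42.35, -71.06)), accepted=True)
    print(plan.waypoints)  # [(42.35, -71.06)]

Only an accepted reward alters the plan, mirroring the description above in which selection of a reward sends a signal to the navigation or routing system rather than the reward's mere availability doing so.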
SFT-106-A-PCT [00580] An aspect provided herein includes a system for transportation 1611, comprising: a cognitive system for routing at least one vehicle 1610, wherein the routing is based, at least in part, on processing at least one input from a rider interface, wherein a reward 16102 is made available to a rider in response to the rider undertaking a predetermined action while in the at least one vehicle 1610. [00581] An aspect provided herein includes a transportation system 1611 for reward-based coordinated vehicle routing comprising: a reward-based interface 16104 to offer a reward 16102 and through which a user 1690 related to a set of vehicles 1694 indicates a routing preference of the user 1690 related to the reward 16102 by responding to the reward 16102 offered in the reward-based interface 16104; a reward offer response processing circuit 16105 to determine at least one user action resulting from the user response to the reward 16102 and to determine a corresponding effect 16106 on at least one routing parameter 1630; and a vehicle routing system 1692 to use the routing preference 16100 of the user 1690 and the corresponding effect on the at least one routing parameter to govern routing of the set of vehicles 1694. [00582] In embodiments, the user 1690 is a rider of at least one vehicle 1610 in the set of vehicles 1694. In embodiments, the user 1690 is an administrator for a set of roadways to be used by at least one vehicle 1610 in the set of vehicles 1694. In embodiments, the user 1690 is an administrator for a fleet of vehicles including the set of vehicles 1694. In embodiments, the reward-based interface 16104 is disposed for in-vehicle use. In embodiments, the at least one routing parameter 1630 includes at least one of: traffic congestion, desired arrival times, preferred routes, fuel efficiency, pollution reduction, accident avoidance, avoiding bad weather, avoiding bad road conditions, reduced fuel consumption, reduced carbon footprint, reduced noise in a region, avoiding high-crime regions, collective satisfaction, maximum speed limit, avoidance of toll roads, avoidance of city roads, avoidance of undivided highways, avoidance of left turns, and avoidance of driver-operated vehicles. In embodiments, the vehicle routing system 1692 is to use the routing preference of the user 1690 and the corresponding effect on the at least one routing parameter to govern routing of the set of vehicles to achieve a fuel efficiency objective. In embodiments, the vehicle routing system 1692 is to use the routing preference of the user 1690 and the corresponding effect on the at least one routing parameter to govern routing of the set of vehicles to achieve a reduced traffic objective. In embodiments, the vehicle routing system 1692 is to use the routing preference of the user 1690 and the corresponding effect on the at least one routing parameter to govern routing of the set of vehicles to achieve a reduced pollution objective. In embodiments, the vehicle routing system 1692 is to use the routing preference of the user 1690 and the corresponding effect on the at least one routing parameter to govern routing of the set of vehicles to achieve a reduced carbon footprint objective.
SFT-106-A-PCT [00583] In embodiments, the vehicle routing system 1692 is to use the routing preference of the user 1690 and the corresponding effect on the at least one routing parameter to govern routing of the set of vehicles to achieve a reduced noise in neighborhoods objective. In embodiments, the vehicle routing system 1692 is to use the routing preference of the user 1690 and the corresponding effect on the at least one routing parameter to govern routing of the set of vehicles to achieve a collective satisfaction objective. In embodiments, the vehicle routing system 1692 is to use the routing preference of the user 1690 and the corresponding effect on the at least one routing parameter to govern routing of the set of vehicles to achieve an avoiding accident scenes objective. In embodiments, the vehicle routing system 1692 is to use the routing preference of the user 1690 and the corresponding effect on the at least one routing parameter to govern routing of the set of vehicles to achieve an avoiding high-crime areas objective. In embodiments, the vehicle routing system 1692 is to use the routing preference of the user 1690 and the corresponding effect on the at least one routing parameter to govern routing of the set of vehicles to achieve a reduced traffic congestion objective. [00584] In embodiments, the vehicle routing system 1692 is to use the routing preference of the user 1690 and the corresponding effect on the at least one routing parameter to govern routing of the set of vehicles to achieve a bad weather avoidance objective. In embodiments, the vehicle routing system 1692 is to use the routing preference of the user 1690 and the corresponding effect on the at least one routing parameter to govern routing of the set of vehicles to achieve a maximum travel time objective. In embodiments, the vehicle routing system 1692 is to use the routing preference of the user 1690 and the corresponding effect on the at least one routing parameter to govern routing of the set of vehicles to achieve a maximum speed limit objective. In embodiments, the vehicle routing system 1692 is to use the routing preference of the user 1690 and the corresponding effect on the at least one routing parameter to govern routing of the set of vehicles to achieve an avoidance of toll roads objective. In embodiments, the vehicle routing system 1692 is to use the routing preference of the user 1690 and the corresponding effect on the at least one routing parameter to govern routing of the set of vehicles to achieve an avoidance of city roads objective. [00585] In embodiments, the vehicle routing system 1692 is to use the routing preference of the user 1690 and the corresponding effect on the at least one routing parameter to govern routing of the set of vehicles to achieve an avoidance of undivided highways objective. In embodiments, the vehicle routing system 1692 is to use the routing preference of the user 1690 and the corresponding effect on the at least one routing parameter to govern routing of the set of vehicles to achieve an avoidance of left turns objective. In embodiments, the vehicle routing system 1692 is to use the routing preference of the user 1690 and the corresponding effect on the at least one routing
SFT-106-A-PCT parameter to govern routing of the set of vehicles to achieve an avoidance of driver-operated vehicles objective. [00586] Fig. 17 illustrates a method 1700 of reward-based coordinated vehicle routing in accordance with embodiments of the systems and methods disclosed herein. At 1702, the method includes receiving through a reward-based interface a response of a user related to a set of vehicles to a reward offered in the reward-based interface. At 1704, the method includes determining a routing preference based on the response of the user. At 1706, the method includes determining at least one user action resulting from the response of the user to the reward. At 1708, the method includes determining a corresponding effect of the at least one user action on at least one routing parameter. At 1709, the method includes governing routing of the set of vehicles responsive to the routing preference and the corresponding effect on the at least one routing parameter. [00587] In embodiments, the user 1690 is a rider of at least one vehicle 1610 in the set of vehicles 1694. In embodiments, the user 1690 is an administrator for a set of roadways to be used by at least one vehicle 1610 in the set of vehicles 1694. In embodiments, the user 1690 is an administrator for a fleet of vehicles including the set of vehicles 1694. [00588] In embodiments, the reward-based interface 16104 is disposed for in-vehicle use. In embodiments, the at least one routing parameter 1630 includes at least one of: traffic congestion, desired arrival times, preferred routes, fuel efficiency, pollution reduction, accident avoidance, avoiding bad weather, avoiding bad road conditions, reduced fuel consumption, reduced carbon footprint, reduced noise in a region, avoiding high-crime regions, collective satisfaction, maximum speed limit, avoidance of toll roads, avoidance of city roads, avoidance of undivided highways, avoidance of left turns, and avoidance of driver-operated vehicles. In embodiments, the user 1690 responds to the reward 16102 offered in the reward-based interface 16104 by accepting the reward 16102 offered in the interface, rejecting the reward 16102 offered in the reward-based interface 16104, or ignoring the reward 16102 offered in the reward-based interface 16104. In embodiments, the user 1690 indicates the routing preference by either accepting or rejecting the reward 16102 offered in the reward-based interface 16104. In embodiments, the user 1690 indicates the routing preference by undertaking an action in at least one vehicle 1610 in the set of vehicles 1694 that facilitates transferring the reward 16102 to the user 1690. [00589] In embodiments, the method further comprises sending, via a reward offer response processing circuit 16105, a signal to the vehicle routing system 1692 to select a vehicle route that permits adequate time for the user 1690 to perform the at least one user action. In embodiments, the method further comprises: sending, via a reward offer response processing circuit 16105, a signal to a vehicle routing system 1692, the signal indicating a destination of a vehicle associated with the at least one user action; and adjusting, by the vehicle routing system 1692, a route of the
SFT-106-A-PCT vehicle 1695 associated with the at least one user action to include the destination. In embodiments, the reward 16102 is associated with achieving a vehicle routing fuel efficiency objective. [00590] In embodiments, the reward 16102 is associated with achieving a vehicle routing reduced traffic objective. In embodiments, the reward 16102 is associated with achieving a vehicle routing reduced pollution objective. In embodiments, the reward 16102 is associated with achieving a vehicle routing reduced carbon footprint objective. In embodiments, the reward 16102 is associated with achieving a vehicle routing reduced noise in neighborhoods objective. In embodiments, reward 16102 is associated with achieving a vehicle routing collective satisfaction objective. In embodiments, the reward 16102 is associated with achieving a vehicle routing avoiding accident scene’s objective. [00591] In embodiments, the reward 16102 is associated with achieving a vehicle routing avoiding high-crime areas objective. In embodiments, the reward 16102 is associated with achieving a vehicle routing reduced traffic congestion objective. In embodiments, the reward 16102 is associated with achieving a vehicle routing bad weather avoidance objective. In embodiments, the reward 16102 is associated with achieving a vehicle routing maximum travel time objective. In embodiments, the reward 16102 is associated with achieving a vehicle routing maximum speed limit objective. In embodiments, the reward 16102 is associated with achieving a vehicle routing avoidance of toll road’s objective. In embodiments, the reward 16102 is associated with achieving a vehicle routing avoidance of city road’s objective. In embodiments, the reward 16102 is associated with achieving a vehicle routing avoidance of undivided highway’s objective. In embodiments, the reward 16102 is associated with achieving a vehicle routing avoidance of left turns objective. In embodiments, the reward 16102 is associated with achieving a vehicle routing avoidance of driver-operated vehicles objective. [00592] Referring to Fig. 18, in embodiments provided herein are transportation systems 1811 having a data processing system 1862 for taking data 18114 from a plurality 1869 of social data sources 18107 and using a neural network 18108 to predict an emerging transportation need 18112 for a group of individuals. Among the various social data sources 18107, such as those described above, a large amount of data is available relating to social groups, such as friend groups, families, workplace colleagues, club members, people having shared interests or affiliations, political groups, and others. The expert system described above can be trained, as described throughout, such as using a training data set of human predictions and/or a model, with feedback of outcomes, to predict the transportation needs of a group. For example, based on a discussion thread of a social group as indicated at least in part on a social network feed, it may become evident that a group meeting or trip will take place, and the system may (such as using location information for respective members, as well as indicators of a set of destinations of the trip), predict where and
SFT-106-A-PCT when each member would need to travel in order to participate. Based on such a prediction, the system could automatically identify and show options for travel, such as available public transportation options, flight options, ride share options, and the like. Such options may include ones by which the group may share transportation, such as indicating a route that results in picking up a set of members of the group for travel together. Social media information may include posts, tweets, comments, chats, photographs, and the like and may be processed as noted above. [00593] An aspect provided herein includes a system 1811 for transportation, comprising: a data processing system 1862 for taking data 18114 from a plurality 1869 of social data sources 18107 and using a neural network 18108 to predict an emerging transportation need 18112 for a group of individuals 18110. [00594] Fig.19 illustrates a method 1900 of predicting a common transportation need for a group in accordance with embodiments of the systems and methods disclosed herein. At 1902, the method includes gathering social media-sourced data about a plurality of individuals, the data being sourced from a plurality of social media sources. At 1904, the method includes processing the data to identify a subset of the plurality of individuals who form a social group based on group affiliation references in the data. At 1906, the method includes detecting keywords in the data indicative of a transportation need. At 1908, the method includes using a neural network trained to predict transportation needs based on the detected keywords to identify the common transportation need for the subset of the plurality of individuals. [00595] Referring to Fig. 18 and Fig. 19, in embodiments, the neural network 18108 is a convolutional neural network 18113. In embodiments, the neural network 18108 is trained based on a model that facilitates matching phrases in social media with transportation activity. In embodiments, the neural network 18108 predicts at least one of a destination and an arrival time for the subset 18110 of the plurality of individuals sharing the common transportation need. In embodiments, the neural network 18108 predicts the common transportation need based on analysis of transportation need-indicative keywords detected in a discussion thread among a portion of individuals in the social group. In embodiments, the method further comprises identifying at least one shared transportation service 18111 that facilitates a portion of the social group meeting the predicted common transportation need 18112. In embodiments, the at least one shared transportation service comprises generating a vehicle route that facilitates picking up the portion of the social group. [00596] Fig.20 illustrates a method 2000 of predicting a group transportation need for a group in accordance with embodiments of the systems and methods disclosed herein. At 2002, the method includes gathering social media-sourced data about a plurality of individuals, the data being sourced from a plurality of social media sources. At 2004, the method includes processing the data
SFT-106-A-PCT to identify a subset of the plurality of individuals who share the group transportation need. At 2006, the method includes detecting keywords in the data indicative of the group transportation need for the subset of the plurality of individuals. At 2008, the method includes predicting the group transportation need using a neural network trained to predict transportation needs based on the detected keywords. At 2009, the method includes directing a vehicle routing system to meet the group transportation need. [00597] Referring to Fig. 18 and Fig. 20, in embodiments, the neural network 18108 is a convolutional neural network 18113. In embodiments, directing the vehicle routing system to meet the group transportation need involves routing a plurality of vehicles to a destination derived from the social media-sourced data 18114. In embodiments, the neural network 18108 is trained based on a model that facilitates matching phrases in the social media-sourced data 18114 with transportation activities. In embodiments, the method further comprises predicting, by the neural network 18108, at least one of a destination and an arrival time for the subset 18110 of the plurality 18109 of individuals sharing the group transportation need. In embodiments, the method further comprises predicting, by the neural network 18108, the group transportation need based on an analysis of transportation need-indicative keywords detected in a discussion thread in the social media-sourced data 18114. In embodiments, the method further comprises identifying at least one shared transportation service 18111 that facilitates meeting the predicted group transportation need for at least a portion of the subset 18110 of the plurality of individuals. In embodiments, the at least one shared transportation service 18111 comprises generating a vehicle route that facilitates picking up the at least the portion of the subset 18110 of the plurality of individuals. [00598] Fig.21 illustrates a method 2100 of predicting a group transportation need in accordance with embodiments of the systems and methods disclosed herein. At 2102, the method includes gathering social media-sourced data from a plurality of social media sources. At 2104, the method includes processing the data to identify an event. At 2106, the method includes detecting keywords in the data indicative of the event to determine a transportation need associated with the event. At 2108, the method includes using a neural network trained to predict transportation needs based at least in part on social media-sourced data to direct a vehicle routing system to meet the transportation need. [00599] Referring to Fig. 18 and Fig. 21, in embodiments, the neural network 18108 is a convolutional neural network 18113. In embodiments, the vehicle routing system is directed to meet the transportation need by routing a plurality of vehicles to a location associated with the event. In embodiments, the vehicle routing system is directed to meet the transportation need by routing a plurality of vehicles to avoid a region proximal to a location associated with the event. In embodiments, the vehicle routing system is directed to meet the transportation need by routing
SFT-106-A-PCT vehicles associated with users whose social media-sourced data 18114 do not indicate the transportation need to avoid a region proximal to a location associated with the event. In embodiments, the method further comprises presenting at least one transportation service for satisfying the transportation need. In embodiments, the neural network 18108 is trained based on a model that facilitates matching phrases in social media-sourced data 18114 with transportation activity. [00600] In embodiments, the neural network 18108 predicts at least one of a destination and an arrival time for individuals attending the event. In embodiments, the neural network 18108 predicts the transportation need based on analysis of transportation need-indicative keywords detected in a discussion thread in the social media-sourced data 18114. In embodiments, the method further comprises identifying at least one shared transportation service that facilitates meeting the predicted transportation need for at least a subset of individuals identified in the social media- sourced data 18114. In embodiments, the at least one shared transportation service comprises generating a vehicle route that facilitates picking up the portion of the subset of individuals identified in the social media-sourced data 18114. [00601] Referring to Fig. 22, in embodiments provided herein are transportation systems 2211 having a data processing system 2211 for taking social media data 22114 from a plurality 2269 of social data sources 22107 and using a hybrid neural network 2247 to optimize an operating state of a transportation system 22111 based on processing the social data sources 22107 with the hybrid neural network 2247. A hybrid neural network 2247 may have, for example, a neural network component that makes a classification or prediction based on processing social media data 22114 (such as predicting a high level of attendance of an event by processing images on many social media feeds that indicate interest in the event by many people, prediction of traffic, classification of interest by an individual in a topic, and many others) and another component that optimizes an operating state of a transportation system, such as an in-vehicle state, a routing state (for an individual vehicle 2210 or a set of vehicles 2294), a user-experience state, or other state described throughout this disclosure (e.g., routing an individual early to a venue like a music festival where there is likely to be very high attendance, playing music content in a vehicle 2210 for bands who will be at the music festival, or the like). [00602] An aspect provided herein includes a system for transportation, comprising: a data processing system 2211 for taking social media data 22114 from a plurality 2269 of social data sources 22107 and using a hybrid neural network 2247 to optimize an operating state of a transportation system based on processing the data 22114 from the plurality 2269 of social data sources 22107 with the hybrid neural network 2247.
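By way of example and not limitation, the following listing is a minimal sketch of one possible hybrid neural network of this kind, written in Python and assuming the PyTorch library is available; the HybridRoutingNet class, its layer sizes, and its input features are hypothetical and do not limit the embodiments described herein. A first sub-network predicts a localized effect (for example, an expected crowd level near a venue) from a vector of social-media-derived features, and a second sub-network maps that prediction, together with a current routing state, to an adjusted routing state.

    # Minimal sketch (hypothetical architecture, assuming PyTorch is available).
    import torch
    import torch.nn as nn

    class HybridRoutingNet(nn.Module):
        def __init__(self, social_dim: int, state_dim: int):
            super().__init__()
            # First network: social-media feature vector -> predicted localized effect
            self.effect_predictor = nn.Sequential(
                nn.Linear(social_dim, 64), nn.ReLU(), nn.Linear(64, 1))
            # Second network: (current routing state, predicted effect) -> adjusted state
            self.state_optimizer = nn.Sequential(
                nn.Linear(state_dim + 1, 64), nn.ReLU(), nn.Linear(64, state_dim))

        def forward(self, social_features: torch.Tensor, routing_state: torch.Tensor):
            effect = self.effect_predictor(social_features)             # shape [batch, 1]
            adjusted = self.state_optimizer(torch.cat([routing_state, effect], dim=1))
            return effect, adjusted

    model = HybridRoutingNet(social_dim=32, state_dim=8)
    effect, new_state = model(torch.randn(4, 32), torch.randn(4, 8))

Such a composition keeps the prediction component and the optimization component separately trainable while allowing the predicted localized effect to condition the routing adjustment, consistent with the hybrid arrangements described throughout this disclosure.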
SFT-106-A-PCT [00603] An aspect provided herein includes a hybrid neural network system 22115 for transportation system optimization, the hybrid neural network system 22115 comprising a hybrid neural network 2247, including: a first neural network 2222 that predicts a localized effect 22116 on a transportation system through analysis of social medial data 22114 sourced from a plurality 2269 of social media data sources 22107; and a second neural network 2220 that optimizes an operating state of the transportation system based on the predicted localized effect 22116. [00604] In embodiments, at least one of the first neural network 2222 and the second neural network 2220 is a convolutional neural network. In embodiments, the second neural network 2220 is to optimize an in-vehicle rider experience state. In embodiments, the first neural network 2222 identifies a set of vehicles 2294 contributing to the localized effect 22116 based on correlation of vehicle location and an area of the localized effect 22116. In embodiments, the second neural network 2220 is to optimize a routing state of the transportation system for vehicles proximal to a location of the localized effect 22116. In embodiments, the hybrid neural network 2247 is trained for at least one of the predicting and optimizing based on keywords in the social media data indicative of an outcome of a transportation system optimization action. In embodiments, the hybrid neural network 2247 is trained for at least one of predicting and optimizing based on social media posts. [00605] In embodiments, the hybrid neural network 2247 is trained for at least one of predicting and optimizing based on social media feeds. In embodiments, the hybrid neural network 2247 is trained for at least one of predicting and optimizing based on ratings derived from the social media data 22114. In embodiments, the hybrid neural network 2247 is trained for at least one of predicting and optimizing based on like or dislike activity detected in the social media data 22114. In embodiments, the hybrid neural network 2247 is trained for at least one of predicting and optimizing based on indications of relationships in the social media data 22114. In embodiments, the hybrid neural network 2247 is trained for at least one of predicting and optimizing based on user behavior detected in the social media data 22114. In embodiments, the hybrid neural network 2247 is trained for at least one of predicting and optimizing based on discussion threads in the social media data 22114. [00606] In embodiments, the hybrid neural network 2247 is trained for at least one of predicting and optimizing based on chats in the social media data 22114. In embodiments, the hybrid neural network 2247 is trained for at least one of predicting and optimizing based on photographs in the social media data 22114. In embodiments, the hybrid neural network 2247 is trained for at least one of predicting and optimizing based on traffic-affecting information in the social media data 22114. In embodiments, the hybrid neural network 2247 is trained for at least one of predicting and optimizing based on an indication of a specific individual at a location in the social media data
SFT-106-A-PCT 22114. In embodiments, the specific individual is a celebrity. In embodiments, the hybrid neural network 2247 is trained for at least one of predicting and optimizing based on a presence of a rare or transient phenomenon at a location in the social media data 22114. [00607] In embodiments, the hybrid neural network 2247 is trained for at least one of predicting and optimizing based on a commerce-related event at a location in the social media data 22114. In embodiments, the hybrid neural network 2247 is trained for at least one of predicting and optimizing based on an entertainment event at a location in the social media data 22114. In embodiments, the social media data analyzed to predict a localized effect on a transportation system includes traffic conditions. In embodiments, the social media data analyzed to predict a localized effect on a transportation system includes weather conditions. In embodiments, the social media data analyzed to predict a localized effect on a transportation system includes entertainment options. [00608] In embodiments, the social media data analyzed to predict a localized effect on a transportation system includes risk-related conditions. In embodiments, the risk-related conditions include crowds gathering for potentially dangerous reasons. In embodiments, the social media data analyzed to predict a localized effect on a transportation system includes commerce-related conditions. In embodiments, the social media data analyzed to predict a localized effect on a transportation system includes goal-related conditions. [00609] In embodiments, the social media data analyzed to predict a localized effect on a transportation system includes estimates of attendance at an event. In embodiments, the social media data analyzed to predict a localized effect on a transportation system includes predictions of attendance at an event. In embodiments, the social media data analyzed to predict a localized effect on a transportation system includes modes of transportation. In embodiments, the modes of transportation include car traffic. In embodiments, the modes of transportation include public transportation options. [00610] In embodiments, the social media data analyzed to predict a localized effect on a transportation system includes hash tags. In embodiments, the social media data analyzed to predict a localized effect on a transportation system includes trending of topics. In embodiments, an outcome of a transportation system optimization action is reducing fuel consumption. In embodiments, an outcome of a transportation system optimization action is reducing traffic congestion. In embodiments, an outcome of a transportation system optimization action is reduced pollution. In embodiments, an outcome of a transportation system optimization action is bad weather avoidance. In embodiments, an operating state of the transportation system being optimized includes an in-vehicle state. In embodiments, an operating state of the transportation system being optimized includes a routing state.
SFT-106-A-PCT [00611] In embodiments, the routing state is for an individual vehicle 2210. In embodiments, the routing state is for a set of vehicles 2294. In embodiments, an operating state of the transportation system being optimized includes a user-experience state. [00612] Fig. 23 illustrates a method 2300 of optimizing an operating state of a transportation system in accordance with embodiments of the systems and methods disclosed herein. At 2302 the method includes gathering social media-sourced data about a plurality of individuals, the data being sourced from a plurality of social media sources. At 2304 the method includes optimizing, using a hybrid neural network, the operating state of the transportation system. At 2306 the method includes predicting, by a first neural network of the hybrid neural network, an effect on the transportation system through an analysis of the social media-sourced data. At 2308 the method includes optimizing, by a second neural network of the hybrid neural network, at least one operating state of the transportation system responsive to the predicted effect thereon. [00613] Referring to Fig.22 and Fig.23, in embodiments, at least one of the first neural network 2222 and the second neural network 2220 is a convolutional neural network. In embodiments, the second neural network 2220 optimizes an in-vehicle rider experience state. In embodiments, the first neural network 2222 identifies a set of vehicles contributing to the effect based on correlation of vehicle location and an effect area. In embodiments, the second neural network 2220 optimizes a routing state of the transportation system for vehicles proximal to a location of the effect. [00614] In embodiments, the hybrid neural network 2247 is trained for at least one of the predicting and optimizing based on keywords in the social media data indicative of an outcome of a transportation system optimization action. In embodiments, the hybrid neural network 2247 is trained for at least one of predicting and optimizing based on social media posts. In embodiments, the hybrid neural network 2247 is trained for at least one of predicting and optimizing based on social media feeds. In embodiments, the hybrid neural network 2247 is trained for at least one of predicting and optimizing based on ratings derived from the social media data 22114. In embodiments, the hybrid neural network 2247 is trained for at least one of predicting and optimizing based on like or dislike activity detected in the social media data 22114. In embodiments, the hybrid neural network 2247 is trained for at least one of predicting and optimizing based on indications of relationships in the social media data 22114. [00615] In embodiments, the hybrid neural network 2247 is trained for at least one of predicting and optimizing based on user behavior detected in the social media data 22114. In embodiments, the hybrid neural network 2247 is trained for at least one of predicting and optimizing based on discussion threads in the social media data 22114. In embodiments, the hybrid neural network 2247 is trained for at least one of predicting and optimizing based on chats in the social media data 22114. In embodiments, the hybrid neural network 2247 is trained for at least one of predicting
SFT-106-A-PCT and optimizing based on photographs in the social media data 22114. In embodiments, the hybrid neural network 2247 is trained for at least one of predicting and optimizing based on traffic-affecting information in the social media data 22114. [00616] In embodiments, the hybrid neural network 2247 is trained for at least one of predicting and optimizing based on an indication of a specific individual at a location in the social media data. In embodiments, the specific individual is a celebrity. In embodiments, the hybrid neural network 2247 is trained for at least one of predicting and optimizing based on a presence of a rare or transient phenomenon at a location in the social media data. In embodiments, the hybrid neural network 2247 is trained for at least one of predicting and optimizing based on a commerce-related event at a location in the social media data. In embodiments, the hybrid neural network 2247 is trained for at least one of predicting and optimizing based on an entertainment event at a location in the social media data. In embodiments, the social media data analyzed to predict an effect on a transportation system includes traffic conditions. [00617] In embodiments, the social media data analyzed to predict an effect on a transportation system includes weather conditions. In embodiments, the social media data analyzed to predict an effect on a transportation system includes entertainment options. In embodiments, the social media data analyzed to predict an effect on a transportation system includes risk-related conditions. In embodiments, the risk-related conditions include crowds gathering for potentially dangerous reasons. In embodiments, the social media data analyzed to predict an effect on a transportation system includes commerce-related conditions. In embodiments, the social media data analyzed to predict an effect on a transportation system includes goal-related conditions. [00618] In embodiments, the social media data analyzed to predict an effect on a transportation system includes estimates of attendance at an event. In embodiments, the social media data analyzed to predict an effect on a transportation system includes predictions of attendance at an event. In embodiments, the social media data analyzed to predict an effect on a transportation system includes modes of transportation. In embodiments, the modes of transportation include car traffic. In embodiments, the modes of transportation include public transportation options. In embodiments, the social media data analyzed to predict an effect on a transportation system includes hash tags. In embodiments, the social media data analyzed to predict an effect on a transportation system includes trending of topics. [00619] In embodiments, an outcome of a transportation system optimization action is reducing fuel consumption. In embodiments, an outcome of a transportation system optimization action is reducing traffic congestion. In embodiments, an outcome of a transportation system optimization action is reduced pollution. In embodiments, an outcome of a transportation system optimization action is bad weather avoidance. In embodiments, the operating state of the transportation system
SFT-106-A-PCT being optimized includes an in-vehicle state. In embodiments, the operating state of the transportation system being optimized includes a routing state. In embodiments, the routing state is for an individual vehicle. In embodiments, the routing state is for a set of vehicles. In embodiments, the operating state of the transportation system being optimized includes a user-experience state. [00620] Fig. 24 illustrates a method 2400 of optimizing an operating state of a transportation system in accordance with embodiments of the systems and methods disclosed herein. At 2402 the method includes using a first neural network of a hybrid neural network to classify social media data sourced from a plurality of social media sources as affecting a transportation system. At 2404 the method includes using a second network of the hybrid neural network to predict at least one operating objective of the transportation system based on the classified social media data. At 2406 the method includes using a third network of the hybrid neural network to optimize the operating state of the transportation system to achieve the at least one operating objective of the transportation system. [00621] Referring to Fig. 22 and Fig. 24, in embodiments, at least one of the neural networks in the hybrid neural network 2247 is a convolutional neural network. [00622] Referring to Fig. 25, in embodiments provided herein are transportation systems 2511 having a data processing system 2562 for taking social media data 25114 from a plurality of social data sources 25107 and using a hybrid neural network 2547 to optimize an operating state 2545 of a vehicle 2510 based on processing the social data sources with the hybrid neural network 2547. In embodiments, the hybrid neural network 2547 can include one neural network category for prediction, another for classification, and another for optimization of one or more operating states, such as based on optimizing one or more desired outcomes (such as providing efficient travel, highly satisfying rider experiences, comfortable rides, on-time arrival, or the like). Social data sources 2569 may be used by distinct neural network categories (such as any of the types described herein) to predict travel times, to classify content such as for profiling interests of a user, to predict objectives for a transportation plan (such as what will provide overall satisfaction for an individual or a group) and the like. Social data sources 2569 may also inform optimization, such as by providing indications of successful outcomes (e.g., a social data source 25107 like a Facebook feed might indicate that a trip was “amazing” or “horrible,” a Yelp review might indicate a restaurant was terrible, or the like). Thus, social data sources 2569, by contributing to outcome tracking, can be used to train a system to optimize transportation plans, such as relating to timing, destinations, trip purposes, what individuals should be invited, what entertainment options should be selected, and many others.
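By way of example and not limitation, the following Python listing is a minimal sketch of how such outcome tracking might feed back into optimization; the keyword lists, the outcome_reward function, and the update_plan_score function are hypothetical, and a deployed system might instead apply a trained sentiment model to the social data sources described herein.

    # Minimal sketch (hypothetical names): social-media outcome feedback as a reward signal.
    POSITIVE = {"amazing", "great", "loved"}
    NEGATIVE = {"horrible", "terrible", "awful"}

    def outcome_reward(post_text: str) -> float:
        """Very rough keyword-based sentiment score for a post-trip social media post."""
        words = set(post_text.lower().split())
        return float(len(words & POSITIVE) - len(words & NEGATIVE))

    def update_plan_score(scores: dict, plan_id: str, post_text: str, lr: float = 0.1) -> dict:
        """Nudge the stored score for a transportation plan toward the observed outcome."""
        reward = outcome_reward(post_text)
        scores[plan_id] = scores.get(plan_id, 0.0) + lr * (reward - scores.get(plan_id, 0.0))
        return scores

    scores = update_plan_score({}, "festival_trip_plan", "that trip was amazing")
    print(scores)  # {'festival_trip_plan': 0.1}

In this sketch a post describing a trip as "amazing" nudges the stored score for the associated transportation plan upward, so that plans with similar timing, destinations, and entertainment selections are favored in later optimizations.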
SFT-106-A-PCT [00623] An aspect provided herein includes a system for transportation 2511, comprising: a data processing system 2562 for taking social media data 25114 from a plurality of social data sources 25107 and using a hybrid neural network 2547 to optimize an operating state 2545 of a vehicle 2510 based on processing the data 25114 from the plurality of social data sources 25107 with the hybrid neural network 2547. [00624] Fig. 26 illustrates a method 2600 of optimizing an operating state of a vehicle in accordance with embodiments of the systems and methods disclosed herein. At 2602 the method includes classifying, using a first neural network 2522 (Fig.25) of a hybrid neural network, social media data 25119 (Fig. 25) sourced from a plurality of social media sources as affecting a transportation system. At 2604 the method includes predicting, using a second neural network 2520 (Fig.25) of the hybrid neural network, one or more effects 25118 (Fig.25) of the classified social media data on the transportation system. At 2606 the method includes optimizing, using a third neural network 25117 (Fig. 25) of the hybrid neural network, a state of at least one vehicle of the transportation system, wherein the optimizing addresses an influence of the predicted one or more effects on the at least one vehicle. [00625] Referring to Fig. 25 and Fig. 26, in embodiments, at least one of the neural networks in the hybrid neural network 2547 is a convolutional neural network. In embodiments, the social media data 25114 includes social media posts. In embodiments, the social media data 25114 includes social media feeds. In embodiments, the social media data 25114 includes like or dislike activity detected in the social media. In embodiments, the social media data 25114 includes indications of relationships. In embodiments, the social media data 25114 includes user behavior. In embodiments, the social media data 25114 includes discussion threads. In embodiments, the social media data 25114 includes chats. In embodiments, the social media data 25114 includes photographs. [00626] In embodiments, the social media data 25114 includes traffic-affecting information. In embodiments, the social media data 25114 includes an indication of a specific individual at a location. In embodiments, the social media data 25114 includes an indication of a celebrity at a location. In embodiments, the social media data 25114 includes presence of a rare or transient phenomena at a location. In embodiments, the social media data 25114 includes a commerce- related event. In embodiments, the social media data 25114 includes an entertainment event at a location. In embodiments, the social media data 25114 includes traffic conditions. In embodiments, the social media data 25114 includes weather conditions. In embodiments, the social media data 25114 includes entertainment options. [00627] In embodiments, the social media data 25114 includes risk-related conditions. In embodiments, the social media data 25114 includes predictions of attendance at an event. In
SFT-106-A-PCT embodiments, the social media data 25114 includes estimates of attendance at an event. In embodiments, the social media data 25114 includes modes of transportation used with an event. In embodiments, the effect 25118 on the transportation system includes reducing fuel consumption. In embodiments, the effect 25118 on the transportation system includes reducing traffic congestion. In embodiments, the effect 25118 on the transportation system includes reduced carbon footprint. In embodiments, the effect 25118 on the transportation system includes reduced pollution. [00628] In embodiments, the optimized state 2544 of the at least one vehicle 2510 is an operating state of the vehicle 2545. In embodiments, the optimized state of the at least one vehicle includes an in-vehicle state. In embodiments, the optimized state of the at least one vehicle includes a rider state. In embodiments, the optimized state of the at least one vehicle includes a routing state. In embodiments, the optimized state of the at least one vehicle includes user experience state. In embodiments, a characterization of an outcome of the optimizing in the social media data 25114 is used as feedback to improve the optimizing. In embodiments, the feedback includes likes and dislikes of the outcome. In embodiments, the feedback includes social medial activity referencing the outcome. [00629] In embodiments, the feedback includes trending of social media activity referencing the outcome. In embodiments, the feedback includes hash tags associated with the outcome. In embodiments, the feedback includes ratings of the outcome. In embodiments, the feedback includes requests for the outcome. [00630] Fig. 26A illustrates a method 26A00 of optimizing an operating state of a vehicle in accordance with embodiments of the systems and methods disclosed herein. At 26A02 the method includes classifying, using a first neural network of a hybrid neural network, social media data sourced from a plurality of social media sources as affecting a transportation system. At 26A04 the method includes predicting, using a second neural network of the hybrid neural network, at least one vehicle-operating objective of the transportation system based on the classified social media data. At 26A06 the method includes optimizing, using a third neural network of the hybrid neural network, a state of a vehicle in the transportation system to achieve the at least one vehicle- operating objective of the transportation system. [00631] Referring to Fig.25 and Fig.26A, in embodiments, at least one of the neural networks in the hybrid neural network 2547 is a convolutional neural network. In embodiments, the vehicle- operating objective comprises achieving a rider state of at least one rider in the vehicle. In embodiments, the social media data 25114 includes social media posts. [00632] In embodiments, the social media data 25114 includes social media feeds. In embodiments, the social media data 25114 includes like and dislike activity detected in the social
SFT-106-A-PCT media. In embodiments, the social media data 25114 includes indications of relationships. In embodiments, the social media data 25114 includes user behavior. In embodiments, the social media data 25114 includes discussion threads. In embodiments, the social media data 25114 includes chats. In embodiments, the social media data 25114 includes photographs. In embodiments, the social media data 25114 includes traffic-affecting information. [00633] In embodiments, the social media data 25114 includes an indication of a specific individual at a location. In embodiments, the social media data 25114 includes an indication of a celebrity at a location. In embodiments, the social media data 25114 includes presence of a rare or transient phenomena at a location. In embodiments, the social media data 25114 includes a commerce-related event. In embodiments, the social media data 25114 includes an entertainment event at a location. In embodiments, the social media data 25114 includes traffic conditions. In embodiments, the social media data 25114 includes weather conditions. In embodiments, the social media data 25114 includes entertainment options. [00634] In embodiments, the social media data 25114 includes risk-related conditions. In embodiments, the social media data 25114 includes predictions of attendance at an event. In embodiments, the social media data 25114 includes estimates of attendance at an event. In embodiments, the social media data 25114 includes modes of transportation used with an event. In embodiments, the effect on the transportation system includes reducing fuel consumption. In embodiments, the effect on the transportation system includes reducing traffic congestion. In embodiments, the effect on the transportation system includes reduced carbon footprint. In embodiments, the effect on the transportation system includes reduced pollution. In embodiments, the optimized state of the vehicle is an operating state of the vehicle. [00635] In embodiments, the optimized state of the vehicle includes an in-vehicle state. In embodiments, the optimized state of the vehicle includes a rider state. In embodiments, the optimized state of the vehicle includes a routing state. In embodiments, the optimized state of the vehicle includes user experience state. In embodiments, a characterization of an outcome of the optimizing in the social media data is used as feedback to improve the optimizing. In embodiments, the feedback includes likes or dislikes of the outcome. In embodiments, the feedback includes social medial activity referencing the outcome. In embodiments, the feedback includes trending of social media activity referencing the outcome. [00636] In embodiments, the feedback includes hash tags associated with the outcome. In embodiments, the feedback includes ratings of the outcome. In embodiments, the feedback includes requests for the outcome. [00637] Referring to Fig. 27, in embodiments provided herein are transportation systems 2711 having a data processing system 2762 for taking social data 27114 from a plurality 2769 of social
data sources 27107 and using a hybrid neural network 2747 to optimize satisfaction 27121 of at least one rider 27120 in a vehicle 2710 based on processing the social data sources with the hybrid neural network 2747. Social data sources 2769 may be used, for example, to predict what entertainment options are most likely to be effective for a rider 27120 by one neural network category, while another neural network category may be used to optimize a routing plan (such as based on social data that indicates likely traffic, points of interest, or the like). Social data 27114 may also be used for outcome tracking and feedback to optimize the system, both as to entertainment options and as to transportation planning, routing, or the like. [00638] An aspect provided herein includes a system for transportation 2711, comprising: a data processing system 2762 for taking social data 27114 from a plurality 2769 of social data sources 27107 and using a hybrid neural network 2747 to optimize satisfaction 27121 of at least one rider 27120 in a vehicle 2710 based on processing the social data 27114 from the plurality 2769 of social data sources 27107 with the hybrid neural network 2747. [00639] Fig. 28 illustrates a method 2800 of optimizing rider satisfaction in accordance with embodiments of the systems and methods disclosed herein. At 2802 the method includes classifying, using a first neural network 2722 (Fig. 27) of a hybrid neural network, social media data 27119 (Fig. 27) sourced from a plurality of social media sources as indicative of an effect on a transportation system. At 2804 the method includes predicting, using a second neural network 2720 (Fig. 27) of the hybrid neural network, at least one aspect 27122 (Fig. 27) of rider satisfaction affected by an effect on the transportation system derived from the social media data classified as indicative of an effect on the transportation system. At 2806 the method includes optimizing, using a third neural network 27117 (Fig. 27) of the hybrid neural network, the at least one aspect of rider satisfaction for at least one rider occupying a vehicle in the transportation system. [00640] Referring to Fig. 27 and Fig. 28, in embodiments, at least one of the neural networks in the hybrid neural network 2747 is a convolutional neural network. In embodiments, the at least one aspect of rider satisfaction 27121 is optimized by predicting an entertainment option for presenting to the rider. In embodiments, the at least one aspect of rider satisfaction 27121 is optimized by optimizing route planning for a vehicle occupied by the rider. In embodiments, the at least one aspect of rider satisfaction 27121 is a rider state and optimizing the at least one aspect of rider satisfaction comprises optimizing the rider state. In embodiments, social media data specific to the rider is analyzed to determine at least one optimizing action likely to optimize the at least one aspect of rider satisfaction 27121. In embodiments, the optimizing action is selected from the group of actions consisting of adjusting a routing plan to include passing points of interest to the user, avoiding traffic congestion predicted from the social media data, and presenting entertainment options.
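As a purely illustrative sketch of the selection step described in the preceding paragraph, and not a definitive implementation, the final optimization stage may be viewed as choosing, among candidate optimizing actions, the one whose predicted improvement in rider satisfaction is largest. The action names and numeric gains below are hypothetical stand-ins for outputs of the second neural network.

    # Minimal sketch: pick the optimizing action whose predicted effect on rider
    # satisfaction is largest (hypothetical action names and scores).
    def choose_optimizing_action(predicted_satisfaction_gain):
        """predicted_satisfaction_gain maps each candidate action to the rider
        satisfaction improvement predicted by the prediction network."""
        return max(predicted_satisfaction_gain, key=predicted_satisfaction_gain.get)

    candidate_gains = {
        "route_past_points_of_interest": 0.12,
        "avoid_predicted_congestion":    0.31,
        "present_entertainment_option":  0.08,
    }
    print(choose_optimizing_action(candidate_gains))   # -> "avoid_predicted_congestion"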
SFT-106-A-PCT [00641] In embodiments, the social media data includes social media posts. In embodiments, the social media data includes social media feeds. In embodiments, the social media data includes like or dislike activity detected in the social media. In embodiments, the social media data includes indications of relationships. In embodiments, the social media data includes user behavior. In embodiments, the social media data includes discussion threads. In embodiments, the social media data includes chats. In embodiments, the social media data includes photographs. [00642] In embodiments, the social media data includes traffic-affecting information. In embodiments, the social media data includes an indication of a specific individual at a location. In embodiments, the social media data includes an indication of a celebrity at a location. In embodiments, the social media data includes presence of a rare or transient phenomena at a location. In embodiments, the social media data includes a commerce-related event. In embodiments, the social media data includes an entertainment event at a location. In embodiments, the social media data includes traffic conditions. In embodiments, the social media data includes weather conditions. In embodiments, the social media data includes entertainment options. In embodiments, the social media data includes risk-related conditions. In embodiments, the social media data includes predictions of attendance at an event. In embodiments, the social media data includes estimates of attendance at an event. In embodiments, the social media data includes modes of transportation used with an event. In embodiments, the effect on the transportation system includes reducing fuel consumption. In embodiments, the effect on the transportation system includes reducing traffic congestion. In embodiments, the effect on the transportation system includes reduced carbon footprint. In embodiments, the effect on the transportation system includes reduced pollution. In embodiments, the optimized at least one aspect of rider satisfaction is an operating state of the vehicle. In embodiments, the optimized at least one aspect of rider satisfaction includes an in-vehicle state. In embodiments, the optimized at least one aspect of rider satisfaction includes a rider state. In embodiments, the optimized at least one aspect of rider satisfaction includes a routing state. In embodiments, the optimized at least one aspect of rider satisfaction includes user experience state. [00643] In embodiments, a characterization of an outcome of the optimizing in the social media data is used as feedback to improve the optimizing. In embodiments, the feedback includes likes or dislikes of the outcome. In embodiments, the feedback includes social medial activity referencing the outcome. In embodiments, the feedback includes trending of social media activity referencing the outcome. In embodiments, the feedback includes hash tags associated with the outcome. In embodiments, the feedback includes ratings of the outcome. In embodiments, the feedback includes requests for the outcome.
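The feedback signals enumerated above (likes, dislikes, ratings, hash tags, trending, and the like) may, in an illustrative and non-limiting sketch, be reduced to a scalar reward per optimizing action so that the optimizer learns which actions produce well-received outcomes. The field names, weights, and running-average update below are hypothetical assumptions, not a prescribed scoring scheme.

    # Minimal sketch: fold social media feedback about an outcome into a reward and
    # keep a running value estimate per optimizing action, so later optimization
    # favors actions whose outcomes were received well.
    action_value = {}    # action -> (count, mean reward)

    def outcome_reward(feedback):
        likes = feedback.get("likes", 0)
        dislikes = feedback.get("dislikes", 0)
        rating = feedback.get("rating", 3.0)                 # e.g., 1-5 stars
        trending = 1.0 if feedback.get("hashtag_trending") else 0.0
        return (0.4 * (likes - dislikes) / max(likes + dislikes, 1)
                + 0.4 * (rating - 3.0) / 2.0
                + 0.2 * trending)

    def record_outcome(action, feedback):
        count, mean = action_value.get(action, (0, 0.0))
        reward = outcome_reward(feedback)
        action_value[action] = (count + 1, mean + (reward - mean) / (count + 1))

    record_outcome("route_past_points_of_interest", {"likes": 14, "dislikes": 2, "rating": 4.5})
    record_outcome("present_entertainment_option", {"likes": 1, "dislikes": 5, "rating": 2.0})
    print(max(action_value, key=lambda a: action_value[a][1]))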
SFT-106-A-PCT [00644] An aspect provided herein includes a rider satisfaction system 27123 for optimizing rider satisfaction 27121, the system comprising: a first neural network 2722 of a hybrid neural network 2747 to classify social media data 27114 sourced from a plurality 2769 of social media sources 27107 as indicative of an effect on a transportation system 2711; a second neural network 2720 of the hybrid neural network 2747 to predict at least one aspect 27122 of rider satisfaction 27121 affected by an effect on the transportation system derived from the social media data classified as indicative of the effect on the transportation system; and a third neural network 27117 of the hybrid neural network 2747 to optimize the at least one aspect of rider satisfaction 27121 for at least one rider 2744 occupying a vehicle 2710 in the transportation system 2711. In embodiments, at least one of the neural networks in the hybrid neural network 2747 is a convolutional neural network. [00645] In embodiments, the at least one aspect of rider satisfaction 27121 is optimized by predicting an entertainment option for presenting to the rider 2744. In embodiments, the at least one aspect of rider satisfaction 27121 is optimized by optimizing route planning for a vehicle 2710 occupied by the rider 2744. In embodiments, the at least one aspect of rider satisfaction 27121 is a rider state 2737 and optimizing the at least one aspect of rider satisfaction 27121 comprises optimizing the rider state 2737. In embodiments, social media data specific to the rider 2744 is analyzed to determine at least one optimizing action likely to optimize the at least one aspect of rider satisfaction 27121. In embodiments, the at least one optimizing action is selected from the group consisting of: adjusting a routing plan to include passing points of interest to the user, avoiding traffic congestion predicted from the social media data, deriving an economic benefit, deriving an altruistic benefit, and presenting entertainment options. [00646] In embodiments, the economic benefit is saved fuel. In embodiments, the altruistic benefit is reduction of environmental impact. In embodiments, the social media data includes social media posts. In embodiments, the social media data includes social media feeds. In embodiments, the social media data includes like or dislike activity detected in the social media. In embodiments, the social media data includes indications of relationships. In embodiments, the social media data includes user behavior. In embodiments, the social media data includes discussion threads. In embodiments, the social media data includes chats. In embodiments, the social media data includes photographs. In embodiments, the social media data includes traffic-affecting information. In embodiments, the social media data includes an indication of a specific individual at a location. [00647] In embodiments, the social media data includes an indication of a celebrity at a location. In embodiments, the social media data includes presence of a rare or transient phenomena at a location. In embodiments, the social media data includes a commerce-related event. In embodiments, the social media data includes an entertainment event at a location. In embodiments, the social media data includes traffic conditions. In embodiments, the social media data includes
SFT-106-A-PCT weather conditions. In embodiments, the social media data includes entertainment options. In embodiments, the social media data includes risk-related conditions. In embodiments, the social media data includes predictions of attendance at an event. In embodiments, the social media data includes estimates of attendance at an event. In embodiments, the social media data includes modes of transportation used with an event. [00648] In embodiments, the effect on the transportation system includes reducing fuel consumption. In embodiments, the effect on the transportation system includes reducing traffic congestion. In embodiments, the effect on the transportation system includes reduced carbon footprint. In embodiments, the effect on the transportation system includes reduced pollution. In embodiments, the optimized at least one aspect of rider satisfaction is an operating state of the vehicle. In embodiments, the optimized at least one aspect of rider satisfaction includes an in- vehicle state. In embodiments, the optimized at least one aspect of rider satisfaction includes a rider state. In embodiments, the optimized at least one aspect of rider satisfaction includes a routing state. In embodiments, the optimized at least one aspect of rider satisfaction includes user experience state. In embodiments, a characterization of an outcome of the optimizing in the social media data is used as feedback to improve the optimizing. In embodiments, the feedback includes likes or dislikes of the outcome. In embodiments, the feedback includes social medial activity referencing the outcome. In embodiments, the feedback includes trending of social media activity referencing the outcome. In embodiments, the feedback includes hash tags associated with the outcome. In embodiments, the feedback includes ratings of the outcome. In embodiments, the feedback includes requests for the outcome. [00649] Referring to Fig. 29, in embodiments provided herein are transportation systems 2911 having a hybrid neural network 2947 wherein one neural network 2922 processes a sensor input 29125 about a rider 2944 of a vehicle 2910 to determine an emotional state 29126 and another neural network optimizes at least one operating parameter 29124 of the vehicle to improve the rider’s emotional state 2966. For example, a neural net 2922 that includes one or more perceptrons 29127 that mimic human senses may be used to mimic or assist with determining the likely emotional state of a rider 29126 based on the extent to which various senses have been stimulated, while another neural network 2920 is used in an expert system that performs random and/or systematized variations of various combinations of operating parameters (such as entertainment settings, seat settings, suspension settings, route types and the like) with genetic programming that promotes favorable combinations and eliminates unfavorable ones, optionally based on input from the output of the perceptron-containing neural network 2922 that predict emotional state. These and many other such combinations are encompassed by the present disclosure. In Fig 29, perceptrons 29127 are depicted as optional.
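The genetic-programming style of search over combinations of operating parameters described above may be illustrated, without limitation, by the following sketch, in which favorable combinations are promoted and unfavorable ones discarded. The parameter names, option values, and the scoring stub standing in for the perceptron-containing network 2922 are hypothetical.

    import random

    # Candidate vehicle operating parameters and their option values (hypothetical).
    PARAMETER_CHOICES = {
        "music":      ["calm", "upbeat", "off"],
        "seat_firm":  ["soft", "medium", "firm"],
        "suspension": ["comfort", "normal", "sport"],
        "route_type": ["scenic", "fastest", "low_traffic"],
    }

    def predicted_emotional_score(settings):
        # Stand-in for the sense-mimicking network that predicts rider emotional
        # state; a real system would score settings from sensed rider responses.
        preferred = {"music": "calm", "seat_firm": "medium",
                     "suspension": "comfort", "route_type": "low_traffic"}
        return sum(1.0 for k, v in settings.items() if v == preferred[k])

    def evolve(generations=20, population=12, keep=4):
        pop = [{k: random.choice(v) for k, v in PARAMETER_CHOICES.items()}
               for _ in range(population)]
        for _ in range(generations):
            pop.sort(key=predicted_emotional_score, reverse=True)
            survivors = pop[:keep]                     # promote favorable combinations
            children = []
            while len(survivors) + len(children) < population:
                child = dict(random.choice(survivors))
                key = random.choice(list(PARAMETER_CHOICES))
                child[key] = random.choice(PARAMETER_CHOICES[key])   # mutate one setting
                children.append(child)
            pop = survivors + children
        return max(pop, key=predicted_emotional_score)

    print(evolve())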
SFT-106-A-PCT [00650] An aspect provided herein includes a system for transportation 2911, comprising: a hybrid neural network 2947 wherein one neural network 2922 processes a sensor input 29125 corresponding to a rider 2944 of a vehicle 2910 to determine an emotional state 2966 of the rider 2944 and another neural network 2920 optimizes at least one operating parameter 29124 of the vehicle to improve the emotional state 2966 of the rider 2944. [00651] An aspect provided herein includes a hybrid neural network 2947 for rider satisfaction, comprising: a first neural network 2922 to detect a detected emotional state 29126 of a rider 2944 occupying a vehicle 2910 through analysis of the sensor input 29125 gathered from sensors 2925 deployed in a vehicle 2910 for gathering physiological conditions of the rider; and a second neural network 2920 to optimize, for achieving a favorable emotional state of the rider, an operational parameter 29124 of the vehicle in response to the detected emotional state 29126 of the rider. [00652] In embodiments, the first neural network 2922 is a recurrent neural network and the second neural network 2920 is a radial basis function neural network. In embodiments, at least one of the neural networks in the hybrid neural network 2947 is a convolutional neural network. In embodiments, the second neural network 2920 is to optimize the operational parameter 29124 based on a correlation between a vehicle operating state 2945 and a rider emotional state 2966 of the rider. In embodiments, the second neural network 2920 optimizes the operational parameter 29124 in real time responsive to the detecting of the detected emotional state 29126 of the rider 2944 by the first neural network 2922. In embodiments, the first neural network 2922 comprises a plurality of connected nodes that form a directed cycle, the first neural network 2922 further facilitating bi-directional flow of data among the connected nodes. In embodiments, the operational parameter 29124 that is optimized affects at least one of: a route of the vehicle, in-vehicle audio contents, a speed of the vehicle, an acceleration of the vehicle, a deceleration of the vehicle, a proximity to objects along the route, and a proximity to other vehicles along the route. [00653] An aspect provided herein includes an artificial intelligence system 2936 for optimizing rider satisfaction, comprising: a hybrid neural network 2947, including: a recurrent neural network (e.g., in Fig. 29, neural network 2922 may be a recurrent neural network) to indicate a change in an emotional state of a rider 2944 in a vehicle 2910 through recognition of patterns of physiological data of the rider captured by at least one sensor 2925 deployed for capturing rider emotional state- indicative data while occupying the vehicle 2910; and a radial basis function neural network (e.g., in Fig. 29, the second neural network 2920 may be a radial basis function neural network) to optimize, for achieving a favorable emotional state of the rider, an operational parameter 29124 of the vehicle in response to the indication of change in the emotional state of the rider. In embodiments, the operational parameter 29124 of the vehicle that is to be optimized is to be determined and adjusted to induce the favorable emotional state of the rider.
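The pairing of a recurrent network with a radial basis function network noted in the preceding paragraphs may be illustrated by the following non-limiting sketch, in which a recurrent network processes a physiological time series to indicate an emotional state and a small radial-basis-function layer maps that indication to an adjustment of one operational parameter. All dimensions, class names, and the sample signal are hypothetical.

    import torch
    import torch.nn as nn

    class EmotionRNN(nn.Module):
        def __init__(self, n_sensors=6, hidden=32, n_states=3):
            super().__init__()
            self.gru = nn.GRU(n_sensors, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_states)      # e.g., calm / neutral / stressed

        def forward(self, physiology_seq):                # (batch, time, n_sensors)
            _, h = self.gru(physiology_seq)
            return self.head(h[-1])                       # emotional-state logits

    class RBFParameterOptimizer(nn.Module):
        def __init__(self, n_states=3, n_centers=8):
            super().__init__()
            self.centers = nn.Parameter(torch.randn(n_centers, n_states))
            self.log_gamma = nn.Parameter(torch.zeros(1))
            self.out = nn.Linear(n_centers, 1)            # e.g., a target cabin audio level

        def forward(self, state_logits):
            d2 = ((state_logits.unsqueeze(1) - self.centers) ** 2).sum(-1)
            phi = torch.exp(-self.log_gamma.exp() * d2)   # radial basis activations
            return self.out(phi)

    signals = torch.randn(1, 50, 6)        # 50 time steps of 6 physiological channels
    adjustment = RBFParameterOptimizer()(EmotionRNN()(signals))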
[00654] An aspect provided herein includes an artificial intelligence system 2936 for optimizing rider satisfaction, comprising: a hybrid neural network 2947, including: a convolutional neural network (in Fig. 29, neural network 1, depicted at reference numeral 2922, may optionally be a convolutional neural network) to indicate a change in an emotional state of a rider in a vehicle through recognition of patterns of visual data of the rider captured by at least one image sensor (in Fig. 29, the sensor 2925 may optionally be an image sensor) deployed for capturing images of the rider while occupying the vehicle; and a second neural network 2920 to optimize, for achieving a favorable emotional state of the rider, an operational parameter 29124 of the vehicle in response to the indication of change in the emotional state of the rider. [00655] In embodiments, the operational parameter 29124 of the vehicle that is to be optimized is to be determined and adjusted to induce the favorable emotional state of the rider. [00656] Referring to Fig. 30, in embodiments provided herein are transportation systems 3011 having an artificial intelligence system 3036 for processing feature vectors of an image of a face of a rider in a vehicle to determine an emotional state and optimizing at least one operating parameter of the vehicle to improve the rider’s emotional state. A face may be classified based on images from in-vehicle cameras, available cellphone or other mobile device cameras, or other sources. An expert system, optionally trained based on a training set of data provided by humans or trained by deep learning, may learn to adjust vehicle parameters (such as any described herein) to provide improved emotional states. For example, if a rider’s face indicates stress, the vehicle may select a less stressful route, play relaxing music, play humorous content, or the like. [00657] An aspect provided herein includes a transportation system 3011, comprising: an artificial intelligence system 3036 for processing feature vectors 30130 of an image 30129 of a face 30128 of a rider 3044 in a vehicle 3010 to determine an emotional state 3066 of the rider and optimizing an operational parameter 30124 of the vehicle to improve the emotional state 3066 of the rider 3044. [00658] In embodiments, the artificial intelligence system 3036 includes: a first neural network 3022 to detect the emotional state 30126 of the rider through recognition of patterns of the feature vectors 30130 of the image 30129 of the face 30128 of the rider 3044 in the vehicle 3010, the feature vectors 30130 indicating at least one of a favorable emotional state of the rider and an unfavorable emotional state of the rider; and a second neural network 3020 to optimize, for achieving the favorable emotional state of the rider, the operational parameter 30124 of the vehicle in response to the detected emotional state 30126 of the rider. [00659] In embodiments, the first neural network 3022 is a recurrent neural network and the second neural network 3020 is a radial basis function neural network. In embodiments, the second neural network 3020 optimizes the operational parameter 30124 based on a correlation between the
SFT-106-A-PCT vehicle operating state 3045 and the emotional state 3066 of the rider. In embodiments, the second neural network 3020 is to determine an optimum value for the operational parameter of the vehicle, and the transportation system 3011 is to adjust the operational parameter 30124 of the vehicle to the optimum value to induce the favorable emotional state of the rider. In embodiments, the first neural network 3022 further learns to classify the patterns in the feature vectors and associate the patterns with a set of emotional states and changes thereto by processing a training data set 30131. In embodiments, the training data set 30131 is sourced from at least one of a stream of data from an unstructured data source, a social media source, a wearable device, an in-vehicle sensor, a rider helmet, a rider headgear, and a rider voice recognition system. [00660] In embodiments, the second neural network 3020 optimizes the operational parameter 30124 in real time responsive to the detecting of the emotional state of the rider by the first neural network 3022. In embodiments, the first neural network 3022 is to detect a pattern of the feature vectors. In embodiments, the pattern is associated with a change in the emotional state of the rider from a first emotional state to a second emotional state. In embodiments, the second neural network 3020 optimizes the operational parameter of the vehicle in response to the detection of the pattern associated with the change in the emotional state. In embodiments, the first neural network 3022 comprises a plurality of interconnected nodes that form a directed cycle, the first neural network 3022 further facilitating bi-directional flow of data among the interconnected nodes. In embodiments, the transportation system 3011 further comprises: a feature vector generation system to process a set of images of the face of the rider, the set of images captured over an interval of time from by a plurality of image capture devices 3027 while the rider 3044 is in the vehicle 3010, wherein the processing of the set of images is to produce the feature vectors 30130 of the image of the face of the rider. In embodiments, the transportation system further comprises: image capture devices 3027 disposed to capture a set of images of the face of the rider in the vehicle from a plurality of perspectives; and an image processing system to produce the feature vectors from the set of images captured from at least one of the plurality of perspectives. [00661] In embodiments, the transportation system 3011 further comprises an interface 30133 between the first neural network and the image processing system 30132 to communicate a time sequence of the feature vectors, wherein the feature vectors are indicative of the emotional state of the rider. In embodiments, the feature vectors indicate at least one of a changing emotional state of the rider, a stable emotional state of the rider, a rate of change of the emotional state of the rider, a direction of change of the emotional state of the rider, a polarity of a change of the emotional state of the rider; the emotional state of the rider is changing to the unfavorable emotional state; and the emotional state of the rider is changing to the favorable emotional state.
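By way of illustration of the feature-vector generation and classification described above, and without limiting the disclosed arrangements, the following sketch shows a small convolutional network that produces a feature vector from an in-cabin face image and classifies it into favorable or unfavorable emotional states. The layer sizes, label set, and the random input frame are hypothetical.

    import torch
    import torch.nn as nn

    class FaceEmotionCNN(nn.Module):
        def __init__(self, n_states=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4),
            )
            self.classifier = nn.Linear(32 * 4 * 4, n_states)

        def forward(self, face_image):                       # (batch, 3, H, W)
            feature_vector = self.features(face_image).flatten(1)
            return feature_vector, self.classifier(feature_vector)

    frame = torch.rand(1, 3, 96, 96)          # one cropped face frame from an in-vehicle camera
    feature_vector, state_logits = FaceEmotionCNN()(frame)
    favorable = bool(state_logits.argmax() == 0)   # index 0 taken here as the favorable state

A time sequence of such feature vectors, rather than a single frame, could be passed to a recurrent network to indicate the direction, rate, and polarity of change in the rider's emotional state, as described in the surrounding paragraphs.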
SFT-106-A-PCT [00662] In embodiments, the operational parameter that is optimized affects at least one of a route of the vehicle, in-vehicle audio content, speed of the vehicle, acceleration of the vehicle, deceleration of the vehicle, proximity to objects along the route, and proximity to other vehicles along the route. In embodiments, the second neural network is to interact with a vehicle control system to adjust the operational parameter. In embodiments, the artificial intelligence system further comprises a neural network that includes one or more perceptrons that mimic human senses that facilitates determining the emotional state of the rider based on an extent to which at least one of the senses of the rider is stimulated. In embodiments, the artificial intelligence system includes: a recurrent neural network to indicate a change in the emotional state of the rider through recognition of patterns of the feature vectors of the image of the face of the rider in the vehicle; and a radial basis function neural network to optimize, for achieving the favorable emotional state of the rider, the operational parameter of the vehicle in response to the indication of the change in the emotional state of the rider. [00663] In embodiments, the radial basis function neural network is to optimize the operational parameter based on a correlation between a vehicle operating state and a rider emotional state. In embodiments, the operational parameter of the vehicle that is optimized is determined and adjusted to induce a favorable rider emotional state. In embodiments, the recurrent neural network further learns to classify the patterns of the feature vectors and associate the patterns of the feature vectors to emotional states and changes thereto from a training data set sourced from at least one of a stream of data from unstructured data sources, social media sources, wearable devices, in-vehicle sensors, a rider helmet, a rider headgear, and a rider voice system. In embodiments, the radial basis function neural network is to optimize the operational parameter in real time responsive to the detecting of the change in the emotional state of the rider by the recurrent neural network. In embodiments, the recurrent neural network detects a pattern of the feature vectors that indicates the emotional state of the rider is changing from a first emotional state to a second emotional state. In embodiments, the radial basis function neural network is to optimize the operational parameter of the vehicle in response to the indicated change in emotional state. [00664] In embodiments, the recurrent neural network comprises a plurality of connected nodes that form a directed cycle, the recurrent neural network further facilitating bi-directional flow of data among the connected nodes. In embodiments, the feature vectors indicate at least one of the emotional state of the rider is changing, the emotional state of the rider is stable, a rate of change of the emotional state of the rider, a direction of change of the emotional state of the rider, and a polarity of a change of the emotional state of the rider; the emotional state of a rider is changing to an unfavorable emotional state; and an emotional state of a rider is changing to a favorable emotional state. In embodiments, the operational parameter that is optimized affects at least one of
SFT-106-A-PCT a route of the vehicle, in-vehicle audio content, speed of the vehicle, acceleration of the vehicle, deceleration of the vehicle, proximity to objects along the route, and proximity to other vehicles along the route. [00665] In embodiments, the radial basis function neural network is to interact with a vehicle control system 30134 to adjust the operational parameter 30124. In embodiments, the artificial intelligence system 3036 further comprises a neural network that includes one or more perceptrons that mimic human senses that facilitates determining the emotional state of a rider based on an extent to which at least one of the senses of the rider is stimulated. In embodiments, the artificial intelligence system 3036 is to maintain the favorable emotional state of the rider via a modular neural network, the modular neural network comprising: a rider emotional state determining neural network to process the feature vectors of the image of the face of the rider in the vehicle to detect patterns. In embodiments, the patterns in the feature vectors indicate at least one of the favorable emotional state and the unfavorable emotional state; an intermediary circuit to convert data from the rider emotional state determining neural network into vehicle operational state data; and a vehicle operational state optimizing neural network to adjust an operational parameter of the vehicle in response to the vehicle operational state data. [00666] In embodiments, the vehicle operational state optimizing neural network is to adjust the operational parameter 30124 of the vehicle for achieving a favorable emotional state of the rider. In embodiments, the vehicle operational state optimizing neural network is to optimize the operational parameter based on a correlation between a vehicle operating state 3045 and a rider emotional state 3066. In embodiments, the operational parameter of the vehicle that is optimized is determined and adjusted to induce a favorable rider emotional state. In embodiments, the rider emotional state determining neural network further learns to classify the patterns of the feature vectors and associate the pattern of the feature vectors to emotional states and changes thereto from a training data set sourced from at least one of a stream of data from unstructured data sources, social media sources, wearable devices, in-vehicle sensors, a rider helmet, a rider headgear, and a rider voice system. [00667] In embodiments, the vehicle operational state optimizing neural network is to optimize the operational parameter 30124 in real time responsive to the detecting of a change in an emotional state 30126 of the rider by the rider emotional state determining neural network. In embodiments, the rider emotional state determining neural network is to detect a pattern of the feature vectors 30130 that indicates the emotional state of the rider is changing from a first emotional state to a second emotional state. In embodiments, the vehicle operational state optimizing neural network is to optimize the operational parameter of the vehicle in response to the indicated change in emotional state. In embodiments, the artificial intelligence system 3036 comprises a plurality of
SFT-106-A-PCT connected nodes that form a directed cycle, the artificial intelligence system further facilitating bi- directional flow of data among the connected nodes. [00668] In embodiments, the feature vectors 30130 indicate at least one of the emotional state of the rider is changing, the emotional state of the rider is stable, a rate of change of the emotional state of the rider, a direction of change of the emotional state of the rider, and a polarity of a change of the emotional state of the rider; the emotional state of a rider is changing to an unfavorable emotional state; and the emotional state of the rider is changing to a favorable emotional state. In embodiments, the operational parameter that is optimized affects at least one of a route of the vehicle, in-vehicle audio content, speed of the vehicle, acceleration of the vehicle, deceleration of the vehicle, proximity to objects along the route, and proximity to other vehicles along the route. In embodiments, the vehicle operational state optimizing neural network interacts with a vehicle control system to adjust the operational parameter. [00669] In embodiments, the artificial intelligence system 3036 further comprises a neural net that includes one or more perceptrons that mimic human senses that facilitates determining an emotional state of a rider based on an extent to which at least one of the senses of the rider is stimulated. It is to be understood that the terms “neural net” and “neural network” are used interchangeably in the present disclosure. In embodiments, the rider emotional state determining neural network comprises one or more perceptrons that mimic human senses that facilitates determining an emotional state of a rider based on an extent to which at least one of the senses of the rider is stimulated. In embodiments, the artificial intelligence system 3036 includes a recurrent neural network to indicate a change in the emotional state of the rider in the vehicle through recognition of patterns of the feature vectors of the image of the face of the rider in the vehicle; the transportation system further comprising: a vehicle control system 30134 to control operation of the vehicle by adjusting a plurality of vehicle operational parameters 30124; and a feedback loop to communicate the indicated change in the emotional state of the rider between the vehicle control system 30134 and the artificial intelligence system 3036. In embodiments, the vehicle control system is to adjust at least one of the plurality of vehicle operational parameters 30124 in response to the indicated change in the emotional state of the rider. In embodiments, the vehicle controls system adjusts the at least one of the plurality of vehicle operational parameters based on a correlation between vehicle operational state and rider emotional state. [00670] In embodiments, the vehicle control system adjusts the at least one of the plurality of vehicle operational parameters 30124 that are indicative of a favorable rider emotional state. In embodiments, the vehicle control system 30134 selects an adjustment of the at least one of the plurality of vehicle operational parameters 30124 that is indicative of producing a favorable rider emotional state. In embodiments, the recurrent neural network further learns to classify the patterns
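The feedback loop between the artificial intelligence system and the vehicle control system described above may, purely as a non-limiting sketch, be expressed as follows: an indicated unfavorable state triggers an adjustment of an operational parameter, and the state indication read after the adjustment determines whether the adjustment is kept or reverted. All class, parameter, and state names below are hypothetical.

    # Minimal control-loop sketch: adjust a parameter in response to an indicated
    # emotional-state change, then use the re-read state as feedback.
    class VehicleControlSystem:
        def __init__(self):
            self.parameters = {"cabin_audio": "news", "suspension": "sport", "route_type": "fastest"}

        def adjust(self, parameter, value):
            previous = self.parameters[parameter]
            self.parameters[parameter] = value
            return previous

    def feedback_loop(control, indicated_state, read_state):
        """indicated_state: label reported by the emotion network;
        read_state: callable re-reading the rider state after an adjustment."""
        if indicated_state != "unfavorable":
            return
        previous = control.adjust("cabin_audio", "calm")   # candidate correlated with favorable states
        if read_state() == "unfavorable":                  # adjustment did not help; revert it
            control.adjust("cabin_audio", previous)

    control = VehicleControlSystem()
    feedback_loop(control, "unfavorable", read_state=lambda: "favorable")
    print(control.parameters["cabin_audio"])    # -> "calm"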
SFT-106-A-PCT of feature vectors and associate them to emotional states and changes thereto from a training data set 30131 sourced from at least one of a stream of data from unstructured data sources, social media sources, wearable devices, in-vehicle sensors, a rider helmet, a rider headgear, and a rider voice system. In embodiments, the vehicle control system 30134 adjusts the at least one of the plurality of vehicle operation parameters 30124 in real time. In embodiments, the recurrent neural network detects a pattern of the feature vectors that indicates the emotional state of the rider is changing from a first emotional state to a second emotional state. In embodiments, the vehicle operation control system adjusts an operational parameter of the vehicle in response to the indicated change in emotional state. In embodiments, the recurrent neural network comprises a plurality of connected nodes that form a directed cycle, the recurrent neural network further facilitating bi- directional flow of data among the connected nodes. [00671] In embodiments, the feature vectors indicating at least one of an emotional state of the rider is changing, an emotional state of the rider is stable, a rate of change of an emotional state of the rider, a direction of change of an emotional state of the rider, and a polarity of a change of an emotional state of the rider; an emotional state of a rider is changing to an unfavorable state; an emotional state of a rider is changing to a favorable state. In embodiments, the at least one of the plurality of vehicle operational parameters responsively adjusted affects a route of the vehicle, in- vehicle audio content, speed of the vehicle, acceleration of the vehicle, deceleration of the vehicle, proximity to objects along the route, proximity to other vehicles along the route. In embodiments, the at least one of the plurality of vehicle operation parameters that is responsively adjusted affects operation of a powertrain of the vehicle and a suspension system of the vehicle. In embodiments, the radial basis function neural network interacts with the recurrent neural network via an intermediary component of the artificial intelligence system 3036 that produces vehicle control data indicative of an emotional state response of the rider to a current operational state of the vehicle. In embodiments, the recognition of patterns of feature vectors comprises processing the feature vectors of the image of the face of the rider captured during at least two of before the adjusting at least one of the plurality of vehicle operational parameters, during the adjusting at least one of the plurality of vehicle operational parameters, and after adjusting at least one of the plurality of vehicle operational parameters. [00672] In embodiments, the adjusting at least one of the plurality of vehicle operational parameters 30124 improves an emotional state of a rider in a vehicle. In embodiments, the adjusting at least one of the plurality of vehicle operational parameters causes an emotional state of the rider to change from an unfavorable emotional state to a favorable emotional state. In embodiments, the change is indicated by the recurrent neural network. In embodiments, the recurrent neural network indicates a change in the emotional state of the rider responsive to a change in an operating
parameter of the vehicle by determining a difference between a first set of feature vectors of an image of the face of a rider captured prior to the adjusting at least one of the plurality of operating parameters and a second set of feature vectors of an image of the face of the rider captured during or after the adjusting at least one of the plurality of operating parameters. [00673] In embodiments, the recurrent neural network detects a pattern of the feature vectors that indicates an emotional state of the rider is changing from a first emotional state to a second emotional state. In embodiments, the vehicle operation control system adjusts an operational parameter of the vehicle in response to the indicated change in emotional state. [00674] Referring to Fig. 31, in embodiments, provided herein are transportation systems having an artificial intelligence system for processing a voice of a rider in a vehicle to determine an emotional state and optimizing at least one operating parameter of the vehicle to improve the rider’s emotional state. A voice-analysis module may take voice input and, using a training set of labeled data where individuals indicate emotional states while speaking and/or where others tag the data to indicate perceived emotional states while individuals are talking, a machine learning system (such as any of the types described herein) may be trained (such as using supervised learning, deep learning, or the like) to classify the emotional state of the individual based on the voice. Machine learning may improve classification by using feedback from a large set of trials, where feedback in each instance indicates whether the system has correctly assessed the emotional state of the individual in the case of an instance of speaking. Once trained to classify the emotional state, an expert system (optionally using a different machine learning system or other artificial intelligence system) may, based on feedback of outcomes of the emotional states of a set of individuals, be trained to optimize various vehicle parameters noted throughout this disclosure to maintain or induce more favorable states. For example, among many other indicators, where a voice of an individual indicates happiness, the expert system may select or recommend upbeat music to maintain that state. Where a voice indicates stress, the system may recommend or provide a control signal to change a planned route to one that is less stressful (e.g., has less stop-and-go traffic, or that has a higher probability of an on-time arrival). In embodiments, the system may be configured to engage in a dialog (such as an on-screen dialog or an audio dialog), such as using an intelligent agent module of the system, that is configured to use a series of questions to help obtain feedback from a user about the user’s emotional state, such as asking the rider about whether the rider is experiencing stress, what the source of the stress may be (e.g., traffic conditions, potential for late arrival, behavior of other drivers, or other sources unrelated to the nature of the ride), what might mitigate the stress (route options, communication options (such as offering to send a note that arrival may be delayed), entertainment options, ride configuration options, and the like), and the like. Driver responses may be fed as inputs to the expert system as indicators of emotional state,
SFT-106-A-PCT as well as to constrain efforts to optimize one or more vehicle parameters, such as by eliminating options for configuration that are not related to a driver’s source of stress from a set of available configurations. [00675] An aspect provided herein includes a system for transportation 3111, comprising: an artificial intelligence system 3136 for processing a voice 31135 of a rider 3144 in a vehicle 3110 to determine an emotional state 3166 of the rider 3144 and optimizing at least one operating parameter 31124 of the vehicle 3110 to improve the emotional state 3166 of the rider 3144. [00676] An aspect provided herein includes an artificial intelligence system 3136 for voice processing to improve rider satisfaction in a transportation system 3111, comprising: a rider voice capture system 30136 deployed to capture voice output 31128 of a rider 3144 occupying a vehicle 3110; a voice-analysis circuit 31132 trained using machine learning that classifies an emotional state 31138 of the rider for the captured voice output of the rider; and an expert system 31139 trained using machine learning that optimizes at least one operating parameter 31124 of the vehicle to change the rider emotional state to an emotional state classified as an improved emotional state. [00677] In embodiments, the rider voice capture system 31136 comprises an intelligent agent 31140 that engages in a dialog with the rider to obtain rider feedback for use by the voice-analysis circuit 31132 for rider emotional state classification. In embodiments, the voice-analysis circuit 31132 uses a first machine learning system and the expert system 31139 uses a second machine learning system. In embodiments, the expert system 31139 is trained to optimize the at least one operating parameter 31124 based on feedback of outcomes of the emotional states when adjusting the at least one operating parameter 31124 for a set of individuals. In embodiments, the emotional state 3166 of the rider is determined by a combination of the captured voice output 31128 of the rider and at least one other parameter. In embodiments, the at least one other parameter is a camera- based emotional state determination of the rider. In embodiments, the at least one other parameter is traffic information. In embodiments, the at least one other parameter is weather information. In embodiments, the at least one other parameter is a vehicle state. In embodiments, the at least one other parameter is at least one pattern of physiological data of the rider. In embodiments, the at least one other parameter is a route of the vehicle. In embodiments, the at least one other parameter is in-vehicle audio content. In embodiments, the at least one other parameter is a speed of the vehicle. In embodiments, the at least one other parameter is acceleration of the vehicle. In embodiments, the at least one other parameter is deceleration of the vehicle. In embodiments, the at least one other parameter is proximity to objects along the route. In embodiments, the at least one other parameter is proximity to other vehicles along the route. [00678] An aspect provided herein includes an artificial intelligence system 3136 for voice processing to improve rider satisfaction, comprising: a first neural network 3122 trained to classify
SFT-106-A-PCT emotional states based on analysis of human voices detects an emotional state of a rider through recognition of aspects of the voice output 31128 of the rider captured while the rider is occupying the vehicle 3110 that correlate to at least one emotional state 3166 of the rider; and a second neural network 3120 that optimizes, for achieving a favorable emotional state of the rider, an operational parameter 31124 of the vehicle in response to the detected emotional state 31126 of the rider 3144. In embodiments, at least one of the neural networks is a convolutional neural network. In embodiments, the first neural network 3122 is trained through use of a training data set that associates emotional state classes with human voice patterns. In embodiments, the first neural network 3122 is trained through the use of a training data set of voice recordings that are tagged with emotional state identifying data. In embodiments, the emotional state of the rider is determined by a combination of the captured voice output of the rider and at least one other parameter. In embodiments, the at least one other parameter is a camera-based emotional state determination of the rider. In embodiments, the at least one other parameter is traffic information. In embodiments, the at least one other parameter is weather information. In embodiments, the at least one other parameter is a vehicle state. [00679] In embodiments, the at least one other parameter is at least one pattern of physiological data of the rider. In embodiments, the at least one other parameter is a route of the vehicle. In embodiments, the at least one other parameter is in-vehicle audio content. In embodiments, the at least one other parameter is a speed of the vehicle. In embodiments, the at least one other parameter is acceleration of the vehicle. In embodiments, the at least one other parameter is deceleration of the vehicle. In embodiments, the at least one other parameter is proximity to objects along the route. In embodiments, the at least one other parameter is proximity to other vehicles along the route. [00680] Referring now to Fig.32, in embodiments provided herein are transportation systems 3211 having an artificial intelligence system 3236 for processing data from an interaction of a rider with an electronic commerce system of a vehicle to determine a rider state and optimizing at least one operating parameter of the vehicle to improve the rider’s state. Another common activity for users of device interfaces is e-commerce, such as shopping, bidding in auctions, selling items and the like. E-commerce systems use search functions, undertake advertising and engage users with various work flows that may eventually result in an order, a purchase, a bid, or the like. As described herein with search, a set of in-vehicle-relevant search results may be provided for e- commerce, as well as in-vehicle relevant advertising. In addition, in-vehicle-relevant interfaces and workflows may be configured based on detection of an in-vehicle rider, which may be quite different than workflows that are provided for e-commerce interfaces that are configured for smart phones or for desktop systems. Among other factors, an in-vehicle system may have access to
SFT-106-A-PCT information that is unavailable to conventional e-commerce systems, including route information (including direction, planned stops, planned duration and the like), rider mood and behavior information (such as from past routes, as well as detected from in-vehicle sensor sets), vehicle configuration and state information (such as make and model), and any of the other vehicle-related parameters described throughout this disclosure. As one example, a rider who is bored (as detected by an in-vehicle sensor set, such as using an expert system that is trained to detect boredom) and is on a long trip (as indicated by a route that is being undertaken by a car) may be far more patient, and likely to engage in deeper, richer content, and longer workflows, than a typical mobile user. As another example, an in-vehicle rider may be far more likely to engage in free trials, surveys, or other behaviors that promote brand engagement. Also, an in-vehicle user may be motivated to use otherwise down time to accomplish specific goals, such as shopping for needed items. Presenting the same interfaces, content, and workflows to in-vehicle users may miss excellent opportunities for deeper engagement that would be highly unlikely in other settings where many more things may compete for a user’s attention. In embodiments, an e-commerce system interface may be provided for in-vehicle users, where at least one of interface displays, content, search results, advertising, and one or more associated workflows (such as for shopping, bidding, searching, purchasing, providing feedback, viewing products, entering ratings or reviews, or the like) is configured based on the detection of the use of an in-vehicle interface. Displays and interactions may be further configured (optionally based on a set of rules or based on machine learning), such as based on detection of display types (e.g., allowing richer or larger images for large, HD displays), network capabilities (e.g., enabling faster loading and lower latency by caching low- resolution images that initially render), audio system capabilities (such as using audio for dialog management and intelligence assistant interactions) and the like for the vehicle. Display elements, content, and workflows may be configured by machine learning, such as by A/B testing and/or using genetic programming techniques, such as configuring alternative interaction types and tracking outcomes. Outcomes used to train automatic configuration of workflows for in-vehicle e- commerce interfaces may include extent of engagement, yield, purchases, rider satisfaction, ratings, and others. In-vehicle users may be profiled and clustered, such as by behavioral profiling, demographic profiling, psychographic profiling, location-based profiling, collaborative filtering, similarity-based clustering, or the like, as with conventional e-commerce, but profiles may be enhanced with route information, vehicle information, vehicle configuration information, vehicle state information, rider information and the like. A set of in-vehicle user profiles, groups and clusters may be maintained separately from conventional user profiles, such that learning on what content to present, and how to present it, is accomplished with increased likelihood that the
SFT-106-A-PCT differences in in-vehicle shopping are accounted for when targeting search results, advertisements, product offers, discounts, and the like. [00681] An aspect provided herein includes a system for transportation 3211, comprising: an artificial intelligence system 3236 for processing data from an interaction of a rider 3244 with an electronic commerce system of a vehicle to determine a rider state and optimizing at least one operating parameter of the vehicle to improve the rider state. [00682] An aspect provided herein includes a rider satisfaction system 32123 for optimizing rider satisfaction 32121, the rider satisfaction system comprising: an electronic commerce interface 32141 deployed for access by a rider in a vehicle 3210; a rider interaction circuit that captures rider interactions with the deployed interface 32141; a rider state determination circuit 32143 that processes the captured rider interactions 32144 to determine a rider state 32145; and an artificial intelligence system 3236 trained to optimize, responsive to a rider state 3237, at least one parameter 32124 affecting operation of the vehicle to improve the rider state 3237. In embodiments, the vehicle 3210 comprises a system for automating at least one control parameter of the vehicle. In embodiments, the vehicle is at least a semi-autonomous vehicle. In embodiments, the vehicle is automatically routed. In embodiments, the vehicle is a self-driving vehicle. In embodiments, the electronic commerce interface is self-adaptive and responsive to at least one of an identity of the rider, a route of the vehicle, a rider mood, rider behavior, vehicle configuration, and vehicle state. [00683] In embodiments, the electronic commerce interface 32141 provides in-vehicle-relevant content 32146 that is based on at least one of an identity of the rider, a route of the vehicle, a rider mood, rider behavior, vehicle configuration, and vehicle state. In embodiments, the electronic commerce interface executes a user interaction workflow 32147 adapted for use by a rider 3244 in a vehicle 3210. In embodiments, the electronic commerce interface provides one or more results of a search query 32148 that are adapted for presentation in a vehicle. In embodiments, the search query results adapted for presentation in a vehicle are presented in the electronic commerce interface along with advertising adapted for presentation in a vehicle. In embodiments, the rider interaction circuit 32142 captures rider interactions 32144 with the interface responsive to content 32146 presented in the interface. [00684] Fig. 33 illustrates a method 3300 for optimizing a parameter of a vehicle in accordance with embodiments of the systems and methods disclosed herein. At 3302 the method includes capturing rider interactions with an in-vehicle electronic commerce system. At 3304 the method includes determining a rider state based on the captured rider interactions and at least one operating parameter of the vehicle. At 3306 the method includes processing the rider state with a rider satisfaction model that is adapted to suggest at least one operating parameter of a vehicle that
SFT-106-A-PCT influences the rider state. At 3308 the method includes optimizing the suggested at least one operating parameter for at least one of maintaining and improving a rider state. [00685] Referring to Fig. 32 and Fig. 33, an aspect provided herein includes an artificial intelligence system 3236 for improving rider satisfaction, comprising: a first neural network 3222 trained to classify rider states based on analysis of rider interactions 32144 with an in-vehicle electronic commerce system to detect a rider state 32149 through recognition of aspects of the rider interactions 32144 captured while the rider is occupying the vehicle that correlate to at least one state 3237 of the rider; and a second neural network 3220 that optimizes, for achieving a favorable state of the rider, an operational parameter of the vehicle in response to the detected state of the rider. [00686] Referring to Fig. 34, in embodiments provided herein are transportation systems 3411 having an artificial intelligence system 3436 for processing data from at least one Internet of Things (IoT) device 34150 in the environment 34151 of a vehicle 3410 to determine a state 34152 of the vehicle and optimizing at least one operating parameter 34124 of the vehicle to improve a rider’s state 3437 based on the determined state 34152 of the vehicle. [00687] An aspect provided herein includes a system for transportation 3411, comprising: an artificial intelligence system 3436 for processing data from at least one Internet of Things device 34150 in an environment 34151 of a vehicle 3410 to determine a determined state 34152 of the vehicle and optimizing at least one operating parameter 34124 of the vehicle to improve a state 3437 of the rider based on the determined state 34152 of the vehicle 3410. [00688] Fig. 35 illustrates a method 3500 for improving a state of a rider through optimization of operation of a vehicle in accordance with embodiments of the systems and methods disclosed herein. At 3502 the method includes capturing vehicle operation-related data with at least one Internet-of-things device. At 3504 the method includes analyzing the captured data with a first neural network that determines a state of the vehicle based at least in part on a portion of the captured vehicle operation-related data. At 3506 the method includes receiving data descriptive of a state of a rider occupying the operating vehicle. At 3508 the method includes using a neural network to determine at least one vehicle operating parameter that affects a state of a rider occupying the operating vehicle. At 3509 the method includes using an artificial intelligence-based system to optimize the at least one vehicle operating parameter so that a result of the optimizing comprises an improvement in the state of the rider. [00689] Referring to Fig. 34 and Fig. 35, in embodiments, the vehicle 3410 comprises a system for automating at least one control parameter 34153 of the vehicle 3410. In embodiments, the vehicle 3410 is at least a semi-autonomous vehicle. In embodiments, the vehicle 3410 is automatically routed. In embodiments, the vehicle 3410 is a self-driving vehicle. In embodiments,
SFT-106-A-PCT the at least one Internet-of-things device 34150 is disposed in an operating environment 34154 of the vehicle. In embodiments, the at least one Internet-of-things device 34150 that captures the data about the vehicle 3410 is disposed external to the vehicle 3410. In embodiments, the at least one Internet-of-things device is a dashboard camera. In embodiments, the at least one Internet-of-things device is a mirror camera. In embodiments, the at least one Internet-of-things device is a motion sensor. In embodiments, the at least one Internet-of-things device is a seat-based sensor system. In embodiments, the at least one Internet-of-things device is an IoT enabled lighting system. In embodiments, the lighting system is a vehicle interior lighting system. In embodiments, the lighting system is a headlight lighting system. In embodiments, the at least one Internet-of-things device is a traffic light camera or sensor. In embodiments, the at least one Internet-of-things device is a roadway camera. In embodiments, the roadway camera is disposed on at least one of a telephone pole and a light pole. In embodiments, the at least one Internet-of-things device is an in-road sensor. In embodiments, the at least one Internet-of-things device is an in-vehicle thermostat. In embodiments, the at least one Internet-of-things device is a toll booth. In embodiments, the at least one Internet-of-things device is a street sign. In embodiments, the at least one Internet-of-things device is a traffic control light. In embodiments, the at least one Internet-of-things device is a vehicle mounted sensor. In embodiments, the at least one Internet-of-things device is a refueling system. In embodiments, the at least one Internet-of-things device is a recharging system. In embodiments, the at least one Internet-of-things device is a wireless charging station. [00690] An aspect provided herein includes a rider state modification system 34155 for improving a state 3437 of a rider 3444 in a vehicle 3410, the system comprising: a first neural network 3422 that operates to classify a state of the vehicle through analysis of information about the vehicle captured by an Internet-of-things device 34150 during operation of the vehicle 3410; and a second neural network 3420 that operates to optimize at least one operating parameter 34124 of the vehicle based on the classified state 34152 of the vehicle, information about a state of a rider occupying the vehicle, and information that correlates vehicle operation with an effect on rider state. [00691] In embodiments, the vehicle comprises a system for automating at least one control parameter 34153 of the vehicle 3410. In embodiments, the vehicle 3410 is at least a semi-autonomous vehicle. In embodiments, the vehicle 3410 is automatically routed. In embodiments, the vehicle 3410 is a self-driving vehicle. In embodiments, the at least one Internet-of-things device 34150 is disposed in an operating environment of the vehicle 3410. In embodiments, the at least one Internet-of-things device 34150 that captures the data about the vehicle 3410 is disposed external to the vehicle 3410. In embodiments, the at least one Internet-of-things device is a dashboard camera. In embodiments, the at least one Internet-of-things device is a mirror camera. In embodiments, the at least one Internet-of-things device is a motion sensor. In embodiments, the
SFT-106-A-PCT at least one Internet-of-things device is a seat-based sensor system. In embodiments, the at least one Internet-of-things device is an IoT enabled lighting system. [00692] In embodiments, the lighting system is a vehicle interior lighting system. In embodiments, the lighting system is a headlight lighting system. In embodiments, the at least one Internet-of-things device is a traffic light camera or sensor. In embodiments, the at least one Internet-of-things device is a roadway camera. In embodiments, the roadway camera is disposed on at least one of a telephone pole and a light pole. In embodiments, the at least one Internet-of-things device is an in-road sensor. In embodiments, the at least one Internet-of-things device is an in-vehicle thermostat. In embodiments, the at least one Internet-of-things device is a toll booth. In embodiments, the at least one Internet-of-things device is a street sign. In embodiments, the at least one Internet-of-things device is a traffic control light. In embodiments, the at least one Internet-of-things device is a vehicle mounted sensor. In embodiments, the at least one Internet-of-things device is a refueling system. In embodiments, the at least one Internet-of-things device is a recharging system. In embodiments, the at least one Internet-of-things device is a wireless charging station. [00693] An aspect provided herein includes an artificial intelligence system 3436 comprising: a first neural network 3422 trained to determine an operating state 34152 of a vehicle 3410 from data about the vehicle captured in an operating environment 34154 of the vehicle, wherein the first neural network 3422 operates to identify an operating state 34152 of the vehicle by processing information about the vehicle 3410 that is captured by at least one Internet-of-things device 34150 while the vehicle is operating; a data structure 34156 that facilitates determining operating parameters that influence an operating state of a vehicle; a second neural network 3420 that operates to optimize at least one of the determined operating parameters 34124 of the vehicle based on the identified operating state 34152 by processing information about a state of a rider 3444 occupying the vehicle 3410, and information that correlates vehicle operation with an effect on rider state. [00694] In embodiments, the improvement in the state of the rider is reflected in updated data that is descriptive of a state of the rider captured responsive to the vehicle operation based on the optimized at least one vehicle operating parameter. In embodiments, the improvement in the state of the rider is reflected in data captured by at least one Internet-of-things device 34150 disposed to capture information about the rider 3444 while occupying the vehicle 3410 responsive to the optimizing. In embodiments, the vehicle 3410 comprises a system for automating at least one control parameter 34153 of the vehicle. In embodiments, the vehicle 3410 is at least a semi-autonomous vehicle. In embodiments, the vehicle 3410 is automatically routed. In embodiments, the vehicle 3410 is a self-driving vehicle. In embodiments, the at least one Internet-of-things device
SFT-106-A-PCT 34150 is disposed in an operating environment 34154 of the vehicle. In embodiments, the at least one Internet-of-things device 34150 that captures the data about the vehicle is disposed external to the vehicle. In embodiments, the at least one Internet-of-things device 34150 is a dashboard camera. In embodiments, the at least one Internet-of-things device 34150 is a mirror camera. In embodiments, the at least one Internet-of-things device 34150 is a motion sensor. In embodiments, the at least one Internet-of-things device 34150 is a seat-based sensor system. In embodiments, the at least one Internet-of-things device 34150 is an IoT enabled lighting system. [00695] In embodiments, the lighting system is a vehicle interior lighting system. In embodiments, the lighting system is a headlight lighting system. In embodiments, the at least one Internet-of-things device 34150 is a traffic light camera or sensor. In embodiments, the at least one Internet-of-things device 34150 is a roadway camera. In embodiments, the roadway camera is disposed on at least one of a telephone pole and a light pole. In embodiments, the at least one Internet-of-things device 34150 is an in-road sensor. In embodiments, the at least one Internet-of-things device 34150 is an in-vehicle thermostat. In embodiments, the at least one Internet-of-things device 34150 is a toll booth. In embodiments, the at least one Internet-of-things device 34150 is a street sign. In embodiments, the at least one Internet-of-things device 34150 is a traffic control light. In embodiments, the at least one Internet-of-things device 34150 is a vehicle mounted sensor. In embodiments, the at least one Internet-of-things device 34150 is a refueling system. In embodiments, the at least one Internet-of-things device 34150 is a recharging system. In embodiments, the at least one Internet-of-things device 34150 is a wireless charging station. [00696] Referring to Fig. 36, in embodiments provided herein are transportation systems 3611 having an artificial intelligence system 3636 for processing a sensory input from a wearable device 36157 in a vehicle 3610 to determine an emotional state 36126 and optimizing at least one operating parameter 36124 of the vehicle 3610 to improve the rider’s emotional state 3637. A wearable device 36157, such as any described throughout this disclosure, may be used to detect any of the emotional states described herein (favorable or unfavorable) and used both as an input to a real-time control system (such as a model-based, rule-based, or artificial intelligence system of any of the types described herein), such as to indicate an objective to improve an unfavorable state or maintain a favorable state, as well as a feedback mechanism to train an artificial intelligence system 3636 to configure sets of operating parameters 36124 to promote or maintain favorable states. [00697] An aspect provided herein includes a system for transportation 3611, comprising: an artificial intelligence system 3636 for processing a sensory input from a wearable device 36157 in a vehicle 3610 to determine an emotional state 36126 of a rider 3644 in the vehicle 3610 and optimizing an operating parameter 36124 of the vehicle to improve the emotional state 3637 of the
SFT-106-A-PCT rider 3644. In embodiments, the vehicle is a self-driving vehicle. In embodiments, the artificial intelligence system 3636 is to detect the emotional state 36126 of the rider riding in the self-driving vehicle by recognition of patterns of emotional state indicative data from a set of wearable sensors 36157 worn by the rider 3644. In embodiments, the patterns are indicative of at least one of a favorable emotional state of the rider and an unfavorable emotional state of the rider. In embodiments, the artificial intelligence system 3636 is to optimize, for achieving at least one of maintaining a detected favorable emotional state of the rider and achieving a favorable emotional state of a rider subsequent to a detection of an unfavorable emotional state, the operating parameter 36124 of the vehicle in response to the detected emotional state of the rider. In embodiments, the artificial intelligence system 3636 comprises an expert system that detects an emotional state of the rider by processing rider emotional state indicative data received from the set of wearable sensors 36157 worn by the rider. In embodiments, the expert system processes the rider emotional state indicative data using at least one of a training set of emotional state indicators of a set of riders and trainer-generated rider emotional state indicators. In embodiments, the artificial intelligence system comprises a recurrent neural network 3622 that detects the emotional state of the rider. [00698] In embodiments, the recurrent neural network comprises a plurality of connected nodes that form a directed cycle, the recurrent neural network further facilitating bi-directional flow of data among the connected nodes. In embodiments, the artificial intelligence system 3636 comprises a radial basis function neural network that optimizes the operational parameter 36124. In embodiments, the optimizing an operational parameter 36124 is based on a correlation between a vehicle operating state 3645 and a rider emotional state 3637. In embodiments, the correlation is determined using at least one of a training set of emotional state indicators of a set of riders and human trainer-generated rider emotional state indicators. In embodiments, the operational parameter of the vehicle that is optimized is determined and adjusted to induce a favorable rider emotional state. [00699] In embodiments, the artificial intelligence system 3636 further learns to classify the patterns of the emotional state indicative data and associate the patterns to emotional states and changes thereto from a training data set 36131 sourced from at least one of a stream of data from unstructured data sources, social media sources, wearable devices, in-vehicle sensors, a rider helmet, a rider headgear, and a rider voice system. In embodiments, the artificial intelligence system 3636 detects a pattern of the rider emotional state indicative data that indicates the emotional state of the rider is changing from a first emotional state to a second emotional state, the optimizing of the operational parameter of the vehicle being response to the indicated change in emotional state. In embodiments, the patterns of rider emotional state indicative data indicates at least one of an emotional state of the rider is changing, an emotional state of the rider is stable, a
SFT-106-A-PCT rate of change of an emotional state of the rider, a direction of change of an emotional state of the rider, and a polarity of a change of an emotional state of the rider; an emotional state of a rider is changing to an unfavorable state; and an emotional state of a rider is changing to a favorable state. [00700] In embodiments, the operational parameter 36124 that is optimized affects at least one of a route of the vehicle, in-vehicle audio content, speed of the vehicle, acceleration of the vehicle, deceleration of the vehicle, proximity to objects along the route, and proximity to other vehicles along the route. In embodiments, the artificial intelligence system 3636 interacts with a vehicle control system to optimize the operational parameter. In embodiments, the artificial intelligence system 3636 further comprises a neural net 3622 that includes one or more perceptrons that mimic human senses that facilitates determining an emotional state of a rider based on an extent to which at least one of the senses of the rider is stimulated. In embodiments, the set of wearable sensors 36157 comprises at least two of a watch, a ring, a wrist band, an arm band, an ankle band, a torso band, a skin patch, a head-worn device, eye glasses, foot wear, a glove, an in-ear device, clothing, headphones, a belt, a finger ring, a thumb ring, a toe ring, and a necklace. In embodiments, the artificial intelligence system 3636 uses deep learning for determining patterns of wearable sensor- generated emotional state indicative data that indicate an emotional state of the rider as at least one of a favorable emotional state and an unfavorable emotional state. In embodiments, the artificial intelligence system 3636 is responsive to a rider indicated emotional state by at least optimizing the operation parameter to at least one of achieve and maintain the rider indicated emotional state. [00701] In embodiments, the artificial intelligence system 3636 adapts a characterization of a favorable emotional state of the rider based on context gathered from a plurality of sources including data indicating a purpose of the rider riding in the self-driving vehicle, a time of day, traffic conditions, weather conditions and optimizes the operating parameter 36124 to at least one of achieve and maintain the adapted favorable emotional state. In embodiments, the artificial intelligence system 3636 optimizes the operational parameter in real time responsive to the detecting of an emotional state of the rider. In embodiments, the vehicle is a self-driving vehicle. In embodiments, the artificial intelligence system comprises: a first neural network 3622 to detect the emotional state of the rider through expert system-based processing of rider emotional state indicative wearable sensor data of a plurality of wearable physiological condition sensors worn by the rider in the vehicle, the emotional state indicative wearable sensor data indicative of at least one of a favorable emotional state of the rider and an unfavorable emotional state of the rider; and a second neural network 3620 to optimize, for at least one of achieving and maintaining a favorable emotional state of the rider, the operating parameter 36124 of the vehicle in response to the detected emotional state of the rider. In embodiments, the first neural network 3622 is a recurrent neural network and the second neural network 3620 is a radial basis function neural network.
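By way of a non-limiting illustration of the two-network arrangement described above, the following Python sketch pairs a recurrent (Elman-style) first network that classifies an emotional state from a window of wearable-sensor feature vectors with a radial basis function second network that proposes a value for a single operating parameter. The three-class label set, the random weights, and the use of a target speed as the optimized parameter are hypothetical placeholders provided solely for purposes of illustration; a deployed embodiment would train both networks as described elsewhere in this disclosure.

# Illustrative sketch only: a recurrent "first" network that classifies rider
# emotional state from a window of wearable-sensor feature vectors, feeding a
# radial basis function "second" network that proposes an adjustment to a
# single vehicle operating parameter (here, a hypothetical target speed).
# Weights are random placeholders rather than trained values.
import numpy as np

STATES = ["favorable", "neutral", "unfavorable"]  # assumed label set

class ElmanEmotionClassifier:
    """Minimal recurrent (Elman-style) classifier over sensor time steps."""
    def __init__(self, n_features, n_hidden, n_states, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = rng.normal(0, 0.1, (n_hidden, n_features))
        self.Wh = rng.normal(0, 0.1, (n_hidden, n_hidden))
        self.Wo = rng.normal(0, 0.1, (n_states, n_hidden))

    def predict_proba(self, window):
        h = np.zeros(self.Wh.shape[0])
        for x_t in window:                      # one step per sensor sample
            h = np.tanh(self.Wx @ x_t + self.Wh @ h)
        logits = self.Wo @ h
        e = np.exp(logits - logits.max())
        return e / e.sum()                      # softmax over emotional states

class RBFParameterOptimizer:
    """Radial basis function network mapping (state probabilities, current
    parameter value) to a proposed operating-parameter value."""
    def __init__(self, centers, widths, weights, bias):
        self.centers, self.widths = centers, widths
        self.weights, self.bias = weights, bias

    def propose(self, state_probs, current_value):
        z = np.concatenate([state_probs, [current_value]])
        phi = np.exp(-np.sum((z - self.centers) ** 2, axis=1) / (2 * self.widths ** 2))
        return float(self.weights @ phi + self.bias)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    window = rng.normal(size=(20, 6))           # 20 samples x 6 wearable features
    first_nn = ElmanEmotionClassifier(n_features=6, n_hidden=16, n_states=len(STATES))
    probs = first_nn.predict_proba(window)

    second_nn = RBFParameterOptimizer(
        centers=rng.normal(size=(8, len(STATES) + 1)),
        widths=np.full(8, 1.0),
        weights=rng.normal(size=8),
        bias=30.0,                              # hypothetical baseline speed, km/h
    )
    proposal = second_nn.propose(probs, current_value=32.0)
    print(dict(zip(STATES, probs.round(3))), "proposed speed:", round(proposal, 1))

In a deployed embodiment, the proposal of the second network could be clipped to the range permitted by the vehicle control system before being applied, so that rider-state optimization never overrides safety constraints.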
SFT-106-A-PCT [00702] In embodiments, the second neural network 3620 optimizes the operational parameter 36124 based on a correlation between a vehicle operating state 3645 and a rider emotional state 3637. In embodiments, the operational parameter of the vehicle that is optimized is determined and adjusted to induce a favorable rider emotional state. In embodiments, the first neural network 3622 further learns to classify patterns of the rider emotional state indicative wearable sensor data and associate the patterns to emotional states and changes thereto from a training data set sourced from at least one of a stream of data from unstructured data sources, social media sources, wearable devices, in-vehicle sensors, a rider helmet, a rider headgear, and a rider voice system. In embodiments, the second neural network 3620 optimizes the operational parameter in real time responsive to the detecting of an emotional state of the rider by the first neural network 3622. In embodiments, the first neural network 3622 detects a pattern of the rider emotional state indicative wearable sensor data that indicates the emotional state of the rider is changing from a first emotional state to a second emotional state. In embodiments, the second neural network 3620 optimizes the operational parameter of the vehicle in response to the indicated change in emotional state. [00703] In embodiments, the first neural network 3622 comprises a plurality of connected nodes that form a directed cycle, the first neural network 3622 further facilitating bi-directional flow of data among the connected nodes. In embodiments, the first neural network 3622 includes one or more perceptrons that mimic human senses that facilitates determining an emotional state of a rider based on an extent to which at least one of the senses of the rider is stimulated. In embodiments, the rider emotional state indicative wearable sensor data indicates at least one of an emotional state of the rider is changing, an emotional state of the rider is stable, a rate of change of an emotional state of the rider, a direction of change of an emotional state of the rider, and a polarity of a change of an emotional state of the rider; an emotional state of a rider is changing to an unfavorable state; and an emotional state of a rider is changing to a favorable state. In embodiments, the operational parameter that is optimized affects at least one of a route of the vehicle, in-vehicle audio content, speed of the vehicle, acceleration of the vehicle, deceleration of the vehicle, proximity to objects along the route, and proximity to other vehicles along the route. In embodiments, the second neural network 3620 interacts with a vehicle control system to adjust the operational parameter. In embodiments, the first neural network 3622 includes one or more perceptrons that mimic human senses that facilitates determining an emotional state of a rider based on an extent to which at least one of the senses of the rider is stimulated. [00704] In embodiments, the vehicle is a self-driving vehicle. In embodiments, the artificial intelligence system 3636 is to detect a change in the emotional state of the rider riding in the self- driving vehicle at least in part by recognition of patterns of emotional state indicative data from a
SFT-106-A-PCT set of wearable sensors worn by the rider. In embodiments, the patterns are indicative of at least one of a diminishing of a favorable emotional state of the rider and an onset of an unfavorable emotional state of the rider. In embodiments, the artificial intelligence system 3636 is to determine at least one operating parameter 36124 of the self-driving vehicle that is indicative of the change in emotional state based on a correlation of the patterns of emotional state indicative data with a set of operating parameters of the vehicle. In embodiments, the artificial intelligence system 3636 is to determine an adjustment of the at least one operating parameter 36124 for achieving at least one of restoring the favorable emotional state of the rider and achieving a reduction in the onset of the unfavorable emotional state of a rider. [00705] In embodiments, the correlation of patterns of rider emotional state indicative wearable sensor data is determined using at least one of a training set of emotional state wearable sensor indicators of a set of riders and human trainer-generated rider emotional state wearable sensor indicators. In embodiments, the artificial intelligence system 3636 further learns to classify the patterns of the emotional state indicative wearable sensor data and associate the patterns to changes in rider emotional states from a training data set sourced from at least one of a stream of data from unstructured data sources, social media sources, wearable devices, in-vehicle sensors, a rider helmet, a rider headgear, and a rider voice system. In embodiments, the patterns of rider emotional state indicative wearable sensor data indicate at least one of an emotional state of the rider is changing, an emotional state of the rider is stable, a rate of change of an emotional state of the rider, a direction of change of an emotional state of the rider, and a polarity of a change of an emotional state of the rider; an emotional state of a rider is changing to an unfavorable state; and an emotional state of a rider is changing to a favorable state. [00706] In embodiments, the operational parameter determined from a result of processing the rider emotional state indicative wearable sensor data affects at least one of a route of the vehicle, in-vehicle audio content, speed of the vehicle, acceleration of the vehicle, deceleration of the vehicle, proximity to objects along the route, and proximity to other vehicles along the route. In embodiments, the artificial intelligence system 3636 further interacts with a vehicle control system for adjusting the operational parameter. In embodiments, the artificial intelligence system 3636 further comprises a neural net that includes one or more perceptrons that mimic human senses that facilitate determining an emotional state of a rider based on an extent to which at least one of the senses of the rider is stimulated. [00707] In embodiments, the set of wearable sensors comprises at least two of a watch, a ring, a wrist band, an arm band, an ankle band, a torso band, a skin patch, a head-worn device, eye glasses, foot wear, a glove, an in-ear device, clothing, headphones, a belt, a finger ring, a thumb ring, a toe ring, and a necklace. In embodiments, the artificial intelligence system 3636 uses deep learning for
SFT-106-A-PCT determining patterns of wearable sensor-generated emotional state indicative data that indicate the change in the emotional state of the rider. In embodiments, the artificial intelligence system 3636 further determines the change in emotional state of the rider based on context gathered from a plurality of sources including data indicating a purpose of the rider riding in the self-driving vehicle, a time of day, traffic conditions, weather conditions and optimizes the operating parameter 36124 to at least one of achieve and maintain the adapted favorable emotional state. In embodiments, the artificial intelligence system 3636 adjusts the operational parameter in real time responsive to the detecting of a change in rider emotional state. [00708] In embodiments, the vehicle is a self-driving vehicle. In embodiments, the artificial intelligence system 3636 includes: a recurrent neural network to indicate a change in the emotional state of a rider in the self-driving vehicle by a recognition of patterns of emotional state indicative wearable sensor data from a set of wearable sensors worn by the rider. In embodiments, the patterns are indicative of at least one of a first degree of an favorable emotional state of the rider and a second degree of an unfavorable emotional state of the rider; and a radial basis function neural network to optimize, for achieving a target emotional state of the rider, the operating parameter 36124 of the vehicle in response to the indication of the change in the emotional state of the rider. [00709] In embodiments, the radial basis function neural network optimizes the operational parameter based on a correlation between a vehicle operating state and a rider emotional state. In embodiments, the target emotional state is a favorable rider emotional state and the operational parameter of the vehicle that is optimized is determined and adjusted to induce the favorable rider emotional state. In embodiments, the recurrent neural network further learns to classify the patterns of emotional state indicative wearable sensor data and associate them to emotional states and changes thereto from a training data set sourced from at least one of a stream of data from unstructured data sources, social media sources, wearable devices, in-vehicle sensors, a rider helmet, a rider headgear, and a rider voice system. In embodiments, the radial basis function neural network optimizes the operational parameter in real time responsive to the detecting of a change in an emotional state of the rider by the recurrent neural network. In embodiments, the recurrent neural network detects a pattern of the emotional state indicative wearable sensor data that indicates the emotional state of the rider is changing from a first emotional state to a second emotional state. In embodiments, the radial basis function neural network optimizes the operational parameter of the vehicle in response to the indicated change in emotional state. In embodiments, the recurrent neural network comprises a plurality of connected nodes that form a directed cycle, the recurrent neural network further facilitating bi-directional flow of data among the connected nodes. [00710] In embodiments, the patterns of emotional state indicative wearable sensor data indicate at least one of an emotional state of the rider is changing, an emotional state of the rider is stable,
SFT-106-A-PCT a rate of change of an emotional state of the rider, a direction of change of an emotional state of the rider, and a polarity of a change of an emotional state of the rider; an emotional state of a rider is changing to an unfavorable state; and an emotional state of a rider is changing to a favorable state. In embodiments, the operational parameter that is optimized affects at least one of a route of the vehicle, in-vehicle audio content, speed of the vehicle, acceleration of the vehicle, deceleration of the vehicle, proximity to objects along the route, and proximity to other vehicles along the route. In embodiments, the radial basis function neural network interacts with a vehicle control system to adjust the operational parameter. In embodiments, the recurrent neural net includes one or more perceptrons that mimic human senses that facilitates determining an emotional state of a rider based on an extent to which at least one of the senses of the rider is stimulated. [00711] In embodiments, the artificial intelligence system 3636 is to maintain a favorable emotional state of the rider through use of a modular neural network, the modular neural network comprising: a rider emotional state determining neural network to process emotional state indicative wearable sensor data of a rider in the vehicle to detect patterns. In embodiments, the patterns found in the emotional state indicative wearable sensor data are indicative of at least one of a favorable emotional state of the rider and an unfavorable emotional state of the rider; an intermediary circuit to convert output data from the rider emotional state determining neural network into vehicle operational state data; and a vehicle operational state optimizing neural network to adjust the operating parameter 36124 of the vehicle in response to the vehicle operational state data. [00712] In embodiments, the vehicle operational state optimizing neural network adjusts an operational parameter of the vehicle for achieving a favorable emotional state of the rider. In embodiments, the vehicle operational state optimizing neural network optimizes the operational parameter based on a correlation between a vehicle operating state and a rider emotional state. In embodiments, the operational parameter of the vehicle that is optimized is determined and adjusted to induce a favorable rider emotional state. In embodiments, the rider emotional state determining neural network further learns to classify the patterns of emotional state indicative wearable sensor data and associate them to emotional states and changes thereto from a training data set sourced from at least one of a stream of data from unstructured data sources, social media sources, wearable devices, in-vehicle sensors, a rider helmet, a rider headgear, and a rider voice system. [00713] In embodiments, the vehicle operational state optimizing neural network optimizes the operational parameter in real time responsive to the detecting of a change in an emotional state of the rider by the rider emotional state determining neural network. In embodiments, the rider emotional state determining neural network detects a pattern of emotional state indicative wearable sensor data that indicates the emotional state of the rider is changing from a first emotional state to
SFT-106-A-PCT a second emotional state. In embodiments, the vehicle operational state optimizing neural network optimizes the operational parameter of the vehicle in response to the indicated change in emotional state. In embodiments, the artificial intelligence system 3636 comprises a plurality of connected nodes that form a directed cycle, the artificial intelligence system 3636 further facilitating bi-directional flow of data among the connected nodes. In embodiments, the patterns of emotional state indicative wearable sensor data indicate at least one of an emotional state of the rider is changing, an emotional state of the rider is stable, a rate of change of an emotional state of the rider, a direction of change of an emotional state of the rider, and a polarity of a change of an emotional state of the rider; an emotional state of a rider is changing to an unfavorable state; and an emotional state of a rider is changing to a favorable state. [00714] In embodiments, the operational parameter that is optimized affects at least one of a route of the vehicle, in-vehicle audio content, speed of the vehicle, acceleration of the vehicle, deceleration of the vehicle, proximity to objects along the route, and proximity to other vehicles along the route. In embodiments, the vehicle operational state optimizing neural network interacts with a vehicle control system to adjust the operational parameter. In embodiments, the artificial intelligence system 3636 further comprises a neural net that includes one or more perceptrons that mimic human senses that facilitate determining an emotional state of a rider based on an extent to which at least one of the senses of the rider is stimulated. In embodiments, the rider emotional state determining neural network comprises one or more perceptrons that mimic human senses that facilitate determining an emotional state of a rider based on an extent to which at least one of the senses of the rider is stimulated. [00715] In embodiments, the artificial intelligence system 3636 is to indicate a change in the emotional state of a rider in the vehicle through recognition of patterns of emotional state indicative wearable sensor data of the rider in the vehicle; the transportation system further comprising: a vehicle control system to control an operation of the vehicle by adjusting a plurality of vehicle operating parameters; and a feedback loop through which the indication of the change in the emotional state of the rider is communicated between the vehicle control system and the artificial intelligence system 3636. In embodiments, the vehicle control system adjusts at least one of the plurality of vehicle operating parameters responsive to the indication of the change. In embodiments, the vehicle control system adjusts the at least one of the plurality of vehicle operational parameters based on a correlation between vehicle operational state and rider emotional state. [00716] In embodiments, the vehicle control system adjusts the at least one of the plurality of vehicle operational parameters that are indicative of a favorable rider emotional state. In embodiments, the vehicle control system selects an adjustment of the at least one of the plurality
SFT-106-A-PCT of vehicle operational parameters that is indicative of producing a favorable rider emotional state. In embodiments, the artificial intelligence system 3636 further learns to classify the patterns of emotional state indicative wearable sensor data and associate them to emotional states and changes thereto from a training data set sourced from at least one of a stream of data from unstructured data sources, social media sources, wearable devices, in-vehicle sensors, a rider helmet, a rider headgear, and a rider voice system. In embodiments, the vehicle control system adjusts the at least one of the plurality of vehicle operation parameters in real time. [00717] In embodiments, the artificial intelligence system 3636 further detects a pattern of the emotional state indicative wearable sensor data that indicates the emotional state of the rider is changing from a first emotional state to a second emotional state. In embodiments, the vehicle operation control system adjusts an operational parameter of the vehicle in response to the indicated change in emotional state. In embodiments, the artificial intelligence system 3636 comprises a plurality of connected nodes that form a directed cycle, the artificial intelligence system 3636 further facilitating bi-directional flow of data among the connected nodes. In embodiments, the at least one of the plurality of vehicle operation parameters that is responsively adjusted affects operation of a powertrain of the vehicle and a suspension system of the vehicle. [00718] In embodiments, the radial basis function neural network interacts with the recurrent neural network via an intermediary component of the artificial intelligence system 3636 that produces vehicle control data indicative of an emotional state response of the rider to a current operational state of the vehicle. In embodiments, the artificial intelligence system 3636 further comprises a modular neural network comprising a rider emotional state recurrent neural network for indicating the change in the emotional state of a rider, a vehicle operational state radial basis function neural network, and an intermediary system. In embodiments, the intermediary system processes rider emotional state characterization data from the recurrent neural network into vehicle control data that the radial basis function neural network uses to interact with the vehicle control system for adjusting the at least one operational parameter. [00719] In embodiments, the artificial intelligence system 3636 comprises a neural net that includes one or more perceptrons that mimic human senses that facilitate determining an emotional state of a rider based on an extent to which at least one of the senses of the rider is stimulated. In embodiments, the recognition of patterns of emotional state indicative wearable sensor data comprises processing the emotional state indicative wearable sensor data captured during at least two of before the adjusting at least one of the plurality of vehicle operational parameters, during the adjusting at least one of the plurality of vehicle operational parameters, and after adjusting at least one of the plurality of vehicle operational parameters.
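As a non-limiting illustration of the feedback loop described above, in which wearable sensor data captured before, during, and after a parameter adjustment informs whether the adjustment is retained, the following Python sketch applies an adjustment, compares a pre-adjustment sensor window against a post-adjustment window using a placeholder scoring function, and reverts the adjustment if the score worsens. The scoring function, the control-system interface, and the cabin-temperature parameter are hypothetical stand-ins and are not required elements of the disclosure.

# Illustrative sketch only: a simple feedback loop in which an emotional-state
# score computed from wearable-sensor windows captured before and after an
# operating-parameter adjustment decides whether the adjustment is kept or
# reverted. The scorer and control hooks below are hypothetical placeholders.
import numpy as np

def emotional_state_score(window: np.ndarray) -> float:
    """Placeholder scorer: higher means more favorable. A deployed system
    would use the trained classifier described in the specification."""
    return float(-np.abs(window).mean())

def feedback_adjust(control, capture, parameter: str, delta: float,
                    samples: int = 50) -> bool:
    """Apply an adjustment, compare before/after scores, revert if worse."""
    before = emotional_state_score(capture(samples))     # pre-adjustment window
    control.set(parameter, control.get(parameter) + delta)
    after = emotional_state_score(capture(samples))      # post-adjustment window
    if after < before:                                    # rider state worsened
        control.set(parameter, control.get(parameter) - delta)
        return False
    return True

class FakeControlSystem:
    """Stand-in for a vehicle control system interface."""
    def __init__(self):
        self._params = {"cabin_temp_c": 22.0}
    def get(self, key):
        return self._params[key]
    def set(self, key, value):
        self._params[key] = value

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    capture = lambda n: rng.normal(scale=1.0, size=(n, 4))  # simulated wearable data
    ctrl = FakeControlSystem()
    kept = feedback_adjust(ctrl, capture, "cabin_temp_c", delta=-1.0)
    print("adjustment kept:", kept, "cabin_temp_c =", ctrl.get("cabin_temp_c"))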
SFT-106-A-PCT [00720] In embodiments, the artificial intelligence system 3636 indicates a change in the emotional state of the rider responsive to a change in an operating parameter 36124 of the vehicle by determining a difference between a first set of emotional state indicative wearable sensor data of a rider captured prior to the adjusting at least one of the plurality of operating parameters and a second set of emotional state indicative wearable sensor data of the rider captured during or after the adjusting at least one of the plurality of operating parameters. [00721] Referring to Fig. 37, in embodiments provided herein are transportation systems 3711 having a cognitive system 37158 for managing an advertising market for in-seat advertising for riders 3744 of self-driving vehicles. In embodiments, the cognitive system 37158 takes inputs relating to at least one parameter 37124 of the vehicle and/or the rider 3744 to determine at least one of a price, a type and a location of an advertisement to be delivered within an interface 37133 to a rider 3744 in a seat 3728 of the vehicle. As described above in connection with search, in- vehicle riders, particularly in self-driving vehicles, may be situationally disposed quite differently toward advertising when riding in a vehicle than at other times. Bored riders may be more willing to watch advertising content, click on offers or promotions, engage in surveys, or the like. In embodiments, an advertising marketplace platform may segment and separately handle advertising placements (including handling bids and asks for advertising placement and the like) for in-vehicle ads. Such an advertising marketplace platform may use information that is unique to a vehicle, such as vehicle type, display type, audio system capabilities, screen size, rider demographic information, route information, location information, and the like when characterizing advertising placement opportunities, such that bids for in-vehicle advertising placement reflect such vehicle, rider and other transportation-related parameters. For example, an advertiser may bid for placement of advertising on in-vehicle display systems of self-driving vehicles that are worth more than $50,000 and that are routed north on highway 101 during the morning commute. The advertising marketplace platform may be used to configure many such vehicle-related placement opportunities, to handle bidding for such opportunities, to place advertisements (such as by load- balanced servers that cache the ads) and to resolve outcomes. Yield metrics may be tracked and used to optimize configuration of the marketplace. [00722] An aspect provided herein includes a system for transportation, comprising: a cognitive system 37158 for managing an advertising market for in-seat advertising for riders of self-driving vehicles, wherein the cognitive system 37158 takes inputs corresponding to at least one parameter 37159 of the vehicle or the rider 3744 to determine a characteristic 37160 of an advertisement to be delivered within an interface 37133 to a rider 3744 in a seat 3728 of the vehicle, wherein the characteristic 37160 of the advertisement is selected from the group consisting of a price, a category, a location and combinations thereof.
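By way of a non-limiting illustration of the advertising marketplace platform described above, the following Python sketch characterizes a single in-vehicle placement opportunity with vehicle-related and rider-related attributes, filters bids by targeting constraints (for example, a minimum vehicle value, a route, and a commute-hour window, following the example given above), and resolves the winning bid by price. The field names and the constraint values are hypothetical and are provided for illustration only.

# Illustrative sketch only: characterizing an in-vehicle advertising placement
# opportunity and selecting the highest eligible bid. Attribute names and the
# example targeting constraints are hypothetical placeholders.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class PlacementOpportunity:
    vehicle_value_usd: float
    route: str
    hour_of_day: int
    display_type: str
    rider_segment: str

@dataclass
class Bid:
    advertiser: str
    price_usd: float
    constraints: dict = field(default_factory=dict)

    def matches(self, opp: PlacementOpportunity) -> bool:
        c = self.constraints
        hours = c.get("hour_range", (0, 23))
        return (opp.vehicle_value_usd >= c.get("min_vehicle_value_usd", 0)
                and c.get("route") in (None, opp.route)
                and hours[0] <= opp.hour_of_day <= hours[1])

def resolve_auction(opp: PlacementOpportunity, bids: list[Bid]) -> Bid | None:
    """Return the highest-priced bid whose constraints the opportunity meets."""
    eligible = [b for b in bids if b.matches(opp)]
    return max(eligible, key=lambda b: b.price_usd, default=None)

if __name__ == "__main__":
    opp = PlacementOpportunity(62000, "US-101 North", 8, "HD", "commuter")
    bids = [
        Bid("A", 4.50, {"min_vehicle_value_usd": 50000, "route": "US-101 North",
                        "hour_range": (6, 10)}),
        Bid("B", 6.00, {"min_vehicle_value_usd": 80000}),
    ]
    winner = resolve_auction(opp, bids)
    print("winner:", winner.advertiser if winner else None)

In this sketch, advertiser B is excluded because its minimum vehicle value is not met, illustrating how vehicle-related and transportation-related parameters shape which bids are eligible for a given placement opportunity.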
SFT-106-A-PCT [00723] Fig. 38 illustrates a method 3800 of vehicle in-seat advertising in accordance with embodiments of the systems and methods disclosed herein. At 3802 the method includes taking inputs relating to at least one parameter of a vehicle. At 3804 the method includes taking inputs relating to at least one parameter of a rider occupying the vehicle. At 3806 the method includes determining at least one of a price, classification, content, and location of an advertisement to be delivered within an interface of the vehicle to a rider in a seat in the vehicle based on the vehicle- related inputs and the rider-related inputs. [00724] Referring to Fig.37 and Fig.38, in embodiments, the vehicle 3710 is automatically routed. In embodiments, the vehicle 3710 is a self-driving vehicle. In embodiments, the cognitive system 37158 further determines at least one of a price, classification, content and location of an advertisement placement. In embodiments, an advertisement is delivered from an advertiser who places a winning bid. In embodiments, delivering an advertisement is based on a winning bid. In embodiments, the inputs 37162 relating to the at least one parameter of a vehicle include vehicle classification. In embodiments, the inputs 37162 relating to the at least one parameter of a vehicle include display classification. In embodiments, the inputs 37162 relating to the at least one parameter of a vehicle include audio system capability. In embodiments, the inputs 37162 relating to the at least one parameter of a vehicle include screen size. [00725] In embodiments, the inputs 37162 relating to the at least one parameter of a vehicle include route information. In embodiments, the inputs 37162 relating to the at least one parameter of a vehicle include location information. In embodiments, the inputs 37163 relating to the at least one parameter of a rider include rider demographic information. In embodiments, the inputs 37163 relating to the at least one parameter of a rider include rider emotional state. In embodiments, the inputs 37163 relating to the at least one parameter of a rider include rider response to prior in-seat advertising. In embodiments, the inputs 37163 relating to the at least one parameter of a rider include rider social media activity. [00726] Fig. 39 illustrates a method 3900 of in-vehicle advertising interaction tracking in accordance with embodiments of the systems and methods disclosed herein. At 3902 the method includes taking inputs relating to at least one parameter of a vehicle and inputs relating to at least one parameter of a rider occupying the vehicle. At 3904 the method includes aggregating the inputs across a plurality of vehicles. At 3906 the method includes using a cognitive system to determine opportunities for in-vehicle advertisement placement based on the aggregated inputs. At 3907 the method includes offering the placement opportunities in an advertising network that facilitates bidding for the placement opportunities. At 3908 the method includes based on a result of the bidding, delivering an advertisement for placement within a user interface of the vehicle. At 3909
SFT-106-A-PCT the method includes monitoring vehicle rider interaction with the advertisement presented in the user interface of the vehicle. [00727] Referring to Fig. 37 and 39, in embodiments, the vehicle 3710 comprises a system for automating at least one control parameter of the vehicle. In embodiments, the vehicle 3710 is at least a semi-autonomous vehicle. In embodiments, the vehicle 3710 is automatically routed. In embodiments, the vehicle 3710 is a self-driving vehicle. In embodiments, an advertisement is delivered from an advertiser who places a winning bid. In embodiments, delivering an advertisement is based on a winning bid. In embodiments, the monitored vehicle rider interaction information includes information for resolving click-based payments. In embodiments, the monitored vehicle rider interaction information includes an analytic result of the monitoring. In embodiments, the analytic result is a measure of interest in the advertisement. In embodiments, the inputs 37162 relating to the at least one parameter of a vehicle include vehicle classification. [00728] In embodiments, the inputs 37162 relating to the at least one parameter of a vehicle include display classification. In embodiments, the inputs 37162 relating to the at least one parameter of a vehicle include audio system capability. In embodiments, the inputs 37162 relating to the at least one parameter of a vehicle include screen size. In embodiments, the inputs 37162 relating to the at least one parameter of a vehicle include route information. In embodiments, the inputs 37162 relating to the at least one parameter of a vehicle include location information. In embodiments, the inputs 37163 relating to the at least one parameter of a rider include rider demographic information. In embodiments, the inputs 37163 relating to the at least one parameter of a rider include rider emotional state. In embodiments, the inputs 37163 relating to the at least one parameter of a rider include rider response to prior in-seat advertising. In embodiments, the inputs 37163 relating to the at least one parameter of a rider include rider social media activity. [00729] Fig. 40 illustrates a method 4000 of in-vehicle advertising in accordance with embodiments of the systems and methods disclosed herein. At 4002 the method includes taking inputs relating to at least one parameter of a vehicle and inputs relating to at least one parameter of a rider occupying the vehicle. At 4004 the method includes aggregating the inputs across a plurality of vehicles. At 4006 the method includes using a cognitive system to determine opportunities for in-vehicle advertisement placement based on the aggregated inputs. At 4008 the method includes offering the placement opportunities in an advertising network that facilitates bidding for the placement opportunities. At 4009 the method includes based on a result of the bidding, delivering an advertisement for placement within an interface of the vehicle. [00730] Referring to Fig. 37 and Fig. 40, in embodiments, the vehicle 3710 comprises a system for automating at least one control parameter of the vehicle. In embodiments, the vehicle 3710 is at least a semi-autonomous vehicle. In embodiments, the vehicle 3710 is automatically routed. In
SFT-106-A-PCT embodiments, the vehicle 3710 is a self-driving vehicle. In embodiments, the cognitive system 37158 further determines at least one of a price, classification, content and location of an advertisement placement. In embodiments, an advertisement is delivered from an advertiser who places a winning bid. In embodiments, delivering an advertisement is based on a winning bid. In embodiments, the inputs 37162 relating to the at least one parameter of a vehicle include vehicle classification. [00731] In embodiments, the inputs 37162 relating to the at least one parameter of a vehicle include display classification. In embodiments, the inputs 37162 relating to the at least one parameter of a vehicle include audio system capability. In embodiments, the inputs 37162 relating to the at least one parameter of a vehicle include screen size. In embodiments, the inputs 37162 relating to the at least one parameter of a vehicle include route information. In embodiments, the inputs 37162 relating to the at least one parameter of a vehicle include location information. In embodiments, the inputs 37163 relating to the at least one parameter of a rider include rider demographic information. In embodiments, the inputs 37163 relating to the at least one parameter of a rider include rider emotional state. In embodiments, the inputs 37163 relating to the at least one parameter of a rider include rider response to prior in-seat advertising. In embodiments, the inputs 37163 relating to the at least one parameter of a rider include rider social media activity. [00732] An aspect provided herein includes an advertising system of vehicle in-seat advertising, the advertising system comprising: a cognitive system 37158 that takes inputs 37162 relating to at least one parameter 37124 of a vehicle 3710 and takes inputs relating to at least one parameter 37161 of a rider occupying the vehicle, and determines at least one of a price, classification, content and location of an advertisement to be delivered within an interface 37133 of the vehicle 3710 to a rider 3744 in a seat 3728 in the vehicle 3710 based on the vehicle-related inputs 37162 and the rider-related inputs 37163. [00733] In embodiments, the vehicle 4110 comprises a system for automating at least one control parameter of the vehicle. In embodiments, the vehicle 4110 is at least a semi-autonomous vehicle. In embodiments, the vehicle 4110 is automatically routed. In embodiments, the vehicle 4110 is a self-driving vehicle. In embodiments, the inputs 37162 relating to the at least one parameter of a vehicle include vehicle classification. In embodiments, the inputs 37162 relating to the at least one parameter of a vehicle include display classification. In embodiments, the inputs 37162 relating to the at least one parameter of a vehicle include audio system capability. In embodiments, the inputs 37162 relating to the at least one parameter of a vehicle include screen size. In embodiments, the inputs 37162 relating to the at least one parameter of a vehicle include route information. In embodiments, the inputs 37162 relating to the at least one parameter of a vehicle include location information. In embodiments, the inputs 37163 relating to the at least one parameter of a rider
SFT-106-A-PCT include rider demographic information. In embodiments, the inputs 37163 relating to the at least one parameter of a rider include rider emotional state. In embodiments, the inputs 37163 relating to the at least one parameter of a rider include rider response to prior in-seat advertising. In embodiments, the inputs 37163 relating to the at least one parameter of a rider include rider social media activity. [00734] In embodiments, the advertising system is further to determine a vehicle operating state from the inputs 37162 related to at least one parameter of the vehicle. In embodiments, the advertisement to be delivered is determined based at least in part on the determined vehicle operating state. In embodiments, the advertising system is further to determine a rider state 37149 from the inputs 37163 related to at least one parameter of the rider. In embodiments, the advertisement to be delivered is determined based at least in part on the determined rider state 37149. [00735] Referring to Fig. 41, in embodiments provided herein are transportation systems 4111 having a hybrid cognitive system 41164 for managing an advertising market for in-seat advertising to riders of vehicles 4110. In embodiments, at least one part of the hybrid cognitive system 41164 processes inputs 41162 relating to at least one parameter 41124 of the vehicle to determine a vehicle operating state and at least one other part of the cognitive system processes inputs relating to a rider to determine a rider state. In embodiments, the cognitive system determines at least one of a price, a type and a location of an advertisement to be delivered within an interface to a rider in a seat of the vehicle. [00736] An aspect provided herein includes a system for transportation 4111, comprising: a hybrid cognitive system 41164 for managing an advertising market for in-seat advertising to riders 4144 of vehicles 4110. In embodiments, at least one part 41165 of the hybrid cognitive system processes inputs 41162 corresponding to at least one parameter of the vehicle to determine a vehicle operating state 41168 and at least one other part 41166 of the cognitive system 41164 processes inputs 41163 relating to a rider to determine a rider state 41149. In embodiments, the cognitive system 41164 determines a characteristic 41160 of an advertisement to be delivered within an interface 41133 to the rider 4144 in a seat 4128 of the vehicle 4110. In embodiments, the characteristic 41160 of the advertisement is selected from the group consisting of a price, a category, a location and combinations thereof. [00737] An aspect provided herein includes an artificial intelligence system 4136 for vehicle in- seat advertising, comprising: a first portion 41165 of the artificial intelligence system 4136 that determines a vehicle operating state 41168 of the vehicle by processing inputs 41162 relating to at least one parameter of the vehicle; a second portion 41166 of the artificial intelligence system 4136 that determines a state 41149 of the rider of the vehicle by processing inputs 41163 relating to at
SFT-106-A-PCT least one parameter of the rider; and a third portion 41167 of the artificial intelligence system 4136 that determines at least one of a price, classification, content and location of an advertisement to be delivered within an interface 41133 of the vehicle to a rider 4144 in a seat in the vehicle 4110 based on the vehicle (operating) state 41168 and the rider state 41149. [00738] In embodiments, the vehicle 4110 comprises a system for automating at least one control parameter of the vehicle. In embodiments, the vehicle is at least a semi-autonomous vehicle. In embodiments, the vehicle is automatically routed. In embodiments, the vehicle is a self-driving vehicle. In embodiments, the cognitive system 41164 further determines at least one of a price, classification, content and location of an advertisement placement. In embodiments, an advertisement is delivered from an advertiser who places a winning bid. In embodiments, delivering an advertisement is based on a winning bid. In embodiments, the inputs relating to the at least one parameter of a vehicle include vehicle classification. [00739] In embodiments, the inputs relating to the at least one parameter of a vehicle include display classification. In embodiments, the inputs relating to the at least one parameter of a vehicle include audio system capability. In embodiments, the inputs relating to the at least one parameter of a vehicle include screen size. In embodiments, the inputs relating to the at least one parameter of a vehicle include route information. In embodiments, the inputs relating to the at least one parameter of a vehicle include location information. In embodiments, the inputs relating to the at least one parameter of a rider include rider demographic information. In embodiments, the inputs relating to the at least one parameter of a rider include rider emotional state. In embodiments, the inputs relating to the at least one parameter of a rider include rider response to prior in-seat advertising. In embodiments, the inputs relating to the at least one parameter of a rider include rider social media activity. [00740] Fig. 42 illustrates a method 4200 of in-vehicle advertising interaction tracking in accordance with embodiments of the systems and methods disclosed herein. At 4202 the method includes taking inputs relating to at least one parameter of a vehicle and inputs relating to at least one parameter of a rider occupying the vehicle. At 4204 the method includes aggregating the inputs across a plurality of vehicles. At 4206 the method includes using a hybrid cognitive system to determine opportunities for in-vehicle advertisement placement based on the aggregated inputs. At 4207 the method includes offering the placement opportunities in an advertising network that facilitates bidding for the placement opportunities. At 4208 the method includes based on a result of the bidding, delivering an advertisement for placement within a user interface of the vehicle. At 4209 the method includes monitoring vehicle rider interaction with the advertisement presented in the user interface of the vehicle.
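Purely for illustration, the following Python sketch arranges the steps of method 4200 (taking and aggregating inputs at 4202 and 4204, determining placement opportunities at 4206, offering them for bidding at 4207, delivering the winning advertisement at 4208, and monitoring rider interaction at 4209) in one possible software form. The class and callable names (PlacementOpportunity, offer_to_network, deliver, monitor) and the simplified placement rule are assumptions made for the sketch and are not part of the disclosed embodiments.

    # Minimal illustrative sketch of method 4200; names and the simplified
    # placement rule are assumptions, not a disclosed implementation.
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class PlacementOpportunity:
        vehicle_id: str
        context: Dict  # aggregated vehicle-related and rider-related inputs

    @dataclass
    class Bid:
        advertiser: str
        amount: float
        creative: str

    def track_in_vehicle_advertising(vehicle_inputs: List[Dict],
                                     rider_inputs: List[Dict],
                                     offer_to_network: Callable[[PlacementOpportunity], List[Bid]],
                                     deliver: Callable[[str, str], None],
                                     monitor: Callable[[str], Dict]) -> List[Dict]:
        # 4202/4204: take vehicle-related and rider-related inputs and aggregate them per vehicle
        aggregated = {v["vehicle_id"]: {"vehicle": v, "rider": r}
                      for v, r in zip(vehicle_inputs, rider_inputs)}
        interactions = []
        for vehicle_id, context in aggregated.items():
            # 4206: greatly simplified stand-in for the hybrid cognitive system's
            # determination of an advertisement placement opportunity
            if context["rider"].get("receptive", False):
                opportunity = PlacementOpportunity(vehicle_id, context)
                bids = offer_to_network(opportunity)            # 4207: offer for bidding
                if bids:
                    winner = max(bids, key=lambda b: b.amount)  # 4208: deliver winning bid's ad
                    deliver(vehicle_id, winner.creative)
                    interactions.append(monitor(vehicle_id))    # 4209: monitor rider interaction
        return interactions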
SFT-106-A-PCT [00741] Referring to Fig. 41 and Fig. 42, in embodiments, the vehicle 4110 comprises a system for automating at least one control parameter of the vehicle. In embodiments, the vehicle 4110 is at least a semi-autonomous vehicle. In embodiments, the vehicle 4110 is automatically routed. In embodiments, the vehicle 4110 is a self-driving vehicle. In embodiments, a first portion 41165 of the hybrid cognitive system 41164 determines an operating state of the vehicle by processing inputs relating to at least one parameter of the vehicle. In embodiments, a second portion 41166 of the hybrid cognitive system 41164 determines a state 41149 of the rider of the vehicle by processing inputs relating to at least one parameter of the rider. In embodiments, a third portion 41167 of the hybrid cognitive system 41164 determines at least one of a price, classification, content and location of an advertisement to be delivered within an interface of the vehicle to a rider in a seat in the vehicle based on the vehicle state and the rider state. In embodiments, an advertisement is delivered from an advertiser who places a winning bid. In embodiments, delivering an advertisement is based on a winning bid. In embodiments, the monitored vehicle rider interaction information includes information for resolving click-based payments. In embodiments, the monitored vehicle rider interaction information includes an analytic result of the monitoring. In embodiments, the analytic result is a measure of interest in the advertisement. In embodiments, the inputs 41162 relating to the at least one parameter of a vehicle include vehicle classification. In embodiments, the inputs 41162 relating to the at least one parameter of a vehicle include display classification. In embodiments, the inputs 41162 relating to the at least one parameter of a vehicle include audio system capability. In embodiments, the inputs 41162 relating to the at least one parameter of a vehicle include screen size. In embodiments, the inputs 41162 relating to the at least one parameter of a vehicle include route information. In embodiments, the inputs 41162 relating to the at least one parameter of a vehicle include location information. In embodiments, the inputs 41163 relating to the at least one parameter of a rider include rider demographic information. In embodiments, the inputs 41163 relating to the at least one parameter of a rider include rider emotional state. In embodiments, the inputs 41163 relating to the at least one parameter of a rider include rider response to prior in-seat advertising. In embodiments, the inputs 41163 relating to the at least one parameter of a rider include rider social media activity. [00742] Fig. 43 illustrates a method 4300 of in-vehicle advertising in accordance with embodiments of the systems and methods disclosed herein. At 4302 the method includes taking inputs relating to at least one parameter of a vehicle and inputs relating to at least one parameter of a rider occupying the vehicle. At 4304 the method includes aggregating the inputs across a plurality of vehicles. At 4306 the method includes using a hybrid cognitive system to determine opportunities for in-vehicle advertisement placement based on the aggregated inputs. At 4308 the method includes offering the placement opportunities in an advertising network that facilitates
SFT-106-A-PCT bidding for the placement opportunities. At 4309 the method includes based on a result of the bidding, delivering an advertisement for placement within an interface of the vehicle. [00743] Referring to Fig. 41 and Fig. 43, in embodiments, the vehicle 4110 comprises a system for automating at least one control parameter of the vehicle. In embodiments, the vehicle 4110 is at least a semi-autonomous vehicle. In embodiments, the vehicle 4110 is automatically routed. In embodiments, the vehicle 4110 is a self-driving vehicle. In embodiments, a first portion 41165 of the hybrid cognitive system 41164 determines an operating state 41168 of the vehicle by processing inputs 41162 relating to at least one parameter of the vehicle. In embodiments, a second portion 41166 of the hybrid cognitive system 41164 determines a state 41149 of the rider of the vehicle by processing inputs 41163 relating to at least one parameter of the rider. In embodiments, a third portion 41167 of the hybrid cognitive system 41164 determines at least one of a price, classification, content and location of an advertisement to be delivered within an interface 41133 of the vehicle 4110 to a rider 4144 in a seat 4128 in the vehicle 4110 based on the vehicle (operating) state 41168 and the rider state 41149. In embodiments, an advertisement is delivered from an advertiser who places a winning bid. In embodiments, delivering an advertisement is based on a winning bid. In embodiments, the inputs 41162 relating to the at least one parameter of a vehicle include vehicle classification. In embodiments, the inputs 41162 relating to the at least one parameter of a vehicle include display classification. In embodiments, the inputs 41162 relating to the at least one parameter of a vehicle include audio system capability. In embodiments, the inputs 41162 relating to the at least one parameter of a vehicle include screen size. In embodiments, the inputs 41162 relating to the at least one parameter of a vehicle include route information. In embodiments, the inputs 41162 relating to the at least one parameter of a vehicle include location information. In embodiments, the inputs 41163 relating to the at least one parameter of a rider include rider demographic information. In embodiments, the inputs 41163 relating to the at least one parameter of a rider include rider emotional state. In embodiments, the inputs 41163 relating to the at least one parameter of a rider include rider response to prior in-seat advertising. In embodiments, the inputs 41163 relating to the at least one parameter of a rider include rider social media activity. [00744] Referring to Fig. 44, in embodiments provided herein are transportation systems 4411 having a motorcycle helmet 44170 that is configured to provide an augmented reality experience based on registration of the location and orientation of the wearer 44172 in an environment 44171. [00745] An aspect provided herein includes a system for transportation 4411, comprising: a motorcycle helmet 44170 to provide an augmented reality experience based on registration of a location and orientation of a wearer 44172 of the helmet 44170 in an environment 44171.
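As a purely illustrative sketch of the registration concept, the Python fragment below converts a registered rider location and heading into a horizontal display position for an augmentation element associated with a point of interest in the environment. The pose fields, the field-of-view value, and the display width are assumptions of the sketch; they are not disclosed parameters of the helmet 44170.

    # Illustrative sketch only; pose fields, field of view, and display width are assumptions.
    import math
    from dataclasses import dataclass

    @dataclass
    class Pose:                  # registered location and orientation of the wearer
        lat: float
        lon: float
        heading_deg: float       # 0 = north, increasing clockwise

    def bearing_deg(lat1, lon1, lat2, lon2):
        # Initial great-circle bearing from (lat1, lon1) to (lat2, lon2), in degrees
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        y = math.sin(dlon) * math.cos(phi2)
        x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
        return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

    def place_augmentation(pose: Pose, poi_lat: float, poi_lon: float,
                           display_width_px: int = 1280, fov_deg: float = 60.0):
        # Returns the horizontal pixel position of an augmentation element for a point of
        # interest, or None when the point falls outside the rider's assumed field of view.
        relative = (bearing_deg(pose.lat, pose.lon, poi_lat, poi_lon)
                    - pose.heading_deg + 540.0) % 360.0 - 180.0   # -180..180, 0 = straight ahead
        if abs(relative) > fov_deg / 2.0:
            return None
        return int((relative / fov_deg + 0.5) * display_width_px)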
SFT-106-A-PCT [00746] An aspect provided herein includes a motorcycle helmet 44170 comprising: a data processor 4488 configured to facilitate communication between a rider 44172 wearing the helmet 44170 and a motorcycle 44169, the motorcycle 44169 and the helmet 44170 communicating location and orientation 44173 of the motorcycle 44169; and an augmented reality system 44174 with a display 44175 disposed to facilitate presenting an augmentation of content in an environment 44171 of a rider wearing the helmet, the augmentation responsive to a registration of the communicated location and orientation 44128 of the motorcycle 44169. In embodiments, at least one parameter of the augmentation is determined by machine learning on at least one input relating to at least one of the rider 44172 and the motorcycle 44180. [00747] In embodiments, the motorcycle 44169 comprises a system for automating at least one control parameter of the motorcycle. In embodiments, the motorcycle 44169 is at least a semi- autonomous motorcycle. In embodiments, the motorcycle 44169 is automatically routed. In embodiments, the motorcycle 44169 is a self-driving motorcycle. In embodiments, the content in the environment is content that is visible in a portion of a field of view of the rider wearing the helmet. In embodiments, the machine learning on the input of the rider determines an emotional state of the rider and a value for the at least one parameter is adapted responsive to the rider emotional state. In embodiments, the machine learning on the input of the motorcycle determines an operational state of the motorcycle and a value for the at least one parameter is adapted responsive to the motorcycle operational state. In embodiments, the helmet 44170 further comprises a motorcycle configuration expert system 44139 for recommending an adjustment of a value of the at least one parameter 44156 to the augmented reality system responsive to the at least one input. [00748] An aspect provided herein includes a motorcycle helmet augmented reality system comprising: a display 44175 disposed to facilitate presenting an augmentation of content in an environment of a rider wearing the helmet; a circuit 4488 for registering at least one of location and orientation of a motorcycle that the rider is riding; a machine learning circuit 44179 that determines at least one augmentation parameter 44156 by processing at least one input relating to at least one of the rider 44163 and the motorcycle 44180; and a reality augmentation circuit 4488 that, responsive to the registered at least one of a location and orientation of the motorcycle generates an augmentation element 44177 for presenting in the display 44175, the generating based at least in part on the determined at least one augmentation parameter 44156. [00749] In embodiments, the motorcycle 44169 comprises a system for automating at least one control parameter of the motorcycle. In embodiments, the motorcycle 44169 is at least a semi- autonomous motorcycle. In embodiments, the motorcycle 44169 is automatically routed. In embodiments, the motorcycle 44169 is a self-driving motorcycle. In embodiments, the content
SFT-106-A-PCT 44176 in the environment is content that is visible in a portion of a field of view of the rider 44172 wearing the helmet. In embodiments, the machine learning on the input of the rider determines an emotional state of the rider and a value for the at least one parameter is adapted responsive to the rider emotional state. In embodiments, the machine learning on the input of the motorcycle determines an operational state of the motorcycle and a value for the at least one parameter is adapted responsive to the motorcycle operational state. [00750] In embodiments, the helmet further comprises a motorcycle configuration expert system 44139 for recommending an adjustment of a value of the at least one parameter 44156 to the augmented reality system 4488 responsive to the at least one input. [00751] In embodiments, leveraging network technologies for a transportation system may support a cognitive collective charging or refueling plan for vehicles in the transportation system. Such a transportation system may include an artificial intelligence system for taking inputs relating to a plurality of vehicles, such as self-driving vehicles, and determining at least one parameter of a re- charging or refueling plan for at least one of the plurality of vehicles based on the inputs. [00752] Referring to Fig. 45, in embodiments, the transportation system may be a vehicle transportation system. Such a vehicle transportation system may include a network-enabled vehicle information ingestion port 4532 that may provide a network (e.g., Internet and the like) interface through which inputs, such as inputs comprising operational state and energy consumption information from at least one of a plurality of network-enabled vehicles 4510 may be gathered. In embodiments, such inputs may be gathered in real time as the plurality of network-enabled vehicles 4510 connect to and deliver vehicle operational state, energy consumption and other related information. In embodiments, the inputs may relate to vehicle energy consumption and may be determined from a battery charge state of a portion of the plurality of vehicles. The inputs may include a route plan for the vehicle, an indicator of the value of charging of the vehicle, and the like. The inputs may include predicted traffic conditions for the plurality of vehicles. The transportation system may also include vehicle charging or refueling infrastructure that may include one or more vehicle charging infrastructure control system(s) 4534. These control system(s) 4534 may receive the operational state and energy consumption information for the plurality of network-enabled vehicles 4510 via the ingestion port 4532 or directly through a common or set of connected networks, such as the Internet and the like. Such a transportation system may further include an artificial intelligence system 4536 that may be functionally connected with the vehicle charging infrastructure control system(s) 4534 that, for example, responsive to the receiving of the operational state and energy consumption information, may determine, provide, adjust or create at least one charging plan parameter 4514 upon which a charging plan 4512 for at least a portion of the plurality of network-enabled vehicles 4510 is
SFT-106-A-PCT dependent. This dependency may yield changes in the application of the charging plan 4512 by the control system(s) 4534, such as when a processor of the control system(s) 4534 executes a program derived from or based on the charging plan 4512. The charging infrastructure control system(s) 4534 may include a cloud-based computing system remote from charging infrastructure systems (e.g., remote from an electric vehicle charging kiosk and the like); it may also include a local charging infrastructure system 4538 that may be disposed with and/or integrated with an infrastructure element, such as a fuel station, a charging kiosk and the like. In embodiments, the artificial intelligence system 4536 may interface and coordinate with the cloud-based system 4534, the local charging infrastructure system 4538 or both. In embodiments, coordination with the cloud-based system 4534 may take a different form of interfacing than coordination with the local charging infrastructure system 4538: coordination with the cloud-based system may involve providing parameters that affect more than one charging kiosk and the like, whereas coordination with the local charging infrastructure system 4538 may involve providing information that the local system could use to adapt charging system control commands and the like that may be provided from, for example, a cloud-based control system 4534. In an example, a cloud-based control system (that may control only a portion, such as a localized set, of available charging/refueling infrastructure devices) may respond to the charging plan parameter 4514 of the artificial intelligence system 4536 by setting a charging rate that facilitates highly parallel vehicle charging. However, the local charging infrastructure system 4538 may adapt this control plan, such as based on a control plan parameter provided to it by the artificial intelligence system 4536, to permit a different charging rate (e.g., a faster charging rate), such as for a brief period to accommodate an accumulation of vehicles queued up or estimated to use a local charging kiosk in the period. In this way, an adjustment to the at least one parameter 4514, when made to the charge infrastructure operation plan 4512, ensures that the at least one of the plurality of vehicles 4510 has access to energy renewal in a target energy renewal geographic region 4516. [00753] In embodiments, a charging or refueling plan may have a plurality of parameters that may impact a wide range of transportation aspects ranging from vehicle-specific to vehicle group-specific to vehicle location-specific and infrastructure-impacting aspects. Therefore, a parameter of the plan may impact or relate to any of vehicle routing to charging infrastructure, amount of charge permitted to be provided, duration of time or rate for charging, battery conditions or state, battery charging profile, time required to charge to a minimum value that may be based on consumption needs of the vehicle(s), market value of charging, indicators of market value, market price, infrastructure provider profit, bids or offers for providing fuel or electricity to one or more charging or refueling infrastructure kiosks, available supply capacity, recharge demand (local, regional, system wide), and the like.
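Solely to make the breadth of such plan parameters concrete, the following Python sketch gathers several of the parameters listed above into a single data structure. The field names, types, and units are assumptions chosen for the sketch and do not limit the parameters a charging or refueling plan may include.

    # Illustrative sketch only; field names and units are assumptions, not a disclosed schema.
    from dataclasses import dataclass, field
    from typing import Dict, List, Optional

    @dataclass
    class ChargingPlanParameters:
        route_to_infrastructure: Optional[str] = None          # routing of a vehicle to charging infrastructure
        max_charge_kwh: Optional[float] = None                  # amount of charge permitted to be provided
        charge_rate_kw: Optional[float] = None                  # rate (and implicitly duration) of charging
        battery_state_of_charge: Optional[float] = None         # battery condition or state, 0.0 to 1.0
        charging_profile: Optional[str] = None                  # e.g., "fast", "balanced", "battery-preserving"
        minutes_to_minimum_charge: Optional[float] = None       # time to a consumption-based minimum value
        market_price_per_kwh: Optional[float] = None            # market value or price of charging
        provider_profit_margin: Optional[float] = None          # infrastructure provider profit
        supply_bids: List[Dict] = field(default_factory=list)   # bids or offers for supplying fuel or electricity
        available_supply_capacity_kw: Optional[float] = None    # available supply capacity
        recharge_demand: Dict[str, float] = field(default_factory=dict)  # e.g., local, regional, system-wide demand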
SFT-106-A-PCT [00754] In embodiments, to facilitate a cognitive charging or refueling plan, the transportation system may include a recharging plan update facility that interacts with the artificial intelligence system 4536 to apply an adjustment value 4524 to the at least one of the plurality of recharging plan parameters 4514. An adjustment value 4524 may be further adjusted based on feedback of applying the adjustment value. In embodiments, the feedback may be used by the artificial intelligence system 4534 to further adjust the adjustment value. In an example, feedback may impact the adjustment value applied to charging or refueling infrastructure facilities in a localized way, such as for a target recharging geographic region 4516 or geographic range relative to one or more vehicles. In embodiments, providing a parameter adjustment value may facilitate optimizing consumption of a remaining battery charge state of at least one of the plurality of vehicles. [00755] By processing energy-related consumption, demand, availability, and access information and the like, the artificial intelligence system 4536 may optimize aspects of the transportation system, such as vehicle electricity usage as shown in the box at 4526. The artificial intelligence system 4536 may further optimize at least one of recharging time, location, and amount. In an example, a recharging plan parameter that may be configured and updated based on feedback may be a routing parameter for the at least one of the plurality of vehicles as shown in the box at 4526. [00756] The artificial intelligence system 4536 may further optimize a transportation system charging or refueling control plan parameter 4514 to, for example, accommodate near-term charging needs for the plurality of rechargeable vehicles 4510 based on the optimized at least one parameter. The artificial intelligence system 4536 may execute an optimizing algorithm that may calculate energy parameters (including vehicle and non-vehicle energy), optimizes electricity usage for at least vehicles and/or charging or refueling infrastructure, and optimizes at least one charging or refueling infrastructure-specific recharging time, location, and amount. [00757] In embodiments, the artificial intelligence system 4534 may predict a geolocation 4518 of one or more vehicles within a geographic region 4516. The geographic region 4516 may include vehicles that are currently located in or predicted to be in the region and optionally may require or prefer recharging or refueling. As an example of predicting geolocation and its impact on a charging plan, a charging plan parameter may include allocation of vehicles currently in or predicted to be in the region to charging or refueling infrastructure in the geographic region 4516. In embodiments, geolocation prediction may include receiving inputs relating to charging states of a plurality of vehicles within or predicted to be within a geolocation range so that the artificial intelligence system can optimize at least one charging plan parameter 4514 based on a prediction of geolocations of the plurality of vehicles.
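As a simplified, non-limiting sketch of how predicted geolocations might inform an allocation-type charging plan parameter, the Python fragment below assigns each vehicle predicted to be within the geographic region to the nearest charging kiosk that still has capacity. The nearest-available rule and the planar distance approximation merely stand in for the artificial intelligence system's optimization and are assumptions of the sketch.

    # Illustrative sketch only; the nearest-available rule is a stand-in for the
    # artificial intelligence system's optimization of the allocation parameter.
    import math
    from typing import Dict, Tuple

    def allocate_vehicles_to_kiosks(predicted_positions: Dict[str, Tuple[float, float]],
                                    kiosks: Dict[str, Dict]) -> Dict[str, str]:
        # predicted_positions maps vehicle_id -> (lat, lon) predicted within the region;
        # kiosks maps kiosk_id -> {"pos": (lat, lon), "capacity": int}
        remaining = {k: v["capacity"] for k, v in kiosks.items()}
        assignment = {}
        for vehicle_id, (vlat, vlon) in predicted_positions.items():
            candidates = [k for k, cap in remaining.items() if cap > 0]
            if not candidates:
                break                                   # no remaining capacity in the region
            nearest = min(candidates,
                          key=lambda k: math.hypot(kiosks[k]["pos"][0] - vlat,
                                                   kiosks[k]["pos"][1] - vlon))
            assignment[vehicle_id] = nearest
            remaining[nearest] -= 1
        return assignment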
SFT-106-A-PCT [00758] There are many aspects of a charging plan that may be impacted. Some aspects may be financially related, such as automated negotiation of at least one of a duration, a quantity and a price for charging or refueling a vehicle. [00759] The transportation system cognitive charging plan system may include the artificial intelligence system being configured with a hybrid neural network. A first neural network 4522 of the hybrid neural network may be used to process inputs relating to charge or fuel states of the plurality of vehicles (directly received from the vehicles or through the vehicle information port 4532) and a second neural network 4520 of the hybrid neural network is used to process inputs relating to charging or refueling infrastructure and the like. In embodiments, the first neural network 4522 may process inputs comprising vehicle route and stored energy state information for a plurality of vehicles to predict for at least one of the plurality of vehicles a target energy renewal region. The second neural network 4520 may process vehicle energy renewal infrastructure usage and demand information for vehicle energy renewal infrastructure facilities within the target energy renewal region to determine at least one parameter 4514 of a charge infrastructure operational plan 4512 that facilitates access by the at least one of the plurality of vehicles to renewal energy in the target energy renewal region 4516. In embodiments, the first and/or second neural networks may be configured as any of the neural networks described herein including without limitation convolutional type networks. [00760] In embodiments, a transportation system may be distributed and may include an artificial intelligence system 4536 for taking inputs relating to a plurality of vehicles 4510 and determining at least one parameter 4514 of a re-charging and refueling plan 4512 for at least one of the plurality of vehicles based on the inputs. In embodiments, such inputs may be gathered in real time as the plurality of vehicles 4510 connect to and deliver vehicle operational state, energy consumption and other related information. In embodiments, the inputs may relate to vehicle energy consumption and may be determined from a battery charge state of a portion of the plurality of vehicles. The inputs may include a route plan for the vehicle, an indicator of the value of charging of the vehicle, and the like. The inputs may include predicted traffic conditions for the plurality of vehicles. The distributed transportation system may also include cloud-based and vehicle-based systems that exchange information about the vehicle, such as energy consumption and operational information, and information about the transportation system, such as recharging or refueling infrastructure. The artificial intelligence system may respond to transportation system and vehicle information shared by the cloud-based and vehicle-based systems with control parameters that facilitate executing a cognitive charging plan for at least a portion of charging or refueling infrastructure of the transportation system. The artificial intelligence system 4536 may determine, provide, adjust or create at least one charging plan parameter 4514 upon which a charging plan 4512 for at least a
SFT-106-A-PCT portion of the plurality of vehicles 4510 is dependent. This dependency may yield changes in the execution of the charging plan 4512 by at least one the cloud-based and vehicle-based systems, such as when a processor executes a program derived from or based on the charging plan 4512. [00761] In embodiments, an artificial intelligence system of a transportation system may facilitate execution of a cognitive charging plan by applying a vehicle recharging facility utilization optimization algorithm to a plurality of rechargeable vehicle-specific inputs, e.g., current operating state data for rechargeable vehicles present in a target recharging range of one of the plurality of rechargeable vehicles. The artificial intelligence system may also evaluate an impact of a plurality of recharging plan parameters on recharging infrastructure of the transportation system in the target recharging range. The artificial intelligence system may select at least one of the plurality of recharging plan parameters that facilitates, for example optimizing energy usage by the plurality of rechargeable vehicles and generate an adjustment value for the at least one of the plurality of recharging plan parameters. The artificial intelligence system may further predict a near-term need for recharging for a portion of the plurality of rechargeable vehicles within the target region based on, for example, operational status of the plurality of rechargeable vehicles that may be determined from the rechargeable vehicle-specific inputs. Based on this prediction and near-term recharging infrastructure availability and capacity information, the artificial intelligence system may optimize at least one parameter of the recharging plan. In embodiments, the artificial intelligence system may operate a hybrid neural network for the predicting and parameter selection or adjustment. In an example, a first portion of the hybrid neural network may process inputs that relate to route plans for one more rechargeable vehicles. In the example, a second portion of the hybrid neural network that is distinct from the first portion may process inputs relating to recharging infrastructure within a recharging range of at least one of the rechargeable vehicles. In this example, the second distinct portion of the hybrid neural net predicts the geolocation of a plurality of vehicles within the target region. To facilitate execution of the recharging plan, the parameter may impact an allocation of vehicles to at least a portion of recharging infrastructure within the predicted geographic region. [00762] In embodiments, vehicles described herein may comprise a system for automating at least one control parameter of the vehicle. The vehicles may further at least operate as a semi- autonomous vehicle. The vehicles may be automatically routed. Also, the vehicles, recharging and otherwise may be self-driving vehicles. [00763] In embodiments, leveraging network technologies for a transportation system may support a cognitive collective charging or refueling plan for vehicles in the transportation system. Such a transportation system may include an artificial intelligence system for taking inputs relating to battery status of a plurality of vehicles, such as self-driving vehicles and determining at least one
SFT-106-A-PCT parameter of a re-charging and/or refueling plan for optimizing battery operation of at least one of the plurality of vehicles based on the inputs. [00764] Referring to Fig.46, in embodiments, such a vehicle transportation system may include a network-enabled vehicle information ingestion port 4632 that may provide a network (e.g., Internet and the like) interface through which inputs, such as inputs comprising operational state and energy consumption information and battery state from at least one of a plurality of network-enabled vehicles 4610 may be gathered. In embodiments, such inputs may be gathered in real time as a plurality of vehicles 4610 connect to a network and deliver vehicle operational state, energy consumption, battery state and other related information. In embodiments, the inputs may relate to vehicle energy consumption and may include a battery charge state of a portion of the plurality of vehicles. The inputs may include a route plan for the vehicle, an indicator of the value of charging of the vehicle, and the like. The inputs may include predicted traffic conditions for the plurality of vehicles. The transportation system may also include vehicle charging or refueling infrastructure that may include one or more vehicle charging infrastructure control systems 4634. These control systems may receive the battery status information and the like for the plurality of network-enabled vehicles 4610 via the ingestion port 4632 and/or directly through a common or set of connected networks, such as an Internet infrastructure including wireless networks and the like. Such a transportation system may further include an artificial intelligence system 4636 that may be functionally connected with the vehicle charging infrastructure control systems that may, based on at least the battery status information from the portion of the plurality of vehicles determine, provide, adjust or create at least one charging plan parameter 4614 upon which a charging plan 4612 for at least a portion of the plurality of network-enabled vehicles 4610 is dependent. This parameter dependency may yield changes in the application of the charging plan 4612 by the control system(s) 4634, such as when a processor of the control system(s) 4634 executes a program derived from or based on the charging plan 4612. These changes may be applied to optimize anticipated battery usage of one or more of the vehicles. The optimizing may be vehicle-specific, aggregated across a set of vehicles, and the like. The charging infrastructure control system(s) 4634 may include a cloud-based computing system remote from charging infrastructure systems (e.g., remote from an electric vehicle charging kiosk and the like); it may also include a local charging infrastructure system 4638 that may be disposed with and/or integrated into an infrastructure element, such as a fuel station, a charging kiosk and the like. In embodiments, the artificial intelligence system 4636 may interface with the cloud-based system 4634, the local charging infrastructure system 4638 or both. In embodiments, the artificial intelligence system may interface with individual vehicles to facilitate optimizing anticipated battery usage. In embodiments, interfacing with the cloud-based system may affect infrastructure-wide impact of a charging plan,
SFT-106-A-PCT such as providing parameters that affect more than one charging kiosk. Interfacing with the local charging infrastructure system 4638 may include providing information that the local system could use to adapt charging system control commands and the like that may be provided from, for example, a regional or broader control system, such as a cloud-based control system 4634. In an example, a cloud-based control system (that may control only a target or geographic region, such as a localized set, a town, a county, a city, a ward, county and the like of available charging or refueling infrastructure devices) may respond to the charging plan parameter 4614 of the artificial intelligence system 4636 by setting a charging rate that facilitates highly parallel vehicle charging so that vehicle battery usage can be optimized. However, the local charging infrastructure system 4638 may adapt this control plan, such as based on a control plan parameter provided to it by the artificial intelligence system 4636, to permit a different charging rate (e.g., a faster charging rate), such as for a brief period to accommodate an accumulation of vehicles for which anticipated battery usage is not yet optimized. In this way, an adjustment to the at least one parameter 4614 that when made to the charge infrastructure operation plan 4612 ensures that the at least one of the plurality of vehicles 4610 has access to energy renewal in a target energy renewal region 4616. In embodiments, a target energy renewal region may be defined by a geofence that may be configured by an administrator of the region. In an example an administrator may have control or responsibility for a jurisdiction (e.g., a township, and the like). In the example, the administrator may configure a geofence for a region that is substantially congruent with the jurisdiction. [00765] In embodiments, a charging or refueling plan may have a plurality of parameters that may impact a wide range of transportation aspects ranging from vehicle-specific to vehicle group- specific to vehicle location-specific and infrastructure impacting aspects. Therefore, a parameter of the plan may impact or relate to any of vehicle routing to charging infrastructure, amount of charge permitted to be provided, duration of time or rate for charging, battery conditions or state, battery charging profile, time required to charge to a minimum value that may be based on consumption needs of the vehicle(s), market value of charging, indicators of market value, market price, infrastructure provider profit, bids or offers for providing fuel or electricity to one or more charging or refueling infrastructure kiosks, available supply capacity, recharge demand (local, regional, system wide), maximum energy usage rate, time between battery charging, and the like. [00766] In embodiments, to facilitate a cognitive charging or refueling plan, the transportation system may include a recharging plan update facility that interacts with the artificial intelligence system 4636 to apply an adjustment value 4624 to the at least one of the plurality of recharging plan parameters 4614. An adjustment value 4624 may be further adjusted based on feedback of applying the adjustment value. In embodiments, the feedback may be used by the artificial intelligence system 4634 to further adjust the adjustment value. In an example, feedback may
SFT-106-A-PCT impact the adjustment value applied to charging or refueling infrastructure facilities in a localized way, such as impacting only a set of vehicles that are impacted by or projected to be impacted by a traffic jam so that their battery operation is optimized, so as to, for example, ensure that they have sufficient battery power throughout the duration of the traffic jam. In embodiments, providing a parameter adjustment value may facilitate optimizing consumption of a remaining battery charge state of at least one of the plurality of vehicles. [00767] By processing energy-related consumption, demand, availability, and access information and the like, the artificial intelligence system 4636 may optimize aspects of the transportation system, such as vehicle electricity usage as shown in the box at 4626. The artificial intelligence system 4636 may further optimize at least one of recharging time, location, and amount as shown in the box at 4626. In an example, a recharging plan parameter that may be configured and updated based on feedback may be a routing parameter for the at least one of the plurality of vehicles. [00768] The artificial intelligence system 4636 may further optimize a transportation system charging or refueling control plan parameter 4614 to, for example accommodate near-term charging needs for the plurality of rechargeable vehicles 4610 based on the optimized at least one parameter. The artificial intelligence system 4636 may execute a vehicle recharging optimizing algorithm that may calculate energy parameters (including vehicle and non-vehicle energy) that may impact an anticipated battery usage, optimizes electricity usage for at least vehicles and/or charging or refueling infrastructure, and optimizes at least one charging or refueling infrastructure- specific recharging time, location, and amount. [00769] In embodiments, the artificial intelligence system 4634 may predict a geolocation 4618 of one or more vehicles within a geographic region 4616. The geographic region 4616 may include vehicles that are currently located in or predicted to be in the region and optionally may require or prefer recharging or refueling. As an example of predicting geolocation and its impact on a charging plan, a charging plan parameter may include allocation of vehicles currently in or predicted to be in the region to charging or refueling infrastructure in the geographic region 4616. In embodiments, geolocation prediction may include receiving inputs relating to battery and battery charging states and recharging needs of a plurality of vehicles within or predicted to be within a geolocation range so that the artificial intelligence system can optimize at least one charging plan parameter 4614 based on a prediction of geolocations of the plurality of vehicles. [00770] There are many aspects of a charging plan that may be impacted. Some aspects may be financial related, such as automated negotiation of at least one of a duration, a quantity and a price for charging or refueling a vehicle. [00771] The transportation system cognitive charging plan system may include the artificial intelligence system being configured with a hybrid neural network. A first neural network 4622 of
SFT-106-A-PCT the hybrid neural network may be used to process inputs relating to battery charge or fuel states of the plurality of vehicles (directly received from the vehicles or through the vehicle information port 4632) and a second neural network 4620 of the hybrid neural network is used to process inputs relating to charging or refueling infrastructure and the like. In embodiments, the first neural network 4622 may process inputs comprising information about a charging system of the vehicle and vehicle route and stored energy state information for a plurality of vehicles to predict for at least one of the plurality of vehicles a target energy renewal region. The second neural network 4620 may further predict a geolocation of a portion of the plurality of vehicles relative to another vehicle or set of vehicles. The second neural network 4620 may process vehicle energy renewal infrastructure usage and demand information for vehicle energy renewal infrastructure facilities within the target energy renewal region to determine at least one parameter 4614 of a charge infrastructure operational plan 4612 that facilitates access by the at least one of the plurality vehicles to renewal energy in the target energy renewal region 4616. In embodiments, the first and/or second neural networks may be configured as any of the neural networks described herein including without limitation convolutional type networks. [00772] In embodiments, a transportation system may be distributed and may include an artificial intelligence system 4636 for taking inputs relating to a plurality of vehicles 4610 and determining at least one parameter 4614 of a re-charging and refueling plan 4612 for at least one of the plurality of vehicles based on the inputs. In embodiments, such inputs may be gathered in real time as plurality of vehicles 4610 connect to a network and deliver vehicle operational state, energy consumption and other related information. In embodiments, the inputs may relate to vehicle energy consumption and may be determined from a battery charge state of a portion of the plurality of vehicles. The inputs may include a route plan for the vehicle, an indicator of the value of charging of the vehicle, and the like. The inputs may include predicted traffic conditions for the plurality of vehicles. The distributed transportation system may also include cloud-based and vehicle-based systems that exchange information about the vehicle, such as energy consumption and operational information and information about the transportation system, such as recharging or refueling infrastructure. The artificial intelligence system may respond to transportation system and vehicle information shared by the cloud and vehicle-based system with control parameters that facilitate executing a cognitive charging plan for at least a portion of charging or refueling infrastructure of the transportation system. The artificial intelligence system 4636 may determine, provide, adjust or create at least one charging plan parameter 4614 upon which a charging plan 4612 for at least a portion of the plurality of vehicles 4610 is dependent. This dependency may yield changes in the execution of the charging plan 4612 by at least one of the cloud-based and
SFT-106-A-PCT vehicle-based systems, such as when a processor executes a program derived from or based on the charging plan 4612. [00773] In embodiments, an artificial intelligence system of a transportation system may facilitate execution of a cognitive charging plan by applying a vehicle recharging facility utilization and vehicle battery operation optimization algorithm to a plurality of rechargeable vehicle-specific inputs, e.g., current operating state data for rechargeable vehicles present in a target recharging range of one of the plurality of rechargeable vehicles. The artificial intelligence system may also evaluate an impact of a plurality of recharging plan parameters on recharging infrastructure of the transportation system in the target recharging range. The artificial intelligence system may select at least one of the plurality of recharging plan parameters that facilitates, for example, optimizing energy usage by the plurality of rechargeable vehicles, and generate an adjustment value for the at least one of the plurality of recharging plan parameters. The artificial intelligence system may further predict a near-term need for recharging for a portion of the plurality of rechargeable vehicles within the target region based on, for example, operational status of the plurality of rechargeable vehicles that may be determined from the rechargeable vehicle-specific inputs. Based on this prediction and near-term recharging infrastructure availability and capacity information, the artificial intelligence system may optimize at least one parameter of the recharging plan. In embodiments, the artificial intelligence system may operate a hybrid neural network for the predicting and parameter selection or adjustment. In an example, a first portion of the hybrid neural network may process inputs that relate to route plans for one or more rechargeable vehicles. In the example, a second portion of the hybrid neural network that is distinct from the first portion may process inputs relating to recharging infrastructure within a recharging range of at least one of the rechargeable vehicles. In this example, the second distinct portion of the hybrid neural net predicts the geolocation of a plurality of vehicles within the target region. To facilitate execution of the recharging plan, the parameter may impact an allocation of vehicles to at least a portion of recharging infrastructure within the predicted geographic region. [00774] In embodiments, vehicles described herein may comprise a system for automating at least one control parameter of the vehicle. The vehicles may further at least operate as a semi-autonomous vehicle. The vehicles may be automatically routed. Also, the vehicles, recharging and otherwise, may be self-driving vehicles. [00775] In embodiments, leveraging network technologies for a transportation system may support a cognitive collective charging or refueling plan for vehicles in the transportation system. Such a transportation system may include a cloud-based artificial intelligence system for taking inputs relating to a plurality of vehicles, such as self-driving vehicles, and determining at least one
SFT-106-A-PCT parameter of a re-charging and/or refueling plan for at least one of the plurality of vehicles based on the inputs. [00776] Referring to Fig.47, in embodiments, such a vehicle transportation system may include a cloud-enabled vehicle information ingestion port 4732 that may provide a network (e.g., Internet and the like) interface through which inputs, such as inputs comprising operational state and energy consumption information from at least one of a plurality of network-enabled vehicles 4710 may be gathered and provided into cloud resources, such as the cloud-based control and artificial intelligence systems described herein. In embodiments, such inputs may be gathered in real time as a plurality of vehicles 4710 connect to the cloud and deliver vehicle operational state, energy consumption and other related information through at least the port 4732. In embodiments, the inputs may relate to vehicle energy consumption and may be determined from a battery charge state of a portion of the plurality of vehicles. The inputs may include a route plan for the vehicle, an indicator of the value of charging of the vehicle, and the like. The inputs may include predicted traffic conditions for the plurality of vehicles. The transportation system may also include vehicle charging or refueling infrastructure that may include one or more vehicle charging infrastructure cloud-based control system(s) 4734. These cloud-based control system(s) 4734 may receive the operational state and energy consumption information for the plurality of network-enabled vehicles 4710 via the cloud-enabled ingestion port 4732 and/or directly through a common or set of connected networks, such as the Internet and the like. Such a transportation system may further include a cloud-based artificial intelligence system 4736 that may be functionally connected with the vehicle charging infrastructure cloud-based control system(s) 4734 that, for example, may determine, provide, adjust or create at least one charging plan parameter 4714 upon which a charging plan 4712 for at least a portion of the plurality of network-enabled vehicles 4710 is dependent. This dependency may yield changes in the application of the charging plan 4712 by the cloud-based control system(s) 4734, such as when a processor of the cloud-based control system(s) 4734 executes a program derived from or based on the charging plan 4712. The charging infrastructure cloud-based control system(s) 4734 may include a cloud-based computing system remote from charging infrastructure systems (e.g., remote from an electric vehicle charging kiosk and the like); it may also include a local charging infrastructure system 4738 that may be disposed with and/or integrated into an infrastructure element, such as a fuel station, a charging kiosk and the like. In embodiments, the cloud-based artificial intelligence system 4736 may interface and coordinate with the cloud-based charging infrastructure control system 4734, the local charging infrastructure system 4738 or both. In embodiments, coordination of the cloud-based system may take on a form of interfacing, such as providing parameters that affect more than one charging kiosk and the like than may be different from coordination with the local charging infrastructure
SFT-106-A-PCT system 4738, which may provide information that the local system could use to adapt cloud-based charging system control commands and the like that may be provided from, for example, a cloud- based control system 4734. In an example, a cloud-based control system (that may control only a portion, such as a localized set, of available charging or refueling infrastructure devices) may respond to the charging plan parameter 4714 of the cloud-based artificial intelligence system 4736 by setting a charging rate that facilitates highly parallel vehicle charging. However, the local charging infrastructure system 4738 may adapt this control plan, such as based on a control plan parameter provided to it by the cloud-based artificial intelligence system 4736, to permit a different charging rate (e.g., a faster charging rate), such as for a brief period to accommodate an accumulation of vehicles queued up or estimated to use a local charging kiosk in the period. In this way, an adjustment to the at least one parameter 4714 that when made to the charge infrastructure operation plan 4712 ensures that the at least one of the plurality of vehicles 4710 has access to energy renewal in a target energy renewal region 4716. [00777] In embodiments, a charging or refueling plan may have a plurality of parameters that may impact a wide range of transportation aspects ranging from vehicle-specific to vehicle group- specific to vehicle location-specific and infrastructure impacting aspects. Therefore, a parameter of the plan may impact or relate to any of vehicle routing to charging infrastructure, amount of charge permitted to be provided, duration of time or rate for charging, battery conditions or state, battery charging profile, time required to charge to a minimum value that may be based on consumption needs of the vehicle(s), market value of charging, indicators of market value, market price, infrastructure provider profit, bids or offers for providing fuel or electricity to one or more charging or refueling infrastructure kiosks, available supply capacity, recharge demand (local, regional, system wide), and the like. [00778] In embodiments, to facilitate a cognitive charging or refueling plan, the transportation system may include a recharging plan update facility that interacts with the cloud-based artificial intelligence system 4736 to apply an adjustment value 4724 to the at least one of the plurality of recharging plan parameters 4714. An adjustment value 4724 may be further adjusted based on feedback of applying the adjustment value. In embodiments, the feedback may be used by the cloud-based artificial intelligence system 4734 to further adjust the adjustment value. In an example, feedback may impact the adjustment value applied to charging or refueling infrastructure facilities in a localized way, such as for a target recharging area 4716 or geographic range relative to one or more vehicles. In embodiments, providing a parameter adjustment value may facilitate optimizing consumption of a remaining battery charge state of at least one of the plurality of vehicles.
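For illustration of the feedback behavior described above, the Python sketch below repeatedly applies an adjustment value to a recharging plan parameter and then refines the adjustment from feedback observed after each application. The proportional update rule, the gain, and the example feedback function are assumptions of the sketch, not the disclosed adjustment logic of the cloud-based artificial intelligence system.

    # Illustrative sketch only; the proportional rule and feedback metric are assumptions.
    def update_recharging_parameter(parameter_value: float,
                                    adjustment_value: float,
                                    measure_feedback,       # callable returning an error signal
                                    gain: float = 0.5,
                                    iterations: int = 5) -> float:
        # Apply the adjustment value, then further adjust it based on feedback of applying it.
        for _ in range(iterations):
            parameter_value += adjustment_value
            error = measure_feedback(parameter_value)   # e.g., observed charge shortfall or queue time
            adjustment_value = gain * error             # feedback further adjusts the adjustment value
        return parameter_value

    # Hypothetical usage: feedback reports how far an observed charging rate is from a 50 kW target.
    # tuned_rate_kw = update_recharging_parameter(40.0, 2.0, lambda p: 50.0 - p)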
SFT-106-A-PCT [00779] By processing energy-related consumption, demand, availability, and access information and the like, the cloud-based artificial intelligence system 4736 may optimize aspects of the transportation system, such as vehicle electricity usage. The cloud-based artificial intelligence system 4736 may further optimize at least one of recharging time, location, and amount. In an example, a recharging plan parameter that may be configured and updated based on feedback may be a routing parameter for the at least one of the plurality of vehicles. [00780] The cloud-based artificial intelligence system 4736 may further optimize a transportation system charging or refueling control plan parameter 4714 to, for example, accommodate near-term charging needs for the plurality of rechargeable vehicles 4710 based on the optimized at least one parameter. The cloud-based artificial intelligence system 4736 may execute an optimizing algorithm that may calculate energy parameters (including vehicle and non-vehicle energy), optimizes electricity usage for at least vehicles and/or charging or refueling infrastructure, and optimizes at least one charging or refueling infrastructure-specific recharging time, location, and amount. [00781] In embodiments, the cloud-based artificial intelligence system 4734 may predict a geolocation 4718 of one or more vehicles within a geographic region 4716. The geographic region 4716 may include vehicles that are currently located in or predicted to be in the region and optionally may require or prefer recharging or refueling. As an example of predicting geolocation and its impact on a charging plan, a charging plan parameter may include allocation of vehicles currently in or predicted to be in the region to charging or refueling infrastructure in the geographic region 4716. In embodiments, geolocation prediction may include receiving inputs relating to charging states of a plurality of vehicles within or predicted to be within a geolocation range so that the cloud-based artificial intelligence system can optimize at least one charging plan parameter 4714 based on a prediction of geolocations of the plurality of vehicles. [00782] There are many aspects of a charging plan that may be impacted. Some aspects may be financially related, such as automated negotiation of at least one of a duration, a quantity and a price for charging or refueling a vehicle. [00783] The transportation system cognitive charging plan system may include the cloud-based artificial intelligence system being configured with a hybrid neural network. A first neural network 4722 of the hybrid neural network may be used to process inputs relating to charge or fuel states of the plurality of vehicles (directly received from the vehicles or through the vehicle information port 4732) and a second neural network 4720 of the hybrid neural network is used to process inputs relating to charging or refueling infrastructure and the like. In embodiments, the first neural network 4722 may process inputs comprising vehicle route and stored energy state information for a plurality of vehicles to predict for at least one of the plurality of vehicles a target energy renewal
SFT-106-A-PCT region. The second neural network 4720 may process vehicle energy renewal infrastructure usage and demand information for vehicle energy renewal infrastructure facilities within the target energy renewal region to determine at least one parameter 4714 of a charge infrastructure operational plan 4712 that facilitates access by the at least one of the plurality vehicles to renewal energy in the target energy renewal region 4716. In embodiments, the first and/or second neural networks may be configured as any of the neural networks described herein including without limitation convolutional type networks. [00784] In embodiments, a transportation system may be distributed and may include a cloud- based artificial intelligence system 4736 for taking inputs relating to a plurality of vehicles 4710 and determining at least one parameter 4714 of a re-charging and refueling plan 4712 for at least one of the plurality of vehicles based on the inputs. In embodiments, such inputs may be gathered in real time as plurality of vehicles 4710 connect to and deliver vehicle operational state, energy consumption and other related information. In embodiments, the inputs may relate to vehicle energy consumption and may be determined from a battery charge state of a portion of the plurality of vehicles. The inputs may include a route plan for the vehicle, an indicator of the value of charging of the vehicle, and the like. The inputs may include predicted traffic conditions for the plurality of vehicles. The distributed transportation system may also include cloud-based and vehicle-based systems that exchange information about the vehicle, such as energy consumption and operational information and information about the transportation system, such as recharging or refueling infrastructure. The cloud-based artificial intelligence system may respond to transportation system and vehicle information shared by the cloud and vehicle-based system with control parameters that facilitate executing a cognitive charging plan for at least a portion of charging or refueling infrastructure of the transportation system. The cloud-based artificial intelligence system 4736 may determine, provide, adjust or create at least one charging plan parameter 4714 upon which a charging plan 4712 for at least a portion of the plurality of vehicles 4710 is dependent. This dependency may yield changes in the execution of the charging plan 4712 by at least one the cloud-based and vehicle-based systems, such as when a processor executes a program derived from or based on the charging plan 4712. [00785] In embodiments, a cloud-based artificial intelligence system of a transportation system may facilitate execution of a cognitive charging plan by applying a vehicle recharging facility utilization optimization algorithm to a plurality of rechargeable vehicle-specific inputs, e.g., current operating state data for rechargeable vehicles present in a target recharging range of one of the plurality of rechargeable vehicles. The cloud-based artificial intelligence system may also evaluate an impact of a plurality of recharging plan parameters on recharging infrastructure of the transportation system in the target recharging range. The cloud-based artificial intelligence system
may select at least one of the plurality of recharging plan parameters that facilitates, for example, optimizing energy usage by the plurality of rechargeable vehicles and generate an adjustment value for the at least one of the plurality of recharging plan parameters. The cloud-based artificial intelligence system may further predict a near-term need for recharging for a portion of the plurality of rechargeable vehicles within the target region based on, for example, operational status of the plurality of rechargeable vehicles that may be determined from the rechargeable vehicle-specific inputs. Based on this prediction and near-term recharging infrastructure availability and capacity information, the cloud-based artificial intelligence system may optimize at least one parameter of the recharging plan. In embodiments, the cloud-based artificial intelligence system may operate a hybrid neural network for the predicting and parameter selection or adjustment. In an example, a first portion of the hybrid neural network may process inputs that relate to route plans for one or more rechargeable vehicles. In the example, a second portion of the hybrid neural network that is distinct from the first portion may process inputs relating to recharging infrastructure within a recharging range of at least one of the rechargeable vehicles. In this example, the second distinct portion of the hybrid neural network predicts the geolocation of a plurality of vehicles within the target region. To facilitate execution of the recharging plan, the parameter may impact an allocation of vehicles to at least a portion of recharging infrastructure within the predicted geographic region.
[00786] In embodiments, vehicles described herein may comprise a system for automating at least one control parameter of the vehicle. The vehicles may operate as at least semi-autonomous vehicles. The vehicles may be automatically routed. Also, the vehicles, recharging and otherwise, may be self-driving vehicles.
[00787] Referring to Fig. 48, provided herein are transportation systems 4811 having a robotic process automation system 48181 (RPA system). In embodiments, data is captured for each of a set of individuals/users 4891 as the individuals/users 4890 interact with a user interface 4823 of a vehicle 4810, and an artificial intelligence system 4836 is trained using the data and interacts with the vehicle 4810 to automatically undertake actions with the vehicle 4810 on behalf of the user 4890. Data 48114 collected for the RPA system 48181 may include a sequence of images, sensor data, telemetry data, or the like, among many other types of data described throughout this disclosure. Interactions of an individual/user 4890 with a vehicle 4810 may include interactions with various vehicle interfaces as described throughout this disclosure. For example, the robotic process automation (RPA) system 48181 may observe patterns of a driver, such as braking patterns, typical following distance behind other vehicles, approach to curves (e.g., entry angle, entry speed, exit angle, exit speed and the like), acceleration patterns, lane preferences, passing preferences, and the like. Such patterns may be obtained through vision systems 48186 (e.g., ones observing the driver, the steering wheel, the brake, the surrounding environment 48171, and the like), through
SFT-106-A-PCT vehicle data systems 48185 (e.g., data streams indicating states and changes in state in steering, braking and the like, as well as forward and rear-facing cameras and sensors), through connected systems 48187 (e.g., GPS, cellular systems, and other network systems, as well as peer-to-peer, vehicle-to-vehicle, mesh and cognitive networks, among others), and other sources. Using a training data set, the RPA system 48181, such as via a neural network 48108 of any of the types described herein, may learn to drive in the same style as a driver. In embodiments, the RPA system 48181 may learn changes in style, such as varying levels of aggressiveness in different situations, such as based on time of day, length of trip, type of trip, or the like. Thus, a self-driving car may learn to drive like its typical driver. Similarly, an RPA system 48181 may be used to observe driver, passenger, or other individual interactions with a navigation system, an audio entertainment system, a video entertainment system, a climate control system, a seat warming and/or cooling system, a steering system, a braking system, a mirror system, a window system, a door system, a trunk system, a fueling system, a moonroof system, a ventilation system, a lumbar support system, a seat positioning system, a GPS system, a WIFI system, a glovebox system, or other systems. [00788] An aspect provided herein includes a system 4811 for transportation, comprising: a robotic process automation system 48181. In embodiments, a set of data is captured for each user 4890 in a set of users 4891 as each user 4890 interacts with a user interface 4823 of a vehicle 4810. In embodiments, an artificial intelligence system 4836 is trained using the set of data 48114 to interact with the vehicle 4810 to automatically undertake actions with the vehicle 4810 on behalf of the user 4890. [00789] Fig. 49 illustrates a method 4900 of robotic process automation to facilitate mimicking human operator operation of a vehicle in accordance with embodiments of the systems and methods disclosed herein. At 4902 the method includes tracking human interactions with a vehicle control-facilitating interface. At 4904 the method includes recording the tracked human interactions in a robotic process automation system training data structure. At 4906 the method includes tracking vehicle operational state information of the vehicle. In embodiments, the vehicle is to be controlled through the vehicle control-facilitating interface. At 4908 the method includes recording the vehicle operational state information in the robotic process automation system training data structure. At 4909 the method includes training, through the use of at least one neural network, an artificial intelligence system to operate the vehicle in a manner consistent with the human interactions based on the human interactions and the vehicle operational state information in the robotic process automation system training data structure. [00790] In embodiments, the method further comprises controlling at least one aspect of the vehicle with the trained artificial intelligence system. In embodiments, the method further comprises applying deep learning to the controlling the at least one aspect of the vehicle by
SFT-106-A-PCT structured variation in the controlling the at least one aspect of the vehicle to mimic the human interactions and processing feedback from the controlling the at least one aspect of the vehicle with machine learning. In embodiments, the controlling at least one aspect of the vehicle is performed via the vehicle control-facilitating interface. [00791] In embodiments, the controlling at least one aspect of the vehicle is performed by the artificial intelligence system emulating the control-facilitating interface being operated by the human. In embodiments, the vehicle control-facilitating interface comprises at least one of an audio capture system to capture audible expressions of the human, a human-machine interface, a mechanical interface, an optical interface and a sensor-based interface. In embodiments, the tracking vehicle operational state information comprises tracking at least one of a set of vehicle systems and a set of vehicle operational processes affected by the human interactions. In embodiments, the tracking vehicle operational state information comprises tracking at least one vehicle system element. In embodiments, the at least one vehicle system element is controlled via the vehicle control-facilitating interface. In embodiments, the at least one vehicle system element is affected by the human interactions. In embodiments, the tracking vehicle operational state information comprises tracking the vehicle operational state information before, during, and after the human interaction. [00792] In embodiments, the tracking vehicle operational state information comprises tracking at least one of a plurality of vehicle control system outputs that result from the human interactions and vehicle operational results achieved in response to the human interactions. In embodiments, the vehicle is to be controlled to achieve results that are consistent with results achieved via the human interactions. In embodiments, the method further comprises tracking and recording conditions proximal to the vehicle with a plurality of vehicle mounted sensors. In embodiments, the training of the artificial intelligence system is further responsive to the conditions proximal to the vehicle tracked contemporaneously to the human interactions. In embodiments, the training is further responsive to a plurality of data feeds from remote sensors, the plurality of data feeds comprising data collected by the remote sensors contemporaneous to the human interactions. In embodiments, the artificial intelligence system employs a workflow that involves decision-making and the robotic process automation system facilitates automation of the decision-making. In embodiments, the artificial intelligence system employs a workflow that involves remote control of the vehicle and the robotic process automation system facilitates automation of remotely controlling the vehicle. [00793] An aspect provided herein includes a transportation system 4811 for mimicking human operation of a vehicle 4810, comprising: a robotic process automation system 48181 comprising: an operator data collection module 48182 to capture human operator interaction with a vehicle
SFT-106-A-PCT control system interface 48191; a vehicle data collection module 48183 to capture vehicle response and operating conditions associated at least contemporaneously with the human operator interaction; and an environment data collection module 48184 to capture instances of environmental information associated at least contemporaneously with the human operator interaction; and an artificial intelligence system 4836 to learn to mimic the human operator (e.g., user 4890) to control the vehicle 4810 responsive to the robotic process automation system 48181 detecting data 48114 indicative of at least one of a plurality of the instances of environmental information associated with the contemporaneously captured vehicle response and operating conditions. [00794] In embodiments, the operator data collection module 48182 is to capture patterns of data including braking patterns, follow-behind distance, approach to curve acceleration patterns, lane preferences, and passing preferences. In embodiments, vehicle data collection module 48183 captures data from a plurality of vehicle data systems 48185 that provide data streams indicating states and changes in state in steering, braking, acceleration, forward looking images, and rear- looking images. In embodiments, the artificial intelligence system 4836 includes a neural network 48108 for training the artificial intelligence system 4836. [00795] Fig. 50 illustrates a robotic process automation method 5000 of mimicking human operation of a vehicle in accordance with embodiments of the systems and methods disclosed herein. At 5002 the method includes capturing human operator interactions with a vehicle control system interface. At 5004 the method includes capturing vehicle response and operating conditions associated at least contemporaneously with the human operator interaction. At 5006 the method includes capturing instances of environmental information associated at least contemporaneously with the human operator interaction. At 5008 the method includes training an artificial intelligence system to control the vehicle mimicking the human operator responsive to the environment data collection module detecting data indicative of at least one of a plurality of the instances of environmental information associated with the contemporaneously captured vehicle response and operating conditions. [00796] In embodiments, the method further comprises applying deep learning in the artificial intelligence system to optimize a margin of vehicle operating safety by affecting the controlling of the at least one aspect of the vehicle by structured variation in the controlling of the at least one aspect of the vehicle to mimic the human interactions and processing feedback from the controlling the at least one aspect of the vehicle with machine learning. In embodiments, the robotic process automation system facilitates automation of a decision-making workflow employed by the artificial intelligence system. In embodiments, the robotic process automation system facilitates automation
SFT-106-A-PCT of a remote control workflow that the artificial intelligence system employs to remotely control the vehicle. [00797] Referring to Fig. 51, a transportation system 5111 is provided having an artificial intelligence system 5136 that automatically randomizes a parameter of an in-vehicle experience in order to improve a user state that benefits from variation. In embodiments, a system used to control a driver or passenger experience (such as in a self-driving car, assisted car, or conventional vehicle) may be configured to automatically undertake actions based on an objective or feedback function, such as where an artificial intelligence system 5136 is trained on outcomes from a training data set to provide outputs to one or more vehicle systems to improve health, satisfaction, mood, safety, one or more financial metrics, efficiency, or the like. [00798] Such systems may involve a wide range of in-vehicle experience parameters (including any of the experience parameters described herein, such as driving experience (including assisted and self-driving, as well as vehicle responsiveness to inputs, such as in controlled suspension performance, approaches to curves, braking and the like), seat positioning (including lumbar support, leg room, seatback angle, seat height and angle, etc.), climate control (including ventilation, window or moonroof state (e.g., open or closed), temperature, humidity, fan speed, air motion and the like), sound (e.g., volume, bass, treble, individual speaker control, focus area of sound, etc.), content (audio, video and other types, including music, news, advertising and the like), route selection (e.g., for speed, for road experience (e.g., smooth or rough, flat or hilly, straight or curving), for points of interest (POIs), for view (e.g., scenic routes), for novelty (e.g., to see different locations), and/or for defined purposes (e.g., shopping opportunities, saving fuel, refueling opportunities, recharging opportunities, or the like). [00799] In many situations, variation of one or more vehicle experience parameters may provide or result in a preferred state for a vehicle 5110 (or set of vehicles), a user (such as vehicle rider 51120), or both, as compared to seeking to find a single optimized state of such a parameter. For example, while a user may have a preferred seat position, sitting in the same position every day, or during an extended period on the same day, may have adverse effects, such as placing undue pressure on certain joints, promoting atrophy of certain muscles, diminishing flexibility of soft tissue, or the like. In such a situation, an automated control system (including one that is configured to use artificial intelligence of any of the types described herein) may be configured to induce variation in one or more of the user experience parameters described herein, optionally with random variation or with variation that is according to a prescribed pattern, such as one that may be prescribed according to a regimen, such as one developed to provide physical therapy, chiropractic, or other medical or health benefits. As one example, seat positioning may be varied over time to promote health of joints, muscles, ligaments, cartilage or the like. As another example,
SFT-106-A-PCT consistent with evidence that human health is improved when an individual experiences significant variations in temperature, humidity, and other climate factors, a climate control system may be varied (randomly or according to a defined regimen) to provide varying temperature, humidity, fresh air (including by opening windows or ventilation) or the like in order to improve the health, mood, or alertness of a user. [00800] An artificial intelligence-based control system 5136 may be trained on a set of outcomes (of various types described herein) to provide a level of variation of a user experience that achieves desired outcomes, including selection of the timing and extent of such variations. As another example, an audio system may be varied to preserve hearing (such as based on tracking accumulated sound pressure levels, accumulated dosage, or the like), to promote alertness (such as by varying the type of content), and/or to improve health (such as by providing a mix of stimulating and relaxing content). In embodiments, such an artificial intelligence system 5136 may be fed sensor data 51444, such as from a wearable device 51157 (including a sensor set) or a physiological sensing system 51190, which includes a set of systems and/or sensors capable of providing physiological monitoring within a vehicle 5110 (e.g., a vison-based system 51186 that observes a user, a sensor 5125 embedded in a seat, a steering wheel, or the like that can measure a physiological parameter, or the like). For example, a vehicle interface 51188 (such as a steering wheel or any other interface described herein) can measure a physiological parameter (e.g., galvanic skin response, such as to indicate a stress level, cortisol level, or the like of a driver or other user), which can be used to indicate a current state for purposes of control or can be used as part of a training data set to optimize one or more parameters that may benefit from control, including control of variation of user experience to achieve desired outcomes. In one such example, an artificial intelligence system 5136 may vary parameters, such as driving experience, music and the like, to account for changes in hormonal systems of the user (such as cortisol and other adrenal system hormones), such as to induce healthy changes in state (consistent with evidence that varying cortisol levels over the course of a day are typical in healthy individuals, but excessively high or low levels at certain times of day may be unhealthy or unsafe). Such a system may, for example, “amp up” the experience with more aggressive settings (e.g., more acceleration into curves, tighter suspension, and/or louder music) in the morning when rising cortisol levels are healthy and “mellow out” the experience (such as by softer suspension, relaxing music and/or gentle driving motion) in the afternoon when cortisol levels should be dropping to lower levels to promote health. Experiences may consider both health of the user and safety, such as by ensuring that levels vary over time, but are sufficiently high to assure alertness (and hence safety) in situations where high alertness is required. While cortisol (an important hormone) is provided as an example, user experience parameters may be controlled (optionally with random or configured variation) with
SFT-106-A-PCT respect to other hormonal or biological systems, including insulin-related systems, cardiovascular systems (e.g., relating to pulse and blood pressure), gastrointestinal systems, and many others. [00801] An aspect provided herein includes a system for transportation 5111, comprising: an artificial intelligence system 5136 to automatically randomize a parameter of an in-vehicle experience to improve a user state. In embodiments, the user state benefits from variation of the parameter. [00802] An aspect provided herein includes a system for transportation 5111, comprising: a vehicle interface 51188 for gathering physiological sensed data of a rider 51120 in the vehicle 5110; and an artificial intelligence-based circuit 51189 that is trained on a set of outcomes related to rider in- vehicle experience and that induces, responsive to the sensed rider physiological data, variation in one or more of the user experience parameters to achieve at least one desired outcome in the set of outcomes, the inducing variation including control of timing and extent of the variation. [00803] In embodiments, the induced variation includes random variation. In embodiments, the induced variation includes variation that is according to a prescribed pattern. In embodiments, the prescribed pattern is prescribed according to a regimen. In embodiments, the regimen is developed to provide at least one of physical therapy, chiropractic, and other medical health benefits. In embodiments, the one or more user experience parameters affect at least one of seat position, temperature, humidity, cabin air source, or audio output. In embodiments, the vehicle interface 51188 comprises at least one wearable sensor 51157 disposed to be worn by the rider 51120. In embodiments, the vehicle interface 51188 comprises a vision system 51186 disposed to capture and analyze images from a plurality of perspectives of the rider 51120. In embodiments, the variation in one or more of the user experience parameters comprises variation in control of the vehicle 5110. [00804] In embodiments, variation in control of the vehicle 5110 includes configuring the vehicle 5110 for aggressive driving performance. In embodiments, variation in control of the vehicle 5110 includes configuring the vehicle 5110 for non-aggressive driving performance. In embodiments, the variation is responsive to the physiological sensed data that includes an indication of a hormonal level of the rider 51120, and the artificial intelligence-based circuit 51189 varies the one or more user experience parameters to promote a hormonal state that promotes rider safety. [00805] Referring now to Fig. 52, also provided herein are transportation systems 5211 having a system 52192 for taking an indicator of a hormonal system level of a user 5290 and automatically varying a user experience in the vehicle 5210 to promote a hormonal state that promotes safety. [00806] An aspect provided herein includes a system for transportation 5211, comprising: a system 52192 for detecting an indicator of a hormonal system level of a user 5290 and automatically varying a user experience in a vehicle 5210 to promote a hormonal state that promotes safety.
[00807] An aspect provided herein includes a system for transportation 5211 comprising: a vehicle interface 52188 for gathering hormonal state data of a rider (e.g., user 5290) in the vehicle 5210; and an artificial intelligence-based circuit 52189 that is trained on a set of outcomes related to rider in-vehicle experience and that induces, responsive to the sensed rider hormonal state data, variation in one or more of the user experience parameters to achieve at least one desired outcome in the set of outcomes, the set of outcomes including at least one outcome that promotes rider safety, the inducing variation including control of timing and extent of the variation.
[00808] In embodiments, the variation in the one or more user experience parameters is controlled by the artificial intelligence system 5236 to promote a desired hormonal state of the rider (e.g., user 5290). In embodiments, the desired hormonal state of the rider promotes safety. In embodiments, the at least one desired outcome in the set of outcomes is the at least one outcome that promotes rider safety. In embodiments, the variation in the one or more user experience parameters includes varying at least one of a food and a beverage offered to the rider (e.g., user 5290). In embodiments, the one or more user experience parameters affect at least one of seat position, temperature, humidity, cabin air source, or audio output. In embodiments, the vehicle interface 52188 comprises at least one wearable sensor 52157 disposed to be worn by the rider (e.g., user 5290).
[00809] In embodiments, the vehicle interface 52188 comprises a vision system 52186 disposed to capture and analyze images from a plurality of perspectives of the rider (e.g., user 5290). In embodiments, the variation in one or more of the user experience parameters comprises variation in control of the vehicle 5210. In embodiments, variation in control of the vehicle 5210 includes configuring the vehicle 5210 for aggressive driving performance. In embodiments, variation in control of the vehicle 5210 includes configuring the vehicle 5210 for non-aggressive driving performance.
[00810] Referring to Fig. 53, provided herein are transportation systems 5311 having a system for optimizing at least one of a vehicle parameter 53159 and a user experience parameter 53205 to provide a margin of safety 53204. In embodiments, the margin of safety 53204 may be a user-selected margin of safety or user-based margin of safety, such as selected based on a profile of a user or actively selected by a user, such as by interaction with a user interface, or selected based on a profile developed by tracking user behavior, including behavior in a vehicle 5310 and in other contexts, such as on social media, in e-commerce, in consuming content, in moving from place-to-place, or the like. In many situations, there is a tradeoff between optimizing the performance of a dynamic system (such as to achieve some objective function, like fuel efficiency) and one or more risks that are present in the system. This is particularly true in situations where there is some asymmetry between the benefits of optimizing one or more parameters and the risks that are present
SFT-106-A-PCT in the dynamic systems in which the parameter plays a role. As an example, seeking to minimize travel time (such as for a daily commute), leads to an increased likelihood of arriving late, because a wide range of effects in dynamic systems, such as ones involving vehicle traffic, tend to cascade and periodically produce travel times that vary widely (and quite often adversely). Variances in many systems are not symmetrical; for example, unusually uncrowded roads may improve a 30- mile commute time by a few minutes, but an accident, or high congestion, can increase the same commute by an hour or more. Thus, to avoid risks that have high adverse consequences, a wide margin of safety may be required. In embodiments, systems are provided herein for using an expert system (which may be model-based, rule-based, deep learning, a hybrid, or other intelligent systems as described herein) to provide a desired margin of safety with respect to adverse events that are present in transportation-related dynamic systems. The margin of safety 53204 may be provided via an output of the expert system 5336, such as an instruction, a control parameter for a vehicle 5310 or an in-vehicle user experience, or the like. An artificial intelligence system 5336 may be trained to provide the margin of safety 53204 based on a training set of data based on outcomes of transportation systems, such as traffic data, weather data, accident data, vehicle maintenance data, fueling and charging system data (including in-vehicle data and data from infrastructure systems, such as charging stations, fueling stations, and energy production, transportation, and storage systems), user behavior data, user health data, user satisfaction data, financial information (e.g., user financial information, pricing information (e.g., for fuel, for food, for accommodations along a route, and the like), vehicle safety data, failure mode data, vehicle information system data, and the like), and many other types of data as described herein and in the documents incorporated by reference herein. [00811] An aspect provided herein includes a system for transportation 5311, comprising: a system for optimizing at least one of a vehicle parameter 53159 and a user experience parameter 53205 to provide a margin of safety 53204. [00812] An aspect provided herein includes a transportation system 5311 for optimizing a margin of safety when mimicking human operation of a vehicle 5310, the transportation system 5311 comprising: a set of robotic process automation systems 53181 comprising: an operator data collection module 53182 to capture human operator 5390 interactions 53201 with a vehicle control system interface 53191; a vehicle data collection module 53183 to capture vehicle response and operating conditions associated at least contemporaneously with the human operator interaction 53201; an environment data collection module 53184 to capture instances of environmental information 53203 associated at least contemporaneously with the human operator interactions 53201; and an artificial intelligence system 5336 to learn to control the vehicle 5310 with an optimized margin of safety while mimicking the human operator. In embodiments, the artificial
SFT-106-A-PCT intelligence system 5336 is responsive to the robotic process automation system 53181. In embodiments, the artificial intelligence system 5336 is to detect data indicative of at least one of a plurality of the instances of environmental information associated with the contemporaneously captured vehicle response and operating conditions. In embodiments, the optimized margin of safety is to be achieved by training the artificial intelligence system 5336 to control the vehicle 5310 based on a set of human operator interaction data collected from interactions of a set of expert human vehicle operators with the vehicle control system interface 53191. [00813] In embodiments, the operator data collection module 53182 captures patterns of data including braking patterns, follow-behind distance, approach to curve acceleration patterns, lane preferences, or passing preferences. In embodiments, the vehicle data collection module 53183 captures data from a plurality of vehicle data systems that provide data streams indicating states and changes in state in steering, braking, acceleration, forward looking images, or rear-looking images. In embodiments, the artificial intelligence system includes a neural network 53108 for training the artificial intelligence system 53114. [00814] Fig.54 illustrates a method 5400 of robotic process automation for achieving an optimized margin of vehicle operational safety in accordance with embodiments of the systems and methods disclosed herein. At 5402 the method includes tracking expert vehicle control human interactions with a vehicle control-facilitating interface. At 5404 the method includes recording the tracked expert vehicle control human interactions in a robotic process automation system training data structure. At 5406 the method includes tracking vehicle operational state information of a vehicle. At 5407 the method includes recording vehicle operational state information in the robotic process automation system training data structure. At 5408 the method includes training, via at least one neural network, the vehicle to operate with an optimized margin of vehicle operational safety in a manner consistent with the expert vehicle control human interactions based on the expert vehicle control human interactions and the vehicle operational state information in the robotic process automation system training data structure. At 5409 the method includes controlling at least one aspect of the vehicle with the trained artificial intelligence system. [00815] Referring to Fig.53 and Fig.54, in embodiments, the method further comprises applying deep learning to optimize the margin of vehicle operational safety by controlling the at least one aspect of the vehicle through structured variation in the controlling the at least one aspect of the vehicle to mimic the expert vehicle control human interactions 53201 and processing feedback from the controlling the at least one aspect of the vehicle with machine learning. In embodiments, the controlling at least one aspect of the vehicle is performed via the vehicle control-facilitating interface 53191. In embodiments, the controlling at least one aspect of the vehicle is performed by the artificial intelligence system emulating the control-facilitating interface being operated by the
SFT-106-A-PCT expert vehicle control human 53202. In embodiments, the vehicle control-facilitating interface 53191 comprises at least one of an audio capture system to capture audible expressions of the expert vehicle control human, a human-machine interface, mechanical interface, an optical interface and a sensor-based interface. In embodiments, the tracking vehicle operational state information comprises tracking at least one of vehicle systems and vehicle operational processes affected by the expert vehicle control human interactions. In embodiments, the tracking vehicle operational state information comprises tracking at least one vehicle system element. In embodiments, the at least one vehicle system element is controlled via the vehicle control- facilitating interface. In embodiments, the at least one vehicle system element is affected by the expert vehicle control human interactions. [00816] In embodiments, the tracking vehicle operational state information comprises tracking the vehicle operational state information before, during, and after the expert vehicle control human interaction. In embodiments, the tracking vehicle operational state information comprises tracking at least one of a plurality of vehicle control system outputs that result from the expert vehicle control human interactions and vehicle operational results achieved responsive to the expert vehicle control human interactions. In embodiments, the vehicle is to be controlled to achieve results that are consistent with results achieved via the expert vehicle control human interactions. [00817] In embodiments, the method further comprises tracking and recording conditions proximal to the vehicle with a plurality of vehicle mounted sensors. In embodiments, the training of the artificial intelligence system is further responsive to the conditions proximal to the vehicle tracked contemporaneously to the expert vehicle control human interactions. In embodiments, the training is further responsive to a plurality of data feeds from remote sensors, the plurality of data feeds comprising data collected by the remote sensors contemporaneous to the expert vehicle control human interactions. [00818] Fig. 55 illustrates a method 5500 for mimicking human operation of a vehicle by robotic process automation in accordance with embodiments of the systems and methods disclosed herein. At 5502 the method includes capturing human operator interactions with a vehicle control system interface operatively connected to a vehicle. At 5504 the method includes capturing vehicle response and operating conditions associated at least contemporaneously with the human operator interaction. At 5506 the method includes capturing environmental information associated at least contemporaneously with the human operator interaction. At 5508 the method includes training an artificial intelligence system to control the vehicle with an optimized margin of safety while mimicking the human operator, the artificial intelligence system taking input from the environment data collection module about the instances of environmental information associated with the contemporaneously collected vehicle response and operating conditions. In embodiments, the
SFT-106-A-PCT optimized margin of safety is achieved by training the artificial intelligence system to control the vehicle based on a set of human operator interaction data collected from interactions of an expert human vehicle operator and a set of outcome data from a set of vehicle safety events. [00819] Referring to Figs.53 and 55 in embodiments, the method further comprises: applying deep learning of the artificial intelligence system 53114 to optimize a margin of vehicle operating safety 53204 by affecting a controlling of at least one aspect of the vehicle through structured variation in control of the at least one aspect of the vehicle to mimic the expert vehicle control human interactions 53201 and processing feedback from the controlling of the at least one aspect of the vehicle with machine learning. In embodiments, the artificial intelligence system employs a workflow that involves decision-making and the robotic process automation system 53181 facilitates automation of the decision-making. In embodiments, the artificial intelligence system employs a workflow that involves remote control of the vehicle and the robotic process automation system facilitates automation of remotely controlling the vehicle 5310. [00820] Referring now to Fig. 56, a transportation system 5611 is depicted which includes an interface 56133 by which a set of expert systems 5657 may be configured to provide respective outputs 56193 for managing at least one of a set of vehicle parameters, a set of fleet parameters and a set of user experience parameters. [00821] Such an interface 56133 may include a graphical user interface (such as having a set of visual elements, menu items, forms, and the like that can be manipulated to enable selection and/or configuration of an expert system 5657), an application programming interface, an interface to a computing platform (e.g., a cloud-computing platform, such as to configure parameters of one or more services, programs, modules, or the like), and others. For example, an interface 56133 may be used to select a type of expert system 5657, such as a model (e.g., a selected model for representing behavior of a vehicle, a fleet or a user, or a model representing an aspect of an environment relevant to transportation, such as a weather model, a traffic model, a fuel consumption model, an energy distribution model, a pricing model or the like), an artificial intelligence system (such as selecting a type of neural network, deep learning system, or the like, of any type described herein), or a combination or hybrid thereof. For example, a user may, in an interface 56133, elect to use the European Center for Medium-Range Weather Forecast (ECMWF) to forecast weather events that may impact a transportation environment, along with a recurrent neural network for forecasting user shopping behavior (such as to indicate likely preferences of a user along a traffic route). [00822] Thus, an interface 56133 may be configured to provide a host, manager, operator, service provider, vendor, or other entity interacting within or with a transportation system 5611 with the ability to review a range of models, expert systems 5657, neural network categories, and the like.
SFT-106-A-PCT The interface 56133 may optionally be provided with one or more indicators of suitability for a given purpose, such as one or more ratings, statistical measures of validity, or the like. The interface 56133 may also be configured to select a set (e.g., a model, expert system, neural network, etc.) that is well adapted for purposes of a given transportation system, environment, and purpose. In embodiments, such an interface 56133 may allow a user 5690 to configure one or more parameters of an expert system 5657, such as one or more input data sources to which a model is to be applied and/or one or more inputs to a neural network, one or more output types, targets, durations, or purposes, one or more weights within a model or an artificial intelligence system, one or more sets of nodes and/or interconnections within a model, graph structure, neural network, or the like, one or more time periods of input, output, or operation, one or more frequencies of operation, calculation, or the like, one or more rules (such as rules applying to any of the parameters configured as described herein or operating upon any of the inputs or outputs noted herein), one or more infrastructure parameters (such as storage parameters, network utilization parameters, processing parameters, processing platform parameters, or the like). As one example among many other possible examples, a user 5690 may configure a selected neural network to take inputs from a weather model, a traffic model, and a real-time traffic reporting system in order to provide a real- time output 56193 to a routing system for a vehicle 5610, where the neural network is configured to have ten million nodes and to undertake processing on a selected cloud platform. [00823] In embodiments, the interface 56133 may include elements for selection and/or configuration of a purpose, an objective or a desired outcome of a system and/or sub-system, such as one that provides input, feedback, or supervision to a model, to a machine learning system, or the like. For example, a user 5690 may be allowed, in an interface 56133, to select among modes (e.g., comfort mode, sports mode, high-efficiency mode, work mode, entertainment mode, sleep mode, relaxation mode, long-distance trip mode, or the like) that correspond to desired outcomes, which may include emotional outcomes, financial outcomes, performance outcomes, trip duration outcomes, energy utilization outcomes, environmental impact outcomes, traffic avoidance outcomes, or the like. Outcomes may be declared with varying levels of specificity. Outcomes may be defined by or for a given user 5690 (such as based on a user profile or behavior) or for a group of users (such as by one or more functions that harmonizes outcomes according to multiple user profiles, such as by selecting a desired configuration that is consistent with an acceptable state for each of a set of riders). As an example, a rider may indicate a preferred outcome of active entertainment, while another rider may indicate a preferred outcome of maximum safety. In such a case, the interface 56133 may provide a reward parameter to a model or expert system 5657 for actions that reduce risk and for actions that increase entertainment, resulting in outcomes that are consistent with objectives of both riders. Rewards may be weighted, such as to optimize a set of
SFT-106-A-PCT outcomes. Competition among potentially conflicting outcomes may be resolved by a model, by rule (e.g., a vehicle owner’s objectives may be weighted higher than other riders, a parent’s over a child, or the like), or by machine learning, such as by using genetic programming techniques (such as by varying combinations of weights and/or outcomes randomly or systematically and determining overall satisfaction of a rider or set of riders). [00824] An aspect provided herein includes a system for transportation 5611, comprising: an interface 56133 to configure a set of expert systems 5657 to provide respective outputs 56193 for managing a set of parameters selected from the group consisting of a set of vehicle parameters, a set of fleet parameters, a set of user experience parameters, and combinations thereof. [00825] An aspect provided herein includes a system for configuration management of components of a transportation system 5611 comprising: an interface 56133 comprising: a first portion 56194 of the interface 56133 for configuring a first expert computing system of the expert computing systems 5657 for managing a set of vehicle parameters; a second portion 56195 of the interface 56133 for configuring a second expert computing system of the expert computing systems 5657 for managing a set of vehicle fleet parameters; and a third portion 56196 of the interface 56133 for configuring a third expert computing system for managing a set of user experience parameters. In embodiments, the interface 56133 is a graphical user interface through which a set of visual elements 56197 presented in the graphical user interface, when manipulated in the interface 56133 causes at least one of selection and configuration of one or more of the first, second, and third expert systems 5657. In embodiments, the interface 56133 is an application programming interface. In embodiments, the interface 56133 is an interface to a cloud-based computing platform through which one or more transportation-centric services, programs and modules are configured. [00826] An aspect provided herein includes a transportation system 5611 comprising: an interface 56133 for configuring a set of expert systems 5657 to provide outputs 56193 based on which the transportation system 5611 manages transportation-related parameters. In embodiments, the parameters facilitate operation of at least one of a set of vehicles, a fleet of vehicles, and a transportation system user experience; and a plurality of visual elements 56197 representing a set of attributes and parameters of the set of expert systems 5657 that are configurable by the interface 56133 and a plurality of the transportation systems 5611. In embodiments, the interface 56133 is configured to facilitate manipulating the visual elements 56197 thereby causing configuration of the set of expert systems 5657. In embodiments, the plurality of the transportation systems comprises a set of vehicles 5610. [00827] In embodiments, the plurality of the transportation systems comprises a set of infrastructure elements 56198 supporting a set of vehicles 5610. In embodiments, the set of
SFT-106-A-PCT infrastructure elements 56198 comprises vehicle fueling elements. In embodiments, the set of infrastructure elements 56198 comprises vehicle charging elements. In embodiments, the set of infrastructure elements 56198 comprises traffic control lights. In embodiments, the set of infrastructure elements 56198 comprises a toll booth. In embodiments, the set of infrastructure elements 56198 comprises a rail system. In embodiments, the set of infrastructure elements 56198 comprises automated parking facilities. In embodiments, the set of infrastructure elements 56198 comprises vehicle monitoring sensors. [00828] In embodiments, the visual elements 56197 display a plurality of models that can be selected for use in the set of expert systems 5657. In embodiments, the visual elements 56197 display a plurality of neural network categories that can be selected for use in the set of expert systems 5657. In embodiments, at least one of the plurality of neural network categories includes a convolutional neural network. In embodiments, the visual elements 56197 include one or more indicators of suitability of items represented by the plurality of visual elements 56197 for a given purpose. In embodiments, configuring a plurality of expert systems 5657 comprises facilitating selection sources of inputs for use by at least a portion of the plurality of expert systems 5657. In embodiments, the interface 56133 facilitates selection, for at least a portion of the plurality of expert systems 5657, one or more output types, targets, durations, and purposes. [00829] In embodiments, the interface 56133 facilitates selection, for at least a portion of the plurality of expert systems 5657, of one or more weights within a model or an artificial intelligence system. In embodiments, the interface 56133 facilitates selection, for at least a portion of the plurality of expert systems 5657, of one or more sets of nodes or interconnections within a model. In embodiments, the interface 56133 facilitates selection, for at least a portion of the plurality of expert systems 5657, of a graph structure. In embodiments, the interface 56133 facilitates selection, for at least a portion of the plurality of expert systems 5657, of a neural network. In embodiments, the interface facilitates selection, for at least a portion of the plurality of expert systems, of one or more time periods of input, output, or operation. [00830] In embodiments, the interface 56133 facilitates selection, for at least a portion of the plurality of expert systems 5657, of one or more frequencies of operation. In embodiments, the interface 56133 facilitates selection, for at least a portion of the plurality of expert systems 5657, of frequencies of calculation. In embodiments, the interface 56133 facilitates selection, for at least a portion of the plurality of expert systems 5657, of one or more rules for applying to the plurality of parameters. In embodiments, the interface 56133 facilitates selection, for at least a portion of the plurality of expert systems 5657, of one or more rules for operating upon any of the inputs or upon the provided outputs. In embodiments, the plurality of parameters comprise one or more
SFT-106-A-PCT infrastructure parameters selected from the group consisting of storage parameters, network utilization parameters, processing parameters, and processing platform parameters. [00831] In embodiments, the interface 56133 facilitates selecting a class of an artificial intelligence computing system, a source of inputs to the selected artificial intelligence computing system, a computing capacity of the selected artificial intelligence computing system, a processor for executing the artificial intelligence computing system, and an outcome objective of executing the artificial intelligence computing system. In embodiments, the interface 56133 facilitates selecting one or more operational modes of at least one of the vehicles 5610 in the transportation system 5611. In embodiments, the interface 56133 facilitates selecting a degree of specificity for outputs 56193 produced by at least one of the plurality of expert systems 5657. [00832] Referring now to Fig. 57, an example of a transportation system 5711 is depicted having an expert system 5757 for configuring a recommendation for a configuration of a vehicle 5710. In embodiments, the recommendation includes at least one parameter of configuration for the expert system 5757 that controls a parameter of at least one of a vehicle parameter 57159 and a user experience parameter 57205. Such a recommendation system may recommend a configuration for a user based on a wide range of information, including data sets indicating degrees of satisfaction of other users, such as user profiles, user behavior tracking (within a vehicle and outside), content recommendation systems (such as collaborative filtering systems used to recommend music, movies, video and other content), content search systems (e.g., such as used to provide relevant search results to queries), e-commerce tracking systems (such as to indicate user preferences, interests, and intents), and many others. The recommendation system 57199 may use the foregoing to profile a rider and, based on indicators of satisfaction by other riders, determine a configuration of a vehicle 5710, or an experience within the vehicle 5710, for the rider. [00833] The configuration may use similarity (such as by similarity matrix approaches, attribute- based clustering approaches (e.g., k-means clustering) or other techniques to group a rider with other similar riders. Configuration may use collaborative filtering, such as by querying a rider about particular content, experiences, and the like and taking input as to whether they are favorable or unfavorable (optionally with a degree of favorability, such as a rating system (e.g., 5 stars for a great item of content). The recommendation system 57199 may use genetic programming, such as by configuring (with random and/or systematic variation) combinations of vehicle parameters and/or user experience parameters and taking inputs from a rider or a set of riders (e.g., a large survey group) to determine a set of favorable configurations. This may occur with machine learning over a large set of outcomes, where outcomes may include various reward functions of the type described herein, including indicators of overall satisfaction and/or indicators of specific objectives. Thus, a machine learning system or other expert systems 5757 may learn to configure
the overall ride for a rider or set of riders and to recommend such a configuration for a rider. Recommendations may be based on context, such as whether a rider is alone or in a group, the time of day (or week, month or year), the type of trip, the objective of the trip, the type of road, the duration of a trip, the route, and the like.
[00834] An aspect provided herein includes a system for transportation 5711, comprising: an expert system 5757 to configure a recommendation for a vehicle configuration. In embodiments, the recommendation includes at least one parameter of configuration for the expert system 5757 that controls a parameter selected from the group consisting of a vehicle parameter 57159, a user experience parameter 57205, and combinations thereof.
[00835] An aspect provided herein includes a recommendation system 57199 for recommending a configuration of a vehicle 5710, the recommendation system 57199 comprising an expert system 5757 that produces a recommendation of a parameter for configuring a vehicle control system 57134 that controls at least one of a vehicle parameter 57159 and a vehicle rider experience parameter 57205.
[00836] In embodiments, the vehicle 5710 comprises a system for automating at least one control parameter of the vehicle 5710. In embodiments, the vehicle is at least a semi-autonomous vehicle. In embodiments, the vehicle is automatically routed. In embodiments, the vehicle is a self-driving vehicle.
[00837] In embodiments, the expert system 5757 is a neural network system. In embodiments, the expert system 5757 is a deep learning system. In embodiments, the expert system 5757 is a machine learning system. In embodiments, the expert system 5757 is a model-based system. In embodiments, the expert system 5757 is a rule-based system. In embodiments, the expert system 5757 is a random walk-based system. In embodiments, the expert system 5757 is a genetic algorithm system. In embodiments, the expert system 5757 is a convolutional neural network system. In embodiments, the expert system 5757 is a self-organizing system. In embodiments, the expert system 5757 is a pattern recognition system. In embodiments, the expert system 5757 is a hybrid artificial intelligence-based system. In embodiments, the expert system 5757 is an acyclic graph-based system.
[00838] In embodiments, the expert system 5757 produces a recommendation based on degrees of satisfaction of a plurality of riders of vehicles 5710 in the transportation system 5711. In embodiments, the expert system 5757 produces a recommendation based on a rider entertainment degree of satisfaction. In embodiments, the expert system 5757 produces a recommendation based on a rider safety degree of satisfaction. In embodiments, the expert system 5757 produces a recommendation based on a rider comfort degree of satisfaction. In embodiments, the expert system 5757 produces a recommendation based on a rider in-vehicle search degree of satisfaction.
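As a non-limiting illustration of the similarity-based grouping and collaborative-filtering techniques referenced above, the following Python sketch recommends the configuration preset rated highest by the most similar existing rider profile. The profile vectors, preset names, and ratings are invented for this example, and a production recommendation system 57199 would use the richer signals (behavior tracking, context, satisfaction outcomes) described in the text.

```python
import math

# Illustrative rider preference profiles: each vector holds normalized scores for
# (suspension firmness, audio intensity, cabin warmth, route scenery preference),
# and each ratings map records satisfaction feedback (1-5) for configuration presets.
PROFILES = {
    "rider_a": ([0.9, 0.8, 0.4, 0.2], {"sport": 5, "comfort": 2}),
    "rider_b": ([0.2, 0.3, 0.6, 0.9], {"comfort": 5, "scenic": 4}),
    "rider_c": ([0.8, 0.7, 0.5, 0.3], {"sport": 4, "comfort": 3}),
}

def cosine(u, v):
    """Cosine similarity between two preference vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(new_rider_vector, profiles=PROFILES):
    """Recommend the preset rated highest by the most similar known rider."""
    best_rider = max(profiles, key=lambda r: cosine(new_rider_vector, profiles[r][0]))
    ratings = profiles[best_rider][1]
    return max(ratings, key=ratings.get)

if __name__ == "__main__":
    # A new rider whose tracked behavior suggests a preference for firm, lively rides.
    print(recommend([0.85, 0.75, 0.45, 0.25]))  # expected: "sport"
```

Clustering approaches (e.g., k-means over the same vectors) or genetic variation of preset parameters could replace the single nearest-profile lookup without changing the overall flow of the sketch.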
SFT-106-A-PCT [00839] In embodiments, the at least one rider (or user) experience parameter 57205 is a parameter of traffic congestion. In embodiments, the at least one rider experience parameter 57205 is a parameter of desired arrival times. In embodiments, the at least one rider experience parameter 57205 is a parameter of preferred routes. In embodiments, the at least one rider experience parameter 57205 is a parameter of fuel efficiency. In embodiments, the at least one rider experience parameter 57205 is a parameter of pollution reduction. In embodiments, the at least one rider experience parameter 57205 is a parameter of accident avoidance. In embodiments, the at least one rider experience parameter 57205 is a parameter of avoiding bad weather. In embodiments, the at least one rider experience parameter 57205 is a parameter of avoiding bad road conditions. In embodiments, the at least one rider experience parameter 57205 is a parameter of reduced fuel consumption. In embodiments, the at least one rider experience parameter 57205 is a parameter of reduced carbon footprint. In embodiments, the at least one rider experience parameter 57205 is a parameter of reduced noise in a region. In embodiments, the at least one rider experience parameter 57205 is a parameter of avoiding high-crime regions. [00840] In embodiments, the at least one rider experience parameter 57205 is a parameter of collective satisfaction. In embodiments, the at least one rider experience parameter 57205 is a parameter of maximum speed limit. In embodiments, the at least one rider experience parameter 57205 is a parameter of avoidance of toll roads. In embodiments, the at least one rider experience parameter 57205 is a parameter of avoidance of city roads. In embodiments, the at least one rider experience parameter 57205 is a parameter of avoidance of undivided highways. In embodiments, the at least one rider experience parameter 57205 is a parameter of avoidance of left turns. In embodiments, the at least one rider experience parameter 57205 is a parameter of avoidance of driver-operated vehicles. [00841] In embodiments, the at least one vehicle parameter 57159 is a parameter of fuel consumption. In embodiments, the at least one vehicle parameter 57159 is a parameter of carbon footprint. In embodiments, the at least one vehicle parameter 57159 is a parameter of vehicle speed. In embodiments, the at least one vehicle parameter 57159 is a parameter of vehicle acceleration. In embodiments, the at least one vehicle parameter 57159 is a parameter of travel time. [00842] In embodiments, the expert system 5757 produces a recommendation based on at least one of user behavior of the rider (e.g., user 5790) and rider interactions with content access interfaces 57206 of the vehicle 5710. In embodiments, the expert system 5757 produces a recommendation based on similarity of a profile of the rider (e.g., user 5790) to profiles of other riders. In embodiments, the expert system 5757 produces a recommendation based on a result of collaborative filtering determined through querying the rider (e.g., user 5790) and taking input that facilitates classifying rider responses thereto on a scale of response classes ranging from favorable
SFT-106-A-PCT to unfavorable. In embodiments, the expert system 5757 produces a recommendation based on content relevant to the rider (e.g., user 5790) including at least one selected from the group consisting of classification of trip, time of day, classification of road, trip duration, configured route, and number of riders. [00843] Referring now to Fig. 58, an example transportation system 5811 is depicted having a search system 58207 that is configured to provide network search results for in-vehicle searchers. [00844] Self-driving vehicles offer their riders greatly increased opportunity to engage with in- vehicle interfaces, such as touch screens, virtual assistants, entertainment system interfaces, communication interfaces, navigation interfaces, and the like. While systems exist to display the interface of a rider’s mobile device on an in-vehicle interface, the content displayed on a mobile device screen is not necessarily tuned to the unique situation of a rider in a vehicle. In fact, riders in vehicles may be collectively quite different in their immediate needs from other individuals who engage with the interfaces, as the presence in the vehicle itself tends to indicate a number of things that are different from a user sitting at home, sitting at a desk, or walking around. One activity that engages almost all device users is searching, which is undertaken on many types of devices (desktops, mobile devices, wearable devices, and others). Searches typically include keyword entry, which may include natural language text entry or spoken queries. Queries are processed to provide search results, in one or more lists or menu elements, often involving delineation between sponsored results and non-sponsored results. Ranking algorithms typically factor in a wide range of inputs, in particular the extent of utility (such as indicated by engagement, clicking, attention, navigation, purchasing, viewing, listening, or the like) of a given search result to other users, such that more useful items are promoted higher in lists. [00845] However, the usefulness of a search result may be very different for a rider in a self-driving vehicle than for more general searchers. For example, a rider who is being driven on a defined route (as the route is a necessary input to the self-driving vehicle) may be far more likely to value search results that are relevant to locations that are ahead of the rider on the route than the same individual would be sitting at the individual’s desk at work or on a computer at home. Accordingly, conventional search engines may fail to deliver the most relevant results, deliver results that crowd out more relevant results, and the like, when considering the situation of a rider in a self-driving vehicle. [00846] In embodiments, of the system 5811 of Fig. 58, a search result ranking system (search system 58207) may be configured to provide in-vehicle-relevant search results. In embodiments, such a configuration may be accomplished by segmenting a search result ranking algorithm to include ranking parameters that are observed in connection only with a set of in-vehicle searches, so that in-vehicle results are ranked based on outcomes with respect to in-vehicle searches by other
SFT-106-A-PCT users. In embodiments, such a configuration may be accomplished by adjusting the weighting parameters applied to one or more weights in a conventional search algorithm when an in-vehicle search is detected (such as by detecting an indicator of an in-vehicle system, such as by communication protocol type, IP address, presence of cookies stored on a vehicle, detection of mobility, or the like). For example, local search results may be weighted more heavily in a ranking algorithm. [00847] In embodiments, routing information from a vehicle 5810 may be used as an input to a ranking algorithm, such as allowing favorable weighting of results that are relevant to local points of interest ahead on a route. [00848] In embodiments, content types may be weighted more heavily in search results based on detection of an in-vehicle query, such as weather information, traffic information, event information and the like. In embodiments, outcomes tracked may be adjusted for in-vehicle search rankings, such as by including route changes as a factor in rankings (e.g., where a search result appears to be associated in time with a route change to a location that was the subject of a search result), by including rider feedback on search results (such as satisfaction indicators for a ride), by detecting in-vehicle behaviors that appear to derive from search results (such as playing music that appeared in a search result), and the like. [00849] In embodiments, a set of in-vehicle-relevant search results may be provided in a separate portion of a search result interface (e.g., a rider interface 58208), such as in a portion of a window that allows a rider 58120 to see conventional search engine results, sponsored search results and in-vehicle relevant search results. In embodiments, both general search results and sponsored search results may be configured using any of the techniques described herein or other techniques that would be understood by those skilled in the art to provide in-vehicle-relevant search results. [00850] In embodiments where in-vehicle-relevant search results and conventional search results are presented in the same interface (e.g., the rider interface 58208), selection and engagement with in-vehicle-relevant search results can be used as a success metric to train or reinforce one or more search algorithms 58211. In embodiments, in-vehicle search algorithms 58211 may be trained using machine learning, optionally seeded by one or more conventional search models, which may optionally be provided with adjusted initial parameters based on one or more models of user behavior that may contemplate differences between in-vehicle behavior and other behavior. Machine learning may include use of neural networks, deep learning systems, model-based systems, and others. Feedback to machine learning may include conventional engagement metrics used for search, as well as metrics of rider satisfaction, emotional state, yield metrics (e.g., for sponsored search results, banner ads, and the like), and the like.
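As a non-limiting illustration of the weighting adjustments described above, the following sketch re-ranks generic search results when an in-vehicle query is detected, boosting results whose locations lie near the configured route. The result fields, the distance-based boost, and the function names are assumptions made for illustration only.

```python
import math

# Illustrative sketch only: re-rank generic search results when an in-vehicle
# query is detected, boosting local/route-relevant content. The result fields,
# boost factor, and decay are assumptions, not the disclosed algorithm.


def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) pairs in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2 +
         math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))


def rerank(results, route_points, in_vehicle: bool, local_boost=2.0):
    """results: list of dicts with 'base_score' and optional 'location' (lat, lon).
    route_points: (lat, lon) waypoints ahead of the vehicle on the configured route."""
    def adjusted(result):
        score = result["base_score"]
        loc = result.get("location")
        if in_vehicle and loc and route_points:
            nearest = min(haversine_km(loc, p) for p in route_points)
            # Boost results close to the configured route; boost decays with distance.
            score += local_boost / (1.0 + nearest)
        return score

    return sorted(results, key=adjusted, reverse=True)
```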
SFT-106-A-PCT [00851] An aspect provided herein includes a system for transportation 5811, comprising: a search system 58207 to provide network search results for in-vehicle searchers. [00852] An aspect provided herein includes an in-vehicle network search system 58207 of a vehicle 5810, the search system comprising: a rider interface 58208 through which the rider 58120 of the vehicle 5810 is enabled to engage with the search system 58207; a search result generating circuit 58209 that favors search results based on a set of in-vehicle search criteria that are derived from a plurality of in-vehicle searches previously conducted; and a search result display ranking circuit 58210 that orders the favored search results based on a relevance of a location component of the search results with a configured route of the vehicle 5810. [00853] In embodiments, the vehicle 5810 comprises a system for automating at least one control parameter of the vehicle 5810. In embodiments, the vehicle 5810 is at least a semi-autonomous vehicle. In embodiments, the vehicle 5810 is automatically routed. In embodiments, the vehicle 5810 is a self-driving vehicle. [00854] In embodiments, the rider interface 58208 comprises at least one of a touch screen, a virtual assistant, an entertainment system interface, a communication interface and a navigation interface. [00855] In embodiments, the favored search results are ordered by the search result display ranking circuit 58210 so that results that are proximal to the configured route appear before other results. In embodiments, the in-vehicle search criteria are based on ranking parameters of a set of in-vehicle searches. In embodiments, the ranking parameters are observed in connection only with the set of in-vehicle searches. In embodiments, the search system 58207 adapts the search result generating circuit 58209 to favor search results that correlate to in-vehicle behaviors. In embodiments, the search results that correlate to in-vehicle behaviors are determined through comparison of rider behavior before and after conducting a search. In embodiments, the search system further comprises a machine learning circuit 58212 that facilitates training the search result generating circuit 58209 from a set of search results for a plurality of searchers and a set of search result generating parameters based on an in-vehicle rider behavior model. [00856] An aspect provided herein includes an in-vehicle network search system 58207 of a vehicle 5810, the search system 58207 comprising: a rider interface 58208 through which the rider 58120 of the vehicle 5810 is enabled to engage with the search system 58207; a search result generating circuit 58209 that varies search results based on detection of whether the vehicle 5810 is in self-driving or autonomous mode or being driven by an active driver; and a search result display ranking circuit 58210 that orders the search results based on a relevance of a location component of the search results with a configured route of the vehicle 5810. In embodiments, the
SFT-106-A-PCT search results vary based on whether the user (e.g., the rider 58120) is a driver of the vehicle 5810 or a passenger in the vehicle 5810. [00857] In embodiments, the vehicle 5810 comprises a system for automating at least one control parameter of the vehicle 5810. In embodiments, the vehicle 5810 is at least a semi-autonomous vehicle. In embodiments, the vehicle 5810 is automatically routed. In embodiments, the vehicle 5810 is a self-driving vehicle. [00858] In embodiments, the rider interface 58208 comprises at least one of a touch screen, a virtual assistant, an entertainment system interface, a communication interface and a navigation interface. [00859] In embodiments, the search results are ordered by the search result display ranking circuit 58210 so that results that are proximal to the configured route appear before other results. [00860] In embodiments, search criteria used by the search result generating circuit 58209 are based on ranking parameters of a set of in-vehicle searches. In embodiments, the ranking parameters are observed in connection only with the set of in-vehicle searches. In embodiments, the search system 58207 adapts the search result generating circuit 58209 to favor search results that correlate to in-vehicle behaviors. In embodiments, the search results that correlate to in-vehicle behaviors are determined through comparison of rider behavior before and after conducting a search. In embodiments, the search system 58207 further comprises a machine learning circuit 58212 that facilitates training the search result generating circuit 58209 from a set of search results for a plurality of searchers and a set of search result generating parameters based on an in-vehicle rider behavior model. [00861] An aspect provided herein includes an in-vehicle network search system 58207 of a vehicle 5810, the search system 58207 comprising: a rider interface 58208 through which the rider 58120 of the vehicle 5810 is enabled to engage with the search system 58207; a search result generating circuit 58209 that varies search results based on whether the user (e.g., the rider 58120) is a driver of the vehicle or a passenger in the vehicle; and a search result display ranking circuit 58210 that orders the search results based on a relevance of a location component of the search results with a configured route of the vehicle 5810. [00862] In embodiments, the vehicle 5810 comprises a system for automating at least one control parameter of the vehicle 5810. In embodiments, the vehicle 5810 is at least a semi-autonomous vehicle. In embodiments, the vehicle 5810 is automatically routed. In embodiments, the vehicle 5810 is a self-driving vehicle. [00863] In embodiments, the rider interface 58208 comprises at least one of a touch screen, a virtual assistant, an entertainment system interface, a communication interface and a navigation interface.
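As a non-limiting illustration of varying search results based on whether the vehicle is in a self-driving mode and whether the searcher is the active driver, the following sketch applies a simple filtering policy. The result categories and the policy itself are hypothetical and do not represent the claimed circuits.

```python
# Illustrative sketch only: vary which result types are shown depending on
# whether the vehicle is self-driving and whether the searcher is the active
# driver. Categories and policy are assumptions, not the claimed circuits.

DRIVER_SAFE_TYPES = {"navigation", "fuel", "weather", "traffic"}


def filter_results(results, self_driving: bool, searcher_is_driver: bool):
    """results: list of dicts, each with a 'type' field describing the result category."""
    if self_driving or not searcher_is_driver:
        # Passengers, and riders of vehicles in autonomous mode, see all results.
        return results
    # An active driver sees only glanceable, driving-relevant result types.
    return [r for r in results if r.get("type") in DRIVER_SAFE_TYPES]
```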
SFT-106-A-PCT [00864] In embodiments, the search results are ordered by the search result display ranking circuit 58210 so that results that are proximal to the configured route appear before other results. In embodiments, search criteria used by the search result generating circuit 58209 are based on ranking parameters of a set of in-vehicle searches. In embodiments, the ranking parameters are observed in connection only with the set of in-vehicle searches. [00865] In embodiments, the search system 58207 adapts the search result generating circuit 58209 to favor search results that correlate to in-vehicle behaviors. In embodiments, the search results that correlate to in-vehicle behaviors are determined through comparison of rider behavior before and after conducting a search. In embodiments, the search system 58207, further comprises a machine learning circuit 58212 that facilitates training the search result generating circuit 58209 from a set of search results for a plurality of searchers and a set of search results generating parameters based on an in-vehicle rider behavior model. [00866] Referring to Fig.59, an architecture for transportation system 60100 is depicted, showing certain illustrative components and arrangements relating to certain embodiments described herein. The system 60100 includes a vehicle 60104, which may include various mechanical, electrical, and software components and systems, such as a powertrain, a suspension system, a steering system, a braking system, a fuel system, a charging system, seats, a combustion engine, an electric vehicle drive train, a transmission, a gear set, and the like. The vehicle may have a vehicle user interface, which may include a set of interfaces that include a steering system, buttons, levers, touch screen interfaces, audio interfaces, and the like. The vehicle may have a set of sensors 60108 (including cameras), such as for providing input to an expert system/artificial intelligence system described throughout this disclosure. The sensors 60108 and/or external information may be used to inform the expert system/Artificial Intelligence (AI) system 60112 and to indicate or track one or more vehicle states 60116, such as vehicle operating states including energy utilization state, maintenance state, component state, user experience states, and others described herein. The AI system 60112 may take as input a wide range of vehicle parameters, such as from onboard diagnostic systems, telemetry systems, and other software systems, as well as from the sensors 60108 and from external systems and may control one or more components of the vehicle 60104. The data from the sensors 60108 including data about vehicle states 60116 may be transmitted via a network 60120 to a cloud computing platform 60124 for storage in a memory 60126 and for processing. In embodiments, the cloud computing platform 60124 and all the elements disposed with or operating therein may be separately embodied from the remainder of the elements in the system 60100. A modeling application 60128 on the cloud computing platform 60124 includes code and functions that are operable, when executed by a processor 60132 on the cloud computing platform 60124, to generate and operate a digital twin 60136 of the vehicle 60104. The digital twin
SFT-106-A-PCT 60136 represents, among other things regarding the vehicle and its environment, the operating state of the vehicle 60104 through a virtual model. A user device 60140 connected to the cloud computing platform 60124 and the vehicle 60104 via the network 60120 may interact with the modeling application 60128 and other software on the cloud computing platform 60124 to receive data from and control operation of the vehicle 60104, such as through the digital twin 60136. An interface 60144 on the user device 60140 may display the one or more vehicle states 60116 using the digital twin 60136 to a user associated with the vehicle 60104, such as a driver, a rider, a third party observer, an owner of the vehicle, an operator/owner of a fleet of vehicles, a traffic safety representative, a vehicle designer, a digital twin development engineer, and others. In embodiments, the user device 60140 may receive specific views of data about the vehicle 60104 as the data is processed by one or more applications on the cloud computing platform 60124. For example, the user device 60140 may receive specific views of data including a graphic view of the vehicle, its interior, subsystems and components, an environment proximal to the vehicle, a navigation view, a maintenance timeline, a safety testing view and the like about the vehicle 60104 as the data is processed by one or more applications, such as the digital twin 60136. As another example, the user device 60140 may display a graphical user interface that allows a user to input commands to the digital twin 60136, the vehicle 60104, modeling application 60128, and the like using one or more applications hosted by the cloud computing platform 60124. [00867] In embodiments, cloud computing platform 60124 may comprise a plurality of servers or processors, that are geographically distributed and connected with each other via a network. In embodiments, cloud computing platform 60124 may comprise an AI system 60130 coupled to or included within the cloud computing platform 60124. [00868] In embodiments, cloud computing platform 60124 may include a database management system for creating, monitoring, and controlling access to data in the database 60118 coupled to or included within the cloud computing platform 60124. The cloud computing platform 60124 may also include services that developers can use to build or test applications. The cloud computing platform 60124 may enable remote configuring, and/or controlling user devices 60140 via interface 60144. Also, the cloud computing platform 60124 may facilitate storing and analyzing of data periodically gathered from user devices 60140, and providing analytics, insights and alerts to users including manufacturers, drivers or owners of the user devices 60140 via the interface 60144. [00869] In embodiments, an on-premises server may be used to host the digital twin 60136 instead of the cloud computing platform 60124. [00870] In embodiments, the network 60120 may be a conventional type, wired or wireless, and may have numerous different configurations including a star configuration, token ring configuration, or other configurations. Furthermore, the network 60120 may include a local area
SFT-106-A-PCT network (LAN), a wide area network (WAN) (e.g., the Internet), or other interconnected data paths across which multiple devices and/or entities may communicate. In some embodiments, the network 60120 may include a peer-to-peer network. The network 60120 may also be coupled to or may include portions of a telecommunications network for sending data in a variety of different communication protocols. In some embodiments, the network 60120 includes Bluetooth® communication networks or a cellular communications network for sending and receiving data including via short messaging service (SMS), multimedia messaging service (MMS), hypertext transfer protocol (HTTP), direct data connection, wireless application protocol (WAP), e-mail, DSRC, full-duplex wireless communication, etc. The network 60120 may also include a mobile data network that may include 3G, 4G, 5G, LTE, LTE-V2X, VoLTE or any other mobile data network or combination of mobile data networks. Further, the network 60120 may include one or more IEEE 802.11 wireless networks. [00871] In embodiments, digital twin 60136 of the vehicle 60104 is a virtual replication of hardware, software, and processes in the vehicle 60104 that combines real-time and historical operational data and includes structural models, mathematical models, physical process models, software process models, etc. In embodiments, digital twin 60136 encompasses hierarchies and functional relationships between the vehicle and various components and subsystems and may be represented as a system of systems. Thus, the digital twin 60136 of the vehicle 60104 may be seen to encompass the digital twins of the vehicle subsystems like vehicle interior layout, electrical and fuel subsystems as well as digital twins of components like engine, brake, fuel pump, alternator, etc. [00872] The digital twin 60136 may encompass methods and systems to represent other aspects of the vehicle environment including, without limitation a passenger environment, driver and passengers in the vehicle, environment proximal to the vehicle including nearby vehicles, infrastructure, and other objects detectable through, for example, sensors of the vehicle and sensors disposed proximal to the vehicle, such as other vehicles, traffic control infrastructure, pedestrian safety infrastructure, and the like. [00873] In embodiments, the digital twin 60136 of the vehicle 60104 is configured to simulate the operation of the vehicle 60104 or any portion or environment thereof. In embodiments, the digital twin 60136 may be configured to communicate with a user of the vehicle 60104 via a set of communication channels, such as speech, text, gestures, and the like. In embodiments, the digital twin 60136 may be configured to communicate with digital twins of other entities including digital twins of users, nearby vehicles, traffic lights, pedestrians and so on. [00874] In embodiments, the digital twin is linked to an identity of a user, such that the digital twin is automatically provisioned for display and configuration via a mobile device of an identified user.
SFT-106-A-PCT For example, when a user purchases a vehicle and installs the mobile application provided by the manufacturer, the digital twin is pre-configured to be displayed and controlled by the user. [00875] In embodiments, the digital twin is integrated with an identity management system, such that capabilities to view, modify, and configure the digital twin are managed by an authentication and authorization system that parses a set of identities and roles managed by the identity management system. [00876] Fig. 60 shows a schematic illustration of a digital twin system 60200 integrated with an identity and access management system 60204 in accordance with certain embodiments described herein. The Identity Manager 60208 in the identity and access management system 60204 manages the various identities, attributes and roles of users of the digital twin system 60200. The Access Manager 60212 in the identity and access management system 60204 evaluates the user attributes based on access policies to provide access to authenticated users and regulates the levels of access for the users. The Identity Manager 60208 includes the credentials management 60216, user management 60220 and provisioning 60224. The credentials management 60216 manages a set of user credentials like usernames, passwords, biometrics, cryptographic keys, etc. The user management 60220 manages user identities and profile information including various attributes, role definitions, preferences, etc. for each user. The provisioning 60224 creates, manages and maintains the rights and privileges of the users including those related to accessing resources of the digital twin system. The Access Manager 60212 includes authentication 60228, authorization 60232 and access policies 60236. Authentication 60228 verifies the identity of a user by checking the credentials provided by the user against those stored in the credentials management 60216 and provides access to the digital twin system to verified users. The authorization 60232 parses a set of identities and roles to determine the entitlements for each user including the capabilities to view, modify, and configure the digital twin. The authorization 60232 may be performed by checking the resource access request from a user against access policies 60236. The database 60118 may store the user directories and the identity, role, attribute, and authorization data of the identity and access management system 60204. Roles may include driver, manufacturer, dealer, rider, owner, service department, etc. In an example of role-based digital twin authenticated access, the manufacturer role might be authorized to access content and views that are relevant to vehicle wear and tear, maintenance conditions, needs for service, quality testing, etc. (e.g., to recommend replacing worn tires, or to adjust a vehicle operating parameter limit, such as maximum speed for badly worn tires), but not be authorized to access other content, such as content potentially considered sensitive by the vehicle owner. In embodiments, access to content by particular roles may be configured by a set of rules, by the manufacturer, by the owner of the vehicle, or the like.
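As a non-limiting illustration of the role-based authorization described above, the following sketch checks a requested digital twin view against a per-role access policy. The roles, view names, and policy table are illustrative assumptions, not the disclosed identity and access management system.

```python
# Illustrative sketch only: role-based authorization for digital twin views.
# The policy table and view names are assumptions made for illustration.

ACCESS_POLICIES = {
    "driver":       {"3d", "navigation", "energy", "service"},
    "manufacturer": {"3d", "design", "assembly", "quality", "service"},
    "dealer":       {"3d", "configurator", "performance_tuning", "service"},
    "owner":        {"fleet", "driver_behavior", "insurance", "compliance"},
}


def authorize(role: str, requested_view: str) -> bool:
    """Return True if the role's policy grants access to the requested view."""
    return requested_view in ACCESS_POLICIES.get(role, set())


# Example: a manufacturer may open the quality view but not the owner's fleet view.
assert authorize("manufacturer", "quality")
assert not authorize("manufacturer", "fleet")
```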
SFT-106-A-PCT [00877] Fig. 61 illustrates a schematic view of an interface 60300 of the digital twin system presented on the user device of a driver of the vehicle 60104. The interface 60300 includes multiple modes like a graphical user interface (GUI) mode 60304, a voice mode 60308 and an augmented reality (AR) mode 60312. Further, the digital twin 60136 may be configured to communicate with the user via multiple communication channels such as speech, text, gestures, and the like. The GUI mode 60304 may provide the driver with various graphs and charts, diagrams and tables representing the operating state of the vehicle 60104 or one or more of its components. The voice mode 60308 may provide the driver with a speech interface to communicate with the digital twin 60136 whereby the digital twin may receive queries from a driver about the vehicle 60104, generate responses for the queries and communicate such responses to the driver. The augmented reality (AR) mode 60312 may present the user with an augmented reality (AR) view that uses the forward-facing camera of the user device 60140 and enhances the screen with one or more elements from the digital twin 60136 of the vehicle 60104. The digital twin 60136 may display to the user a converged view of the world where a physical view is augmented with computer graphics, including imagery, animation, video, text related to directions, road signs or the like. [00878] The interface 60300 presents the driver with a set of views, with each view showing an operating state, aspect, parameter etc. of the vehicle 60104, or one or more of its components, sub-systems or environment. The 3D view 60316 presents the driver with a three-dimensional rendering of the model of the vehicle 60104. The driver may select one or more components in the 3D view to see a 3D model of the components including relevant data about the components. The navigation view 60320 may show the digital twin inside a navigation screen allowing the driver to view real-time navigation parameters. The navigation view may provide to the driver information about traffic conditions, time to destination, routes to destination and preferred routes, road conditions, weather conditions, parking lots, landmarks, traffic lights and so on. Additionally, the navigation view 60320 may provide increased situational awareness to the driver by establishing communication with nearby vehicles (V2V), pedestrians (V2P) and infrastructure (V2I) and exchanging real-time information. The energy view 60324 shows the state of fuel or battery in the vehicle 60104 including utilization and battery health. The value view 60328 shows the condition and blue book value of the vehicle 60104 based on the condition. Such information may, for example, be useful when selling the vehicle 60104 in a used car market. The service view 60332 may present information and views related to wear and failure of components of the vehicle 60104 and predict the need for service, repairs or replacement based on the current and historic operational state data. The world view 60336 may show the vehicle 60104 immersed in a virtual reality (VR) environment.
SFT-106-A-PCT [00879] The digital twin 60136 may make use of the artificial intelligence system 60112 (including any of the various expert systems, neural networks, supervised learning systems, machine learning systems, deep learning systems, and other systems described throughout this disclosure and in the documents incorporated by reference) for analyzing relevant data and presenting the various views to the driver. [00880] Fig. 62 is a schematic diagram showing the interaction between the driver and the digital twin using one or more views and modes of the interface in accordance with an example embodiment of the present disclosure. The driver 60244 of the vehicle 60104 interacts with the digital twin 60136 using the interface 60300 and requests assistance in navigation, at least because the digital twin 60136 may be deployed in a virtual vehicle operating environment in which it interacts with other digital twins that may have knowledge of the environment that is not readily available to an in-vehicle navigation system, such as real-time or near real-time traffic activity, road conditions and the like that may be communicated from real-world vehicles to their respective digital twins in the virtual operating environment. Digital twin 60136 may display a navigation view 60320 to the driver 60244 that may show the position of the vehicle 60104 on a map, as well as the position of nearby vehicles, the anticipated routes of nearby vehicles (e.g., a nearby vehicle that is routed to take the next exit, yet the nearby vehicle is not yet in the exit lane), tendencies of drivers in such nearby vehicles (such as if the driver tends to change lanes without using a directional signal, and the like) and one or more candidate routes that the vehicle 60104 can take to a destination. The digital twin 60136 may also use the voice mode 60308, such as to interact with the driver 60244 and provide assistance with navigation and the like. In embodiments, the digital twin may use a combination of the GUI mode 60304 and the voice mode to respond to the driver's queries. In embodiments, the digital twin 60136 may interact with the digital twins of infrastructure elements including nearby vehicles, pedestrians, traffic lights, toll-booths, street signs, refueling systems, charging systems, etc. for determining their behavior, coordinating traffic and obtaining a 360° non-line-of-sight awareness of the environment. In embodiments, the digital twin 60136 may use a combination of 802.11p/Dedicated Short-Range Communication (DSRC) and cellular V2X for interaction with infrastructure elements. In embodiments, the digital twin 60136 may inform the driver 60244 about upcoming abrupt, sharp left or right turns that the digital twin 60136 may recognize based on behaviors of other digital twins in a multiple digital twin virtual operating environment, such as to help prevent accidents. In embodiments, the digital twin 60136 may interact with digital twins of nearby vehicles to identify any instances of rapid deceleration or lane changes and provide a warning to the driver 60244 about the same. In embodiments, the digital twin 60136 may interact with the digital twins of nearby vehicles to identify any potential driving hazards and inform the driver 60244 about the same. In embodiments, the digital twin 60136 may
SFT-106-A-PCT utilize external sensor data and traffic information to model the driving environment and optimize driving routes for the driver 60244. As an example of optimizing driving routes, the digital twin 60136 may determine that moving into an exit lane behind a nearby vehicle has a higher probability of avoiding unsafe driving conditions than the driver of the vehicle waiting to move into an exit lane further up the road. In embodiments, the digital twin 60136 may interact with digital twins of traffic lights to pick the route with minimal stops, or to suggest, among other things, when to take a bio-break, such as ahead of a long duration of traffic congestion along a route without exits. In embodiments, the digital twin 60136 may assist the driver in finding empty spaces in nearby parking lots and/or alert the driver to spaces which may soon be opening up by interacting with other twins to get the information. In embodiments, the digital twin 60136 may reach out to law enforcement authorities or police, etc. in case of any emergency or distress situation, like an accident or a crime that may be detected through interactions of the digital twin with the vehicle 60104 and the like. In embodiments, the digital twin 60136 may advise the driver with respect to driving speeds or behavior based on an anticipated change in driving conditions either occurring or likely to occur ahead, such as an unexpected slowdown in traffic around a blind curve. For example, the digital twin 60136 may advise the driver to reduce the driving speed to a safe range of 20-40 kmph as the weather changes from "foggy" to "rainy". In embodiments, the digital twin 60136 assists the driver 60244 in resolving any issues related to the vehicle 60104 by diagnosing such issues and then indicating options for fixing them and/or adjusting an operating parameter or mode of the vehicle 60104 to mitigate a potential for such issues to continue or worsen. For example, the driver 60244 may ask the digital twin 60136 about potential reasons for a rattling noise emerging from the vehicle 60104. In embodiments, the digital twin 60136 may receive an indication of the rattling noise from audio sensors deployed in/with the vehicle (e.g., in a passenger compartment, in an engine compartment, and the like) and may proactively suggest an action that the driver and/or any passenger may take to mitigate the rattling noise (e.g., securing a metal object that is vibrating against a window of the vehicle 60104 and the like). The twin may dissect the data, search for correlations, formulate a diagnosis and interact with the driver 60244 to resolve the potential issues. In embodiments, the digital twin 60136 may communicate with other algorithms accessible by and/or through the platform 60124 that may perform, in such an instance, noise analysis and the like. For example, if the digital twin 60136 determines, through any of the means described herein, that the noise is caused by faulty hydraulics of a vehicle door, it may download and install a software update that can tweak the hydraulics of the particular door to fix the problem. Alternatively, the twin may determine that the noise is caused by a faulty exhaust system that can be fixed by replacing the catalytic converter. The twin may then proceed to resolve the issue by
SFT-106-A-PCT ordering a new catalytic converter using an e-commerce website and/or reaching out to a mechanic shop in the vicinity of the vehicle 60104. [00881] Fig.63 illustrates a schematic view of an interface of the digital twin system presented on the user device of a manufacturer 60240 of the vehicle 60104 in accordance with various embodiments of the present disclosure. As shown, the interface provided to the manufacturer 60240 is different from the one displayed to the driver 60244 of the vehicle 60104. The manufacturer 60240 is shown views of the digital twin 60136 that are in line with the manufacturer’s role and needs and which may, for example, provide information useful to make modifications to a vehicle assembly line or an operating vehicle. Yet, some parts of the manufacturer’s interface might be similar to those of the driver’s interface. For example, the 3D view 60516 presents the manufacturer 60240 with a three-dimensional rendering of the model of the vehicle 60104 as well as various components and related data. The design view 60520 includes digital data describing design information for the vehicle 60104 and its individual vehicle components. For example, the design information includes Computer-Aided Design (CAD) data of the vehicle 60104 or its individual vehicle components. The design view enables the manufacturer 60240 to view the vehicle 60104 under a wide variety of representations, rotate in three dimensions allowing viewing from any desired angle, provide accurate dimensions and shapes of vehicle parts. In embodiments, the design view enables the manufacturer 60240 to use simulation methods for optimizing and driving the design of the vehicle and its components and sub-systems. In embodiments, the design view 60520 enables the manufacturer 60240 in determining the optimal system architecture of a new vehicle model through generative design techniques. The assembly view 60524 allows the manufacturer 60240 to run prescriptive models showing how the vehicle would work and to optimize the performance of the vehicle 60104 and its components and subsystems. The manufacturer 60240 may create an integrated workflow by combining design, modeling, engineering and simulation using the view. This may allow the manufacturer 60240 to predict how a vehicle would perform before committing to expensive changes in the manufacturing process. As an example, when the manufacturer 60240 is building a new model of a hybrid vehicle, it may evaluate the effect of different options for transmission, fuel type and engine displacement over metrics such as fuel economy and retail price. The simulations in the assembly view 60524 may then provide the manufacturer 60240 with different fuel economies and retail prices based on the combination of transmission, fuel type and engine displacement used in the simulation. The manufacturer 60240 may use such simulations for making business decisions for example, to determine the combinations of transmission, fuel type and engine displacement to be used in a given model for a given segment of customers. The quality view 60528 allows the manufacturer 60240 to run millions of simulations to test the components
SFT-106-A-PCT in real-world situations and generate “what-if” scenarios that can help the manufacturer 60240 avoid costly quality and recall related issues. For instance, the manufacturer 60240 may run quality scenarios to determine the effect of different hydraulic fluid options on the effectiveness of braking in a sudden-brake situation and select the option with best effectiveness. The Real-time Analytics view 60532 may allow the manufacturer 60240 to run data analytics to build a wide range of charts, graphs and models to help the manufacturer 60240 calculate a wide range of metrics and visualize the effect of change of vehicle and component parameters on the operational performance. The Service & Maintenance view 60536 may present information related to wear and failure of components of the vehicle 60104 and predicts the need for service, repairs or replacement based on the current and historic operational state data. The view may also help the manufacturer 60240 run data analytics and formulate predictions on the remaining useful life of one or more components of the vehicle 60104. [00882] Fig.64 depicts a scenario in which the manufacturer 60240 uses the quality view of digital twin interface to run simulations and generate what-if scenarios for quality testing a vehicle in accordance with an example embodiment. The digital twin interface may provide the manufacturer 60240 with a list of options related to various vehicle states to choose from. For example, the states may include velocity 60604, acceleration 60608, climate 60612, road grade 60616, drive 60620 and transmission 60624. The manufacturer 60240 may be provided with graphical menus to select different values for a given state. The digital twin 60136 may then use this combination of vehicle states to run a simulation to predict the behavior of the vehicle 60104 in a given scenario. In embodiments, the digital twin 60136 may display the trajectory taken by the vehicle 60104 in case of sudden braking and also provide a minimum safe distance from another vehicle driving in front of the vehicle 60104. In embodiments, the digital twin 60136 may display the behavior of the vehicle 60104 in case of a sudden tire blowout as well as the impact on occupants or other vehicles. In embodiments, the digital twin 60136 may generate a large set of realistic accident scenarios and then reliably simulate the response of the vehicle 60104 in such scenarios. In embodiments, the digital twin 60136 may display the trajectory taken by the vehicle 60104 in case of brake failure and the impact on occupants or other vehicles. In embodiments, the digital twin 60136 may communicate with the digital twins of other vehicles in proximity to help prevent the collision. In embodiments, the digital twin 60136 may predict a time to collision (TTC) from another vehicle at a given distance from the vehicle 60104. In embodiments, the digital twin 60136 may determine the crashworthiness and rollover characteristics of the vehicle 60104 in case of a collision. In embodiments, the digital twin 60136 may analyze the structural impact of a head-on collision on the vehicle 60104 to determine the safety of the occupant. In embodiments, the digital twin 60136
SFT-106-A-PCT may analyze the structural impact of a sideways collision on the vehicle 60104 to determine the safety of the occupant. [00883] Fig. 65 illustrates a schematic view of an interface 60700 of the digital twin system presented on the user device of a dealer 60702 of the vehicle 60104. As shown, the interface provided to the dealer 60702 is different from the one provided to the driver 60244 and the manufacturer 60240 of the vehicle 60104. The dealer 60702 is shown views of the digital twin 60136 that are in line with the dealer's role and needs and which may, for example, provide information useful to provide a superior selling and customer experience. Yet, some parts of the dealer's interface might be similar to those of the manufacturer's or driver's interface. For example, the 3D view 60716 presents the dealer 60702 with a three-dimensional rendering of the model of the vehicle 60104 as well as various components and related data. The performance tuning view 60720 allows the dealer 60702 to alter the vehicle 60104 so as to personalize the characteristics of the vehicle according to the preference of a driver or a rider. For example, vehicles may be tuned to provide better fuel economy, produce more power, or provide better handling and driving. The performance tuning view 60720 may assist the dealer 60702 in modifying or tuning the performance of one or more components like the engine, body, suspension, etc. The configurator view 60724 enables the dealer 60702 to help a potential customer configure the various components and materials of the vehicle including engine, wheels, interiors, exterior, color, accessories, etc. based on the preference of the potential customer. The configurator view 60724 helps the dealer 60702 determine the different possible configurations of a vehicle, select a configuration based on potential customer preference and then calculate the price of the selected configuration. The test drive view 60728 may allow the dealer 60702 to let the potential customer virtually test drive a new or used vehicle using the digital twin 60136. The certification view 60732 allows a used car dealer to provide certification about the condition of a used vehicle to a potential customer using the digital twin. The Service & Maintenance view 60736 may present information related to wear and failure of components of the vehicle 60104 and predict the need for service, repairs or replacement based on the current and historic operational state data. The view may also help the dealer 60702 run data analytics and formulate predictions on the remaining useful life of one or more components of the vehicle 60104. [00884] Fig. 66 is a diagram illustrating the interaction between the dealer 60702 and the digital twin 60136 using one or more views with the goal of personalizing the experience of a potential customer purchasing the vehicle 60104 in accordance with an example embodiment. The digital twin 60136 enables the dealer 60702 to interactively select one or more components or options to configure a vehicle based on customer preferences as well as the availability and compatibility of the components. Further, the digital twin 60136 enables the dealer 60702 to alter the performance
SFT-106-A-PCT of the vehicle 60104 in line with customer preferences as well as allow the customer to test drive the customized vehicle before finally purchasing the same. [00885] The dealer 60702 of the vehicle 60104 interacts with the digital twin 60136 using a configurator view 60724 of the interface 60700 and requests assistance in configuring a vehicle for a customer. The digital twin 60136 may display the GUI 60704 of the configurator view 60724 to the dealer 60702 showing all the different options available for one or more components. The dealer 60702 may then select one or more components using a drop-down menu or use drag and drop operations to add one or more components to configure the vehicle as per the preference of the customer. In the example embodiment, the GUI view 60704 of the digital twin displays options for vehicle grade 60804, engine 60808, seats 60812, color 60816 and wheels 60820. [00886] The digital twin 60136 may check for compatibility between one or more components selected by the dealer 60702 with the model of the vehicle 60104 using a predefined database of valid relationships. In embodiments, certain combinations of components may not be compatible with a given grade of a vehicle and the dealer 60702 may be advised about the same. For example, grade EX may stand for the base model of the vehicle and may not offer the option of leather seats. Similarly, grade ZX may stand for the premium model of the vehicle and may not offer a CVT engine, fabric seats or 20" aluminum wheels. In embodiments, the dealer 60702 is only displayed compatible combinations by the configurator view. The configurator view of the digital twin then allows the dealer 60702 to configure the complete vehicle by adding a set of compatible components and subsystems. Upon completing the configuration, the digital twin 60136 calculates the price 60824 of the assembled vehicle based on the price of individual components and presents the same to the dealer 60702. [00887] In embodiments, the digital twin 60136 may also use voice mode 60708 to interact with the dealer 60702 and provide assistance with configuration. In embodiments, the digital twin 60136 may use a combination of the GUI mode 60704 and the voice mode 60708 to respond to the dealer's queries. [00888] In embodiments, the digital twin 60136 may further allow the dealer 60702 to assist the customer in tuning the performance of the vehicle using the performance tuning view 60720. The dealer 60702 may be presented with different modes 60828 including sports, fuel-efficient, outdoor and comfort and may pick one of them to tune the performance of the vehicle 60104 accordingly. [00889] Similarly, the digital twin 60136 may present to an owner of the vehicle 60104 views showing an operating state, aspect, parameter, etc. of the vehicle 60104, or one or more of its components, subsystems or environment based on the owner's requirements. The fleet monitoring view may allow an owner to track and monitor the movement/route/condition of one or more vehicles. The driver behavior monitoring view may allow the owner to monitor instances of unsafe
SFT-106-A-PCT or dangerous driving by a driver. The insurance view may assist the owner in determining the insurance policy quote of a vehicle based on the vehicle condition. The compliance view may show a compliance status of the vehicle with respect to emission/pollution and other regulatory norms based on the condition of the vehicle. [00890] Similarly, the digital twin 60136 may present to a rider of the vehicle 60104 views showing aspects relevant for the rider. For example, the rider may be provided an experience optimization view allowing the rider to select an experience mode to personalize the riding experience based on rider preferences/ride objectives. The rider may select from one or more experience modes including comfort mode, sports mode, high-efficiency mode, work mode, entertainment mode, sleep mode, relaxation mode, and long-distance trip mode. [00891] Fig. 67 is a diagram illustrating the service & maintenance view presented to a user of a vehicle including a driver 60244, a manufacturer 60240 and a dealer 60702 of the vehicle 60104 in accordance with an example embodiment. The service & maintenance view provided by the digital twin allows a user, like the dealer 60702, to monitor the health of one or more components or subsystems of the vehicle 60104. The view shows some key components including an engine 60904, a steering 60908, a battery 60912, an exhaust & emission 60916, tires 60920, shock absorbers 60924, brake pads 60928 and a gearbox 60932. The dealer 60702 may click an icon of a component to view detailed data and diagnostics associated with that component. For example, the digital twin 60136 may present the dealer 60702 with analytics related to parameters like vibration 60936 and temperature 60940 as well as historical vehicle data 60944 and real-time series sensor data 60948. The digital twin 60136 may also conduct a health scan and, upon discovering no issues with engine health, present a "0 issues detected" message to the dealer 60702. The digital twin 60136 may also allow the dealer 60702 to conduct a full health scan on the complete vehicle (instead of component-wise scanning). The digital twin may diagnose the issues and assist the dealer 60702 in resolving the issues. In the example, the digital twin detects two issues upon a full health scan: a "loose shock absorber" and a "faulty sparkplug wire". [00892] In embodiments, the digital twin 60136 may also help predict when one or more components of the vehicle should receive maintenance. The digital twin 60136 may predict the anticipated wear and failure of components of the vehicle 60104 by reviewing historical and current operational data, thereby reducing the risk of unplanned downtime and the need for scheduled maintenance. Instead of over-servicing or over-maintaining the vehicle 60104, any issues predicted by the digital twin 60136 may be addressed in a proactive or just-in-time manner to avoid costly downtime, repairs or replacement. [00893] The digital twin 60136 may collect on-board data including real-time sensor data about components that may be communicated through the CAN network of the vehicle 60104. The digital
SFT-106-A-PCT twin 60136 may also collect historical or other data around vehicle statistics and maintenance including data on repairs and repair diagnostics from the database 60118. [00894] Predictive analytics powered by the artificial intelligence system 60112 dissect the data, search for correlations, and employ prediction modeling to determine the condition of the vehicle 60104 and predict maintenance needs and remaining useful life for one or more components. [00895] The cloud computing platform 60124 may include a system for learning on a training set of outcomes, parameters, and data collected from data sources relating to a set of vehicle activities to train artificial intelligence (including any of the various expert systems, artificial intelligence systems, neural networks, supervised learning systems, machine learning systems, deep learning systems, and other systems described throughout this disclosure) for performing condition monitoring, anomaly detection, failure forecasting and predictive maintenance of one or more components of the vehicle 60104 using the digital twin 60136. [00896] In embodiments, the cloud computing platform 60124 may include a system for learning on a training set of vehicle maintenance outcomes, parameters, and data collected from data sources relating to a set of vehicle activities to train the artificial intelligence system 60112 to perform predictive maintenance on the vehicle 60104 using the digital twin 60136. [00897] In embodiments, the artificial intelligence system 60112 may train models, such as predictive models (e.g., various types of neural networks, classification based models, regression- based models, and other machine-learned models). In embodiments, training can be supervised, semi-supervised, or unsupervised. In embodiments, training can be done using training data, which may be collected or generated for training purposes. [00898] An example artificial intelligence system trains a vehicle predictive maintenance model. A predictive maintenance model may be a model that receives vehicle-related data and outputs one or more predictions or answers regarding the remaining life of the vehicle 60104. The training data can be gathered from multiple sources including vehicle or component specifications, environmental data, sensor data, operational information, and outcome data. The artificial intelligence system 60112 takes in the raw data, pre-processes it and applies machine learning algorithms to generate the predictive maintenance model. In embodiments, the artificial intelligence system 60112 may store the predictive model in a model datastore within the database 60118. [00899] The artificial intelligence system 60112 may train multiple predictive models to answer different questions on predictive maintenance. For example, a classification model may be trained to predict failure within a given time window, while a regression model may be trained to predict the remaining useful life of the vehicle 60104 or one or more components.
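As a non-limiting illustration of the two kinds of predictive models mentioned above, the following sketch trains a classifier that predicts failure within a time window and a regressor that estimates remaining useful life, assuming scikit-learn is available; the synthetic features, labels, and model choices are stand-ins for illustration only.

```python
# Illustrative sketch only (assumes scikit-learn and NumPy are available):
# a classifier for "failure within a time window" and a regressor for
# remaining useful life (RUL). Features and labels here are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import train_test_split

# X: per-vehicle feature vectors built from sensor, telemetry, and maintenance data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                              # stand-in feature matrix
fails_in_30d = (X[:, 0] + rng.normal(size=500)) > 1.0      # stand-in window labels
rul_hours = np.clip(1000 - 150 * X[:, 1] + rng.normal(scale=50, size=500), 0, None)

X_tr, X_te, yc_tr, yc_te, yr_tr, yr_te = train_test_split(
    X, fails_in_30d, rul_hours, test_size=0.2, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, yc_tr)   # failure-window model
reg = RandomForestRegressor(random_state=0).fit(X_tr, yr_tr)    # RUL model

print("P(fail within 30 days):", clf.predict_proba(X_te[:1])[0, 1])
print("Predicted RUL (hours):", reg.predict(X_te[:1])[0])
```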
SFT-106-A-PCT [00900] In embodiments, training may be done based on feedback received by the system, which is also referred to as "reinforcement learning." In embodiments, the artificial intelligence system 60112 may receive a set of circumstances that led to a prediction (e.g., attributes of vehicle, attributes of a model, and the like) and an outcome related to the vehicle and may update the model according to the feedback. [00901] In embodiments, the artificial intelligence system 60112 may use a clustering algorithm to identify the failure pattern hidden in the failure data to train a model for detecting uncharacteristic or anomalous behavior for one or more components. The failure data across multiple vehicles and their historical records may be clustered to understand how different patterns correlate to certain wear-down behavior and to develop a maintenance plan that addresses the failure pattern. [00902] In embodiments, the artificial intelligence system 60112 may output scores for each possible prediction, where each prediction corresponds to a possible outcome. For example, for a predictive model used to determine a likelihood that the vehicle 60104 or one or more components will fail in the next week, the predictive model may output a score for a "will fail" outcome and a score for a "will not fail" outcome. The artificial intelligence system 60112 may then select the outcome with the greater score as the prediction. Alternatively, the system 60112 may output the respective scores to a requesting system. In embodiments, the output from the system 60112 includes a probability of the prediction's accuracy. [00903] Fig. 68 illustrates an example method used by the digital twin 60136 for detecting faults and predicting any future failures of the vehicle 60104 in accordance with an example embodiment. [00904] At 61002, a plurality of streams of vehicle-related data from multiple data sources is received by the digital twin 60136. This includes vehicle specifications like mechanical properties, data from maintenance records, operating data collected from the sensors 60108, historical data including failure data from multiple vehicles running at different times and under different operating conditions and so on. At 61004, the raw data is cleaned by removing any missing or noisy data, which may occur due to any technical problems in the vehicle 60104 at the time of collection of data. At 61006, one or more models are selected for training by the digital twin 60136. The selection of the model is based on the kind of data available at the digital twin 60136 and the desired outcome of the model. For example, there may be cases where failure data from vehicles is not available, or only a limited number of failure datasets exist because of regular maintenance being performed. Classification or regression models may not work well for such cases and clustering models may be the most suitable. As another example, if the desired outcome of the model is determining the current condition of the vehicle and detecting any faults, then fault detection models may be selected, whereas if the desired outcome is predicting future failures then
remaining useful life prediction model may be selected. At 61008, the one or more models are trained using a training dataset and tested for performance using a testing dataset. At 61010, the trained model is used for detecting faults and predicting future failures of the vehicle 60104 on production data.
[00905] Fig. 69 depicts an example embodiment of the deployment of the digital twin 60136 to perform predictive maintenance on the vehicle 60104. The digital twin 60136 receives data from the database 60118 on a real-time or near real-time basis. The database 60118 may store different types of data in different datastores. For example, the vehicle datastore 61102 may store data related to vehicle identification and attributes, vehicle state and event data, data from maintenance records, historical operating data, notes from a vehicle service engineer, and the like. The sensor datastore 61104 may store sensor data from operations, including data from temperature, pressure, and vibration sensors that may be stored as signal or time-series data. The failure datastore 61106 may store failure data from the vehicle 60104, including failure data of components or similar vehicles at different times and under different operating conditions. The model datastore 61108 may store data related to different predictive models, including fault detection and remaining life prediction models.
[00906] The digital twin 60136 coordinates with an artificial intelligence system to select one or more models based on the kind and quality of available data and the desired answers or outcomes. For example, the physical models 61110 may be selected if the intended use of the digital twin 60136 is to simulate what-if scenarios and predict how the vehicle will behave under such scenarios. The Fault Detection and Diagnostics Models 61112 may be selected to determine the current health of the vehicle 60104 and any faulty conditions. A simple fault detection model may use one or more condition indicators to distinguish between regular and faulty behaviors and may have a threshold value for the condition indicator that is indicative of a fault condition when exceeded. A more complex model may train a classifier to compare the value of one or more condition indicators to values associated with fault states and return the probability of the presence of one or more fault states.
[00907] The Remaining Useful Life (RUL) Prediction models 61114 are used for predicting future failures and may include degradation models 61116, survival models 61118, and similarity models 61120. An example RUL prediction model may fit the time evolution of a condition indicator and predict how long it will be before the condition indicator crosses a threshold value indicative of a failure. Another model may compare the time evolution of the condition indicator to measured or simulated time series from similar systems that ran to failure.
[00908] In embodiments, a combination of one or more of these models may be selected by the digital twin 60136.
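As a hedged sketch of the simple model families described above, the following Python example applies a threshold-based fault detection check to a condition indicator and fits the indicator's time evolution with a straight line to estimate how long before it crosses a failure threshold, in the manner of a degradation-style RUL model. The indicator, threshold values, units, and linear trend are hypothetical choices for illustration only.

```python
# Hedged sketch of a threshold-based fault check and a degradation-style RUL estimate.
# The condition indicator, threshold values, and linear trend are hypothetical.
import numpy as np

FAULT_THRESHOLD = 7.5     # hypothetical indicator value that signals a fault condition
FAILURE_THRESHOLD = 10.0  # hypothetical indicator value treated as failure

def detect_fault(indicator_value: float) -> bool:
    """Simple fault detection: flag a fault when the indicator exceeds its threshold."""
    return indicator_value > FAULT_THRESHOLD

def estimate_rul_hours(hours: np.ndarray, indicator: np.ndarray) -> float:
    """Fit a straight line to the indicator's time evolution and predict how long
    before it crosses the failure threshold (a simple degradation model)."""
    slope, intercept = np.polyfit(hours, indicator, deg=1)
    if slope <= 0:                      # no measurable degradation trend
        return float("inf")
    crossing_time = (FAILURE_THRESHOLD - intercept) / slope
    return max(crossing_time - hours[-1], 0.0)

# Hypothetical vibration trend sampled over the last 500 operating hours.
hours = np.linspace(0.0, 500.0, 50)
vibration = 4.0 + 0.008 * hours + np.random.default_rng(0).normal(0.0, 0.1, hours.size)

print("Fault detected now:  ", detect_fault(vibration[-1]))
print("Estimated RUL (hours):", round(estimate_rul_hours(hours, vibration), 1))
```

A survival or similarity model of the kind noted above could replace the straight-line fit without changing the surrounding selection logic.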
[00909] The artificial intelligence system 60112 may include machine learning processes 61122, clustering processes 61124, analytics processes 61126, and natural language processes 61128. The machine learning processes 61122 work with the digital twin 60136 to train one or more models as identified above. An example of such a machine-learned model is the RUL prediction model 61114. The model 61114 may be trained using the training dataset 61130 from the database 60118. The performance of the model 61114 and its classifier may then be tested using the testing dataset 61132.
[00910] The clustering processes 61124 may be implemented to identify failure patterns hidden in the failure data to train a model for detecting uncharacteristic or anomalous behavior. The failure data across multiple vehicles and their historical records may be clustered to understand how different patterns correlate to certain wear-down behavior. The analytics processes 61126 perform data analytics on various data to identify insights and predict outcomes. The natural language processes 61128 coordinate with the digital twin 60136 to communicate the outcomes and results to the user of the vehicle 60104.
[00911] The outcomes 61134 may be in the form of modeling results 61136, alerts and warnings 61138, or remaining useful life (RUL) predictions 61140. The digital twin 60136 may communicate with a user via multiple communication channels, such as speech, text, and gestures, to convey the outcomes 61134.
[00912] In embodiments, models may then be updated or reinforced based on the model outcomes 61134. For example, the artificial intelligence system 60112 may receive a set of circumstances that led to a prediction of failure, together with the observed outcome, and may update the model based on the feedback.
[00913] Fig. 70 is a flow chart depicting a method for generating a digital twin of a vehicle in accordance with certain embodiments of the disclosure. At 61202, a request from a user, such as an owner, a lessee, a driver, a fleet operator/owner, a mechanic, and the like associated with the vehicle 60104, is received by the vehicle 60104, such as through an interface provided in the vehicle or a user device 60140 carried by the user, to provide state information of the vehicle 60104. At 61204, a digital twin 60136 of the vehicle 60104 is generated using one or more processors, based on one or more inputs regarding vehicle state from an on-board diagnostic system, a telemetry system, a vehicle-located sensor, and a system external to the vehicle. At 61206, the user is presented, through the interface, with a version of the state information of the vehicle 60104 as determined by using the digital twin 60136 of the vehicle 60104 as noted above.
[00914] Fig. 71 is a diagrammatic view that illustrates an alternate architecture for a transportation system comprising a vehicle and a digital twin system in accordance with various embodiments of the present disclosure. The vehicle 60104 includes an edge intelligence system 61304 that provides 5G connectivity to a system external to the vehicle 60104, internal connectivity to a set of sensors 60108 and data sources of the vehicle 60104, and an onboard artificial intelligence system 60112. The
edge intelligence system 61304 may also communicate with the artificial intelligence system 60130 of the digital twin system 60200 hosted on the cloud computing platform 60124. The digital twin system 60200 may be populated via an application programming interface (API) from the edge intelligence system 61304.
[00915] The edge intelligence system 61304 helps provide certain intelligence locally in the vehicle 60104 instead of relying on cloud-based intelligence. This may, for example, include tasks requiring low-overhead computations and/or tasks performed in low-latency conditions. This helps the system perform reliably even in limited network bandwidth situations and avoid dropouts.
[00916] Fig. 72 depicts a digital twin representing a combination of a set of states of both a vehicle and a driver of the vehicle in accordance with certain embodiments of the present disclosure.
[00917] The integrated vehicle and driver twin 61404 may be created, such as by integrating a digital twin of the vehicle 60104 with the digital twin of the driver. In embodiments, such an integration may be achieved by normalizing the 3D models used by each of the twins to represent a consistent scale, and linking via APIs to obtain regular updates of each twin (such as current operating states of the vehicle and current physiological state, posture, or the like of the driver).
[00918] The integrated vehicle and driver twin may then work with the edge intelligence system 61304 to configure a vehicle experience based on the combined vehicle state 60116 and the driver state 61408.
[00919] Fig. 73 illustrates a schematic diagram depicting a scenario in which the integrated vehicle and driver digital twin may configure the vehicle experience in accordance with an example embodiment.
[00920] In the example scenario, the integrated vehicle and driver twin 61404 may determine that the driver’s state is “drowsy” based on an input from a set of IR cameras tracking the pupil size and eyelid movement and a set of sensors 60108 tracking the (sagging) posture and (slower) reaction time of the driver 60244. The twin may also determine that the vehicle is “unstable” based on the tracking of speed, lateral position, turning angles, and moving course. The integrated vehicle and driver twin 61404 may communicate with the driver 60244, alerting the driver 60244 to the potential safety hazards of driving in such a state. Alternatively, the integrated vehicle and driver twin 61404 may take one or more steps to wake the driver, such as switching on music or turning up the volume, and/or ensure driver and vehicle safety by switching the vehicle into an autopilot or autosteer mode.
[00921] As another example, the integrated vehicle and driver twin may use information about the vehicle state (e.g., amount of fuel remaining) and the driver state (e.g., time since the driver last ate) to activate a point of interest suggestion function to suggest a detour along a planned route to a good place to eat that passes by a preferred fuel provider.
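The following Python sketch illustrates, under assumed state fields and thresholds, how rule-based logic of the kind described in the two preceding examples might combine a driver state and a vehicle state to select experience-configuration actions. The field names, threshold values, and action identifiers are hypothetical and are not part of this disclosure.

```python
# Minimal rule-based sketch of how an integrated vehicle-and-driver twin might map
# combined states to experience-configuration actions. All state fields, thresholds,
# and action names below are hypothetical illustrations, not part of the disclosure.
from dataclasses import dataclass

@dataclass
class DriverState:
    drowsiness: float        # 0.0 (alert) to 1.0 (asleep), e.g., from IR pupil/eyelid tracking
    hours_since_meal: float  # time since the driver last ate

@dataclass
class VehicleState:
    stability: float         # 0.0 (unstable) to 1.0 (stable), from speed/lane/turn tracking
    fuel_fraction: float     # remaining fuel as a fraction of tank capacity

def configure_experience(driver: DriverState, vehicle: VehicleState) -> list[str]:
    actions: list[str] = []
    # Safety rule: a drowsy driver in an unstable vehicle triggers alerts and autosteer.
    if driver.drowsiness > 0.7 and vehicle.stability < 0.4:
        actions += ["alert_driver", "raise_audio_volume", "engage_autosteer"]
    # Convenience rule: low fuel and a hungry driver trigger a point-of-interest detour
    # past a preferred fuel provider and a good place to eat.
    if vehicle.fuel_fraction < 0.2 and driver.hours_since_meal > 5:
        actions.append("suggest_fuel_and_food_detour")
    return actions

print(configure_experience(DriverState(drowsiness=0.8, hours_since_meal=6.0),
                           VehicleState(stability=0.3, fuel_fraction=0.15)))
```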
SFT-106-A-PCT [00922] In embodiments, an integrated vehicle and the rider twin may be created, such as by integrating a digital twin of the vehicle 60104 with the digital twin of the rider. In embodiments, such an integration may be achieved by normalizing the 3D models used by each of the twins to represent a consistent scale and linking via APIs to obtain regular updates of each twin (such as current operating states of the vehicle and current physiological state, posture, or the like of the rider). [00923] In embodiments, the integrated vehicle and the rider twin are updated when a second rider enters the vehicle. [00924] In embodiments, the integrated vehicle and the rider twin may work with the edge intelligence system 61304 to configure a vehicle experience based on the combined vehicle state and the rider state. [00925] For example, the integrated vehicle and rider twin may determine that the rider state is “fatigued” based on an input from one or more sensors 60108, etc. In embodiments, a seat- integrated and sensor-enabled fabric wrapped around the parts of the body of the rider may assist the twin in determining the rider state. The twin may also determine that the vehicle state includes high traffic congestion and damaged road conditions. The integrated vehicle and the rider twin may then take one or more actions to provide comfort to the rider: the twin may activate a seat-integrated robotic exoskeleton element for providing functional support to the rider including support for arms, legs, back and neck/head. Alternatively, the twin may activate an electrostimulation element on the seat-integrated and sensor-enabled fabric wrapped around the parts of the body of the rider including torso, legs, etc. for providing relaxation and comfort to the rider. [00926] As another example, the integrated vehicle and the rider twin may determine that the rider state is “shivery” based on an input from one or more sensors 60108, etc. In embodiments, a seat- integrated and sensor-enabled fabric wrapped around the parts of the body of the rider may assist the twin in determining the rider state. The twin may also determine that the vehicle state includes rainy weather conditions. The integrated vehicle and rider twin may then take one or more actions to provide warmth to the rider: the twin may activate a warming element or an element for mid- infrared (penetrating heat) on the seat-integrated and sensor-enabled fabric wrapped around the parts of the body of the rider including torso, legs, etc. for providing warmth and comfort to the rider. [00927] In embodiments, a digital twin may represent a set of items contained in a vehicle, such as ones recognized by a network (e.g., by having device identifiers recognized by the network, such as device identifiers of cellular phones, laptops, tablets, or other computing devices) and/or ones identified by in-vehicle sensors, such as cameras, including ones using computer vision for object recognition. Thus, a digital twin may provide a view of a user of the interior contents of the
SFT-106-A-PCT vehicle that depicts to presence or absence of the items, such that the user can confirm the same. This may assist with finding lost items, with confirming the presence of items required for a trip (such as coordinated with a packing list, including a digital packing list), with confirming the presence of sports equipment, provisions, portable seats, umbrellas, or other accessories or personal property items of the user. In embodiments, the digital twin of the vehicle may integrate with, or integrate information from, a set of digital twins that represent other items, including items of personal property of the user of the digital twin. In embodiments, an application, such as a mobile application, may be provided, such as by or linked to a vehicle manufacturer or dealer, or the like, for tracking the personal items of a user, including a typical set of vehicle accessories and items typically transported or stored in a vehicle, via a set of digital twins that each represent some or all of the items. A user may be prompted to enter the items, such as by identifying the items by name or description, by linking to the items (such as by linking to or from identifiers in e-commerce sites (or to communications from such sites, such as confirmation emails indicating purchases), by capturing photographs of the items, by capturing QR codes, bar codes, or the like of the items, or other techniques. Identified items may be represented in a set of digital twins based on type (such as by retrieving dimensions, images, and other attributes from relevant data sources, such as e- commerce sites or providers), or based on actual images (which may be sized based on dimensional information captured during image capture, such as using structured light, LIDAR or other dimension estimating techniques). In the mobile application, the user may indicate a wish to track the personal property, in which case location tracking systems, including tag-based systems (such as RFID systems), label-based systems (such as QR systems), sensor-based systems (such as using cameras and other sensors), network-based systems (such as Internet of Things systems) and others may track the locations of the personal property. In embodiments, the location information from a location tracking system may represent the items in a set of digital twins, such as ones representing a user’s vehicle, locations within a user’s vehicle (in a vehicle digital twin), locations within a user’s home (such as in a home digital twin), locations within a user’s workplace (such as in a workplace digital twin), or the like. In embodiments, a user may select an item in the mobile application, such as from a list or menu, or via a search function, and the mobile application will retrieve the appropriate digital twin and display the item within the digital twin based on the current location of the item. [00928] Referring to Fig. 74, the artificial intelligence system 65248 may define a machine learning model 65102 for performing analytics, simulation, decision making, and prediction making related to data processing, data analysis, simulation creation, and simulation analysis of one or more of the transportation entities. The machine learning model 65102 is an algorithm and/or statistical model that performs specific tasks without using explicit instructions, relying
SFT-106-A-PCT instead on patterns and inference. The machine learning model 65102 builds one or more mathematical models based on training data to make predictions and/or decisions without being explicitly programmed to perform the specific tasks. The machine learning model 65102 may receive inputs of sensor data as training data, including event data 65124 and state data 65702 related to one or more of the transportation entities. The sensor data input to the machine learning model 65102 may be used to train the machine learning model 65102 to perform the analytics, simulation, decision making, and prediction making relating to the data processing, data analysis, simulation creation, and simulation analysis of the one or more of the transportation entities. The machine learning model 65102 may also use input data from a user or users of the information technology system. The machine learning model 65102 may include an artificial neural network, a decision tree, a support vector machine, a Bayesian network, a genetic algorithm, any other suitable form of machine learning model, or a combination thereof. The machine learning model 65102 may be configured to learn through supervised learning, unsupervised learning, reinforcement learning, self-learning, feature learning, sparse dictionary learning, anomaly detection, association rules, a combination thereof, or any other suitable algorithm for learning. [00929] The artificial intelligence system 65248 may also define the digital twin system 65330 to create a digital replica of one or more of the transportation entities. The digital replica of the one or more of the transportation entities may use substantially real-time sensor data to provide for substantially real-time virtual representation of the transportation entity and provides for simulation of one or more possible future states of the one or more transportation entities. The digital replica exists simultaneously with the one or more transportation entities being replicated. The digital replica provides one or more simulations of both physical elements and properties of the one or more transportation entities being replicated and the dynamics thereof, in embodiments, throughout the lifestyle of the one or more transportation entities being replicated. The digital replica may provide a hypothetical simulation of the one or more transportation entities, for example during a design phase before the one or more transportation entities are constructed or fabricated, or during or after construction or fabrication of the one or more transportation entities by allowing for hypothetical extrapolation of sensor data to simulate a state of the one or more transportation entities, such as during high stress, after a period of time has passed during which component wear may be an issue, during maximum throughput operation, after one or more hypothetical or planned improvements have been made to the one or more transportation entities, or any other suitable hypothetical situation. In some embodiments, the machine learning model 65102 may automatically predict hypothetical situations for simulation with the digital replica, such as by predicting possible improvements to the one or more transportation entities, predicting when one or more components of the one or more transportation entities may fail, and/or suggesting
SFT-106-A-PCT possible improvements to the one or more transportation entities, such as changes to timing settings, arrangement, components, or any other suitable change to the transportation entities. The digital replica allows for simulation of the one or more transportation entities during both design and operation phases of the one or more transportation entities, as well as simulation of hypothetical operation conditions and configurations of the one or more transportation entities. The digital replica allows for invaluable analysis and simulation of the one or more transportation entities, by facilitating observation and measurement of nearly any type of metric, including temperature, wear, light, vibration, etc. not only in, on, and around each component of the one or more transportation entities, but in some embodiments within the one or more transportation entities. In some embodiments, the machine learning model 65102 may process the sensor data including the event data 65124 and the state data 65702 to define simulation data for use by the digital twin system 65330. The machine learning model 65102 may, for example, receive state data 65702 and event data 65124 related to a particular transportation entity of the plurality of transportation entities and perform a series of operations on the state data 65702 and the event data 65124 to format the state data 65702 and the event data 65124 into a format suitable for use by the digital twin system 65330 in creation of a digital replica of the transportation entity. For example, one or more transportation entities may include a robot configured to augment products on an adjacent assembly line. The machine learning model 65102 may collect data from one or more sensors positioned on, near, in, and/or around the robot. The machine learning model 65102 may perform operations on the sensor data to process the sensor data into simulation data and output the simulation data to the digital twin system 65330. The digital twin simulation 65330 may use the simulation data to create one or more digital replicas of the robot, the simulation including for example metrics including temperature, wear, speed, rotation, and vibration of the robot and components thereof. The simulation may be a substantially real-time simulation, allowing for a human user of the information technology to view the simulation of the robot, metrics related thereto, and metrics related to components thereof, in substantially real time. The simulation may be a predictive or hypothetical situation, allowing for a human user of the information technology to view a predictive or hypothetical simulation of the robot, metrics related thereto, and metrics related to components thereof. [00930] In some embodiments, the machine learning model 65102 and the digital twin system 65330 may process sensor data and create a digital replica of a set of transportation entities of the plurality of transportation entities to facilitate design, real-time simulation, predictive simulation, and/or hypothetical simulation of a related group of transportation entities. The digital replica of the set of transportation entities may use substantially real-time sensor data to provide for substantially real-time virtual representation of the set of transportation entities and provide for
SFT-106-A-PCT simulation of one or more possible future states of the set of transportation entities. The digital replica exists simultaneously with the set of transportation entities being replicated. The digital replica provides one or more simulations of both physical elements and properties of the set of transportation entities being replicated and the dynamics thereof, in embodiments throughout the lifestyle of the set of transportation entities being replicated. The one or more simulations may include a visual simulation, such as a wire-frame virtual representation of the one or more transportation entities that may be viewable on a monitor, using an augmented reality (AR) apparatus, or using a virtual reality (VR) apparatus. The visual simulation may be able to be manipulated by a human user of the information technology system, such as zooming or highlighting components of the simulation and/or providing an exploded view of the one or more transportation entities. The digital replica may provide a hypothetical simulation of the set of transportation entities, for example during a design phase before the one or more transportation entities are constructed or fabricated, or during or after construction or fabrication of the one or more transportation entities by allowing for hypothetical extrapolation of sensor data to simulate a state of the set of transportation entities, such as during high stress, after a period of time has passed during which component wear may be an issue, during maximum throughput operation, after one or more hypothetical or planned improvements have been made to the set of transportation entities, or any other suitable hypothetical situation. In some embodiments, the machine learning model 65102 may automatically predict hypothetical situations for simulation with the digital replica, such as by predicting possible improvements to the set of transportation entities, predicting when one or more components of the set of transportation entities may fail, and/or suggesting possible improvements to the set of transportation entities, such as changes to timing settings, arrangement, components, or any other suitable change to the transportation entities. The digital replica allows for simulation of the set of transportation entities during both design and operation phases of the set of transportation entities, as well as simulation of hypothetical operation conditions and configurations of the set of transportation entities. The digital replica allows for invaluable analysis and simulation of the one or more transportation entities, by facilitating observation and measurement of nearly any type of metric, including temperature, wear, light, vibration, etc. not only in, on, and around each component of the set of transportation entities, but in some embodiments within the set of transportation entities. In some embodiments, the machine learning model 65102 may process the sensor data including the event data 65124 and the state data 65702 to define simulation data for use by the digital twin system 65330. The machine learning model 65102 may, for example, receive state data 65702 and event data 65124 related to a particular transportation entity of the plurality of transportation entities and perform a series of operations on the state data 65702 and the event data 65124 to format the state data 65702 and the event data
SFT-106-A-PCT 65124 into a format suitable for use by the digital twin system 65330 in the creation of a digital replica of the set of transportation entities. For example, a set of transportation entities may include a die machine configured to place products on a conveyor belt, the conveyor belt on which the die machine is configured to place the products, and a plurality of robots configured to add parts to the products as they move along the assembly line. The machine learning model 65102 may collect data from one or more sensors positioned on, near, in, and/or around each of the die machines, the conveyor belt, and the plurality of robots. The machine learning model 65102 may perform operations on the sensor data to process the sensor data into simulation data and output the simulation data to the digital twin system 65330. The digital twin simulation 65330 may use the simulation data to create one or more digital replicas of the die machine, the conveyor belt, and the plurality of robots, the simulation including for example metrics including temperature, wear, speed, rotation, and vibration of the die machine, the conveyor belt, and the plurality of robots and components thereof. The simulation may be a substantially real-time simulation, allowing for a human user of the information technology to view the simulation of the die machine, the conveyor belt, and the plurality of robots, metrics related thereto, and metrics related to components thereof, in substantially real time. The simulation may be a predictive or hypothetical situation, allowing for a human user of the information technology to view a predictive or hypothetical simulation of the die machine, the conveyor belt, and the plurality of robots, metrics related thereto, and metrics related to components thereof. [00931] In some embodiments, the machine learning model 65102 may prioritize collection of sensor data for use in digital replica simulations of one or more of the transportation entities. The machine learning model 65102 may use sensor data and user inputs to train, thereby learning which types of sensor data are most effective for creation of digital replicate simulations of one or more of the transportation entities. For example, the machine learning model 65102 may find that a particular transportation entity has dynamic properties such as component wear and throughput affected by temperature, humidity, and load. The machine learning model 65102 may, through machine learning, prioritize collection of sensor data related to temperature, humidity, and load, and may prioritize processing sensor data of the prioritized type into simulation data for output to the digital twin system 65330. In some embodiments, the machine learning model 65102 may suggest to a user of the information technology system that more and/or different sensors of the prioritized type be implemented in the information technology near and around the transportation entity being simulation such that more and/or better data of the prioritized type may be used in simulation of the transportation entity via the digital replica thereof. [00932] In some embodiments, the machine learning model 65102 may be configured to learn to determine which types of sensor data are to be processed into simulation data for transmission to
SFT-106-A-PCT the digital twin system 65330 based on one or both of a modeling goal and a quality or type of sensor data. A modeling goal may be an objective set by a user of the information technology system or may be predicted or learned by the machine learning model 65102. Examples of modeling goals include creating a digital replica capable of showing dynamics of throughput on an assembly line, which may include collection, simulation, and modeling of, e.g., thermal, electrical power, component wear, and other metrics of a conveyor belt, an assembly machine, one or more products, and other components of the transportation ecosystem. The machine learning model 65102 may be configured to learn to determine which types of sensor data are necessary to be processed into simulation data for transmission to the digital twin system 65330 to achieve such a model. In some embodiments, the machine learning model 65102 may analyze which types of sensor data are being collected, the quality and quantity of the sensor data being collected, and what the sensor data being collected represents, and may make decisions, predictions, analyses, and/or determinations related to which types of sensor data are and/or are not relevant to achieving the modeling goal and may make decisions, predictions, analyses, and/or determinations to prioritize, improve, and/or achieve the quality and quantity of sensor data being processed into simulation data for use by the digital twin system 65330 in achieving the modeling goal. [00933] In some embodiments, a user of the information technology system may input a modeling goal into the machine learning model 65102. The machine learning model 65102 may learn to analyze training data to output suggestions to the user of the information technology system regarding which types of sensor data are most relevant to achieving the modeling goal, such as one or more types of sensors positioned in, on, or near a transportation entity or a plurality of transportation entities that is relevant to the achievement of the modeling goal is and/or are not sufficient for achieving the modeling goal, and how a different configuration of the types of sensors, such as by adding, removing, or repositioning sensors, may better facilitate achievement of the modeling goal by the machine learning model 65102 and the digital twin system 65330. In some embodiments, the machine learning model 65102 may automatically increase or decrease collection rates, processing, storage, sampling rates, bandwidth allocation, bitrates, and other attributes of sensor data collection to achieve or better achieve the modeling goal. In some embodiments, the machine learning model 65102 may make suggestions or predictions to a user of the information technology system related to increasing or decreasing collection rates, processing, storage, sampling rates, bandwidth allocation, bitrates, and other attributes of sensor data collection to achieve or better achieve the modeling goal. In some embodiments, the machine learning model 65102 may use sensor data, simulation data, previous, current, and/or future digital replica simulations of one or more transportation entities of the plurality of transportation entities to automatically create and/or propose modeling goals. In some embodiments, modeling goals
SFT-106-A-PCT automatically created by the machine learning model 65102 may be automatically implemented by the machine learning model 65102. In some embodiments, modeling goals automatically created by the machine learning model 65102 may be proposed to a user of the information technology system, and implemented only after acceptance and/or partial acceptance by the user, such as after modifications are made to the proposed modeling goal by the user. [00934] In some embodiments, the user may input the one or more modeling goals, for example, by inputting one or more modeling commands to the information technology system. The one or more modeling commands may include, for example, a command for the machine learning model 65102 and the digital twin system 65330 to create a digital replica simulation of one transportation entity or a set of transportation entities, may include a command for the digital replica simulation to be one or more of a real-time simulation, and a hypothetical simulation. The modeling command may also include, for example, parameters for what types of sensor data should be used, sampling rates for the sensor data, and other parameters for the sensor data used in the one or more digital replica simulations. In some embodiments, the machine learning model 65102 may be configured to predict modeling commands, such as by using previous modeling commands as training data. The machine learning model 65102 may propose predicted modeling commands to a user of the information technology system, for example, to facilitate simulation of one or more of the transportation entities that may be useful for the management of the transportation entities and/or to allow the user to easily identify potential issues with or possible improvements to the transportation entities. [00935] In some embodiments, the machine learning model 65102 may be configured to evaluate a set of hypothetical simulations of one or more of the transportation entities. The set of hypothetical simulations may be created by the machine learning model 65102 and the digital twin system 65330 as a result of one or more modeling commands, as a result of one or more modeling goals, one or more modeling commands, by prediction by the machine learning model 65102, or a combination thereof. The machine learning model 65102 may evaluate the set of hypothetical simulations based on one or more metrics defined by the user, one or more metrics defined by the machine learning model 65102, or a combination thereof. In some embodiments, the machine learning model 65102 may evaluate each of the hypothetical simulations of the set of hypothetical simulations independently of one another. In some embodiments, the machine learning model 65102 may evaluate one or more of the hypothetical simulations of the set of hypothetical simulations in relation to one another, for example by ranking the hypothetical simulations or creating tiers of the hypothetical simulations based on one or more metrics. [00936] In some embodiments, the machine learning model 65102 may include one or more model interpretability systems to facilitate human understanding of outputs of the machine learning model
65102, as well as information and insight related to cognition and processes of the machine learning model 65102; i.e., the one or more model interpretability systems allow for human understanding of not only “what” the machine learning model 65102 is outputting, but also “why” the machine learning model 65102 is outputting the outputs thereof, and what process led to the machine learning model 65102 formulating the outputs. The one or more model interpretability systems may also be used by a human user to improve and guide training of the machine learning model 65102, to help debug the machine learning model 65102, and to help recognize bias in the machine learning model 65102. The one or more model interpretability systems may include one or more of linear regression, logistic regression, a generalized linear model (GLM), a generalized additive model (GAM), a decision tree, a decision rule, RuleFit, a Naive Bayes classifier, a K-nearest neighbors algorithm, a partial dependence plot, individual conditional expectation (ICE), an accumulated local effects (ALE) plot, feature interaction, permutation feature importance, a global surrogate model, a local surrogate (LIME) model, scoped rules (i.e., anchors), Shapley values, Shapley additive explanations (SHAP), feature visualization, network dissection, or any other suitable machine learning interpretability implementation. In some embodiments, the one or more model interpretability systems may include a model dataset visualization system. The model dataset visualization system is configured to automatically provide to a human user of the information technology system visual analysis related to the distribution of values of the sensor data, the simulation data, and data nodes of the machine learning model 65102.
[00937] In some embodiments, the machine learning model 65102 may include and/or implement an embedded model interpretability system, such as a Bayesian case model (BCM) or glass box. The Bayesian case model uses Bayesian case-based reasoning, prototype classification, and clustering to facilitate human understanding of data such as the sensor data, the simulation data, and data nodes of the machine learning model 65102. In some embodiments, the model interpretability system may include and/or implement a glass box interpretability method, such as a Gaussian process, to facilitate human understanding of data such as the sensor data, the simulation data, and data nodes of the machine learning model 65102.
[00938] In some embodiments, the machine learning model 65102 may include and/or implement testing with concept activation vectors (TCAV). The TCAV allows the machine learning model 65102 to learn human-interpretable concepts, such as “running,” “not running,” “powered,” “not powered,” “robot,” “human,” “truck,” or “ship,” from examples by a process including defining the concept, determining concept activation vectors, and calculating directional derivatives. By learning human-interpretable concepts, objects, states, etc., TCAV may allow the machine learning model 65102 to output useful information related to the transportation entities and data collected
therefrom in a format that is readily understood by a human user of the information technology system.
[00939] In some embodiments, the machine learning model 65102 may be and/or include an artificial neural network, e.g., a connectionist system configured to “learn” to perform tasks by considering examples and without being explicitly programmed with task-specific rules. The machine learning model 65102 may be based on a collection of connected units and/or nodes that may act like artificial neurons that may in some ways emulate neurons in a biological brain. The units and/or nodes may each have one or more connections to other units and/or nodes. The units and/or nodes may be configured to transmit information, e.g., one or more signals, to other units and/or nodes, process signals received from other units and/or nodes, and forward processed signals to other units and/or nodes. One or more of the units and/or nodes and connections therebetween may have one or more numerical “weights” assigned. The assigned weights may be configured to facilitate learning, i.e., training, of the machine learning model 65102. The assigned weights may increase and/or decrease one or more signals between one or more units and/or nodes, and in some embodiments may have one or more thresholds associated with one or more of the weights. The one or more thresholds may be configured such that a signal is only sent between one or more units and/or nodes if a signal and/or aggregate signal crosses the threshold. In some embodiments, the units and/or nodes may be assigned to a plurality of layers, each of the layers having one or both of inputs and outputs. A first layer may be configured to receive training data, transform at least a portion of the training data, and transmit signals related to the training data and the transformation thereof to a second layer. A final layer may be configured to output an estimate, conclusion, product, or other consequence of the processing of one or more inputs by the machine learning model 65102. Each of the layers may perform one or more types of transformations, and one or more signals may pass through one or more of the layers one or more times. In some embodiments, the machine learning model 65102 may employ deep learning and be at least partially modeled and/or configured as a deep neural network, a deep belief network, a recurrent neural network, and/or a convolutional neural network, such as by being configured to include one or more hidden layers.
[00940] In some embodiments, the machine learning model 65102 may be and/or include a decision tree, e.g., a tree-based predictive model configured to identify one or more observations and determine one or more conclusions based on an input. The observations may be modeled as one or more “branches” of the decision tree, and the conclusions may be modeled as one or more “leaves” of the decision tree. In some embodiments, the decision tree may be a classification tree. The classification tree may include one or more leaves representing one or more class labels, and one or more branches representing one or more conjunctions of features configured to lead to the
SFT-106-A-PCT class labels. In some embodiments, the decision tree may be a regression tree. The regression tree may be configured such that one or more target variables may take continuous values. [00941] In some embodiments, the machine learning model 65102 may be and/or include a support vector machine, e.g. a set of related supervised learning methods configured for use in one or both of classification and regression-based modeling of data. The support vector machine may be configured to predict whether a new example falls into one or more categories, the one or more categories being configured during training of the support vector machine. [00942] In some embodiments, the machine learning model 65102 may be configured to perform regression analysis to determine and/or estimate a relationship between one or more inputs and one or more features of the one or more inputs. Regression analysis may include linear regression, wherein the machine learning model 65102 may calculate a single line to best fit input data according to one or more mathematical criteria. [00943] In embodiments, inputs to the machine learning model 65102 (such as a regression model, Bayesian network, supervised model, or other types of model) may be tested, such as by using a set of testing data that is independent from the data set used for the creation and/or training of the machine learning model, such as to test the impact of various inputs to the accuracy of the model 65102. For example, inputs to the regression model may be removed, including single inputs, pairs of inputs, triplets, and the like, to determine whether the absence of inputs creates a material degradation of the success of the model 65102. This may assist with recognition of inputs that are in fact correlated (e.g., are linear combinations of the same underlying data), that are overlapping, or the like. Comparison of model success may help select among alternative input data sets that provide similar information, such as to identify the inputs (among several similar ones) that generate the least “noise” in the model, that provide the most impact on model effectiveness for the lowest cost, or the like. Thus, input variation and testing of the impact of input variation on model effectiveness may be used to prune or enhance model performance for any of the machine learning systems described throughout this disclosure. [00944] In some embodiments, the machine learning model 65102 may be and/or include a Bayesian network. The Bayesian network may be a probabilistic graphical model configured to represent a set of random variables and conditional independence of the set of random variables. The Bayesian network may be configured to represent the random variables and conditional independence via a directed acyclic graph. The Bayesian network may include one or both of a dynamic Bayesian network and an influence diagram. [00945] In some embodiments, the machine learning model 65102 may be defined via supervised learning, i.e. one or more algorithms configured to build a mathematical model of a set of training data containing one or more inputs and desired outputs. The training data may consist of a set of
SFT-106-A-PCT training examples, each of the training examples having one or more inputs and desired outputs, i.e. a supervisory signal. Each of the training examples may be represented in the machine learning model 65102 by an array and/or a vector, i.e. a feature vector. The training data may be represented in the machine learning model 65102 by a matrix. The machine learning model 65102 may learn one or more functions via iterative optimization of an objective function, thereby learning to predict an output associated with new inputs. Once optimized, the objective function may provide the machine learning model 65102 with the ability to accurately determine an output for inputs other than inputs included in the training data. In some embodiments, the machine learning model 65102 may be defined via one or more supervised learning algorithms such as active learning, statistical classification, regression analysis, and similarity learning. Active learning may include interactively querying, by the machine learning model 65102, a user and/or an information source to label new data points with desired outputs. Statistical classification may include identifying, by the machine learning model 65102, to which a set of subcategories, i.e. subpopulations, a new observation belongs based on a training set of data containing observations having known categories. Regression analysis may include estimating, by the machine learning model 65102 relationships between a dependent variable, i.e. an outcome variable, and one or more independent variables, i.e. predictors, covariates, and/or features. Similarity learning may include learning, by the machine learning model 65102, from examples using a similarity function, the similarity function being designed to measure how similar or related two objects are. [00946] In some embodiments, the machine learning model 65102 may be defined via unsupervised learning, i.e. one or more algorithms configured to build a mathematical model of a set of data containing only inputs by finding structure in the data such as grouping or clustering of data points. In some embodiments, the machine learning model 65102 may learn from test data, i.e. training data, that has not been labeled, classified, or categorized. The unsupervised learning algorithm may include identifying, by the machine learning model 65102, commonalities in the training data and learning by reacting based on the presence or absence of the identified commonalities in new pieces of data. In some embodiments, the machine learning model 65102 may generate one or more probability density functions. In some embodiments, the machine learning model 65102 may learn by performing cluster analysis, such as by assigning a set of observations into subsets, i.e. clusters, according to one or more predesignated criteria, such as according to a similarity metric of which internal compactness, separation, estimated density, and/or graph connectivity are factors. [00947] In some embodiments, the machine learning model 65102 may be defined via semi- supervised learning, i.e. one or more algorithms using training data wherein some training examples may be missing training labels. The semi-supervised learning may be weakly supervised
SFT-106-A-PCT learning, wherein the training labels may be noisy, limited, and/or imprecise. The noisy, limited, and/or imprecise training labels may be cheaper and/or less labor intensive to produce, thus allowing the machine learning model 65102 to train on a larger set of training data for less cost and/or labor. [00948] In some embodiments, the machine learning model 65102 may be defined via reinforcement learning, such as one or more algorithms using dynamic programming techniques such that the machine learning model 65102 may train by taking actions in an environment in order to maximize a cumulative reward. In some embodiments, the training data is represented as a Markov Decision Process. [00949] In some embodiments, the machine learning model 65102 may be defined via self- learning, wherein the machine learning model 65102 is configured to train using training data with no external rewards and no external teaching, such as by employing a Crossbar Adaptive Array (CAA). The CAA may compute decisions about actions and/or emotions about consequence situations in a crossbar fashion, thereby driving teaching of the machine learning model 65102 by interactions between cognition and emotion. [00950] In some embodiments, the machine learning model 65102 may be defined via feature learning, i.e. one or more algorithms designed to discover increasingly accurate and/or apt representations of one or more inputs provided during training, e.g. training data. Feature learning may include training via principal component analysis and/or cluster analysis. Feature learning algorithms may include attempting, by the machine learning model 65102, to preserve input training data while also transforming the input training data such that the transformed input training data is useful. In some embodiments, the machine learning model 65102 may be configured to transform the input training data prior to performing one or more classifications and/or predictions of the input training data. Thus, the machine learning model 65102 may be configured to reconstruct input training data from one or more unknown data-generating distributions without necessarily conforming to implausible configurations of the input training data according to the distributions. In some embodiments, the feature learning algorithm may be performed by the machine learning model 65102 in a supervised, unsupervised, or semi-supervised manner. [00951] In some embodiments, the machine learning model 65102 may be defined via anomaly detection, i.e. by identifying rare and/or outlier instances of one or more items, events and/or observations. The rare and/or outlier instances may be identified by the instances differing significantly from patterns and/or properties of a majority of the training data. Unsupervised anomaly detection may include detecting of anomalies, by the machine learning model 65102, in an unlabeled training data set under an assumption that a majority of the training data is “normal.”
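As a hedged sketch of the unsupervised anomaly detection just described, the following Python example fits an isolation forest to unlabeled, mostly normal synthetic sensor readings and flags rare outlying readings as anomalies. The choice of an isolation forest, the two-feature layout, and the contamination rate are illustrative assumptions rather than requirements of this disclosure.

```python
# Hedged sketch of unsupervised anomaly detection: fit an isolation forest to
# unlabeled readings under the assumption that most of the data is "normal."
# The features, sample sizes, and contamination rate are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical unlabeled training data: [temperature, vibration] readings,
# mostly normal operation with a few anomalous rows mixed in.
normal = rng.normal(loc=[80.0, 2.0], scale=[3.0, 0.3], size=(980, 2))
anomalous = rng.normal(loc=[110.0, 6.0], scale=[5.0, 1.0], size=(20, 2))
readings = np.vstack([normal, anomalous])

detector = IsolationForest(contamination=0.02, random_state=0)
detector.fit(readings)                       # no labels are provided

new_readings = np.array([[81.0, 2.1],        # typical reading
                         [115.0, 6.5]])      # uncharacteristic reading
print(detector.predict(new_readings))        # 1 = "normal", -1 = anomaly
```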
SFT-106-A-PCT Supervised anomaly detection may include training on a data set wherein at least a portion of the training data has been labeled as “normal” and/or “abnormal.” [00952] In some embodiments, the machine learning model 65102 may be defined via robot learning. Robot learning may include generation, by the machine learning model 65102, of one or more curricula, the curricula being sequences of learning experiences, and cumulatively acquiring new skills via exploration guided by the machine learning model 65102 and social interaction with humans by the machine learning model 65102. Acquisition of new skills may be facilitated by one or more guidance mechanisms such as active learning, maturation, motor synergies, and/or imitation. [00953] In some embodiments, the machine learning model 65102 can be defined via association rule learning. Association rule learning may include discovering relationships, by the machine learning model 65102, between variables in databases, in order to identify strong rules using some measure of “interestingness.” Association rule learning may include identifying, learning, and/or evolving rules to store, manipulate and/or apply knowledge. The machine learning model 65102 may be configured to learn by identifying and/or utilizing a set of relational rules, the relational rules collectively representing knowledge captured by the machine learning model 65102. Association rule learning may include one or more of learning classifier systems, inductive logic programming, and artificial immune systems. Learning classifier systems are algorithms that may combine a discovery component, such as one or more genetic algorithms, with a learning component, such as one or more algorithms for supervised learning, reinforcement learning, or unsupervised learning. Inductive logic programming may include rule-learning, by the machine learning model 65102, using logic programming to represent one or more of input examples, background knowledge, and hypothesis determined by the machine learning model 65102 during training. The machine learning model 65102 may be configured to derive a hypothesized logic program entailing all positive examples given an encoding of known background knowledge and a set of examples represented as a logical database of facts. [00954] Fig.75 illustrates an example environment of a digital twin system 200. In embodiments, the digital twin system 200 generates a set of digital twins of a set of transportation systems 11 and/or transportation entities within the set of transportation systems. In embodiments, the digital twin system 200 maintains a set of states of the respective transportation systems 11, such as using sensor data obtained from respective sensor systems 25 that monitor the transportation systems 11. In embodiments, the digital twin system 200 may include a digital twin management system 202, a digital twin I/O system 204, a digital twin simulation system 206, a digital twin dynamic model system 208, a cognitive intelligence system 258, (also disclosed herein as a cognitive processes system 258) and/or an environment control system 234. In embodiments, the digital twin system
SFT-106-A-PCT 200 may provide a real time sensor API 214 that provides a set of capabilities for enabling a set of interfaces for the sensors of the respective sensor systems 25. In embodiments, the digital twin system 200 may include and/or employ other suitable APIs, brokers, connectors, bridges, gateways, hubs, ports, routers, switches, data integration systems, peer-to-peer systems, and the like to facilitate the transferring of data to and from the digital twin system 200. In embodiments, the digital twin system 200, the sensor system 25, and a client application 217 may be connected to a network 81120. In these embodiments, these connective components may allow a network connected sensor or an intermediary device (e.g., a relay, an edge device, a switch, or the like) within a sensor system 25 to communicate data to the digital twin system 25 and/or to receive data (e.g., configuration data, control data, or the like) from the digital twin system 25 or another external system. In embodiments, the digital twin system 200 may further include a digital twin datastore 269 that stores digital twins 236 of various transportation systems 11 and the objects 222, devices 265, sensors 227, and/or humans 229 in the transportation system 11. [00955] A digital twin may refer to a digital representation of one or more transportation entities, such as a transportation system 11, a physical object 222, a device 265, a sensor 227, a human 229, or any combination thereof. Examples of transportation systems 11 include, but are not limited to, a land, sea, or air vehicle, a group of vehicles, a fleet, a squadron, an armada, a port, a rail yard, a loading dock, a ferry, a train, a drone, a submarine, a street sweeper, a snow plow, a recycling truck, a tanker truck, a mobile generator, a tunneling machine, a natural resources excavation machine (e.g., a mining vehicle, a mobile oil rig, etc.), a barge, an offshore oil platform, a rail car, a trailer, a dirigible, an aircraft carrier, a fishing vessel, a cargo ship, a cruise ship, a hospital ship and the like. Depending on the type of transportation system, the types of objects, devices, and sensors that are found in the environments will differ. Non-limiting examples of physical objects 222 include raw materials, manufactured products, excavated materials, containers (e.g., boxes, dumpsters, cooling towers, ship funnels, vats, pallets, barrels, palates, bins, and the like), furniture (e.g., tables, counters, workstations, shelving, etc.), and the like. Non-limiting examples of devices 265 include robots, computers, vehicles (e.g., cars, trucks, tankers, trains, forklifts, cranes, etc.), machinery/equipment (e.g., tractors, tillers, drills, presses, assembly lines, conveyor belts, etc.), and the like. The sensors 227 may be any sensor devices and/or sensor aggregation devices that are found in a sensor system 25 within a transportation system. Non-limiting examples of sensors 227 that may be implemented in a sensor system 25 may include temperature sensors 231, humidity sensors 233, vibration sensors 235, LIDAR sensors 238, motion sensors 239, chemical sensors 241, audio sensors 243, pressure sensors 253, weight sensors 254, radiation sensors 255, video sensors 270, wearable devices 257, relays 275, edge devices 277, switches 278, infrared sensors 297, radio frequency (RF) Sensors 215, Extraordinary Magnetoresistive (EMR) sensors 280,
SFT-106-A-PCT and/or any other suitable sensors. Examples of different types of physical objects 222, devices 265, sensors 227, and transportation systems 11 are referenced throughout the disclosure. [00956] In embodiments, a switch 278 is implemented in the sensor system 25 having multiple inputs and multiple outputs including a first input connected to the first sensor and a second input connected to the second sensor. The multiple outputs include a first output and second output configured to be switchable between a condition in which the first output is configured to switch between delivery of the first sensor signal and the second sensor signal and a condition in which there is simultaneous delivery of the first sensor signal from the first output and the second sensor signal from the second output. Each of multiple inputs is configured to be individually assigned to any of the multiple outputs. Unassigned outputs are configured to be switched off producing a high-impedance state. In some examples, the switch 278 can be a crosspoint switch. [00957] In embodiments, the first sensor signal and the second sensor signal are continuous vibration data about the transportation system. In embodiments, the second sensor in the sensor system 25 is configured to be connected to the first machine. In embodiments, the second sensor in the sensor system 25 is configured to be connected to a second machine in the transportation system. In embodiments, the computing environment of the platform is configured to compare relative phases of the first and second sensor signals. In embodiments, the first sensor is a single- axis sensor and the second sensor is a three-axis sensor. In embodiments, at least one of the multiple inputs of the switch 278 includes internet protocol, front-end signal conditioning, for improved signal-to-noise ratio. In embodiments, the switch 278 includes a third input that is configured with a continuously monitored alarm having a pre-determined trigger condition when the third input is unassigned to any of the multiple outputs. [00958] In embodiments, multiple inputs of the switch 278 include a third input connected to the second sensor and a fourth input connected to the second sensor. The first sensor signal is from a single-axis sensor at an unchanging location associated with the first machine. In embodiments, the second sensor is a three-axis sensor. In embodiments, the sensor system 25 is configured to record gap-free digital waveform data simultaneously from at least the first input, the second input, the third input, and the fourth input. In embodiments, the platform is configured to determine a change in relative phase based on the simultaneously recorded gap-free digital waveform data. In embodiments, the second sensor is configured to be movable to a plurality of positions associated with the first machine while obtaining the simultaneously recorded gap-free digital waveform data. In embodiments, multiple outputs of the switch include a third output and fourth output. The second, third, and fourth outputs are assigned together to a sequence of tri-axial sensors each located at different positions associated with the machine. In embodiments, the platform is
SFT-106-A-PCT configured to determine an operating deflection shape based on the change in relative phase and the simultaneously recorded gap-free digital waveform data. [00959] In embodiments, the unchanging location is a position associated with the rotating shaft of the first machine. In embodiments, tri-axial sensors in the sequence of the tri-axial sensors are each located at different positions on the first machine but are each associated with different bearings in the machine. In embodiments, tri-axial sensors in the sequence of the tri-axial sensors are each located at similar positions associated with similar bearings but are each associated with different machines. In embodiments, the sensor system 25 is configured to obtain the simultaneously recorded gap-free digital waveform data from the first machine while the first machine and a second machine are both in operation. In embodiments, the sensor system 25 is configured to characterize a contribution from the first machine and the second machine in the simultaneously recorded gap-free digital waveform data from the first machine. In embodiments, the simultaneously recorded gap-free digital waveform data has a duration that is in excess of one minute. [00960] In embodiments, a method of monitoring a machine having at least one shaft supported by a set of bearings includes monitoring a first data channel assigned to a single-axis sensor at an unchanging location associated with the machine. The method includes monitoring second, third, and fourth data channels each assigned to an axis of a three-axis sensor. The method includes recording gap-free digital waveform data simultaneously from all of the data channels while the machine is in operation and determining a change in relative phase based on the digital waveform data. [00961] In embodiments, the tri-axial sensor is located at a plurality of positions associated with the machine while obtaining the digital waveform. In embodiments, the second, third, and fourth channels are assigned together to a sequence of tri-axial sensors each located at different positions associated with the machine. In embodiments, the data is received from all of the sensors simultaneously. In embodiments, the method includes determining an operating deflection shape based on the change in relative phase information and the waveform data. In embodiments, the unchanging location is a position associated with the shaft of the machine. In embodiments, the tri-axial sensors in the sequence of the tri-axial sensors are each located at different positions and are each associated with different bearings in the machine. In embodiments, the unchanging location is a position associated with the shaft of the machine. The tri-axial sensors in the sequence of the tri-axial sensors are each located at different positions and are each associated with different bearings that support the shaft in the machine. [00962] In embodiments, the method includes monitoring the first data channel assigned to the single-axis sensor at an unchanging location located on a second machine. The method includes
SFT-106-A-PCT monitoring the second, the third, and the fourth data channels, each assigned to the axis of a three- axis sensor that is located at the position associated with the second machine. The method also includes recording gap-free digital waveform data simultaneously from all of the data channels from the second machine while both of the machines are in operation. In embodiments, the method includes characterizing the contribution from each of the machines in the gap-free digital waveform data simultaneously from the second machine. [00963] In some embodiments, on-device sensor fusion and data storage for network connected devices is supported, including on-device sensor fusion and data storage for a network connected device, where data from multiple sensors is multiplexed at the device for storage of a fused data stream. For example, pressure and temperature data may be multiplexed into a data stream that combines pressure and temperature in a time series, such as in a byte-like structure (where time, pressure, and temperature are bytes in a data structure, so that pressure and temperature remain linked in time, without requiring separate processing of the streams by outside systems), or by adding, dividing, multiplying, subtracting, or the like, such that the fused data can be stored on the device. Any of the sensor data types described throughout this disclosure, including vibration data, can be fused in this manner and stored in a local data pool, in storage, or on an IoT device, such as a data collector, a component of a machine, or the like. [00964] In some embodiments, a set of digital twins may represent an organization, such as an energy transport organization, an oil and gas transport organization, aerospace manufacturers, vehicle manufacturers, heavy equipment manufacturers, a mining organization, a drilling organization, an offshore platform organization, and the like. In these examples, the digital twins may include digital twins of one or more transportation systems of the organization. [00965] In embodiments, the digital twin management system 202 generates digital twins. A digital twin may be comprised of (e.g., via reference) other digital twins. In this way, a discrete digital twin may be comprised of a set of other discrete digital twins. For example, a digital twin of a machine may include digital twins of sensors on the machine, digital twins of components that make up the machine, digital twins of other devices that are incorporated in or integrated with the machine (such as systems that provide inputs to the machine or take outputs from it), and/or digital twins of products or other items that are made by the machine. Taking this example one step further, a digital twin of a transportation system may include a digital twin representing the layout of the transportation system, including the arrangement of physical assets and systems in or around the transportation system, as well as digital assets of the assets within the transportation system (e.g., the digital twin of the machine), as well as digital twins of storage areas in the transportation system, digital twins of humans collecting vibration measurements from machines throughout the transportation system, and the like. In this second example, the digital twin of the transportation
system may reference the embedded digital twins, which may then reference other digital twins embedded within those digital twins.

[00966] In some embodiments, a digital twin may represent abstract entities, such as workflows and/or processes, including inputs, outputs, sequences of steps, decision points, processing loops, and the like that make up such workflows and processes. For example, a digital twin may be a digital representation of a manufacturing process, a logistics workflow, an agricultural process, a mineral extraction process, or the like. In these embodiments, the digital twin may include references to the transportation entities that are included in the workflow or process. The digital twin of the manufacturing process may reflect the various stages of the process. In some of these embodiments, the digital twin system 200 receives real-time data from the transportation system (e.g., from a sensor system 25 of the transportation system 11) in which the manufacturing process takes place and reflects a current (or substantially current) state of the process in real-time.

[00967] In embodiments, the digital representation may include a set of data structures (e.g., classes) that collectively define a set of properties of a represented physical object 222, device 265, sensor 227, or transportation system 11 and/or possible behaviors thereof. For example, the set of properties of a physical object 222 may include a type of the physical object, the dimensions of the object, the mass of the object, the density of the object, the material(s) of the object, the physical properties of the material(s), the surface of the physical object, the status of the physical object, a location of the physical object, identifiers of other digital twins contained within the object, and/or other suitable properties. Examples of behavior of a physical object may include a state of the physical object (e.g., a solid, liquid, or gas), a melting point of the physical object, a density of the physical object when in a liquid state, a viscosity of the physical object when in a liquid state, a freezing point of the physical object, a density of the physical object when in a solid state, a hardness of the physical object when in a solid state, the malleability of the physical object, the buoyancy of the physical object, the conductivity of the physical object, a burning point of the physical object, the manner by which humidity affects the physical object, the manner by which water or other liquids affect the physical object, a terminal velocity of the physical object, and the like. In another example, the set of properties of a device may include a type of the device, the dimensions of the device, the mass of the device, the density of the device, the material(s) of the device, the physical properties of the material(s), the surface of the device, the output of the device, the status of the device, a location of the device, a trajectory of the device, vibration characteristics of the device, identifiers of other digital twins that the device is connected to and/or contains, and the like. Examples of the behaviors of a device may include a maximum acceleration of a device, a maximum speed of a device, ranges of motion of a device, a heating profile of a device, a cooling profile of a device, processes that are performed by the device, operations that are performed by the device, and the like. Example properties of an environment may include the dimensions of the environment, the boundaries of the environment, the temperature of the environment, the humidity of the environment, the airflow of the environment, the physical objects in the environment, currents of the environment (if a body of water), and the like. Examples of behaviors of an environment may include scientific laws that govern the environment, processes that are performed in the environment, rules or regulations that must be adhered to in the environment, and the like.
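By way of a non-limiting illustration, the class-based digital representation described above might be sketched as follows. The sketch is written in Python, and all class names, field names, and example values are illustrative assumptions rather than elements prescribed by the platform.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class PhysicalObjectTwinProperties:
    """Static properties of a represented physical object (type, dimensions, mass, materials, etc.)."""
    object_type: str
    dimensions_m: Tuple[float, float, float]   # length, width, height
    mass_kg: float
    density_kg_m3: float
    materials: List[str]
    status: str = "unknown"
    location: Optional[Tuple[float, float, float]] = None  # x, y, z in the twin's coordinate space
    contained_twin_ids: List[str] = field(default_factory=list)

@dataclass
class PhysicalObjectTwinBehaviors:
    """Behavioral parameters that a simulation of the object could draw upon."""
    state_of_matter: str = "solid"              # "solid", "liquid", or "gas"
    melting_point_c: Optional[float] = None
    freezing_point_c: Optional[float] = None
    terminal_velocity_m_s: Optional[float] = None

@dataclass
class DigitalTwin:
    """A digital twin pairing an identifier with its properties and behaviors."""
    twin_id: str
    properties: PhysicalObjectTwinProperties
    behaviors: PhysicalObjectTwinBehaviors

# Example: a hypothetical digital twin of a shipping container as it might be held in a datastore.
container_twin = DigitalTwin(
    twin_id="container-0042",
    properties=PhysicalObjectTwinProperties(
        object_type="container",
        dimensions_m=(12.2, 2.4, 2.6),
        mass_kg=3750.0,
        density_kg_m3=7850.0,
        materials=["corten steel"],
        status="in transit",
        location=(104.0, 18.5, 0.0),
    ),
    behaviors=PhysicalObjectTwinBehaviors(state_of_matter="solid", melting_point_c=1425.0),
)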
[00968] In embodiments, the properties of a digital twin may be adjusted. For example, the temperature of a digital twin, a humidity of a digital twin, the shape of a digital twin, the material of a digital twin, the dimensions of a digital twin, or any other suitable parameters may be adjusted. As the properties of the digital twin are adjusted, other properties may be affected as well. For example, if the temperature of a volume associated with a transportation system 11 is increased, the pressure within the volume may increase as well, such as a pressure of a gas in accordance with the ideal gas law. In another example, if the temperature of a digital twin of a subzero volume is increased to above freezing, the properties of an embedded twin of water in a solid state (i.e., ice) may change into a liquid state over time.

[00969] Digital twins may be represented in a number of different forms. In embodiments, a digital twin may be a visual digital twin that is rendered by a computing device, such that a human user can view digital representations of a transportation system 11 and/or the physical objects 222, devices 265, and/or the sensors 227 within an environment. In embodiments, the digital twin may be rendered and output to a display device. In some of these embodiments, the digital twin may be rendered in a graphical user interface, such that a user may interact with the digital twin. For example, a user may “drill down” on a particular element (e.g., a physical object or device) to view additional information regarding the element (e.g., a state of a physical object or device, properties of the physical object or device, or the like). In some embodiments, the digital twin may be rendered and output in a virtual reality display. For example, a user may view a 3D rendering of a transportation system (e.g., using a monitor or a virtual reality headset). While doing so, the user may view/inspect digital twins of physical assets or devices in the environment.

[00970] In some embodiments, a data structure of the visual digital twins (i.e., digital twins that are configured to be displayed in a 2D or 3D manner) may include surfaces (e.g., splines, meshes, polygon meshes, or the like). In some embodiments, the surfaces may include texture data, shading information, and/or reflection data. In this way, a surface may be displayed in a more realistic manner. In some embodiments, such surfaces may be rendered by a visualization engine (not shown) when the digital twin is within a field of view and/or when existing in a larger digital twin (e.g., a digital twin of a transportation system). In these embodiments, the digital twin system
SFT-106-A-PCT 200 may render the surfaces of digital objects, whereby a rendered digital twin may be depicted as a set of adjoined surfaces. [00971] In embodiments, a user may provide input that controls one or more properties of a digital twin via a graphical user interface. For example, a user may provide input that changes a property of a digital twin. In response, the digital twin system 200 can calculate the effects of the changed property and may update the digital twin and any other digital twins affected by the change of the property. [00972] In embodiments, a user may view processes being performed with respect to one or more digital twins (e.g., manufacturing of a product, extracting minerals from a mine or well, a livestock inspection line, and the like). In these embodiments, a user may view the entire process or specific steps within a process. [00973] In some embodiments, a digital twin (and any digital twins embedded therein) may be represented in a non-visual representation (or “data representation”). In these embodiments, a digital twin and any embedded digital twins exist in a binary representation but the relationships between the digital twins are maintained. For example, in embodiments, each digital twin and/or the components thereof may be represented by a set of physical dimensions that define a shape of the digital twin (or component thereof). Furthermore, the data structure embodying the digital twin may include a location of the digital twin. In some embodiments, the location of the digital twin may be provided in a set of coordinates. For example, a digital twin of a transportation system may be defined with respect to a coordinate space (e.g., a Cartesian coordinate space, a polar coordinate space, or the like). In embodiments, embedded digital twins may be represented as a set of one or more ordered triples (e.g., [x coordinate, y coordinate, z coordinates] or other vector-based representations). In some of these embodiments, each ordered triple may represent a location of a specific point (e.g., center point, top point, bottom point, or the like) on the transportation entity (e.g., object, device, or sensor) in relation to the environment in which the transportation entity resides. In some embodiments, a data structure of a digital twin may include a vector that indicates a motion of the digital twin with respect to the environment. For example, fluids (e.g., liquids or gasses) or solids may be represented by a vector that indicates a velocity (e.g., direction and magnitude of speed) of the entity represented by the digital twin. In embodiments, a vector within a twin may represent a microscopic subcomponent, such as a particle within a fluid, and a digital twin may represent physical properties, such as displacement, velocity, acceleration, momentum, kinetic energy, vibrational characteristics, thermal properties, electromagnetic properties, and the like. [00974] In some embodiments, a set of two or more digital twins may be represented by a graph database that includes nodes and edges that connect the nodes. In some implementations, an edge
may represent a spatial relationship (e.g., “abuts”, “rests upon”, “interlocks with”, “bears”, “contains”, and the like). In these embodiments, each node in the graph database represents a digital twin of an entity (e.g., a transportation entity) and may include the data structure defining the digital twin. In these embodiments, each edge in the graph database may represent a relationship between two entities represented by connected nodes. In embodiments, various types of data may be stored in a node or an edge. In embodiments, a node may store property data, state data, and/or metadata relating to a facility, system, subsystem, and/or component. Types of property data and state data will differ based on the entity represented by a node. For example, a node representing a robot may include property data that indicates a material of the robot, the dimensions of the robot (or components thereof), a mass of the robot, and the like. In this example, the state data of the robot may include a current pose of the robot, a location of the robot, and the like. In embodiments, an edge may store relationship data and metadata relating to a relationship between two nodes. Examples of relationship data may include the nature of the relationship, whether the relationship is permanent (e.g., a fixed component would have a permanent relationship with the structure to which it is attached or resting on), and the like. In embodiments, an edge may include metadata concerning the relationship between two entities. For example, if a product was produced on an assembly line, one relationship that may be documented between a digital twin of the product and the assembly line may be “created by”. In these embodiments, an example edge representing the “created by” relationship may include a timestamp indicating a date and time that the product was created. In another example, a sensor may take measurements relating to a state of a device, whereby one relationship between the sensor and the device may include “measured” and may define a measurement type that is measured by the sensor. In this example, the metadata stored in an edge may include a list of N measurements taken and a timestamp of each respective measurement. In this way, temporal data relating to the nature of the relationship between two entities may be maintained, thereby allowing for an analytics engine, machine-learning engine, and/or visualization engine to leverage such temporal relationship data, such as by aligning disparate data sets with a series of points in time, such as to facilitate cause-and-effect analysis used for prediction systems.
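As a non-limiting illustration of the graph database representation described above, the following Python sketch shows digital twin nodes, relationship edges, and per-measurement metadata with timestamps. The class names, relationship labels, and values are illustrative assumptions only and are not intended to limit the disclosure.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Dict, List

@dataclass
class TwinNode:
    """A node representing the digital twin of a single transportation entity."""
    node_id: str
    entity_type: str                      # e.g., "robot", "vibration_sensor", "assembly_line"
    properties: Dict[str, Any] = field(default_factory=dict)
    state: Dict[str, Any] = field(default_factory=dict)

@dataclass
class TwinEdge:
    """An edge recording a relationship between two twin nodes, with per-event metadata."""
    source_id: str
    target_id: str
    relationship: str                     # e.g., "contains", "rests upon", "measured", "created by"
    permanent: bool = False
    events: List[Dict[str, Any]] = field(default_factory=list)

class TwinGraph:
    """A minimal in-memory stand-in for the graph database described above."""
    def __init__(self) -> None:
        self.nodes: Dict[str, TwinNode] = {}
        self.edges: List[TwinEdge] = []

    def add_node(self, node: TwinNode) -> None:
        self.nodes[node.node_id] = node

    def relate(self, source_id: str, target_id: str, relationship: str, permanent: bool = False) -> TwinEdge:
        edge = TwinEdge(source_id, target_id, relationship, permanent)
        self.edges.append(edge)
        return edge

graph = TwinGraph()
graph.add_node(TwinNode("robot-7", "robot", {"mass_kg": 310.0}, {"pose": "idle"}))
graph.add_node(TwinNode("vib-sensor-12", "vibration_sensor", {"axes": 3}))
measured = graph.relate("vib-sensor-12", "robot-7", "measured")
# Each measurement event is appended with its timestamp, preserving the temporal relationship data.
measured.events.append({"measurement_type": "vibration_rms_mm_s", "value": 2.4,
                        "timestamp": datetime.now(timezone.utc).isoformat()})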
[00975] In some embodiments, a graph database may be implemented in a hierarchical manner, such that the graph database relates a set of facilities, systems, and components. For example, a digital twin of a manufacturing environment may include a node representing the manufacturing environment. The graph database may further include nodes representing various systems within the manufacturing environment, such as nodes representing an HVAC system, a lighting system, a manufacturing system, and the like, all of which may connect to the node representing the manufacturing environment. In this example, each of the systems may further connect to various subsystems and/or components of the system. For example, within the HVAC system, the HVAC system may connect to a subsystem node representing a cooling system of the facility, a second subsystem node representing a heating system of the facility, a third subsystem node representing the fan system of the facility, and one or more nodes representing a thermostat of the facility (or multiple thermostats). Carrying this example further, the subsystem nodes and/or component nodes may connect to lower-level nodes, which may include subsystem nodes and/or component nodes. For example, the subsystem node representing the cooling subsystem may be connected to a component node representing an air conditioner unit. Similarly, a component node representing a thermostat device may connect to one or more component nodes representing various sensors (e.g., temperature sensors, humidity sensors, and the like).

[00976] In embodiments where a graph database is implemented, a graph database may relate to a single environment, transportation entity, or transportation system or may represent a larger enterprise. In the latter scenario, a company may have various manufacturing and distribution facilities, as well as transportation entities and systems. In these embodiments, an enterprise node representing the enterprise may connect to transportation system nodes of each respective facility. In this way, the digital twin system 200 may maintain digital twins for multiple facilities and transportation systems of an enterprise.

[00977] In embodiments, such an enterprise may involve any sort of business or organization. In some embodiments, a transportation system may be the enterprise, for example, an airport. In other examples, an enterprise may include or be linked to a transportation system, for example a moving and storage company.

[00978] In embodiments, an example of an enterprise could be a cruise line. The cruise line enterprise may be a business that owns and operates a fleet of cruise ships. The cruise line enterprise may also own or operate real estate and buildings, for example cruise terminals and resorts. Digital twins may be useful for representing the cruise line enterprise at various levels of abstraction and from various points of view. It may be advantageous for digital twins to have different characteristics appropriate to the various roles/responsibilities of the enterprise. The Chief Engineer of a ship may be interested in the ship’s ability to provide electrical power to the electric motors that drive the propellers. The Hotel Director of a ship may be the head of a department that is responsible for all guest services, entertainment, and revenue of the ship. While the Hotel Director may have an interest in the power generating capability of the ship, the appropriate level of detail regarding power generation would be different for a Hotel Director compared to the Chief Engineer. Similarly, the Captain of the ship and the Chief Executive Officer (CEO) of the cruise
line would have different points of view, and the appropriate level of abstraction could be different for each.

[00979] Another example of an enterprise could be a delivery service. The delivery service may be a business that operates transportation systems that include a fleet of aircraft, a fleet of trucks, and a fleet of smaller vehicles including automobiles. The delivery service may also operate real estate and buildings, for example, airport terminals, truck depots, and sorting facilities. The delivery service may be organized to have individuals in charge of various functions of the enterprise; for example, aircraft operations and ground operations. Digital twins may be useful for representing the delivery service enterprise at various levels of abstraction and from various points of view. The various roles of the enterprise, having different responsibilities, may find utility in digital twins having different characteristics. A Chief Engineer of aircraft operations may be interested in the potential for a particular jet engine type to cause unexpected aircraft downtime. The Chief Engineer of ground operations may have an interest in the aircraft downtime, but the appropriate level of detail regarding jet engines would be different for the Chief Engineer of ground operations compared to the Chief Engineer of aircraft operations. Similarly, the president of aircraft operations and the CEO of the delivery service enterprise would have different points of view, and the appropriate level of abstraction could be different for each.

[00980] Digital twins can be helpful for visualizing the current state of a system, running simulations on the system, and modeling behaviors, amongst many other uses. Depending on the configuration of the digital twin, however, a particular view or feature may not be useful for some members of an organization, as the configuration of the digital twin dictates the data that is depicted/visualized by the digital twin. Thus, in some embodiments, role-based digital twins are generated. Role-based digital twins may refer to digital twins of one or more segments/aspects of an enterprise, where the one or more segments/aspects and/or the granularity of the data represented by the role-based digital twin are tailored to a particular role within the entity and/or to the identity of a user that is associated with the role (optionally accounting for the competencies, training, education, experience, authority, and/or permissions of the user, or other characteristics).
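By way of a non-limiting illustration, a role-based digital twin configuration of the kind described above might be expressed as follows, using the cruise line roles discussed earlier. The Python structure, role names, data types, and granularity labels are illustrative assumptions rather than elements of the platform.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class RoleBasedTwinConfig:
    """Which data a role-based digital twin exposes for a given role, and at what granularity."""
    role: str
    data_types: List[str]
    granularity: Dict[str, str] = field(default_factory=dict)  # data type -> "summary" | "detailed" | "real_time"
    permitted_views: List[str] = field(default_factory=list)
    can_run_simulations: bool = False

# Illustrative defaults for two roles of a cruise line enterprise; in practice such defaults
# might come from an industry-specific template and later be reconfigured by an authorized user.
DEFAULT_ROLE_CONFIGS: Dict[str, RoleBasedTwinConfig] = {
    "chief_engineer": RoleBasedTwinConfig(
        role="chief_engineer",
        data_types=["power_generation", "propulsion", "fuel_consumption"],
        granularity={"power_generation": "real_time", "propulsion": "detailed"},
        permitted_views=["engine_room", "electrical_plant"],
        can_run_simulations=True,
    ),
    "hotel_director": RoleBasedTwinConfig(
        role="hotel_director",
        data_types=["guest_services", "entertainment", "revenue", "power_generation"],
        granularity={"revenue": "detailed", "power_generation": "summary"},
        permitted_views=["guest_decks", "revenue_dashboard"],
    ),
}

def config_for_role(role: str) -> RoleBasedTwinConfig:
    """Resolve the default configuration for a role; the platform could further tailor it per user."""
    return DEFAULT_ROLE_CONFIGS[role]

print(config_for_role("hotel_director").granularity["power_generation"])  # -> "summary"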
[00981] In embodiments, the role-based digital twins include executive digital twins. Executive digital twins may refer to digital twins that are configured for a respective executive within an enterprise. Examples of executive digital twins may include CEO digital twins, Chief Financial Officer (CFO) digital twins, Chief Operations Officer (COO) digital twins, Human Resources (HR) digital twins, Chief Technology Officer (CTO) digital twins, Chief Marketing Officer (CMO) digital twins, General Counsel (GC) digital twins, Chief Information Officer (CIO) digital twins, and the like. In some of these embodiments, the digital twin generation system 8928, also called the digital twin management system 202 (Fig. 75) herein, generates different types of executive digital twins for users having different roles within the organization. In some of these embodiments, the respective configuration of each type of executive digital twin may be predefined with default digital twin data types, default relationships among entities, default features, and default granularities, among other elements. The default data types, entities, features, and granularities may be determined based on a model of an organization, which may, in turn, be based on an industry-specific or domain-specific model or template, such as one that is based on a typical organizational structure for an industry (e.g., an automotive manufacturer, a consumer packaged goods manufacturer, a nationwide retailer, a regional grocery chain, or many others). In embodiments, an artificial intelligence system may be trained, such as on a labeled industry-specific or domain-specific data set, to automatically generate an industry-specific or domain-specific digital twin for an organization, with default configuration of data types, entities, features, and granularities for various roles within an organization of that industry or domain. The defaults can then be reconfigured in a user interface of an authorized user to reflect company-specific variations from the industry-specific or domain-specific defaults. In some embodiments, a user (e.g., during an on-boarding process) may define the types of data depicted in the different types of executive digital twins, the entities to be represented, the features to be provided, and/or the granularities of the different types of executive digital twins. Features may include what data is permitted to be accessed, what views are represented, levels of granularity of views, what analytic models and results can be accessed, what simulations can be undertaken, what changes can be made (including changes relevant to permissions of other users), communication and collaboration features (including receipt of alerts and the capacity to communicate directly to digital twins of other roles and users), control features, and many others. For convenience of reference, references to views, data, features, control, or granularity throughout this disclosure should be understood to encompass any and all of the above, except where context specifically indicates otherwise. Granularity may refer to the level of detail at which a particular type of data or types of data is/are represented in a digital twin. For example, a CEO digital twin may include P&L data for a particular time period but may not depict the various revenue streams and costs that contribute to the P&L data during the time period. Continuing this example, the CFO digital twin may depict the various revenue streams and costs during the time period in addition to the high-level P&L data. The foregoing examples are not intended to limit the scope of the disclosure. Additional examples and configurations of different executive digital twins are described throughout the disclosure.

[00982] In some embodiments, executive digital twins may allow a user (e.g., a CEO, CFO, COO, VP, Board member, GC, or the like) to increase the granularity of a particular state depicted in the digital twin (also referred to as "drilling down into" a state of the digital twin). For example, a CEO digital twin may depict low-granularity snapshots or summaries of P&L data, sales figures,
SFT-106-A-PCT customer satisfaction, employee satisfaction, and the like. A user (e.g., the CEO of an enterprise) may opt to drill down into the P&L data via a client application depicting the CEO digital twin. In response, the digital twin system may provide higher resolution P&L data, such as real-time revenue streams, real-time cost streams, and the like. In another example, the CEO digital twin may include visual indicators of different states of the enterprise. For example, the CEO digital twin may depict different colored icons to differentiate a condition (e.g., current and/or forecasted condition) of a respective data item. For example, a red icon may indicate a warning state, a yellow icon may indicate a neutral state, and a green icon may indicate a satisfactory state. In this example, the user (e.g., a CEO) may drill down into a particular data item (e.g., may select a red sales icon to drill down into the sales data, to see more specific and/or additional data, in order to determine why there is the warning state). In response, the CEO digital twin may depict one or more different data streams relating to the selected data item. [00983] In embodiments, the digital twin system 200 may use a graph database to generate a digital twin that may be rendered and displayed and/or may be represented in a data representation. In the former scenario, the digital twin system 200 may receive a request to render a digital twin, whereby the request includes one or more parameters that are indicative of a view that will be depicted. For example, the one or more parameters may indicate a transportation system to be depicted and the type of rendering (e.g., “real-world view” that depicts the environment as a human would see it, an “infrared view” that depicts objects as a function of their respective temperature, an “airflow view” that depicts the airflow in a digital twin, or the like). In response, the digital twin system 200 may traverse a graph database and may determine a configuration of the transportation system to be depicted based on the nodes in the graph database that are related (either directly or through a lower level node) to the transportation system node of the transportation system and the edges that define the relationships between the related nodes. Upon determining a configuration, the digital twin system 200 may identify the surfaces that are to be depicted and may render those surfaces. The digital twin system 200 may then render the requested digital twin by connecting the surfaces in accordance with the configuration. The rendered digital twin may then be output to a viewing device (e.g., VR headset, monitor, or the like). In some scenarios, the digital twin system 200 may receive real-time sensor data from a sensor system 25 of a transportation system 11 and may update the visual digital twin based on the sensor data. For example, the digital twin system 200 may receive sensor data (e.g., vibration data from a vibration sensor 235) relating to a motor and its set of bearings. Based on the sensor data, the digital twin system 200 may update the visual digital twin to indicate the approximate vibrational characteristics of the set of bearings within a digital twin of the motor.
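As a non-limiting illustration of the rendering and updating flow described above, the following Python sketch traverses a simple containment graph to gather renderable surfaces for a requested twin and folds a real-time vibration reading into the state of the monitored twin. The identifiers, mesh names, and values are illustrative assumptions only.

from typing import Dict, List

# Minimal adjacency representation: each twin id maps to the ids of twins it directly contains.
CONTAINS: Dict[str, List[str]] = {
    "transport-system-11": ["machine-3", "storage-area-1"],
    "machine-3": ["motor-9"],
    "motor-9": ["bearing-9a", "bearing-9b"],
    "storage-area-1": [],
}

# Surfaces (e.g., mesh references) that would be handed to a visualization engine for each twin.
SURFACES: Dict[str, List[str]] = {
    "transport-system-11": ["floorplan.mesh"],
    "machine-3": ["machine3_housing.mesh"],
    "motor-9": ["motor9_casing.mesh"],
    "bearing-9a": ["bearing.mesh"],
    "bearing-9b": ["bearing.mesh"],
    "storage-area-1": ["rack.mesh"],
}

def collect_renderable_surfaces(root_twin_id: str) -> List[str]:
    """Traverse the containment relationships from the requested twin and gather surfaces to render."""
    to_visit, seen, surfaces = [root_twin_id], set(), []
    while to_visit:
        twin_id = to_visit.pop()
        if twin_id in seen:
            continue
        seen.add(twin_id)
        surfaces.extend(SURFACES.get(twin_id, []))
        to_visit.extend(CONTAINS.get(twin_id, []))
    return surfaces

def apply_vibration_update(twin_state: Dict[str, Dict[str, float]], sensor_packet: Dict) -> None:
    """Fold a real-time vibration reading into the state of the twin the sensor monitors."""
    twin_state.setdefault(sensor_packet["monitored_twin"], {})["vibration_rms_mm_s"] = sensor_packet["value"]

state: Dict[str, Dict[str, float]] = {}
apply_vibration_update(state, {"sensor_id": "vib-235", "monitored_twin": "bearing-9a", "value": 3.1})
print(collect_renderable_surfaces("transport-system-11"), state)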
SFT-106-A-PCT [00984] In scenarios where the digital twin system 200 is providing data representations of digital twins (e.g., for dynamic modeling, simulations, machine learning), the digital twin system 200 may traverse a graph database and may determine a configuration of the transportation system to be depicted based on the nodes in the graph database that are related (either directly or through a lower level node) to the transportation system node of the transportation system and the edges that define the relationships between the related nodes. In some scenarios, the digital twin system 200 may receive real-time sensor data from a sensor system 25 of a transportation system 11 and may apply one or more dynamic models to the digital twin based on the sensor data. In other scenarios, a data representation of a digital twin may be used to perform simulations, as is discussed in greater detail throughout the specification. [00985] In some embodiments, the digital twin system 200 may execute a digital ghost that is executed with respect to a digital twin of a transportation system. In these embodiments, the digital ghost may monitor one or more sensors of a sensor system 25 of a transportation system to detect anomalies that may indicate a malicious virus or other security issues. [00986] As discussed, the digital twin system 200 may include a digital twin management system 202, a digital twin I/O system 204, a digital twin simulation system 206, a digital twin dynamic model system 208, a cognitive intelligence system 258, and/or an environment control system 234. [00987] In embodiments, the digital twin management system 202 creates new digital twins, maintains/updates existing digital twins, and/or renders digital twins. The digital twin management system 202 may receive user input, uploaded data, and/or sensor data to create and maintain existing digital twins. Upon creating a new digital twin, the digital twin management system 202 may store the digital twin in the digital twin datastore 269. Creating, updating, and rendering digital twins are discussed in greater detail throughout the disclosure. [00988] In embodiments, the digital twin I/O system 204 receives input from various sources and outputs data to various recipients. In embodiments, the digital twin I/O system receives sensor data from one or more sensor systems 25. In these embodiments, each sensor system 25 may include one or more IoT sensors that output respective sensor data. Each sensor may be assigned an IP address or may have another suitable identifier. Each sensor may output sensor packets that include an identifier of the sensor and the sensor data. In some embodiments, the sensor packets may further include a timestamp indicating a time at which the sensor data was collected. In some embodiments, the digital twin I/O system 204 may interface with a sensor system 25 via the real- time sensor API 214. In these embodiments, one or more devices (e.g., sensors, aggregators, edge devices) in the sensor system 25 may transmit the sensor packets containing sensor data to the digital twin I/O system 204 via the API. The digital twin I/O system may determine the sensor system 25 that transmitted the sensor packets and the contents thereof, and may provide the sensor
SFT-106-A-PCT data and any other relevant data (e.g., time stamp, environment identifier/sensor system identifier, and the like) to the digital twin management system 202. [00989] In embodiments, the digital twin I/O system 204 may receive imported data from one or more sources. For example, the digital twin system 200 may provide a portal for users to create and manage their digital twins. In these embodiments, a user may upload one or more files (e.g., image files, LIDAR scans, blueprints, and the like) in connection with a new digital twin that is being created. In response, the digital twin I/O system 204 may provide the imported data to the digital twin management system 202. The digital twin I/O system 204 may receive other suitable types of data without departing from the scope of the disclosure. [00990] In some embodiments, the digital twin simulation system 206 is configured to execute simulations using the digital twin. For example, the digital twin simulation system 206 may iteratively adjust one or more parameters of a digital twin and/or one or more embedded digital twins. In embodiments, the digital twin simulation system 206, for each set of parameters, executes a simulation based on the set of parameters and may collect the simulation outcome data resulting from the simulation. Put another way, the digital twin simulation system 206 may collect the properties of the digital twin and the digital twins within or containing the digital twin used during the simulation as well as any outcomes stemming from the simulation. For example, in running a simulation on a digital twin of an indoor agricultural facility, the digital twin simulation system 206 can vary the temperature, humidity, airflow, carbon dioxide and/or other relevant parameters and can execute simulations that output outcomes resulting from different combinations of the parameters. In another example, the digital twin simulation system 206 may simulate the operation of a specific machine within a transportation system that produces an output given a set of inputs. In some embodiments, the inputs may be varied to determine an effect of the inputs on the machine and the output thereof. In another example, the digital twin simulation system 206 may simulate the vibration of a machine and/or machine components. In this example, the digital twin of the machine may include a set of operating parameters, interfaces, and capabilities of the machine. In some embodiments, the operating parameters may be varied to evaluate the effectiveness of the machine. The digital twin simulation system 206 is discussed in further detail throughout the disclosure. [00991] In embodiments, the digital twin dynamic model system 208 is configured to model one or more behaviors with respect to a digital twin of a transportation system. In embodiments, the digital twin dynamic model system 208 may receive a request to model a certain type of behavior regarding an environment or a process and may model that behavior using a dynamic model, the digital twin of the transportation system or process, and sensor data collected from one or more sensors that are monitoring the environment or process. For example, an operator of a machine
having bearings may wish to model the vibration of the machine and bearings to determine whether the machine and/or bearings can withstand an increase in output. In this example, the digital twin dynamic model system 208 may execute a dynamic model that is configured to determine whether an increase in output would result in adverse consequences (e.g., failures, downtime, or the like). The digital twin dynamic model system 208 is discussed in further detail throughout the disclosure.

[00992] In embodiments, the cognitive processes system 258 performs machine learning and artificial intelligence-related tasks on behalf of the digital twin system. In embodiments, the cognitive processes system 258 may train any suitable type of model, including but not limited to various types of neural networks, regression models, random forests, decision trees, Hidden Markov models, Bayesian models, and the like. In embodiments, the cognitive processes system 258 trains machine-learned models using the output of simulations executed by the digital twin simulation system 206. In some of these embodiments, the outcomes of the simulations may be used to supplement training data collected from real-world environments and/or processes. In embodiments, the cognitive processes system 258 leverages machine-learned models to make predictions, identifications, and classifications and to provide decision support relating to the real-world environments and/or processes represented by respective digital twins.

[00993] For example, a machine-learned prediction model may be used to predict the cause of irregular vibrational patterns (e.g., a suboptimal, critical, or alarm vibration fault state) for a bearing of an engine in a transportation system. In this example, the cognitive processes system 258 may receive vibration sensor data from one or more vibration sensors disposed on or near the engine and may receive maintenance data from the transportation system and may generate a feature vector based on the vibration sensor data and the maintenance data. The cognitive processes system 258 may input the feature vector into a machine-learned model trained specifically for the engine (e.g., using a combination of simulation data and real-world data of causes of irregular vibration patterns) to predict the cause of the irregular vibration patterns. In this example, the causes of the irregular vibrational patterns could be a loose bearing, a lack of bearing lubrication, a bearing that is out of alignment, a worn bearing, a bearing phase that is aligned with the phase of the engine, a loose housing, a loose bolt, and the like.
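By way of a non-limiting illustration, the feature vector and prediction model described in the example above might be sketched as follows, here using a random forest classifier (one of the model types mentioned above). The feature names, fault-cause labels, and training data are illustrative assumptions; in the platform, training data could combine digital twin simulation outputs with labeled real-world examples.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical fault-cause labels for a bearing of an engine (names are illustrative only).
CAUSES = ["loose_bearing", "insufficient_lubrication", "misalignment", "worn_bearing"]

def build_feature_vector(vibration_rms_mm_s, peak_frequency_hz, temperature_c, hours_since_service):
    """Combine vibration sensor readings and maintenance data into one feature vector."""
    return np.array([vibration_rms_mm_s, peak_frequency_hz, temperature_c, hours_since_service], dtype=float)

# Placeholder training set; each row pairs with the fault cause of the same index in CAUSES.
X_train = np.array([
    [4.2, 120.0, 75.0, 900.0],   # loose_bearing
    [3.1, 310.0, 95.0, 1400.0],  # insufficient_lubrication
    [2.8, 240.0, 80.0, 600.0],   # misalignment
    [5.0, 180.0, 85.0, 2200.0],  # worn_bearing
] * 10)                          # repeated only to give the classifier something to fit
y_train = np.array(list(range(len(CAUSES))) * 10)

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# Classify a new observation assembled from the vibration sensors and maintenance records.
features = build_feature_vector(4.4, 125.0, 78.0, 950.0)
predicted_cause = CAUSES[int(model.predict(features.reshape(1, -1))[0])]
print(predicted_cause)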
[00994] In another example, a machine-learned model may be used to provide decision support to bring a bearing of an engine in a transportation system operating at a suboptimal vibration fault level state to a normal operation vibration fault level state. In this example, the cognitive processes system 258 may receive vibration sensor data from one or more vibration sensors disposed on or near the engine and may receive maintenance data from the transportation system and may generate a feature vector based on the vibration sensor data and the maintenance data. The cognitive processes system 258 may input the feature vector into a machine-learned model trained specifically for the engine (e.g., using a combination of simulation data and real-world data of solutions to irregular vibration patterns) to provide decision support in achieving a normal operation fault level state of the bearing. In this example, the decision support could be a recommendation to tighten the bearing, lubricate the bearing, re-align the bearing, order a new bearing, order a new part, collect additional vibration measurements, change the operating speed of the engine, tighten housings, tighten bolts, and the like.

[00995] In another example, a machine-learned model may be used to provide decision support relating to vibration measurement collection by a worker. In this example, the cognitive processes system 258 may receive vibration measurement history data from the transportation system and may generate a feature vector based on the vibration measurement history data. The cognitive processes system 258 may input the feature vector into a machine-learned model trained specifically for the engine (e.g., using a combination of simulation data and real-world vibration measurement history data) to provide decision support in selecting vibration measurement locations.

[00996] In yet another example, a machine-learned model may be used to identify vibration signatures associated with machine and/or machine component problems. In this example, the cognitive processes system 258 may receive vibration measurement history data from the transportation system and may generate a feature vector based on the vibration measurement history data. The cognitive processes system 258 may input the feature vector into a machine-learned model trained specifically for the engine (e.g., using a combination of simulation data and real-world vibration measurement history data) to identify vibration signatures associated with a machine and/or machine component. The foregoing examples are non-limiting examples, and the cognitive processes system 258 may be used for any other suitable AI/machine-learning related tasks that are performed with respect to industrial facilities.

[00997] In examples, vibration data can be diagnostic of fault level states of many bearing applications in transportation entities and systems. For example, bearing vibrations can be used to detect nascent faults in axles and transmissions used in automobiles, trucks, and trains. In examples, vibration data can be used to detect fault level states of bearings that support propeller shafts, water pumps, and crankshafts of various transportation entities and systems including automobiles, aircraft, ships, and submarines. Vibration data can also be used to detect fault level states of other components of transportation entities or systems including, e.g., jet engine compressor blades, aircraft propellers, ship propellers, and ship propeller shafts. It is to be understood that by analyzing vibration data, it can be possible to identify or classify transportation entities by their vibration signatures. Some tools for analysis include Fast Fourier Transforms (FFT) and filters. As such, some transportation entities may be identified or classified by the vibrations, including sounds, that
SFT-106-A-PCT they produce, without the vibration sensor, including microphones, being in direct contact with the transportation entity. In a similar way that certain brands of motorcycles and automobiles may be identified by their exhaust notes, vibration sensors may be used to identify or classify a particular machine. Having selected an appropriate digital twin based on the identity or classification of a particular machine, a better diagnosis can be done on fault level states. Thus, as disclosed herein, sensors may rove in a large transportation system, like a ship, and audit various machines in the transportation system. In some examples, location-based identification of the various machines may be used. In other examples, the methods and systems of the present disclosure can be used to identify or classify the various machines based on their vibration signatures. Further, the methods of the present disclosure can use a stationary set of vibration sensors, for example, to monitor a fleet of vehicles as the vehicles pass by the sensors. The digital twins of the various vehicles may be maintained so that changes in the vibration signatures, detected by sensors mounted to or near the roadbed, can be tracked. Thus, using the methods of the present disclosure, it may be possible to determine by a drive-by test that a particular vehicle in a fleet has, for example, wrist-pin damage that is getting worse without taking the vehicle out of service, and to report that information in a convenient system that uses digital twins. [00998] In embodiments, the environment control system 234 controls one or more aspects of transportation systems 11. In some of these embodiments, the environment control system 234 may control one or more devices within a transportation system. For example, the environment control system 234 may control one or more machines within a transportation system 11, robots within a transportation system 11, an HVAC system of the transportation system 11, an alarm system of the transportation system 11, an assembly line in the transportation system 11, or the like. In embodiments, the environment control system 234 may leverage the digital twin simulation system 206, the digital twin dynamic model system 208, and/or the cognitive processes system 258 to determine one or more control instructions. In embodiments, the environment control system 234 may implement a rules-based and/or a machine-learning approach to determine the control instructions. In response to determining a control instruction, the environment control system 234 may output the control instruction to the intended device within a specific transportation system 11 via the digital twin I/O system 204. [00999] Fig. 76 illustrates an example digital twin management system 202 according to some embodiments of the present disclosure. In embodiments, the digital twin management system 202 may include, but is not limited to, a digital twin creation module 264, a digital twin update module 266, and a digital twin visualization module 268. [01000] In embodiments, the digital twin creation module 264 may create a set of new digital twins of a set of transportation systems using input from users, imported data (e.g., blueprints,
SFT-106-A-PCT specifications, and the like), image scans of the transportation system, 3D data from a LIDAR device and/or SLAM sensor, and other suitable data sources. For example, a user (e.g., a user affiliated with an organization/customer account) may, via a client application 217, provide input to create a new digital twin of a transportation system. In doing so, the user may upload 2D or 3D image scans of the transportation system and/or a blueprint of the transportation system. The user may also upload 3D data, such as taken by a camera, a LIDAR device, an IR scanner, a set of SLAM sensors, a radar device, an EMF scanner, or the like. In response to the provided data, the digital twin creation module 264 may create a 3D representation of the environment, which may include any objects that were captured in the image data/detected in the 3D data. In embodiments, the cognitive processes system 258 may analyze input data (e.g., blueprints, image scans, 3D data) to classify rooms, pathways, equipment, and the like to assist in the generation of the 3D representation. In some embodiments, the digital twin creation module 264 may map the digital twin to a 3D coordinate space (e.g., a Cartesian space having x, y, and z axes). [01001] In some embodiments, the digital twin creation module 264 may output the 3D representation of the transportation system to a graphical user interface (GUI). In some of these embodiments, a user may identify certain areas and/or objects and may provide input relating to the identified areas and/or objects. For example, a user may label specific rooms, equipment, machines, and the like. Additionally or alternatively, the user may provide data relating to the identified objects and/or areas. For example, in identifying a piece of equipment, the user may provide a make/model number of the equipment. In some embodiments, the digital twin creation module 264 may obtain information from a manufacturer of a device, a piece of equipment, or machinery. This information may include one or more properties and/or behaviors of the device, equipment, or machinery. In some embodiments, the user may, via the GUI, identify locations of sensors throughout the environment. For each sensor, the user may provide a type of sensor and related data (e.g., make, model, IP address, and the like). The digital twin creation module 264 may record the locations (e.g., the x, y, z coordinates of the sensors) in the digital twin of the transportation system. In embodiments, the digital twin system 200 may employ one or more systems that automate the population of digital twins. For example, the digital twin system 200 may employ a machine vision-based classifier that classifies makes and models of devices, equipment, or sensors. Additionally or alternatively, the digital twin system 200 may iteratively ping different types of known sensors to identify the presence of specific types of sensors that are in an environment. Each time a sensor responds to a ping, the digital twin system 200 may extrapolate the make and model of the sensor. [01002] In some embodiments, the manufacturer may provide or make available digital twins of their products (e.g., sensors, devices, machinery, equipment, raw materials, and the like). In these
SFT-106-A-PCT embodiments, the digital twin creation module 264 may import the digital twins of one or more products that are identified in the transportation system and may embed those digital twins in the digital twin of the transportation system. In embodiments, embedding a digital twin within another digital twin may include creating a relationship between the embedded digital twin with the other digital twin. In these embodiments, the manufacturer of the digital twin may define the behaviors and/or properties of the respective products. For example, a digital twin of a machine may define the manner by which the machine operates, the inputs/outputs of the machine, and the like. In this way, the digital twin of the machine may reflect the operation of the machine given a set of inputs. [01003] In embodiments, a user may define one or more processes that occur in an environment. In these embodiments, the user may define the steps in the process, the machines/devices that perform each step in the process, the inputs to the process, and the outputs of the process. [01004] In embodiments, the digital twin creation module 264 may create a graph database that defines the relationships between a set of digital twins. In these embodiments, the digital twin creation module 264 may create nodes for the environment, systems and subsystems of the transportation system, devices in the environment, sensors in the environment, workers that work in the environment, processes that are performed in the environment, and the like. In embodiments, the digital twin creation module 264 may write the graph database representing a set of digital twins to the digital twin datastore 269. [01005] In embodiments, the digital twin creation module 264 may, for each node, include any data relating to the entity in the node representing the entity. For example, in defining a node representing an environment, the digital twin creation module 264 may include the dimensions, boundaries, layout, pathways, and other relevant spatial data in the node. Furthermore, the digital twin creation module 264 may define a coordinate space with respect to the environment. In the case that the digital twin may be rendered, the digital twin creation module 264 may include a reference in the node to any shapes, meshes, splines, surfaces, and the like that may be used to render the environment. In representing a system, subsystem, device, or sensor, the digital twin creation module 264 may create a node for the respective entity and may include any relevant data. For example, the digital twin creation module 264 may create a node representing a machine in the environment. In this example, the digital twin creation module 264 may include the dimensions, behaviors, properties, location, and/or any other suitable data relating to the machine in the node representing the machine. The digital twin creation module 264 may connect nodes of related entities with an edge, thereby creating a relationship between the entities. In doing so, the created relationship between the entities may define the type of relationship characterized by the edge. In representing a process, the digital twin creation module 264 may create a node for the entire process or may create a node for each step in the process. In some of these embodiments, the digital twin
SFT-106-A-PCT creation module 264 may relate the process nodes to the nodes that represent the machinery/devices that perform the steps in the process. In embodiments where an edge connects the process step nodes to the machinery/device that performs the process step, the edge or one of the nodes may contain information that indicates the input to the step, the output of the step, the amount of time the step takes, the nature of processing of inputs to produce outputs, a set of states or modes the process can undergo, and the like. [01006] In embodiments, the digital twin update module 266 updates sets of digital twins based on a current status of one or more transportation entities. In some embodiments, the digital twin update module 266 receives sensor data from a sensor system 25 of a transportation system and updates the status of the digital twin of the transportation system and/or digital twins of any affected systems, subsystems, devices, workers, processes, or the like. As discussed, the digital twin I/O system 204 may receive the sensor data in one or more sensor packets. The digital twin I/O system 204 may provide the sensor data to the digital twin update module 266 and may identify the environment from which the sensor packets were received and the sensor that provided the sensor packet. In response to the sensor data, the digital twin update module 266 may update a state of one or more digital twins based on the sensor data. In some of these embodiments, the digital twin update module 266 may update a record (e.g., a node in a graph database) corresponding to the sensor that provided the sensor data to reflect the current sensor data. In some scenarios, the digital twin update module 266 may identify certain areas within the environment that are monitored by the sensor and may update a record (e.g., a node in a graph database) to reflect the current sensor data. For example, the digital twin update module 266 may receive sensor data reflecting different vibrational characteristics of a machine and/or machine components. In this example, the digital twin update module 266 may update the records representing the vibration sensors that provided the vibration sensor data and/or the records representing the machine and/or the machine components to reflect the vibration sensor data. In another example, in some scenarios, workers (e.g., drivers, pilots, ship’s crew, aircraft crew, maintenance workers), in a transportation system (e.g., air-traffic control facility, airport, railyard, truck depot, bridge, road, railroad, tunnel, or the like) may be required to wear wearable devices (e.g., smart watches, smart helmets, smart shoes, or the like). In these embodiments, the wearable devices may collect sensor data relating to the worker (e.g., location, movement, heartrate, respiration rate, body temperature, or the like) and/or the ambient environment surrounding the worker and may communicate the collected sensor data to the digital twin system 200 (e.g., via the real-time sensor API 214) either directly or via an aggregation device of the sensor system. In response to receiving the sensor data from the wearable device of a worker, the digital twin update module 266 may update a digital twin of a worker to reflect, for example, a location of the worker, a trajectory of the worker, a health status of the
SFT-106-A-PCT worker, or the like. In some of these embodiments, the digital twin update module 266 may update, with the collected sensor data, a node representing the worker and/or an edge that connects that node to the node representing the transportation system, to reflect the current status of the worker. [01007] In some embodiments, the digital twin update module 266 may provide the sensor data from one or more sensors to the digital twin dynamic model system 208, which may model a behavior of the transportation system and/or one or more transportation entities to extrapolate additional state data. [01008] In embodiments, the digital twin visualization module 268 receives requests to view a visual digital twin or a portion thereof. In embodiments, the request may indicate the digital twin to be viewed (e.g., a transportation system identifier). In response, the digital twin visualization module 268 may determine the requested digital twin and any other digital twins implicated by the request. For example, in requesting to view a digital twin of a transportation system, the digital twin visualization module 268 may further identify the digital twins of any transportation entities within the transportation system. In embodiments, the digital twin visualization module 268 may identify the spatial relationships between the transportation entities and the environment based on, for example, the relationships defined in a graph database. In these embodiments, the digital twin visualization module 268 can determine the relative location of embedded digital twins within the containing digital twin, relative locations of adjoining digital twins, and/or the transience of the relationship (e.g., whether an object is fixed to a point or moves). The digital twin visualization module 268 may render the requested digital twins and any other implicated digital twin based on the identified relationships. In some embodiments, the digital twin visualization module 268 may, for each digital twin, determine the surfaces of the digital twin. In some embodiments, the surfaces of a digital twin may be defined or referenced in a record corresponding to the digital twin, which may be provided by a user, determined from imported images, or defined by a manufacturer of a transportation entity. In the scenario that an object can take different poses or shapes (e.g., a robot), the digital twin visualization module 268 may determine a pose or shape of the object for the digital twin. The digital twin visualization module 268 may embed the digital twins into the requested digital twin and may output the requested digital twin to a client application. [01009] In some of these embodiments, the request to view a digital twin may further indicate the type of view. As discussed, in some embodiments, digital twins may be depicted in a number of different view types. For example, a transportation system or device may be viewed in a “real-world” view that depicts the transportation system or device as it typically appears, in a “heat” view that depicts the transportation system or device in a manner that is indicative of a temperature of the transportation system or device, in a “vibration” view that depicts the machines and/or
SFT-106-A-PCT machine components in a transportation system in a manner that is indicative of vibrational characteristics of the machines and/or machine components, in a “filtered” view that only displays certain types of objects within a transportation system or components of a device (such as objects that require attention resulting from, for example, recognition of a fault condition, an alert, an updated report, or other factors), an augmented view that overlays data on the digital twin, and/or any other suitable view types. In embodiments, digital twins may be depicted in a number of different role-based view types. For example, a manufacturing facility may be viewed in an “operator” view that depicts the facility in a manner that is suitable for a facility operator, a “C-Suite” view that depicts the facility in a manner that is suitable for executive-level managers, a “marketing” view that depicts the facility in a manner that is suitable for workers in sales and/or marketing roles, a “board” view that depicts the facility in a manner that is suitable for members of a corporate board, a “regulatory” view that depicts the facility in a manner that is suitable for regulatory managers, and a “human resources” view that depicts the facility in a manner that is suitable for human resources personnel. In response to a request that indicates a view type, the digital twin visualization module 268 may retrieve the data for each digital twin that corresponds to the view type. For example, if a user has requested a vibration view of a transportation system, the digital twin visualization module 268 may retrieve vibration data for the transportation system (which may include vibration measurements taken from different machines and/or machine components and/or vibration measurements that were extrapolated by the digital twin dynamic model system 208 and/or simulated vibration data from the digital twin simulation system 206) as well as available vibration data for any transportation entities appearing in the transportation system. In this example, the digital twin visualization module 268 may determine colors corresponding to each machine component in a transportation system that represent a vibration fault level state (e.g., red for alarm, orange for critical, yellow for suboptimal, and green for normal operation). The digital twin visualization module 268 may then render the digital twins of the machine components within the transportation system based on the determined colors. Additionally or alternatively, the digital twin visualization module 268 may render the digital twins of the machine components within the transportation system with indicators having the determined colors. For instance, if the vibration fault level state of an inbound bearing of a motor is suboptimal and the outbound bearing of the motor is critical, the digital twin visualization module 268 may render the digital twin of the inbound bearing having an indicator in a shade of yellow (e.g., suboptimal) and the outbound bearing having an indicator in a shade of orange (e.g., critical). It is noted that in some embodiments, the digital twin system 200 may include an analytics system (not shown) that determines the manner by which the digital twin visualization module 268 presents information to a human user. For example, the analytics system may track outcomes relating to
SFT-106-A-PCT human interactions with real-world transportation systems or objects in response to information presented in a visual digital twin. In some embodiments, the analytics system may apply cognitive models to determine the most effective manner to display visualized information (e.g., what colors to use to denote an alarm condition, what kind of movements or animations bring attention to an alarm condition, or the like) or audio information (what sounds to use to denote an alarm condition) based on the outcome data. In some embodiments, the analytics system may apply cognitive models to determine the most suitable manner to display visualized information based on the role of the user. In embodiments, the visualization may include display of information related to the visualized digital twins, including graphical information, graphical information depicting vibration characteristics, graphical information depicting harmonic peaks, graphical information depicting peaks, vibration severity units data, vibration fault level state data, recommendations from cognitive intelligence system 258, predictions from cognitive intelligence system 258, probability of failure data, maintenance history data, time to failure data, cost of downtime data, probability of downtime data, cost of repair data, cost of machine replace data, probability of shutdown data, KPIs, and the like. [01010] In another example, a user may request a filtered view of a digital twin of a process, whereby the digital twin of the process only shows components (e.g., machine or equipment) that are involved in the process. In this example, the digital twin visualization module 268 may retrieve a digital twin of the process, as well as any related digital twins (e.g., a digital twin of the transportation system and digital twins of any machinery or devices that affect the process). The digital twin visualization module 268 may then render each of the digital twins (e.g., the transportation system and the relevant transportation entities) and then may perform the process on the rendered digital twins. It is noted that as a process may be performed over a period of time and may include moving items and/or parts, the digital twin visualization module 268 may generate a series of sequential frames that demonstrate the process. In this scenario, the movements of the machines and/or devices implicated by the process may be determined according to the behaviors defined in the respective digital twins of the machines and/or devices. [01011] As discussed, the digital twin visualization module 268 may output the requested digital twin to a client application 217. In some embodiments, the client application 217 is a virtual reality application, whereby the requested digital twin is displayed on a virtual reality headset. In some embodiments, the client application 217 is an augmented reality application, whereby the requested digital twin is depicted in an AR-enabled device. In these embodiments, the requested digital twin may be filtered such that visual elements and/or text are overlaid on the display of the AR-enabled device.
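By way of illustration only, the following is a minimal Python sketch of one way a visualization module could map vibration fault level states to indicator colors before rendering machine-component digital twins, in the manner of the vibration view described above. The class names, function names, and color values are hypothetical assumptions introduced for this example and are not a required implementation.

from dataclasses import dataclass

# Fault-level-to-color mapping (red for alarm, orange for critical,
# yellow for suboptimal, green for normal operation).
FAULT_LEVEL_COLORS = {
    "alarm": "#FF0000",
    "critical": "#FF7F00",
    "suboptimal": "#FFFF00",
    "normal": "#00FF00",
}

@dataclass
class ComponentTwin:
    """Minimal stand-in for a machine-component digital twin record."""
    name: str
    vibration_fault_level: str  # e.g., "alarm", "critical", "suboptimal", "normal"

def indicator_color(component: ComponentTwin) -> str:
    """Return the indicator color for a component's vibration fault level."""
    return FAULT_LEVEL_COLORS.get(component.vibration_fault_level, "#808080")

def render_vibration_view(components):
    """Produce (component, color) pairs that a renderer could consume."""
    return [(c.name, indicator_color(c)) for c in components]

if __name__ == "__main__":
    motor_bearings = [
        ComponentTwin("inbound bearing", "suboptimal"),
        ComponentTwin("outbound bearing", "critical"),
    ]
    for name, color in render_vibration_view(motor_bearings):
        print(f"{name}: indicator color {color}")

In such a sketch, the mapping table stands in for whatever fault-level classification the analytics or cognitive systems supply; only the final state-to-color lookup is shown.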
SFT-106-A-PCT [01012] It is noted that while a graph database is discussed, the digital twin system 200 may employ other suitable data structures to store information relating to a set of digital twins. In these embodiments, the data structures, and any related storage system, may be implemented such that the data structures provide for some degree of feedback loops and/or recursion when representing iteration of flows. [01013] Fig. 77 illustrates an example of a digital twin I/O system 204 that interfaces with the transportation system 11, the digital twin system 200, and/or components thereof to provide bi- directional transfer of data between coupled components according to some embodiments of the present disclosure. [01014] In embodiments, the transferred data includes signals (e.g., request signals, command signals, response signals, etc.) between connected components, which may include software components, hardware components, physical devices, virtualized devices, simulated devices, combinations thereof, and the like. The signals may define material properties (e.g., physical quantities of temperature, pressure, humidity, density, viscosity, etc.), measured values (e.g., contemporaneous or stored values acquired by the device or system), device properties (e.g., device ID or properties of the device’s design specifications, materials, measurement capabilities, dimensions, absolute position, relative position, combinations thereof, and the like), set points (e.g., targets for material properties, device properties, system properties, combinations thereof, and the like), and/or critical points (e.g., threshold values such as minimum or maximum values for material properties, device properties, system properties, etc.). The signals may be received from systems or devices that acquire (e.g., directly measure or generate) or otherwise obtain (e.g., receive, calculate, look-up, filter, etc.) the data, and may be communicated to or from the digital twin I/O system 204 at predetermined times or in response to a request (e.g., polling) from the digital twin I/O system 204. The communications may occur through direct or indirect connections (e.g., via intermediate modules within a circuit and/or intermediate devices between the connected components). The values may correspond to real-world elements 2R (e.g., an input or output for a tangible vibration sensor) or virtual elements 2V (e.g., an input or output for a digital twin 2DT and/or a simulated element 2S that provide vibration data). [01015] In embodiments, the real-world elements 2R may be elements within the transportation system 11. The real-world elements 2R may include, for example, non-networked elements 222, the devices 265 (smart or non-smart), sensors 227, and humans 229. The real-world elements 2R may be process or non-process equipment within the transportation systems 11. For example, process equipment may include motors, pumps, mills, fans, painters, welders, smelters, etc., and non-process equipment may include personal protective equipment, safety equipment, emergency stations or devices (e.g., safety showers, eyewash stations, fire extinguishers, sprinkler systems,
SFT-106-A-PCT etc.), warehouse features (e.g., walls, floor layout, etc.), obstacles (e.g., persons or other items within the transportation system 11, etc.), etc. [01016] In embodiments, the virtual elements 2V may be digital representations of or that correspond to contemporaneously existing real-world elements 2R. Additionally or alternatively, the virtual elements 2V may be digital representations of or that correspond to real-world elements 2R that may be available for later addition and implementation into the transportation system 11. The virtual elements may include, for example, simulated elements 2S and/or digital twins 2DT. In embodiments, the simulated elements 2S may be digital representations of real-world elements 2R that are not present within the transportation system 11. The simulated elements 2S may mimic desired physical properties and may be later integrated within the transportation system 11 as real-world elements 2R (e.g., a “black box” that mimics the dimensions of real-world elements 2R). The simulated elements 2S may include digital twins of existing objects (e.g., a single simulated element 2S may include one or more digital twins 2DT for existing sensors). Information related to the simulated elements 2S may be obtained, for example, by evaluating behavior of corresponding real-world elements 2R using mathematical models or algorithms, or from libraries that define information and behavior of the simulated elements 2S (e.g., physics libraries, chemistry libraries, or the like). [01017] In embodiments, the digital twin 2DT may be a digital representation of one or more real-world elements 2R. The digital twins 2DT are configured to mimic, copy, and/or model behaviors and responses of the real-world elements 2R in response to inputs, outputs, and/or conditions of the surrounding or ambient environment. Data related to physical properties and responses of the real-world elements 2R may be obtained, for example, via user input, sensor input, and/or physical modeling (e.g., thermodynamic models, electrodynamic models, mechanodynamic models, etc.). Information for the digital twin 2DT may correspond to and be obtained from the one or more real-world elements 2R corresponding to the digital twin 2DT. For example, in some embodiments, the digital twin 2DT may correspond to one real-world element 2R that is a fixed digital vibration sensor 235 on a machine component, and vibration data for the digital twin 2DT may be obtained by polling or fetching vibration data measured by the fixed digital vibration sensor on the machine component. In a further example, the digital twin 2DT may correspond to a plurality of real-world elements 2R such that each of the elements can be a fixed digital vibration sensor on a machine component, and vibration data for the digital twin 2DT may be obtained by polling or fetching vibration data measured by each of the fixed digital vibration sensors on the plurality of real-world elements 2R. Additionally or alternatively, vibration data of a first digital twin 2DT may be obtained by fetching vibration data of a second digital twin 2DT that is embedded within the first digital twin 2DT, and vibration data for the first digital twin 2DT may include or be derived from
SFT-106-A-PCT vibration data for the second digital twin 2DT. For example, the first digital twin may be a digital twin 2DT of a transportation system 11 (alternatively referred to as a “transportation system digital twin”) and the second digital twin 2DT may be a digital twin 2DT corresponding to a vibration sensor disposed within the transportation system 11 such that the vibration data for the first digital twin 2DT is obtained from or calculated based on data including the vibration data for the second digital twin 2DT. [01018] In embodiments, the digital twin system 200 monitors properties of the real-world elements 2R using the sensors 227 within a respective transportation system 11 that is or may be represented by a digital twin 2DT and/or outputs of models for one or more simulated elements 2S. In embodiments, the digital twin system 200 may minimize network congestion while maintaining effective monitoring of processes by extending polling intervals and/or minimizing data transfer for sensors that correspond to affected real-world elements 2R and performing simulations (e.g., via the digital twin simulation system 206) during the extended interval using data that was obtained from other sources (e.g., sensors that are physically proximate to or have an effect on the affected real-world elements 2R). Additionally or alternatively, error checking may be performed by comparing the collected sensor data with data obtained from the digital twin simulation system 206. For example, consistent deviations or fluctuations between sensor data obtained from the real-world element 2R and the simulated element 2S may indicate malfunction of the respective sensor or another fault condition. [01019] In embodiments, the digital twin system 200 may optimize features of the transportation system through use of one or more simulated elements 2S. For example, the digital twin system 200 may evaluate effects of the simulated elements 2S within a digital twin of a transportation system to quickly and efficiently determine costs and/or benefits flowing from inclusion, exclusion, or substitution of real-world elements 2R within the transportation system 11. The costs and benefits may include, for example, increased machinery costs (e.g., capital investment and maintenance), increased efficiency (e.g., process optimization to reduce waste or increase throughput), decreased or altered footprint within the transportation system 11, extension or optimization of useful lifespans, minimization of component faults, minimization of component downtime, etc. [01020] In embodiments, the digital twin I/O system 204 may include one or more software modules that are executed by one or more controllers of one or more devices (e.g., server devices, user devices, and/or distributed devices) to effect the described functions. The digital twin I/O system 204 may include, for example, an input module 263, an output module 273, and an adapter module 283.
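By way of illustration only, the following is a minimal Python sketch of the kind of error checking described above, in which collected sensor data is compared with values obtained from a simulation of the same element and a consistent deviation is flagged as a possible sensor malfunction or other fault condition. The function name, tolerance, and sample values are hypothetical assumptions introduced for this example.

def consistent_deviation(measured, simulated, tolerance=0.05, fraction=0.8):
    """Return True if the measured series deviates from the simulated series
    by more than `tolerance` (relative) in at least `fraction` of samples."""
    if len(measured) != len(simulated) or not measured:
        raise ValueError("series must be non-empty and the same length")
    deviations = [
        abs(m - s) / abs(s) if s else abs(m - s)
        for m, s in zip(measured, simulated)
    ]
    outliers = sum(1 for d in deviations if d > tolerance)
    return outliers / len(deviations) >= fraction

if __name__ == "__main__":
    simulated = [10.0, 10.1, 10.2, 10.1, 10.0]   # values from a simulated element
    measured = [11.2, 11.3, 11.5, 11.4, 11.2]    # consistently high sensor readings
    if consistent_deviation(measured, simulated):
        print("possible sensor malfunction or other fault condition")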
SFT-106-A-PCT [01021] In embodiments, the input module 263 may obtain or import data from data sources in communication with the digital twin I/O system 204, such as the sensor system 25 and the digital twin simulation system 206. The data may be immediately used by or stored within the digital twin system 200. The imported data may be ingested from data streams, data batches, in response to a triggering event, combinations thereof, and the like. The input module 263 may receive data in a format that is suitable to transfer, read, and/or write information within the digital twin system 200. [01022] In embodiments, the output module 273 may output or export data to other system components (e.g., the digital twin datastore 269, the digital twin simulation system 206, the cognitive intelligence system 258, etc.), devices 265, and/or the client application 217. The data may be output in data streams, data batches, in response to a triggering event (e.g., a request), combinations thereof, and the like. The output module 273 may output data in a format that is suitable to be used or stored by the target element (e.g., one protocol for output to the client application and another protocol for the digital twin datastore 269). [01023] In embodiments, the adapter module 283 may process and/or convert data between the input module 263 and the output module 273. In embodiments, the adapter module 283 may convert and/or route data automatically (e.g., based on data type) or in response to a received request (e.g., in response to information within the data). [01024] In embodiments, the digital twin system 200 may represent a set of workpiece elements in a digital twin, and the digital twin simulation system 206 simulates a set of physical interactions of a worker with the workpiece elements. For example, the worker may be a crewmember of a transportation system that is a cruise ship, and the workpiece may be a dinner plate that needs to be cleaned and stowed. [01025] In embodiments, the digital twin simulation system 206 may determine process outcomes for the simulated physical interactions accounting for simulated human factors. For example, variations in workpiece throughput may be modeled by the digital twin system 200 including, for example, worker response times to events, worker fatigue, discontinuity within worker actions (e.g., natural variations in human-movement speed, differing positioning times, etc.), effects of discontinuities on downstream processes, and the like. In embodiments, individualized worker interactions may be modeled using historical data that is collected, acquired, and/or stored by the digital twin system 200. The simulation may begin based on estimated amounts (e.g., worker age, industry averages, workplace expectations, etc.). The simulation may also individualize data for each worker (e.g., comparing estimated amounts to collected worker-specific outcomes). [01026] In embodiments, information relating to workers (e.g., fatigue rates, efficiency rates, and the like) may be determined by analyzing performance of specific workers over time and modeling said performance.
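By way of illustration only, the following is a minimal Python sketch of simulating workpiece throughput while accounting for simple human factors such as natural variation in cycle time and gradual fatigue, in the spirit of the discussion above. The model, rates, and names are hypothetical assumptions introduced for this example and are not the disclosed simulation system.

import random

def simulate_shift(pieces=100, base_cycle_s=30.0, fatigue_per_piece=0.002, seed=1):
    """Estimate total time (in seconds) for a worker to process `pieces`
    workpieces, where each cycle varies randomly and slows with fatigue."""
    rng = random.Random(seed)
    total = 0.0
    for i in range(pieces):
        variation = rng.uniform(0.9, 1.2)       # natural movement-speed variation
        fatigue = 1.0 + fatigue_per_piece * i   # gradual slowdown over the shift
        total += base_cycle_s * variation * fatigue
    return total

if __name__ == "__main__":
    # An estimate seeded from industry averages; worker-specific parameters
    # could later replace the defaults as outcome data is collected.
    print(round(simulate_shift(), 1), "seconds for 100 workpieces")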
SFT-106-A-PCT [01027] In embodiments, the digital twin system 200 includes a plurality of proximity sensors within the sensor array 25. The proximity sensors are or may be configured to detect elements of the transportation system 11 that are within a predetermined area. For example, proximity sensors may include electromagnetic sensors, light sensors, and/or acoustic sensors. [01028] The electromagnetic sensors are or may be configured to sense objects or interactions via one or more electromagnetic fields (e.g., emitted electromagnetic radiation or received electromagnetic radiation). In embodiments, the electromagnetic sensors include inductive sensors (e.g., radio-frequency identification sensors), capacitive sensors (e.g., contact and contactless capacitive sensors), combinations thereof, and the like. [01029] The light sensors are or may be configured to sense objects or interactions via electromagnetic radiation in, for example, the far-infrared, near-infrared, optical, and/or ultraviolet spectra. In embodiments, the light sensors may include image sensors (e.g., charge-coupled devices and CMOS active-pixel sensors), photoelectric sensors (e.g., through-beam sensors, retroreflective sensors, and diffuse sensors), combinations thereof, and the like. Further, the light sensors may be implemented as part of a system or subsystem, such as a light detection and ranging (“LIDAR”) sensor. [01030] The acoustic sensors are or may be configured to sense objects or interactions via sound waves that are emitted and/or received by the acoustic sensors. In embodiments, the acoustic sensors may include infrasonic, sonic, and/or ultrasonic sensors. Further, the acoustic sensors may be grouped as part of a system or subsystem, such as a sound navigation and ranging (“SONAR”) sensor. [01031] In embodiments, the digital twin system 200 collects and stores data from a set of proximity sensors within the transportation system 11 or portions thereof. The collected data may be stored, for example, in the digital twin datastore 269 for use by components of the digital twin system 200 and/or visualization by a user. Such use and/or visualization may occur contemporaneously with or after collection of the data (e.g., during later analysis and/or optimization of processes). [01032] In embodiments, data collection may occur in response to a triggering condition. These triggering conditions may include, for example, expiration of a static or a dynamic predetermined interval, obtaining a value short of or in excess of a static or dynamic value, receiving an automatically generated request or instruction from the digital twin system 200 or components thereof, interaction of an element with the respective sensor or sensors (e.g., in response to a worker or machine breaking a beam or coming within a predetermined distance from the proximity sensor), interaction of a user with a digital twin (e.g., selection of a transportation system digital twin, a sensor array digital twin, or a sensor digital twin), combinations thereof, and the like.
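By way of illustration only, the following is a minimal Python sketch of deciding whether to collect and store a proximity-sensor reading based on triggering conditions of the kinds described above (interval expiration, a value crossing a threshold, or an element entering the monitored area). The class, parameters, and thresholds are hypothetical assumptions introduced for this example.

import time

class CollectionTrigger:
    def __init__(self, interval_s=60.0, threshold=None):
        self.interval_s = interval_s      # static polling interval
        self.threshold = threshold        # optional value threshold
        self._last_collection = 0.0

    def should_collect(self, value=None, element_detected=False, now=None):
        """Return True if any configured triggering condition is met."""
        now = time.time() if now is None else now
        interval_expired = (now - self._last_collection) >= self.interval_s
        threshold_crossed = (
            self.threshold is not None and value is not None and value >= self.threshold
        )
        if interval_expired or threshold_crossed or element_detected:
            self._last_collection = now
            return True
        return False

if __name__ == "__main__":
    trigger = CollectionTrigger(interval_s=60.0, threshold=0.8)
    # A worker breaking a beam (element_detected) forces a collection even if
    # the interval has not expired and the value is below the threshold.
    print(trigger.should_collect(value=0.2, element_detected=True))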
SFT-106-A-PCT [01033] In some embodiments, the digital twin system 200 collects and/or stores RFID data in response to interaction of a worker with a real-world element 2R. For example, in response to a worker interaction with a real-world environment, the digital twin system 200 may collect and/or store RFID data from RFID sensors within or associated with the corresponding transportation system 11. Additionally or alternatively, in response to worker interaction with a sensor-array digital twin, the digital twin system 200 may collect and/or store RFID data from RFID sensors within or associated with the corresponding sensor array. Similarly, in response to worker interaction with a sensor digital twin, the digital twin system 200 may collect and/or store RFID data from the corresponding sensor. The RFID data may include suitable data attainable by RFID sensors such as proximate RFID tags, RFID tag position, authorized RFID tags, unauthorized RFID tags, unrecognized RFID tags, RFID type (e.g., active or passive), error codes, combinations thereof, and the like. [01034] In embodiments, the digital twin system 200 may further embed outputs from one or more devices within a corresponding digital twin. In embodiments, the digital twin system 200 embeds output from a set of individual-associated devices into a transportation system digital twin. For example, the digital twin I/O system 204 may receive information output from one or more wearable devices 257 or mobile devices (not shown) associated with an individual within a transportation system. The wearable devices may include image capture devices (e.g., body cameras or augmented-reality headwear), navigation devices (e.g., GPS devices, inertial guidance systems), motion trackers, acoustic capture devices (e.g., microphones), radiation detectors, combinations thereof, and the like. [01035] In embodiments, upon receiving the output information, the digital twin I/O system 204 routes the information to the digital twin creation module 264 to check and/or update the transportation system digital twin and/or associated digital twins within the environment (e.g., a digital twin of a worker, machine, or robot position at a given time). Further, the digital twin system 200 may use the embedded output to determine characteristics of the transportation system 11. [01036] In embodiments, the digital twin system 200 embeds output from a LIDAR point cloud system into a transportation system digital twin. For example, the digital twin I/O system 204 may receive information output from one or more LIDAR devices 238 within a transportation system. The LIDAR devices 238 are configured to provide a plurality of points having associated position data (e.g., coordinates in absolute or relative x, y, and z values). Each of the plurality of points may include further LIDAR attributes, such as intensity, return number, total returns, laser color data, return color data, scan angle, scan direction, etc. The LIDAR devices 238 may provide a point cloud that includes the plurality of points to the digital twin system 200 via, for example, the digital twin I/O system 204. Additionally or alternatively, the digital twin system 200 may receive a stream of points and assemble the stream into a point cloud, or may receive a point cloud and assemble the
SFT-106-A-PCT received point cloud with existing point cloud data, map data, or three dimensional (3D)-model data. [01037] In embodiments, upon receiving the output information, the digital twin I/O system 204 routes the point cloud information to the digital twin creation module 264 to check and/or update the environment digital twin and/or associated digital twins within the environment (e.g., a digital twin of a worker, machine, or robot position at a given time). In some embodiments, the digital twin system 200 is further configured to determine closed-shape objects within the received LIDAR data. For example, the digital twin system 200 may group a plurality of points within the point cloud as an object and, if necessary, estimate obstructed faces of objects (e.g., a face of the object contacting or adjacent a floor or a face of the object contacting or adjacent another object such as another piece of equipment). The system may use such closed-shape objects to narrow search space for digital twins and thereby increase efficiency of matching algorithms (e.g., a shape- matching algorithm). [01038] In embodiments, the digital twin system 200 embeds output from a simultaneous location and mapping (“SLAM”) system in an environmental digital twin. For example, the digital twin I/O system 204 may receive information output from the SLAM system, such as Slam sensor 293, and embed the received information within an environment digital twin corresponding to the location determined by the SLAM system. In embodiments, upon receiving the output information from the SLAM system, the digital twin I/O system 204 routes the information to the digital twin creation module 264 to check and/or update the environment digital twin and/or associated digital twins within the environment (e.g., a digital twin of a workpiece, furniture, movable object, or autonomous object). Such updating provides digital twins of non-connected elements (e.g., furnishings or persons) automatically and without need of user interaction with the digital twin system 200. [01039] In embodiments, the digital twin system 200 can leverage known digital twins to reduce computational requirements for the SLAM sensor 293 by using suboptimal map-building algorithms. For example, the suboptimal map-building algorithms may allow for a higher uncertainty tolerance using simple bounded-region representations and identifying possible digital twins. Additionally or alternatively, the digital twin system 200 may use a bounded-region representation to limit the number of digital twins, analyze the group of potential twins for distinguishing features, then perform higher precision analysis for the distinguishing features to identify and/or eliminate categories of, groups of, or individual digital twins and, in the event that no matching digital twin is found, perform a precision scan of only the remaining areas to be scanned.
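By way of illustration only, the following is a minimal Python sketch of using a simple bounded-region (axis-aligned bounding box) around a group of scanned points to narrow the set of candidate digital twins before any higher-precision matching, in the spirit of the approach described above. The function names, dimensions, and tolerance are hypothetical assumptions introduced for this example.

def bounding_box(points):
    """Axis-aligned bounding box extents (dx, dy, dz) for (x, y, z) points."""
    xs, ys, zs = zip(*points)
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))

def candidate_twins(points, twin_dimensions, tolerance=0.10):
    """Return names of twins whose stored dimensions fit the scanned bounding
    box within the given relative tolerance, in any orientation."""
    box = sorted(bounding_box(points))
    matches = []
    for name, dims in twin_dimensions.items():
        dims = sorted(dims)
        if all(abs(b - d) <= tolerance * max(d, 1e-9) for b, d in zip(box, dims)):
            matches.append(name)
    return matches

if __name__ == "__main__":
    scanned = [(0, 0, 0), (2.0, 0, 0), (2.0, 1.0, 0), (0, 1.0, 1.5)]
    known = {"pump skid": (2.0, 1.0, 1.5), "forklift": (3.5, 1.2, 2.1)}
    print(candidate_twins(scanned, known))  # narrows the search to "pump skid"

Only the twins surviving this coarse filter would then be examined with a higher-precision, and more expensive, matching step.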
SFT-106-A-PCT [01040] In embodiments, the digital twin system 200 may further reduce compute required to build a location map by leveraging data captured from other sensors within the environment (e.g., captured images or video, radio images, etc.) to perform an initial map-building process (e.g., a simple bounded-region map or other suitable photogrammetry methods), associate digital twins of known environmental objects with features of the simple bounded-region map to refine the simple bounded-region map, and perform more precise scans of the remaining simple bounded regions to further refine the map. In some embodiments, the digital twin system 200 may detect objects within received mapping information and, for each detected object, determine whether the detected object corresponds to an existing digital twin of a real-world-element. In response to determining that the detected object does not correspond to an existing real-world-element digital twin, the digital twin system 200 may use, for example, the digital twin creation module 264 to generate a new digital twin corresponding to the detected object (e.g., a detected-object digital twin) and add the detected- object digital twin to the real-world-element digital twins within the digital twin datastore. Additionally or alternatively, in response to determining that the detected object corresponds to an existing real-world-element digital twin, the digital twin system 200 may update the real-world- element digital twin to include new information detected by the simultaneous location and mapping sensor, if any. [01041] In embodiments, the digital twin system 200 represents locations of autonomously or remotely moveable elements and attributes thereof within a transportation system digital twin. Such movable elements may include, for example, workers, persons, vehicles, autonomous vehicles, robots, etc. The locations of the moveable elements may be updated in response to a triggering condition. Such triggering conditions may include, for example, expiration of a static or a dynamic predetermined interval, receiving an automatically generated request or instruction from the digital twin system 200 or components thereof, interaction of an element with a respective sensor or sensors (e.g., in response to a worker or machine breaking a beam or coming within a predetermined distance from a proximity sensor), interaction of a user with a digital twin (e.g., selection of an environmental digital twin, a sensor array digital twin, or a sensor digital twin), combinations thereof, and the like. [01042] In embodiments, the time intervals may be based on probability of the respective movable element having moved within a time period. For example, the time interval for updating a worker location may be relatively shorter for workers expected to move frequently (e.g., a worker tasked with lifting and carrying objects within and through the transportation system 11) and relatively longer for workers expected to move infrequently (e.g., a worker tasked with monitoring a process stream). Additionally or alternatively, the time interval may be dynamically adjusted based on applicable conditions, such as increasing the time interval when no movable elements are detected,
SFT-106-A-PCT decreasing the time interval as or when the number of moveable elements within an environment increases (e.g., increasing number of workers and worker interactions), increasing the time interval during periods of reduced environmental activity (e.g., breaks such as lunch), decreasing the time interval during periods of abnormal environmental activity (e.g., tours, inspections, or maintenance), decreasing the time interval when unexpected or uncharacteristic movement is detected (e.g., frequent movement by a typically sedentary element or coordinated movement, for example, of workers approaching an exit or moving cooperatively to carry a large object), combinations thereof, and the like. Further, the time interval may also include additional, semi- random acquisitions. For example, occasional mid-interval locations may be acquired by the digital twin system 200 to reinforce or evaluate the efficacy of the particular time interval. [01043] In embodiments, the digital twin system 200 may analyze data received from the digital twin I/O system 204 to refine, remove, or add conditions. For example, the digital twin system 200 may optimize data collection times for movable elements that are updated more frequently than needed (e.g., multiple consecutive received positions being identical or within a predetermined margin of error). [01044] In embodiments, the digital twin system 200 may receive, identify, and/or store a set of states 116A-116N (i.e., 116A, 116B, 116C…116N) where A,B,C…N indicates a set of indexes that is unique to the identified state. For example, the set of indexes may be the positive integers. Thus, the quantity of indexes in the set of indexes is not necessarily limited to the quantity of letters in an alphabet. Each index in the set of indexes may be used, for example, to indicate an association between a state 116N and a set of identifying criteria 5N having the same index (N in the example of this sentence). In the example depicted in Fig. 78, the set of identified states 116A-116N is related to the transportation system 11. The states 116A-116N may be, for example, data structures that include a plurality of attributes 4A-4N. In this case, the index A-N associated with the attribute may not necessarily be associated with a particular state 116A-116N. When written herein with reference numeral 4, e.g., 4A, the indexes A-N indicate a unique attribute. For example, 4A may be a reference sign for “power input,” 4B may be a reference sign for “operational speed,” 4C may be a reference sign for “critical speed,” and 4D may be a reference sign for “operating temperature.” [01045] Further, the states 116A-116N may be, for example, data structures that include a set of identifying criteria 5A-5N to uniquely identify each respective state 116A-116N. In embodiments, the states 116A-116N may correspond to states where it is desirable for the digital twin system 200 to set or alter conditions of real-world elements 2R and/or the transportation system 11 (e.g., increase/decrease monitoring intervals, alter operating conditions, etc.).
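By way of illustration only, the following is a minimal Python sketch of identifying a state, such as an overspeed state of the kind discussed below, by evaluating identifying criteria against monitored attributes (power input, operational speed, critical speed, operating temperature). The dictionary keys, predicates, and values are hypothetical assumptions introduced for this example.

STATES = {
    # Each state maps to a predicate over the monitored attributes.
    "overspeed": lambda a: a["operational_speed"] > a["critical_speed"]
                           and a["operating_temperature"] <= a["temperature_limit"],
    "high_temperature_overspeed": lambda a: a["operational_speed"] > a["critical_speed"]
                                            and a["operating_temperature"] > a["temperature_limit"],
}

def identify_states(attributes):
    """Return the names of all states whose identifying criteria are met."""
    return [name for name, criteria in STATES.items() if criteria(attributes)]

if __name__ == "__main__":
    monitored = {
        "power_input": 42.0,            # kW
        "operational_speed": 1900.0,    # rpm
        "critical_speed": 1800.0,       # rpm
        "operating_temperature": 65.0,  # degrees C
        "temperature_limit": 90.0,      # degrees C
    }
    print(identify_states(monitored))   # ['overspeed']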
SFT-106-A-PCT [01046] In embodiments, the set of states 116A-116N may further include, for example, minimum monitored attributes for each state 116A-116N, the set of identifying criteria 5A-5N for each state 116A-116N, and/or actions available to be taken or recommended to be taken in response to each state 116A-116N. Such information may be stored by, for example, the digital twin datastore 269 or another datastore. The states 116A-116N or portions thereof may be provided to, determined by, or altered by the digital twin system 200. Further, the set of states 116A-116N may include data from disparate sources. For example, details to identify and/or respond to occurrence of a first state may be provided to the digital twin system 200 via user input, details to identify and/or respond to occurrence of a second state may be provided to the digital twin system 200 via an external system, details to identify and/or respond to occurrence of a third state may be determined by the digital twin system 200 (e.g., via simulations or analysis of process data), and details to identify and/or respond to occurrence of a fourth state may be stored by the digital twin system 200 and altered as desired (e.g., in response to simulated occurrence of the state or analysis of data collected during an occurrence of and response to the state). [01047] In embodiments, the plurality of attributes 4A-4N includes at least the attributes 4A-4N needed to identify the respective state 116A-116N. The plurality of attributes 4A-4N may further include additional attributes that are or may be monitored in determining the respective state 116A- 116N, but are not needed to identify the respective state 116A-116N. For example, the plurality of attributes 4A-4N for a first state may include relevant information such as rotational speed, fuel level, energy input, linear speed, acceleration, temperature, strain, torque, volume, weight, etc. [01048] The set of identifying criteria 5A-5N may include information for each of the set of attributes 4A-4N to uniquely identify the respective state. The identifying criteria 5A-5N may include, for example, rules, thresholds, limits, ranges, logical values, conditions, comparisons, combinations thereof, and the like. [01049] The change in operating conditions or monitoring may be any suitable change. For example, after identifying occurrence of a respective state 116A-116N, the digital twin system 200 may increase or decrease monitoring intervals for a device (e.g., decreasing monitoring intervals in response to a measured parameter differing from nominal operation) without altering operation of the device. Additionally or alternatively, the digital twin system 200 may alter operation of the device (e.g., reduce speed or power input) without altering monitoring of the device. In further embodiments, the digital twin system 200 may alter operation of the device (e.g., reduce speed or power input) and alter monitoring intervals for the device (e.g., decreasing monitoring intervals). [01050] Fig.78 illustrates an example set of identified states 116A-116N related to transportation systems that the digital twin system 200 may identify and/or store for access by intelligent systems (e.g., the cognitive intelligence system 258) or users of the digital twin system 200, according to
SFT-106-A-PCT some embodiments of the present disclosure. The states 116A-116N may include operational states (e.g., suboptimal, normal, optimal, critical, or alarm operation of one or more components), excess or shortage states (e.g., supply-side or output-side quantities), combinations thereof, and the like. [01051] In embodiments, the digital twin system 200 may monitor attributes 4A-4N of real-world elements 2R and/or digital twins 2DT to determine the respective state 116A-116N. The attributes 4A-4N may be, for example, operating conditions, set points, critical points, status indicators, other sensed information, combinations thereof, and the like. For example, the attributes 4A-4N may include power input 4A, operational speed 4B, critical speed 4C, and operational temperature 4D of the monitored elements. While the illustrated example depicts uniform monitored attributes, the monitored attributes may differ by target device (e.g., the digital twin system 200 would not monitor rotational speed for an object with no rotatable components). [01052] Each of the states 116A-116N includes a set of identifying criteria 5A-5N that is unique among the group of monitored states 116A-116N. Referring to Fig. 78, the digital twin system 200 may identify the overspeed state 116A, for example, in response to the monitored attributes 4A-4N meeting a first set of identifying criteria 5A (e.g., operational speed 4B being higher than the critical speed 4C, while the operational temperature 4D is nominal). [01053] The digital twin system 200 may identify the power loss state 116B, for example, in response to the monitored attributes 4A-4N meeting a second set of identifying criteria 5B (e.g., operational speed 4B requiring more than expected power input 4A). [01054] The digital twin system 200 may identify the high-temperature overspeed state 116C, for example, in response to the monitored attributes 4A-4N meeting a third set of identifying criteria 5C (e.g., operational speed 4B being higher than the critical speed 4C, while the operational temperature 4D is above a predetermined limit). [01055] In response to determining that one or more states 116A-116N exists or has occurred, the digital twin system 200 may update triggering conditions for one or more monitoring protocols, issue an alert or notification, or trigger actions of subcomponents of the digital twin system 200. For example, subcomponents of the digital twin system 200 may take actions to mitigate and/or evaluate impacts of the detected states 116A-116N. When attempting to take actions to mitigate impacts of the detected states 116A-116N on real-world elements 2R, the digital twin system 200 may determine whether instructions exist (e.g., are stored in the digital twin datastore 269) or should be developed (e.g., developed via simulation and cognitive intelligence or via user or worker input). Further, the digital twin system 200 may evaluate impacts of the detected states 116A-116N, for example, concurrently with the mitigation actions or in response to determining
SFT-106-A-PCT that the digital twin system 200 has no stored mitigation instructions for the detected states 116A- 116N. [01056] In embodiments, the digital twin system 200 employs the digital twin simulation system 206 to simulate one or more impacts, such as immediate, upstream, downstream, and/or continuing effects, of recognized states. The digital twin simulation system 206 may collect and/or be provided with values relevant to the evaluated states 116A-116N. In simulating the effect of the one or more states 116A-116N, the digital twin simulation system 206 may recursively evaluate performance characteristics of affected digital twins 2DT until convergence is achieved. The digital twin simulation system 206 may work, for example, in tandem with the cognitive intelligence system 258 to determine response actions to alleviate, mitigate, inhibit, and/or prevent occurrence of the one or more states 116A-116N. For example, the digital twin simulation system 206 may recursively simulate impacts of the one or more states 116A-116N until achieving a desired fit (e.g., convergence is achieved), provide the simulated values to the cognitive intelligence system 258 for evaluation and determination of potential actions, receive the potential actions, evaluate impacts of each of the potential actions for a respective desired fit (e.g., cost functions for minimizing production disturbance, preserving critical components, minimizing maintenance and/or downtime, optimizing system, worker, user, or personal safety, etc.). [01057] In embodiments, the digital twin simulation system 206 and the cognitive intelligence system 258 may repeatedly share and update the simulated values and response actions for each desired outcome until desired conditions are met (e.g., convergence for each evaluated cost function for each evaluated action). The digital twin system 200 may store the results in the digital twin datastore 269 for use in response to determining that one or more states 116A-116N has occurred. Additionally, simulations and evaluations by the digital twin simulation system 206 and/or the cognitive intelligence system 258 may occur in response to occurrence or detection of the event. [01058] In embodiments, simulations and evaluations are triggered only when associated actions are not present within the digital twin system 200. In further embodiments, simulations and evaluations are performed concurrently with use of stored actions to evaluate the efficacy or effectiveness of the actions in real time and/or evaluate whether further actions should be employed or whether unrecognized states may have occurred. In embodiments, the cognitive intelligence system 258 may also be provided with notifications of instances of undesired actions with or without data on the undesired aspects or results of such actions to optimize later evaluations. [01059] In embodiments, the digital twin system 200 evaluates and/or represents the effect of machine downtime within a digital twin of a manufacturing facility. For example, the digital twin system 200 may employ the digital twin simulation system 206 to simulate the immediate,
SFT-106-A-PCT upstream, downstream, and/or continuing effects of a machine downtime state 116D. The digital twin simulation system 206 may collect or be provided with performance-related values such as optimal, suboptimal, and minimum performance requirements for elements (e.g., real-world elements 2R and/or nested digital twins 2DT) within the affected digital twins 2DT, and/or characteristics thereof that are available to the affected digital twins 2DT, nested digital twins 2DT, redundant systems within the affected digital twins 2DT, combinations thereof, and the like. [01060] In embodiments, the digital twin system 200 is configured to: simulate one or more operating parameters for the real-world elements in response to the transportation system being supplied with given characteristics using the real-world-element digital twins; calculate a mitigating action to be taken by one or more of the real-world elements in response to being supplied with the contemporaneous characteristics; and actuate, in response to detecting the contemporaneous characteristics, the mitigating action. The calculation may be performed in response to detecting contemporaneous characteristics or operating parameters falling outside of respective design parameters or may be determined via a simulation prior to detection of such characteristics. [01061] Additionally or alternatively, the digital twin system 200 may provide alerts to one or more users or system elements in response to detecting states. [01062] In embodiments, the digital twin I/O system 204 includes a pathing module 293. The pathing module 293 may ingest navigational data from the elements 2, provide and/or request navigational data to components of the digital twin system 200 (e.g., the digital twin simulation system 206, the digital twin dynamic model system 208, and/or the cognitive intelligence system 258), and/or output navigational data to elements 2 (e.g., to the wearable devices 257). The navigational data may be collected or estimated using, for example, historical data, guidance data provided to the elements 2, combinations thereof, and the like. [01063] For example, the navigational data may be collected or estimated using historical data stored by the digital twin system 200. The historical data may include or be processed to provide information such as acquisition time, associated elements 2, polling intervals, task performed, laden or unladen conditions, whether prior guidance data was provided and/or followed, conditions of the transportation system 11, other elements 2 within the transportation system 11, combinations thereof, and the like. The estimated data may be determined using one or more suitable pathing algorithms. For example, the estimated data may be calculated using suitable order-picking algorithms, suitable path-search algorithms, combinations thereof, and the like. The order-picking algorithm may be, for example, a largest gap algorithm, an s-shape algorithm, an aisle-by-aisle algorithm, a combined algorithm, combinations thereof, and the like. The path-search algorithms may be, for example, Dijkstra's algorithm, the A* algorithm, hierarchical path-finding algorithms,
SFT-106-A-PCT incremental path-finding algorithms, any-angle path-finding algorithms, flow field algorithms, combinations thereof, and the like. [01064] Additionally or alternatively, the navigational data may be collected or estimated using guidance data of the worker. The guidance data may include, for example, a calculated route provided to a device of the worker (e.g., a mobile device or the wearable device 257). In another example, the guidance data may include a calculated route provided to a device of the worker that instructs the worker to collect vibration measurements from one or more locations on one or more machines along the route. The collected and/or estimated navigational data may be provided to a user of the digital twin system 200 for visualization, used by other components of the digital twin system 200 for analysis, optimization, and/or alteration, provided to one or more elements 2, combinations thereof, and the like. [01065] In embodiments, the digital twin system 200 ingests navigational data for a set of workers for representation in a digital twin. Additionally or alternatively, the digital twin system 200 ingests navigational data for a set of mobile equipment assets of a transportation system into a digital twin. [01066] In embodiments, the digital twin system 200 includes a system for modeling traffic of mobile elements in a transportation system digital twin. For example, the digital twin system 200 may model traffic patterns for workers or persons within the transportation system 11, mobile equipment assets, combinations thereof, and the like. The traffic patterns may be estimated by modeling traffic patterns from historical data and contemporaneously ingested data. Further, the traffic patterns may be continuously or intermittently updated depending on conditions within the transportation system 11 (e.g., a plurality of autonomous mobile equipment assets may provide information to the digital twin system 200 at a slower update interval than a transportation system 11 that includes both workers and mobile equipment assets). [01067] The digital twin system 200 may alter traffic patterns (e.g., by providing updated navigational data to one or more of the mobile elements) to achieve one or more predetermined criteria. The predetermined criteria may include, for example, increasing process efficiency, decreasing interactions between laden workers and mobile equipment assets, minimizing worker path length, routing mobile equipment around paths or potential paths of persons, combinations thereof, and the like. [01068] In embodiments, the digital twin system 200 may provide traffic data and/or navigational information to mobile elements in a transportation system digital twin. The navigational information may be provided as instructions or rule sets, displayed path data, or selective actuation of devices. For example, the digital twin system 200 may provide a set of instructions to a robot to direct the robot to and/or along a desired route for collecting vibration data from one or more specified locations on one or more specified machines along the route using a vibration sensor.
SFT-106-A-PCT The robot may communicate updates to the system, including obstructions, reroutes, unexpected interactions with other assets within the transportation system 11, etc. [01069] In embodiments, the digital twin system 200 includes design specification information for representing a real-world element 2R using a digital twin 2DT. The digital twin may correspond to an existing real-world element 2R or a potential real-world element 2R. The design specification information may be received from one or more sources. For example, the design specification information may include design parameters set by user input, determined by the digital twin system 200 (e.g., via the digital twin simulation system 206), optimized by users or the digital twin simulation system 206, combinations thereof, and the like. The digital twin simulation system 206 may represent the design specification information for the component to users, for example, via a display device or a wearable device. The design specification information may be displayed schematically (e.g., as part of a process diagram or table of information) or as part of an augmented reality or virtual reality display. The design specification information may be displayed, for example, in response to a user interaction with the digital twin system 200 (e.g., via user selection of the element or user selection to generally include design specification information within displays). Additionally or alternatively, the design specification information may be displayed automatically, for example, upon the element coming within view of an augmented reality or virtual reality device. In embodiments, the displayed design specification information may further include indicia of information source (e.g., different displayed colors indicate user input versus digital twin system 200 determination), indicia of mismatches (e.g., between design specification information and operational information), combinations thereof, and the like. [01070] In embodiments, the digital twin system 200 embeds a set of control instructions for a wearable device within a transportation system digital twin, such that the control instructions are provided to the wearable device to induce an experience for a wearer of the wearable device upon interaction with an element of the transportation system digital twin. The induced experience may be, for example, an augmented reality experience or a virtual reality experience. The wearable device, such as a headset, may be configured to output video, audio, and/or haptic feedback to the wearer to induce the experience. For example, the wearable device may include a display device and the experience may include display of information related to the respective digital twin. The information displayed may include maintenance data associated with the digital twin, vibration data associated with the digital twin, vibration measurement location data associated with the digital twin, financial data associated with the digital twin, such as a profit or loss associated with operation of the digital twin, manufacturing KPIs associated with the digital twin, information related to an occluded element (e.g., a sub-assembly) that is at least partially occluded by a foreground element (e.g., a housing), a virtual model of the occluded element overlaid on the
SFT-106-A-PCT occluded element and visible with the foreground element, operating parameters for the occluded element, a comparison to a design parameter corresponding to the operating parameter displayed, combinations thereof, and the like. Comparisons may include, for example, altering display of the operating parameter to change a color, size, and/or display period for the operating parameter. [01071] In some embodiments, the displayed information may include indicia for removable elements that are or may be configured to provide access to the occluded element with each indicium being displayed proximate to or overlying the respective removable element. Further, the indicia may be sequentially displayed such that a first indicium corresponding to a first removable element (e.g., a housing) is displayed, and a second indicium corresponding to a second removable element (e.g., an access panel within the housing) is displayed in response to the worker removing the first removable element. In some embodiments, the induced experience allows the wearer to see one or more locations on a machine for optimal vibration measurement collection. In an example, the digital twin system 200 may provide an augmented reality view that includes highlighted vibration measurement collection locations on a machine and/or instructions related to collecting vibration measurements. Furthering the example, the digital twin system 200 may provide an augmented reality view that includes instructions related to timing of vibration measurement collection. Information utilized in displaying the highlighted placement locations may be obtained using information stored by the digital twin system 200. In some embodiments, mobile elements may be tracked by the digital twin system 200 (e.g., via observational elements within the transportation system 11 and/or via pathing information communicated to the digital twin system 200) and continually displayed by the wearable device within the occluded view of the worker. This optimizes movement of elements within the transportation system 11, increases worker safety, and minimizes down time of elements resulting from damage. [01072] In some embodiments, the digital twin system 200 may provide an augmented reality view that displays mismatches between design parameters or expected parameters of real-world elements 2R to the wearer. The displayed information may correspond to real-world elements 2R that are not within the view of the wearer (e.g., elements within another room or obscured by machinery). This allows the worker to quickly and accurately troubleshoot mismatches to determine one or more sources for the mismatch. The cause of the mismatch may then be determined, for example, by the digital twin system 200 and corrective actions ordered. In example embodiments, a wearer may be able to view malfunctioning subcomponents of machines without removing occluding elements (e.g., housings or shields). Additionally or alternatively, the wearer may be provided with instructions to repair the device, for example, including display of the removal process (e.g., location of fasteners to be removed), assemblies or subassemblies that should be transported to other areas for repair (e.g., dust-sensitive components), assemblies or
SFT-106-A-PCT subassemblies that need lubrication, and locations of objects for reassembly (e.g., storing location that the wearer has placed removed objects and directing the wearer or another wearer to the stored locations to expedite reassembly and minimize further disassembly or missing parts in the reassembled element). This can expedite repair work, minimize process impact, allow workers to disassemble and reassemble equipment (e.g., by coordinating disassembly without direct communication between the workers), increase equipment longevity and reliability (e.g., by assuring that all components are properly replaced prior to placing back in service), combinations thereof, and the like. [01073] In some embodiments, the induced experience includes a virtual reality view or an augmented reality view that allows the wearer to see information related to existing or planned elements. The information may be unrelated to physical performance of the element (e.g., financial performance such as asset value, energy cost, input material cost, output material value, legal compliance, and corporate operations). One or more wearers may perform a virtual walkthrough or an augmented walkthrough of the transportation system 11. [01074] In examples, the wearable device displays compliance information that expedites inspections or performance of work. [01075] In further examples, the wearable device displays financial information that is used to identify targets for alteration or optimization. For example, a manager or officer may inspect the transportation system 11 for compliance with updated regulations, including information regarding compliance with former regulations, “grandfathered,” and/or excepted elements. This can be used to reduce unnecessary downtime (e.g., scheduling upgrades for the least impactful times, such as during planned maintenance cycles), prevent unnecessary upgrades (e.g., replacing grandfathered or excepted equipment), and reduce capital investment. [01076] Referring back to Fig. 75, in embodiments, the digital twin system 200 may include, integrate, integrate with, manage, handle, link to, take input from, provide output to, control, coordinate with, or otherwise interact with a digital twin dynamic model system 208. The digital twin dynamic model system 208 can update the properties of a set of digital twins of a set of transportation entities and/or systems, including properties of physical industrial assets, workers, processes, manufacturing facilities, warehouses, and the like (or any of the other types of entities or systems described in this disclosure or in the documents incorporated by reference herein) in such a manner that the digital twins may represent those transportation entities and environments, and properties or attributes thereof, in real-time or very near real-time. In some embodiments, the digital twin dynamic model system 208 may obtain sensor data received from a sensor system 25 and may determine one or more properties of a transportation system or a transportation entity within an environment based on the sensor data and based on one or more dynamic models.
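As a purely illustrative, hypothetical example of how the digital twin dynamic model system 208 might map sensor data onto updated digital twin properties, the following Python sketch applies one formulaic dynamic model (a simple heat-conduction relationship) to a pair of temperature readings. The property names, material values, and function names are placeholders introduced for this sketch and are not taken from the disclosure.

# Hypothetical sketch of a dynamic-model update: a digital twin is represented
# as a simple property dictionary, and a dynamic model maps sensor readings to
# updated property values (here, a steady-state conduction estimate).
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    name: str
    properties: dict = field(default_factory=dict)

def heat_flow_model(readings, twin):
    """Steady-state conduction through a wall: q = k * A * dT / d."""
    k = twin.properties["thermal_conductivity_w_per_mk"]
    area = twin.properties["area_m2"]
    thickness = twin.properties["thickness_m"]
    delta_t = readings["hot_side_temp_c"] - readings["cold_side_temp_c"]
    return {"heat_flow_w": k * area * delta_t / thickness}

def update_twin(twin, readings, model):
    # Run the dynamic model and merge its outputs into the twin's properties,
    # keeping previously known values (materials, dimensions, etc.) intact.
    twin.properties.update(model(readings, twin))
    return twin

enclosure = DigitalTwin("engine-room wall", {
    "thermal_conductivity_w_per_mk": 45.0,  # illustrative value for steel
    "area_m2": 2.0,
    "thickness_m": 0.01,
})
update_twin(enclosure, {"hot_side_temp_c": 80.0, "cold_side_temp_c": 30.0}, heat_flow_model)
print(enclosure.properties["heat_flow_w"])  # 450000.0 W for this illustrative geometry

The sketch reflects the point made above that a twin may already contain the material and geometric properties a dynamic model needs, so that only a small number of measurements are required to enrich the digital representation.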
SFT-106-A-PCT [01077] In embodiments, the digital twin dynamic model system 208 may update/assign values of various properties in a digital twin and/or one or more embedded digital twins, including, but not limited to, vibration values, vibration fault level states, probability of failure values, probability of downtime values, cost of downtime values, probability of shutdown values, financial values, KPI values, temperature values, humidity values, heat flow values, fluid flow values, radiation values, substance concentration values, velocity values, acceleration values, location values, pressure values, stress values, strain values, light intensity values, sound level values, volume values, shape characteristics, material characteristics, and dimensions. [01078] In embodiments, a digital twin may be comprised of (e.g., via reference) of other embedded digital twins. For example, a digital twin of a manufacturing facility may include an embedded digital twin of a machine and one or more embedded digital twins of one or more respective motors enclosed within the machine. A digital twin may be embedded, for example, in the memory of a machine that has an onboard IT system (e.g., the memory of an Onboard Diagnostic System, control system (e.g., SCADA system) or the like). Other non-limiting examples of where a digital twin may be embedded include the following: on a wearable device of a worker; in memory on a local network asset, such as a switch, router, access point, or the like; in a cloud computing resource that is provisioned for an environment or entity; and on an asset tag or other memory structure that is dedicated to an entity. [01079] In one example, the digital twin dynamic model system 208 can update vibration characteristics throughout a transportation system digital twin based on captured vibration sensor data measured at one or more locations in the transportation system and one or more dynamic models that model vibration within the transportation system digital twin. The transportation system digital twin may, before updating, already contain information about properties of the transportation entities and/or system that can be used to feed a dynamic model, such as information about materials, shapes/volumes (e.g., of conduits), positions, connections/interfaces, and the like, such that vibration characteristics can be represented for the entities and/or system in the digital twin. Alternatively, the dynamic models may be configured using such information. [01080] In embodiments, the digital twin dynamic model system 208 can update the properties of a digital twin and/or one or more embedded digital twins on behalf of a client application 217. In embodiments, a client application 217 may be an application relating to a component or system (e.g., monitoring a transportation system or a component within, simulating a transportation system, or the like). In embodiments, the client application 217 may be used in connection with both fixed and mobile data collection systems. In embodiments, the client application 217 may be used in connection with network connected sensor system 25.
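By way of a non-limiting, hypothetical illustration of the embedding described above, the following Python sketch shows a facility twin that embeds a machine twin, which in turn embeds motor twins, with property updates addressed to any twin in the hierarchy by identifier. The class name and identifiers are illustrative only.

# Hypothetical sketch of embedded digital twins: a facility twin embeds a
# machine twin, which embeds motor twins; updates can target any twin by id.
class Twin:
    def __init__(self, twin_id, properties=None, embedded=None):
        self.twin_id = twin_id
        self.properties = dict(properties or {})
        self.embedded = list(embedded or [])

    def find(self, twin_id):
        """Depth-first search for an embedded twin (or self) by identifier."""
        if self.twin_id == twin_id:
            return self
        for child in self.embedded:
            found = child.find(twin_id)
            if found is not None:
                return found
        return None

    def update(self, twin_id, **new_properties):
        target = self.find(twin_id)
        if target is None:
            raise KeyError(f"no embedded twin named {twin_id!r}")
        target.properties.update(new_properties)

facility = Twin("facility", embedded=[
    Twin("machine-1", {"status": "running"}, embedded=[
        Twin("motor-1a", {"vibration_fault_level": "normal"}),
        Twin("motor-1b", {"vibration_fault_level": "normal"}),
    ]),
])

facility.update("motor-1b", vibration_fault_level="critical")
print(facility.find("motor-1b").properties)  # {'vibration_fault_level': 'critical'}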
SFT-106-A-PCT [01081] In embodiments, the digital twin dynamic model system 208 leverages digital twin dynamic models to model the behavior of a transportation entity and/or system. Dynamic models may enable digital twins to represent physical reality, including the interactions of transportation entities, by using a limited number of measurements to enrich the digital representation of a transportation entity and/or system, such as based on scientific principles. In embodiments, the dynamic models are formulaic or mathematical models. In embodiments, the dynamic models adhere to scientific laws, laws of nature, and formulas (e.g., Newton’s laws of motion, second law of thermodynamics, Bernoulli’s principle, ideal gas law, Dalton’s law of partial pressures, Hooke’s law of elasticity, Fourier’s law of heat conduction, Archimedes’ principle of buoyancy, and the like). In embodiments, the dynamic models are machine-learned models. [01082] In embodiments, the digital twin system 200 may have a digital twin dynamic model datastore 228 for storing dynamic models that may be represented in digital twins. In embodiments, digital twin dynamic model datastore can be searchable and/or discoverable. In embodiments, digital twin dynamic model datastore can contain metadata that allows a user to understand what characteristics a given dynamic model can handle, what inputs are required, what outputs are provided, and the like. In some embodiments, digital twin dynamic model datastore 228 can be hierarchical (such as where a model can be deepened or made simpler based on the extent of available data and/or inputs, the granularity of the inputs, and/or situational factors (such as where something becomes of high interest and a higher fidelity model is accessed for a period of time). [01083] In embodiments, a digital twin or digital representation of a transportation entity or system may include a set of data structures that collectively define a set of properties of a represented physical asset, device, worker, process, facility, and/or system, and/or possible behaviors thereof. In embodiments, the digital twin dynamic model system 208 may leverage the dynamic models to inform the set of data structures that collectively define a digital twin with real-time data values. The digital twin dynamic models may receive one or more sensor measurements, network connected device data, and/or other suitable data as inputs and calculate one or more outputs based on the received data and one or more dynamic models. The digital twin dynamic model system 208 then uses the one or more outputs to update the digital twin data structures. [01084] In one example, the set of properties of a digital twin of an asset that may be updated by the digital twin dynamic model system 208 using dynamic models may include the vibration characteristics of the asset, temperature(s) of the asset, the state of the asset (e.g., a solid, liquid, or gas), the location of the asset, the displacement of the asset, the velocity of the asset, the acceleration of the asset, probability of downtime values associated with the asset, cost of downtime values associated with the asset, probability of shutdown values associated with the asset, KPIs associated with the asset, financial information associated with the asset, heat flow
SFT-106-A-PCT characteristics associated with the asset, fluid flow rates associated with the asset (e.g., fluid flow rates of a fluid flowing through a pipe), identifiers of other digital twins embedded within the digital twin of the asset and/or identifiers of digital twins embedding the digital twin of the asset, and/or other suitable properties. Dynamic models associated with a digital twin of an asset can be configured to calculate, interpolate, extrapolate, and/or output values for such asset digital twin properties based on input data collected from sensors and/or devices disposed in the transportation system setting and/or other suitable data and subsequently populate the asset digital twin with the calculated values. [01085] In some embodiments, the set of properties of a digital twin of a transportation system device that may be updated by the digital twin dynamic model system 208 using dynamic models may include the status of the device, a location of the device, the temperature(s) of a device, a trajectory of the device, identifiers of other digital twins that the digital twin of the device is embedded within, embeds, is linked to, includes, integrates with, takes input from, provides output to, and/or interacts with and the like. Dynamic models associated with a digital twin of a device can be configured to calculate or output values for these device digital twin properties based on input data and subsequently update the device digital twin with the calculated values. [01086] In some embodiments, the set of properties of a digital twin of a transportation system worker that may be updated by the digital twin dynamic model system 208 using dynamic models may include the status of the worker, the location of the worker, a stress measure for the worker, a task being performed by the worker, a performance measure for the worker, and the like. Dynamic models associated with a digital twin of a transportation system worker can be configured to calculate or output values for such properties based on input data, which then may be used to populate the transportation system worker digital twin. In embodiments, transportation system worker dynamic models (e.g., psychometric models) can be configured to predict reactions to stimuli, such as cues that are given to workers to direct them to undertake tasks and/or alerts or warnings that are intended to induce safe behavior. In embodiments, transportation system worker dynamic models may be workflow models (Gantt charts and the like), failure mode effects analysis models (FMEA), biophysical models (such as to model worker fatigue, energy utilization, hunger), and the like. [01087] Example properties of a digital twin of a transportation system that may be updated by the digital twin dynamic model system 208 using dynamic models may include the dimensions of the transportation system environment, the temperature(s) of the transportation system environment, the humidity value(s) of the transportation system environment, the fluid flow characteristics in the transportation system environment, the heat flow characteristics of the transportation system environment, the lighting characteristics of the transportation system environment, the acoustic
SFT-106-A-PCT characteristics of the transportation system environment the physical objects in the transportation system environment, processes occurring in the transportation system environment, currents of the transportation system environment (if a body of water), and the like. Dynamic models associated with a digital twin of a transportation system can be configured to calculate or output these properties based on input data collected from sensors and/or devices disposed in the transportation system environment and/or other suitable data and subsequently populate the transportation system digital twin with the calculated values. [01088] In embodiments, dynamic models may adhere to physical limitations that define boundary conditions, constants or variables for digital twin modeling. For example, the physical characterization of a digital twin of a transportation entity or transportation system may include a gravity constant (e.g., 9.8 m/s2), friction coefficients of surfaces, thermal coefficients of materials, maximum temperatures of assets, maximum flow capacities, and the like. Additionally or alternatively, the dynamic models may adhere to the laws of nature. For example, dynamic models may adhere to the laws of thermodynamics, laws of motion, laws of fluid dynamics, laws of buoyancy, laws of heat transfer, laws or radiation, laws of quantum dynamics, and the like. In some embodiments, dynamic models may adhere to biological aging theories or mechanical aging principles. Thus, when the digital twin dynamic model system 208 facilitates a real-time digital representation, the digital representation may conform to dynamic models, such that the digital representations mimic real world conditions. In some embodiments, the output(s) from a dynamic model can be presented to a human user and/or compared against real-world data to ensure convergence of the dynamic models with the real world. Furthermore, as dynamic models are based partly on assumptions, the properties of a digital twin may be improved and/or corrected when a real-world behavior differs from that of the digital twin. In embodiments, additional data collection and/or instrumentation can be recommended based on the recognition that an input is missing from a desired dynamic model, that a model in operation isn’t working as expected (perhaps due to missing and/or faulty sensor information), that a different result is needed (such as due to situational factors that make something of high interest), and the like. [01089] Dynamic models may be obtained from a number of different sources. In some embodiments, a user can upload a model created by the user or a third party. Additionally or alternatively, the models may be created on the digital twin system using a graphical user interface. The dynamic models may include bespoke models that are configured for a particular transportation system and/or set of transportation entities and/or agnostic models that are applicable to similar types of digital twins. The dynamic models may be machine-learned models. [01090] Fig.79 illustrates example embodiments of a method for updating a set of properties of a digital twin and/or one or more embedded digital twins on behalf of client applications 217. In
SFT-106-A-PCT embodiments, a digital twin dynamic model system 208 leverages one or more dynamic models to update a set of properties of a digital twin and/or one or more embedded digital twins on behalf of client application 217 based on the effect of collected sensor data from sensor system 25, data collected from network connected devices 265, and/or other suitable data in the set of dynamic models that are used to enable the transportation system digital twins. In embodiments, the digital twin dynamic model system 208 may be instructed to run specific dynamic models using one or more digital twins that represent physical transportation system assets, devices, workers, processes, and/or transportation systems that are managed, maintained, and/or monitored by the client applications 217. [01091] In embodiments, the digital twin dynamic model system 208 may obtain data from other types of external data sources that are not necessarily transportation-related data sources, but may provide data that can be used as input data for the dynamic models. For example, weather data, news events, social media data, and the like may be collected, crawled, subscribed to, and the like to supplement sensor data, network connected device data, and/or other data that is used by the dynamic models. In embodiments, the digital twin dynamic model system 208 may obtain data from a machine vision system. The machine vision system may use video and/or still images to provide measurements (e.g., locations, statuses, and the like) that may be used as inputs by the dynamic models. [01092] In embodiments, the digital twin dynamic model system 208 may feed this data into one or more of the dynamic models discussed above to obtain one or more outputs. These outputs may include calculated vibration fault level states, vibration severity unit values, vibration characteristics, probability of failure values, probability of downtime values, probability of shutdown values, cost of downtime values, cost of shutdown values, time to failure values, temperature values, pressure values, humidity values, precipitation values, visibility values, air quality values, strain values, stress values, displacement values, velocity values, acceleration values, location values, performance values, financial values, KPI values, electrodynamic values, thermodynamic values, fluid flow rate values, and the like. The client application 217 may then initiate a digital twin visualization event using the results obtained by the digital twin dynamic model system 208. In embodiments, the visualization may be a heat map visualization. [01093] In embodiments, the digital twin dynamic model system 208 may receive requests to update one or more properties of digital twins of transportation entities and/or systems such that the digital twins represent the transportation entities and/or systems in real-time. As shown in Fig. 79, at box 100, the digital twin dynamic model system 208 receives a request to update one or more properties of one or more of the digital twins of transportation entities and/or systems. For example, the digital twin dynamic model system 208 may receive the request from a client application 217
SFT-106-A-PCT or from another process executed by the digital twin system 200 (e.g., a predictive maintenance process). The request may indicate the one or more properties and the digital twin or digital twins implicated by the request. In Fig. 79 step 102, the digital twin dynamic model system 208 determines the one or more digital twins required to fulfill the request and retrieves the one or more required digital twins, including any embedded digital twins, from the digital twin datastore 269. At Fig.79, box 104, the digital twin dynamic model system 208 determines one or more dynamic models required to fulfill the request and retrieves the one or more required dynamic models from digital twin dynamic model store 228. At Fig.79, box 106, the digital twin dynamic model system 208 selects one or more sensors from sensor system 25, data collected from network connected devices 265, and/or other data sources from digital twin I/O system 204 based on available data sources for one or more inputs for the one or more dynamic models. In embodiments, the data sources may be defined in the inputs required by the one or more dynamic models or may be selected using a lookup table. At Fig. 79, box 108, the digital twin dynamic model system 208 retrieves the selected data from digital twin I/O system 204, which receives sensor data and other data via real-time sensor API 214. At Fig. 79, box 110, digital twin dynamic model system 208 runs the one or more dynamic models using the retrieved data (e.g., vibration sensor data, network connected device data, and the like) as input data and determines one or more output values based on the one or more dynamic models and the input data. At Fig. 79, box 112, the digital twin dynamic model system 208 updates the values of one or more properties of the one or more digital twins based on the one or more outputs of the dynamic model(s). [01094] In example embodiments, client application 217 may be configured to provide a digital representation and/or visualization of the digital twin of a transportation entity. In embodiments, the client application 217 may include one or more software modules that are executed by one or more server devices. These software modules may be configured to quantify properties of the digital twin, model properties of a digital twin, and/or to visualize digital twin behaviors. In embodiments, these software modules may enable a user to select a particular digital twin behavior visualization for viewing. In embodiments, these software modules may enable a user to select to view a digital twin behavior visualization playback. In some embodiments, the client application 217 may provide a selected behavior visualization to digital twin dynamic model system 208. [01095] In embodiments, the digital twin dynamic model system 208 may receive requests from the client application 217 to update properties of a digital twin in order to enable a digital representation of a transportation entity and/or system wherein the real-time digital representation is a visualization of the digital twin. In embodiments, a digital twin may be rendered by a computing device, such that a human user can view the digital representations of real-world assets, devices, workers, processes and/or systems. For example, the digital twin may be rendered and
SFT-106-A-PCT output to a display device. In embodiments, dynamic model outputs and/or related data may be overlaid on the rendering of the digital twin. In embodiments, dynamic model outputs and/or related information may appear with the rendering of the digital twin in a display interface. In embodiments, the related information may include real-time video footage associated with the real-world entity represented by the digital twin. In embodiments, the related information may include a sum of each of the vibration fault level states in the machine. In embodiments, the related information may be graphical information. In embodiments, the graphical information may depict motion and/or motion as a function of frequency for individual machine components. In embodiments, graphical information may depict motion and/or motion as a function of frequency for individual machine components, wherein a user is enabled to select a view of the graphical information in the x, y, and z dimensions. In embodiments, graphical information may depict motion and/or motion as a function of frequency for individual machine components, wherein the graphical information includes harmonic peaks and other peaks. In embodiments, the related information may be cost data, including the cost of downtime per day data, cost of repair data, cost of new part data, cost of new machine data, and the like. In embodiments, related information may be probability of downtime data, probability of failure data, and the like. In embodiments, related information may be time to failure data. [01096] In embodiments, the related information may be recommendations and/or insights. For example, recommendations or insights received from the cognitive intelligence system related to a machine may appear with the rendering of the digital twin of a machine in a display interface. [01097] In embodiments, clicking, touching, or otherwise interacting with the digital twin rendered in the display interface can allow a user to “drill down” and see underlying subsystems or processes and/or embedded digital twins. For example, in response to a user clicking on a machine bearing rendered in the digital twin of a machine, the display interface can allow a user to drill down and see information related to the bearing, view a 3D visualization of the bearing’s vibration, and/or view a digital twin of the bearing. [01098] In embodiments, clicking, touching, or otherwise interacting with information related to the digital twin rendered in the display interface can allow a user to “drill down” and see underlying information. [01099] Fig.80 illustrates example embodiments of a display interface that renders the digital twin of a dryer centrifuge and other information related to the dryer centrifuge. Dryer centrifuges may be included in many transportation systems. For example, some ships use a dryer centrifuge to separate water from fuel and lubricating oil. Transportation systems and transportation entities such as, for example, shipping ports, fuel infrastructure systems at airports, and oil platforms, may include a dryer centrifuge.
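As a purely illustrative, hypothetical sketch of the request-handling flow of Fig. 79 described above, the following Python listing abstracts the digital twin datastore 269, the dynamic model datastore 228, and the digital twin I/O system 204 as plain dictionaries and callables. All names, the single-model example, and the stubbed sensor reader are placeholders introduced for this sketch.

# Hypothetical sketch of the Fig. 79 flow: receive a request, retrieve the
# implicated twins and dynamic models, select and read data sources, run the
# models, and write the outputs back into the twin properties.
def handle_update_request(request, twin_datastore, model_datastore, read_sensor):
    # Box 102: determine and retrieve the digital twins implicated by the request.
    twins = [twin_datastore[twin_id] for twin_id in request["twin_ids"]]

    # Box 104: determine and retrieve the dynamic models for the requested properties.
    models = [model_datastore[prop] for prop in request["properties"]]

    for twin in twins:
        for model in models:
            # Boxes 106/108: select the data sources the model declares as
            # inputs and retrieve their current values.
            inputs = {name: read_sensor(twin["id"], name) for name in model["inputs"]}

            # Box 110: run the dynamic model on the retrieved input data.
            outputs = model["run"](inputs, twin)

            # Box 112: update the twin's properties from the model outputs.
            twin["properties"].update(outputs)
    return twins

# Minimal usage with a single temperature model and a stubbed sensor reader.
twin_store = {"pump-7": {"id": "pump-7", "properties": {}}}
model_store = {"temperature": {"inputs": ["temp_sensor"],
                               "run": lambda data, twin: {"temperature_c": data["temp_sensor"]}}}
handle_update_request({"twin_ids": ["pump-7"], "properties": ["temperature"]},
                      twin_store, model_store,
                      read_sensor=lambda twin_id, name: 61.5)
print(twin_store["pump-7"]["properties"])  # {'temperature_c': 61.5}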
SFT-106-A-PCT [01100] In some embodiments, the digital twin may be rendered and output in a virtual reality display. For example, a user may view a 3D rendering of a transportation system (e.g., using a monitor or a virtual reality headset). The user may also inspect and/or interact with digital twins of transportation entities. In embodiments, a user may view processes being performed with respect to one or more digital twins (e.g., collecting measurements, movements, interactions, loading, packing, fueling, resupplying, maintaining, cleaning, painting and the like). In embodiments, a user may provide input that controls one or more properties of a digital twin via a graphical user interface. [01101] In some embodiments, the digital twin dynamic model system 208 may receive requests from client application 217 to update properties of a digital twin in order to enable a digital representation of transportation entities and/or systems wherein the digital representation is a heat map visualization of the digital twin. In embodiments, a platform is provided having heat maps displaying collected data from the sensor system 25, network connected devices 265, and data outputs from dynamic models for providing input to a display interface. In embodiments, the heat map interface is provided as an output for digital twin data, such as for handling and providing information for visualization of various sensor data, dynamic model output data, and other data (such as map data, analog sensor data, and other data), such as to another system, such as a mobile device, tablet, dashboard, computer, AR/VR device, or the like. A digital twin representation may be provided in a form factor (e.g., user device, VR-enabled device, AR-enabled device, or the like) suitable for delivering visual input to a user, such as the presentation of a map that includes indicators of levels of analog sensor data, digital sensor data, and output values from the dynamic models (such as data indicating vibration fault level states, vibration severity unit values, probability of downtime values, cost of downtime values, probability of shutdown values, time to failure values, probability of failure values, KPIs, temperatures, levels of rotation, vibration characteristics, fluid flow, heating or cooling, pressure, substance concentrations, and many other output values). In embodiments, signals from various sensors or input sources (or selective combinations, permutations, mixes, and the like) as well as data determined by the digital twin dynamic model system 208 may provide input data to a heat map. Coordinates may include real world location coordinates (such as geo-location or location on a map of a transportation system), as well as other coordinates, such as time-based coordinates, frequency-based coordinates, or other coordinates that allow for representation of analog sensor signals, digital signals, dynamic model outputs, input source information, and various combinations, in a map-based visualization, such that colors may represent varying levels of input along the relevant dimensions. For example, among many other possibilities, if transportation system machine component is at a critical vibration fault level state, the heat map interface may alert a user by showing the machine
SFT-106-A-PCT component in orange. In the example of a heat map, clicking, touching, or otherwise interacting with the heat map can allow a user to drill down and see underlying sensor, dynamic model outputs, or other input data that is used as an input to the heat map display. In other examples, such as ones where a digital twin is displayed in a VR or AR environment, if a transportation system machine component is vibrating outside of normal operation (e.g., at a suboptimal, critical, or alarm vibration fault level), a haptic interface may induce vibration when a user touches a representation of the machine component, or if a machine component is operating in an unsafe manner, a directional sound signal may direct a user’s attention toward the machine in digital twin, such as by playing in a particular speaker of a headset or other sound system. [01102] In embodiments, the digital twin dynamic model system 208 may take a set of ambient environmental data and/or other data and automatically update a set of properties of a digital twin of a transportation entity or system based on the effect of the ambient environmental data and/or other data in the set of dynamic models that are used to enable the digital twin. Ambient environmental data may include temperature data, pressure data, humidity data, wind data, rainfall data, tide data, storm surge data, cloud cover data, snowfall data, visibility data, water level data, and the like. Additionally or alternatively, the digital twin dynamic model system 208 may use a set of ambient environmental data measurements collected by a set of network connected devices 265 disposed in a transportation system setting as inputs for the set of dynamic models that are used to enable the digital twin. For example, digital twin dynamic model system 208 may feed the dynamic models data collected, handled or exchanged by network connected devices 265, such as cameras, monitors, embedded sensors, mobile devices, diagnostic devices and systems, instrumentation systems, telematics systems, and the like, such as for monitoring various parameters and features of machines, devices, components, parts, operations, functions, conditions, states, events, workflows and other elements (collectively encompassed by the term “states”) of transportation systems. Other examples of network connected devices include smart fire alarms, smart security systems, smart air quality monitors, smart/learning thermostats, and smart lighting systems. [01103] Fig.81 illustrates example embodiments of a method for updating a set of vibration fault level states for a set of bearings in a digital twin of a machine. In examples, the machine may be a transportation entity or system. In this example, a client application 217, which interfaces with digital twin dynamic model system 208, may be configured to provide a visualization of the fault level states of the bearings in the digital twin of the machine. [01104] In the example depicted in Fig.81, the digital twin dynamic model system 208 may receive requests from the client application 217 to update one or more vibration fault level states of a digital twin of a machine. At Fig.81, box 200, digital twin dynamic model system 208 receives a request
SFT-106-A-PCT from client application 217 to update one or more bearing vibration fault level states of one or more digital twins. Next, in Fig. 81, step 202, digital twin dynamic model system 208 determines the one or more digital twins to fulfill the request and retrieves the one or more digital twins from the digital twin datastore 269. In this example, the digital twin dynamic model system 208 may retrieve the digital twin of the machine and any embedded digital twins, such as any embedded motor digital twins and bearing digital twins, and any digital twins that embed the machine digital twin, such as the transportation system digital twin. At Fig. 81, box 204, digital twin dynamic model system 208 determines one or more dynamic models to fulfill the request and retrieves the one or more dynamic models from the digital twin dynamic model datastore 228. At Fig.81, box 206, the digital twin dynamic model system 208 selects data sources (e.g., one or more sensors from sensor system 25, data from network connected devices 265, and any other suitable data via digital twin I/O system 204) from a set of available data sources (e.g., available sensors from a set of sensors in sensor system 25) for the one or more inputs of the one or more dynamic models. In the present example, the retrieved one or more dynamic models may take one or more vibration sensor measurements from vibration sensors 235 for input to the one or more dynamic models. In embodiments, vibration sensors 235 may be optical vibration sensors, single axis vibration sensors, tri-axial vibration sensors, and the like. At Fig. 81, box 208, digital twin dynamic model system 208 retrieves data from the selected data sources via the digital twin I/O system 204. Next, at Fig. 81, box 210, the digital twin dynamic model system 208 runs the one or more dynamic models, using the retrieved data as inputs, and calculates one or more output values that represent the one or more bearing vibration fault level state. Next, at Fig. 81, box 212, the digital twin dynamic model system 208 updates one or more bearing vibration fault level states of the one or more digital twins, based on the one or more output values of the one or more dynamic models. The client application 217 may obtain vibration fault level states of the bearings and may display the obtained vibration fault level state associated with each bearing and/or display colors associated with fault level severity (e.g., red for alarm, orange for critical, yellow for suboptimal, green for normal operation) in the rendering of one or more of the digital twins on a display interface. [01105] In another example, a client application 217 may be an augmented reality application. In some embodiments of this example, the client application 217 may obtain vibration fault level states of bearings in a field of view of an AR-enabled device (e.g., smart glasses) hosting the client application from the digital twin of the transportation system (e.g., via an API of the digital twin system 200) and may display the obtained vibration fault level states on the display of the AR- enabled device, such that the vibration fault level state displayed corresponds to the location in the field of view of the AR-enabled device. In this way, a vibration fault level state may be displayed even if there are no vibration sensors located within the field of view of the AR-enabled device.
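As a purely illustrative, hypothetical sketch of the final steps of Fig. 81 and of the color scheme described above (red for alarm, orange for critical, yellow for suboptimal, green for normal operation), the following Python listing classifies a vibration reading, writes the resulting fault level state into each bearing digital twin, and reports the display color. The fault-level thresholds and bearing identifiers are placeholders, not values from the disclosure.

# Hypothetical sketch: map a bearing's vibration fault level state to the
# display color used by a client application 217 and update the bearing twin.
FAULT_COLORS = {"alarm": "red", "critical": "orange",
                "suboptimal": "yellow", "normal": "green"}

def fault_level_state(vibration_mm_s, thresholds=(2.8, 7.1, 11.0)):
    """Classify an RMS vibration reading against illustrative thresholds."""
    suboptimal, critical, alarm = thresholds
    if vibration_mm_s >= alarm:
        return "alarm"
    if vibration_mm_s >= critical:
        return "critical"
    if vibration_mm_s >= suboptimal:
        return "suboptimal"
    return "normal"

def update_bearing_twins(bearing_twins, readings):
    """Update each bearing twin's fault level state and report its color."""
    colors = {}
    for bearing_id, twin in bearing_twins.items():
        state = fault_level_state(readings[bearing_id])
        twin["vibration_fault_level_state"] = state
        colors[bearing_id] = FAULT_COLORS[state]
    return colors

bearings = {"bearing-1": {}, "bearing-2": {}}
print(update_bearing_twins(bearings, {"bearing-1": 1.2, "bearing-2": 8.4}))
# {'bearing-1': 'green', 'bearing-2': 'orange'}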
SFT-106-A-PCT [01106] Fig. 82 illustrates example embodiments of a method for updating a set of vibration severity unit values of bearings in a digital twin of a machine. Vibration severity units may be measured as displacement, velocity, and acceleration. [01107] In this example, client application 217 that interfaces with the digital twin dynamic model system 208 may be configured to provide a visualization of the three-dimensional vibration characteristics of bearings in a digital twin of a machine. [01108] In this example, the digital twin dynamic model system 208 may receive requests from client application 217 to update the vibration severity unit values in the digital twin of a transportation system. At Fig. 82, box 300, digital twin dynamic model system 208 receives a request from client application 217 to update one or more vibration severity unit values in one or more digital twins of a transportation system. Next, in Fig.82, at step 302, a digital twin dynamic model system 208 determines the one or more digital twins required to fulfill the request and retrieves the one or more required digital twins from digital twin datastore 269. In this example, the digital twin dynamic model system 208 may retrieve the one or more digital twins of the transportation system and any embedded digital twins (e.g., digital twins of bearings and other components). At Fig.82, box 304, digital twin dynamic model system 208 determines one or more dynamic models required to fulfill the request and retrieves the one or more required dynamic models from dynamic model datastore 228. At Fig. 82, box 306, the digital twin dynamic model system 208 selects dynamic model input data sources (e.g., one or more sensors from sensor system 25, data from network connected devices 265, and any other suitable data) via digital twin I/O system 204 based on available data sources (e.g., available sensors from a set of sensors in sensor system 25) and the one or more required inputs of the one or more dynamic models. In the present example, the retrieved dynamic models may be configured to take one or more vibration sensor measurements as inputs and provide severity unit values for bearings in the transportation system. At Fig. 82, box 308, digital twin dynamic model system 208 retrieves data from the selected data sources. For example, the data may be one or more measurements from each of the selected one or more vibration sensors. In the present example, the digital twin dynamic model system 208 retrieves measurements from vibration sensors 235 via digital twin I/O system 204. At Fig.82, box 310, digital twin dynamic model system 208 runs the one or more dynamic models using the retrieved data as the one or more inputs to calculate one or more output values that represent vibration severity unit values for the one or more digital twins. Next, at Fig.82, box 312, the digital twin dynamic model system 208 updates the one or more vibration severity unit values in the one or more digital twins and all other embedded digital twins or digital twins that embed the one or more digital twins based on the one or more output values of the one or more dynamic models.
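As a purely illustrative, hypothetical example of how raw vibration samples might be reduced to vibration severity unit values of the kind updated by the method of Fig. 82, the following Python sketch computes the root-mean-square (RMS), peak, and peak-to-peak values of a sampled velocity waveform. The sample data and unit choices are placeholders for this sketch only.

# Hypothetical sketch: reduce a sampled velocity waveform (mm/s) from a
# vibration sensor 235 to severity unit values (RMS, peak, peak-to-peak).
import math

def severity_units(velocity_samples_mm_s):
    n = len(velocity_samples_mm_s)
    rms = math.sqrt(sum(v * v for v in velocity_samples_mm_s) / n)
    peak = max(abs(v) for v in velocity_samples_mm_s)
    return {"rms_mm_s": rms,
            "peak_mm_s": peak,
            "peak_to_peak_mm_s": max(velocity_samples_mm_s) - min(velocity_samples_mm_s)}

# A pure sine wave of amplitude A has an RMS of A / sqrt(2) (~0.707 * A).
samples = [3.0 * math.sin(2 * math.pi * k / 64) for k in range(64)]
print(round(severity_units(samples)["rms_mm_s"], 3))  # ~2.121

Comparable computations could be performed on displacement or acceleration waveforms, since, as noted above, vibration severity units may be measured as displacement, velocity, or acceleration.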
SFT-106-A-PCT [01109] Fig. 83 illustrates example embodiments of a method for updating a set of probability of failure values for machine components in the digital twin of a machine. [01110] In this example, the digital twin dynamic model system 208 may receive requests from client application 217 to update the probability of failure values for one or more digital twins of a transportation system. At Fig. 83, box 400, digital twin dynamic model system 208 receives a request from client application 217 to update one or more probability of failure values for one or more digital twins of a transportation system, any embedded component digital twins, and any digital twins that embed the machine digital twin such as a transportation system digital twin. Next, in Fig.83, at step 402, digital twin dynamic model system 208 determines the one or more digital twins required to fulfill the request and retrieves the one or more required digital twins. In this example, the digital twin dynamic model system 208 may retrieve the digital twin of the transportation system, the digital twin of the machine, and the digital twins of machine components from digital twin datastore 269. At Fig. 83, box 404, digital twin dynamic model system 208 determines one or more dynamic models required to fulfill the request and retrieves the one or more required dynamic models from dynamic model datastore 228. At Fig.83, box 406, the digital twin dynamic model system 208 selects, via digital twin I/O system 204, dynamic model input data sources (e.g., one or more sensors from sensor system 25, data from network connected devices 265, historical failure records, and any other suitable data) based on available data sources (e.g., available sensors from a set of sensors in sensor system 25) and the and the one or more required inputs of the dynamic model(s). In the present example, the retrieved dynamic models may take one or more vibration measurements from vibration sensors 235 and historical failure data as dynamic model inputs and output probability of failure values for the machine components in the digital twin of the machine. At Fig.83, box 408, digital twin dynamic model system 208 retrieves data from each of the selected sensors and/or network connected devices via digital twin I/O system 204. At Fig. 83, box 410, digital twin dynamic model system 208 runs the one or more dynamic models using the retrieved one or more vibration measurements and historical failure data as inputs and calculates one or more output values that represent the probability of failure values for the one or more digital twins. Next, at Fig.83, box 412, the digital twin dynamic model system 208 updates the one or more probability of failure values of one or more digital twins, all embedded digital twins, and all digital twins that embed the digital twin based on the output values of the one or more dynamic models. [01111] Fig. 84 illustrates example embodiments of a method for updating a set of probability of downtime for machines in one or more digital twins of a transportation system.
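Before turning to the details of Fig. 84, a purely illustrative, hypothetical dynamic model of the kind referenced in Figs. 83 and 84 is sketched below: it combines a current vibration severity value with historical failure data using a logistic function. The functional form and coefficients are arbitrary placeholders introduced for this sketch and are not taken from the disclosure.

# Hypothetical probability-of-failure model: a logistic function of current
# vibration severity and a historical failure rate for the component class.
import math

def probability_of_failure(vibration_rms_mm_s, historical_failures, operating_hours,
                           a=-4.0, b=0.45, c=2000.0):
    baseline_rate = historical_failures / max(operating_hours, 1.0)  # failures per hour
    score = a + b * vibration_rms_mm_s + c * baseline_rate
    return 1.0 / (1.0 + math.exp(-score))  # logistic squashing to [0, 1]

# A smoothly running pump versus one with elevated vibration and a worse history.
print(round(probability_of_failure(1.5, historical_failures=1, operating_hours=20000), 3))  # ~0.038
print(round(probability_of_failure(9.0, historical_failures=4, operating_hours=20000), 3))  # ~0.611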
SFT-106-A-PCT [01112] In this example, client application 217, which interfaces with the digital twin dynamic model system 208, may be configured to provide a visualization of the probability of downtime values of a transportation system in the one or more digital twins of the transportation system. [01113] In this example, the digital twin dynamic model system 208 may receive requests from a client application 217 to update probability of downtime values of one or more digital twins of a transportation system. At Fig.84, box 500, the digital twin dynamic model system 208 receives a request from client application 217 to update one or more probability of downtime values of one or more digital twins of a transportation system and any embedded digital twins such as the individual machine digital twins. Next, in Fig.84, at step 502, digital twin dynamic model system 208 determines the one or more digital twins required to fulfill the request and retrieves the one or more required digital twins from digital twin datastore 269. In this example, the digital twin dynamic model system 208 may retrieve the one or more digital twins of the transportation system and any embedded digital twins from digital twin datastore 269. At Fig.84, box 504, digital twin dynamic model system 208 determines one or more dynamic models required to fulfill the request and retrieves the one or more required dynamic models from dynamic model datastore 228. At Fig. 84, box 506, the digital twin dynamic model system 208 selects dynamic model input data sources (e.g., one or more sensors from sensor system 25, data from network connected devices 265, and any other suitable data) based on available data sources (e.g., available sensors from a set of sensors in sensor system 25) and the and the one or more required inputs of the one or more dynamic models via digital twin I/O system 204. In the present example, the retrieved one or more dynamic models may be configured to take vibration measurements from vibration sensors and historical downtime data as inputs and output probability of downtime values for different machines throughout the transportation system. At Fig.84, box 508, digital twin dynamic model system 208 retrieves data from the selected data sources. For example, the retrieved data may be one or more measurements from each of the selected one or more vibration sensors retrieved via digital twin I/O system 204. At Fig.84, box 510, digital twin dynamic model system 208 runs the one or more dynamic models using the retrieved data as the one or more inputs to calculate one or more output values that represent the one or more probability of downtime values. In examples, the retrieved data may include vibration measurements and historical downtime data. Next, at Fig.84, box 512, the digital twin dynamic model system 208 updates one or more probability of downtime values for the machines in the one or more digital twins and all embedded digital twins based on the one or more output values of the dynamic models. [01114] Fig.85 illustrates example embodiments of a method for updating one or more probability of shutdown values in the one or more digital twins of a transportation system having a set of transportation entities.
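As a purely illustrative, hypothetical example of how entity-level values might be aggregated for a method such as that of Fig. 85, the following Python sketch treats the shutdown-relevant transportation entities as independent and computes the probability that the system as a whole experiences a shutdown as 1 minus the product of the entity-level non-shutdown probabilities. The independence assumption, entity names, and probabilities are simplifications introduced for this sketch only.

# Hypothetical aggregation of entity-level shutdown probabilities into a
# system-level shutdown probability, assuming independent entities.
def system_shutdown_probability(entity_shutdown_probabilities):
    no_shutdown = 1.0
    for p in entity_shutdown_probabilities.values():
        no_shutdown *= (1.0 - p)
    return 1.0 - no_shutdown

entities = {"refueling-center-east": 0.02,
            "refueling-center-west": 0.05,
            "fuel-pipeline": 0.01}
print(round(system_shutdown_probability(entities), 4))  # 0.0783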
SFT-106-A-PCT [01115] In the present example, the digital twin dynamic model system 208 may receive requests from client application 217 to update the probability of shutdown values for the set of transportation entities within one or more digital twins of a transportation system. For example, the transportation entities may be relatively large entities like refueling centers that could lead to a shutdown of large portions of the transportation system. At Fig.85, box 600, digital twin dynamic model system 208 receives a request from client application 217 to update one or more probability of shutdown values of the one or more digital twins and any embedded digital twins. Next, in Fig. 85, at step 602, digital twin dynamic model system 208 determines the one or more digital twins required to fulfill the request and retrieves the one or more required digital twins from digital twin datastore 269. In this example, the digital twin dynamic model system 208 may retrieve the one or more digital twins of the transportation system and any embedded digital twins. At Fig. 85, box 604, digital twin dynamic model system 208 determines one or more dynamic models required to fulfill the request and retrieves the one or more required dynamic models from dynamic model datastore 228. At Fig. 85, box 606, the digital twin dynamic model system 208 selects dynamic model input data sources (e.g., one or more sensors from sensor system 25, data from network connected devices 265, and any other suitable data) based on available data sources (e.g., available sensors from a set of sensors in sensor system 25) and the and the one or more required inputs of the one or more dynamic models via digital twin I/O system 204. In the present example, the retrieved one or more dynamic models may be configured to take one or more vibration measurements from one or more vibration sensors 235 and/or other suitable data as inputs and output probability of shutdown values for each transportation entity in the one or more digital twins of the transportation system. At Fig.85, box 608, digital twin dynamic model system 208 retrieves data from the selected data sources. For example, the retrieved data may be one or more vibration measurements from each of the selected vibration sensors 235 retrieved via digital twin I/O system 204. At Fig. 85, box 610, digital twin dynamic model system 208 runs the one or more dynamic models using the retrieved data as the one or more inputs to calculate one or more output values that represent the one or more probability of shutdown values. In examples, the retrieved data may include vibration measurements and historical shutdown data. Next, at Fig.85, box 612, the digital twin dynamic model system 208 updates one or more probability of shutdown values of the one or more digital twins and all embedded digital twins based on the one or more output values of the one or more dynamic models. [01116] Fig. 86 illustrates example embodiments of a method for updating a set of cost of downtime values in machines in one or more digital twins of a transportation system. [01117] In the present example, the digital twin dynamic model system 208 may receive requests from a client application 217 to populate real-time cost of downtime values associated with
SFT-106-A-PCT machines in one or more digital twins of a transportation system. At Fig.86, box 700, digital twin dynamic model system 208 receives a request from the client application 217 to update one or more cost of downtime values of one or more digital twins and any embedded digital twins (e.g., machines, machine parts, and the like) from the client application 217. Next, in Fig. 86, at step 702, the digital twin dynamic model system 208 determines the one or more digital twins required to fulfill the request and retrieves the one or more required digital twins. In this example, the digital twin dynamic model system 208 may retrieve the digital twins of the transportation system, the machines, the machine parts, and any other embedded digital twins from digital twin datastore 269. At Fig. 86, box 704, digital twin dynamic model system 208 determines one or more dynamic models required to fulfill the request and retrieves the one or more required dynamic models from dynamic model datastore 228. At Fig. 86, box 706, the digital twin dynamic model system 208 selects dynamic model input data sources (e.g., one or more sensors from sensor system 25, data from network connected devices 265, and any other suitable data) based on available data sources (e.g., available sensors from a set of sensors in sensor system 25) and the and the one or more required inputs of the one or more dynamic models via digital twin I/O system 204. In the present example, the retrieved one or more dynamic models may be configured to take historical downtime data and operational data as inputs and output data representing cost of downtime per day for machines in the transportation system. At Fig. 86, box 708, digital twin dynamic model system 208 retrieves data from the selected data sources. For example, the retrieved data may be historical downtime data and operational data retrieved via digital twin I/O system 204. At Fig.86, box 710, digital twin dynamic model system 208 runs the at least one dynamic models using the retrieved data as the one or more inputs to calculate one or more output values that represent a cost of downtime. In examples, the retrieved data may include historical downtime data and operational data. In examples, the cost of downtime may be a cost of downtime per day for machines in the transportation system. Next, at Fig. 86, box 712, the digital twin dynamic model system 208 updates the one or more cost of downtime values of the one or more digital twins and the digital twins of all embedded digital twins based on the one or more output values of the one or more dynamic models. [01118] Fig. 87 illustrates example embodiments of a method for updating a set of KPI values in the digital twin of a transportation system. In embodiments, the KPI is selected from the group consisting of uptime, capacity utilization, on standard operating efficiency, overall operating efficiency, overall equipment effectiveness, machine downtime, unscheduled downtime, machine set up time, inventory turns, inventory accuracy, quality (e.g., percent defective), first pass yield, rework, scrap, failed audits, on-time delivery, customer returns, training hours, employee turnover, reportable health & safety incidents, revenue per employee, profit per employee, schedule
SFT-106-A-PCT attainment, total cycle time, throughput, changeover time, yield, planned maintenance percentage, availability, and customer return rate. [01119] In the present example, the digital twin dynamic model system 208 may receive requests from a client application 217 to update real-time KPI values in one or more digital twins of a transportation system. At Fig. 87, box 800, digital twin dynamic model system 208 receives a request from the client application 217 to update one or more KPI values of the one or more digital twins and any embedded digital twins (e.g., machines, machine parts, and the like) from the client application 217. Next, in Fig. 87, at step 802, the digital twin dynamic model system 208 determines the one or more digital twins required to fulfill the request and retrieves the one or more required digital twins. In this example, the digital twin dynamic model system 208 may retrieve the one or more digital twins of the transportation system, the machines, the machine parts, and any other embedded digital twins from digital twin datastore 269. At Fig.87, box 804, digital twin dynamic model system 208 determines one or more dynamic models required to fulfill the request and retrieves the one or more required dynamic models from the dynamic model datastore 228. At Fig. 87, box 806, the digital twin dynamic model system 208 selects dynamic model input data sources (e.g., one or more sensors from sensor system 25, data from network connected devices 265, and any other suitable data) based on available data sources (e.g., available sensors from a set of sensors in sensor system 25) and the and the one or more required inputs of the one or more dynamic models via a digital twin I/O system 204. In the present example, the retrieved one or more dynamic models may be configured to take one or more vibration measurements obtained from one or more vibration sensors 235 and operational data as inputs and output one or more KPI values for the transportation system. At Fig.87, box 808, digital twin dynamic model system 208 retrieves data from the selected data sources. For example, the retrieved data may be one or more vibration measurements from each of the selected one or more vibration sensors 235 and operational data retrieved via digital twin I/O system 204. At Fig.87, box 810, digital twin dynamic model system 208 runs the one or more dynamic models using the retrieved data as the one or more inputs to calculate one or more output values that represent KPI values. In examples, the retrieved data may include vibration measurements and operational data. Next, at Fig.87, box 812, the digital twin dynamic model system 208 updates one or more KPI values of the one or more digital twins, machine digital twins, machine part digital twins, and all other embedded digital twins based on the one or more output values of the one or more dynamic models. [01120] Fig.88 illustrates examples of a method of the present disclosure. At Fig.88, box 900, the method includes: receiving imported data from one or more data sources, the imported data corresponding to a transportation system. At Fig. 88, box 902 is: generating a digital twin of a transportation system representing the transportation system based on the imported data. At Fig.
SFT-106-A-PCT 88, box 904 is: identifying one or more transportation entities within the transportation system. At Fig. 88, box 906 is: generating a set of discrete digital twins representing the one or more transportation entities within the transportation system. At Fig. 88, box 908 is: embedding the set of discrete digital twins within the digital twin of the transportation system. At Fig.88, box 910 is: establishing a connection with a sensor system of the transportation system. At Fig. 88, box 912 is: receiving real-time sensor data from one or more sensors of the sensor system via the connection. At Fig. 88, box 914 is: updating at least one of the digital twin of the transportation system and the set of discrete digital twins based on the real-time sensor data. [01121] Fig. 89 illustrates examples of different types of enterprise digital twins, including executive digital twins, in relation to the data layer, processing layer, and application layer of an enterprise digital twin framework. In embodiments, executive digital twins may include, but are not limited to, CEO digital twins 8902, CFO digital twins 8904, COO digital twins 8906, CMO digital twins 8908, CTO digital twins 8910, CIO digital twins 8912, GC digital twins 8914, HR digital twins 8916, and the like. Additionally, the enterprise digital twins that may be relevant to the executive suite may include cohort digital twins 8920, agility digital twins 8922, Customer Relationship Management (CRM) digital twins 8924, and the like. The discussion of the different types of digital twins is provided for example and not intended to limit the scope of the disclosure. It is understood that in some embodiments, users may alter the configuration of the various executive digital twins based on the business needs of the enterprise, the reporting structure of the enterprise, and the roles and responsibilities of the various executives within the enterprise. [01122] In embodiments, executive digital twins and the additional enterprise digital twins are generated using various types of data collected from different data sources. As discussed, the data may include real-time data 8930, historical data 8932, analytics data 8934, simulation/modeled data 8936, CRM data 8938, organizational data, such as org charts and/or an organizational digital twin 8940, an enterprise data lake 8942, and market data 8944. In embodiments, the real-time data 8930 may include sensor data collected from, for example, sensor systems 25 as depicted in Fig. 75. In embodiments, the real-time data 8930 may include sensor data collected from one or more IoT sensor systems, which may be collected directly from each sensor and/or by various data collection devices associated with the enterprise, including readers ( e.g., RFID, NFC, and Bluetooth readers), beacons, gateways, repeaters, mesh network nodes, WIFI systems, access points, routers, switches, gateways, local area network nodes, edge devices, and the like. Real-time data 8930 may include additional or alternative types of data that are collected in real-time, such as real-time sales data, real-time cost data, project management data that indicates the status of current projects, and the like. Historical data 8932 may be any data collected by the enterprise and/or on behalf of the enterprise in the past. This may include sensor data collected from the
SFT-106-A-PCT sensor systems of the enterprise, sales data, cost data, maintenance data, purchase data, employee hiring data, employee on-boarding data, employee retention data, legal-related data indicating legal proceedings, patent filing data indicating patent filings and issued patents, project management data indicating historical progress of past and current projects, product data indicating products that are on the market, and the like. Analytics data 8934 may be data derived by performing one or more analytics processes on data collected by and/or on behalf of the enterprise. Simulation/modeled data 8936 may be any data derived from simulation and/or behavior modeling processes that are performed with respect to one or more digital twins. CRM data 8938 may include data obtained from a CRM of the enterprise. An organizational digital twin 8940 may be a digital twin of the enterprise. The enterprise data lake 8942 may be a data lake that includes data collected from any number of sources. In embodiments, the market data 8944 may include data that is collected from disparate data sources concerning or related to competitors and other cohorts in the marketplace and supply chain. Market data 8944 may be collected from many different sources and may be structured or unstructured. In embodiments, market data 8944 may contain an element of uncertainty that may be depicted in a digital twin that relies on such market data 8944, such as by showing error bars, probability cones, random walk paths, or the like. It is appreciated that the different types of data highlighted above may overlap. For example, historical data 8932 may be obtained from the CRM data 8938; the enterprise data lake 8942 may include real-time data 8930, historical data 8932, analytics data 8934, simulation/modeled data 8936, and/or CRM data 8938; and analytics data 8934 may be based on historical data 8932, real-time data 8930, CRM data 8938, and/or market data 8944. Additional or alternative types of data may be used to populate an enterprise digital twin. [01123] In embodiments, the data structuring system 8900 may structure the various data collected by and/or on behalf of the enterprise. In embodiments, the digital twin generation system 8928 generates the enterprise digital twins. As discussed, the digital twin generation system 8928 may receive a request for a particular type of digital twin (e.g., a CEO digital twin 8902 or a CTO digital twin 8910) and may determine the types of data needed to populate the digital twin based on the configuration of the requested type of digital twin. In embodiments, the digital twin generation system 8928 may then generate the requested digital twin based on the various types of data (which may include structured data structured by the data structuring system 8900). In some embodiments, the digital twin generation system 8928 may output the generated digital twin to a client application 217, which may then display the requested digital twins. [01124] In embodiments, a CEO digital twin 8902 is a digital twin configured for the CEO or analogous top-level decision maker of an enterprise. The CEO digital twin 8902 may include high-level views of different states and/or operations data of the enterprise, including real-time and
SFT-106-A-PCT historical representations of major assets, processes, divisions, performance metrics, the condition of different business units of the enterprise, and any other mission-critical information type. In embodiments, the CEO digital twin 8902 may provide simulations, predictions, statistical summaries, decision-support based on analytics, machine learning, and/or other AI and learning- type processing of inputs (e.g., fiscal data, competitor data, product data, and the like). In embodiments, a CEO digital twin 8902 may provide functionality including, but not limited to, management of personnel, delegation of tasks, decisions, or tasks, coordination with the Board of Directors and/or strategic partners, risk management, policy management, oversight of budgets, resource allocation, investments, and other executive-related resources. [01125] In embodiments, the types of data that may populate a CEO digital twin 8902 may include, but are not limited to: macroeconomic data, microeconomic analytic data, forecast data, demand planning data, employment and salary data, analytic results of AI and/or machine learning modeling. [01126] In embodiments, the digital twin perspective builder 8926 leverages metadata, artificial intelligence, and/or other data processing techniques to produce a definition of information required for generation of the digital twin in the digital twin generation system 8928. In some embodiments, different relevant datasets are hooked to a digital twin (e.g., an executive digital twin, an environment digital twin, or the like) at the appropriate level of granularity, thereby allowing for the structural aspects of the data (e.g., system of accounts, sensor readings, sales data, or the like) to be a part of the data analytics process. One aspect of making a perspective function is that the user can change the structural view or the granularity of data while potentially forecasting future events or changes to the structure to guide control of the area of the business at question. In embodiments, the term "granularity" or "grain of data" may refer to a single line of data, a single byte of data, a single file, a single instance, or the like. Examples of “grains of data" or highly granular data may include a detail record on a sale, a single block in a blockchain in a distributed ledger, a single event in an event log, a single vibration reading from a vibration sensor, or similar singular or atomic data units, and the like. Grain or atomicity may impose a constraint in how the data can be combined or processed to form different outputs. For example, if some element of data is captured only at the level of once-per-day, then it can only be broken down to single days ( or aggregation of days) and cannot be broken down to hours or minutes, unless derived from the day representation (e.g., using inference techniques and/or models). Similarly, if data is provided only at the aggregate business unit level, it can be broken down to the level of an individual employee only by, for example, averaging, modeling, or inductive functions. Role-based and other enterprise digital twins may benefit from finer levels of data, as aggregations and other processing steps may produce outputs that are dynamic in nature and/or that relate to dynamic processes and/or real-time
SFT-106-A-PCT decision-making. It is noted that different types of digital twins may have different "sized" grains of data. For example, the grains of data that feed a CEO digital twin may be at a different granularity level than the grains of data that feed a COO digital twin. In some embodiments, however, a CEO may drill down into a state of the CEO digital twin and the granularity for the selected state may be increased. [01127] In embodiments, the executive digital twins may link to, interact with, integrate with and/or be used by a number of different applications. For example, the executive digital twins may be used in automated AI-reporting tools 8960, collaboration tools 8962, in connection with executive agents 8964, in board meeting tools 8966, for training modules 8968, and for planning tools 8970. [01128] Fig. 90 illustrates example embodiments of a method for configuring role-based digital twins. At Fig. 90, box 9000, a processing system having one or more processors receives an organizational definition of an enterprise, wherein the organizational definition defines a set of roles within the enterprise. In this example, the organizational definition may further identify a set of physical assets of the enterprise. Next, in Fig. 90, at step 9002, the processing system generates an organizational digital twin of the enterprise based on the organizational definition, wherein the organizational digital twin is a digital representation of an organizational structure of the enterprise. In some examples, the organizational structure includes hierarchical components. In some examples, the hierarchical components are embodied in a graph data structure. At Fig. 90, box 9004, the processing system determines a set of relationships between different roles within the set of roles based on the organizational definition. In this example, determining the set of relationships may include parsing the organizational definition to identify a reporting structure and one or more business units of the enterprise. The set of relationships may be inferred from the reporting structure and the business units. At Fig. 90, box 9006, the processing system determines a set of settings for a role from the set of roles based on the determined set of relationships. At Fig. 90, box 9008 is linking an identity of a respective individual to the role. Some examples further include linking a set of identities to the set of roles, wherein each identity corresponds to a respective role from the set of roles. At Fig. 90, box 9010, the processing system determines a configuration of a presentation layer of a role-based digital twin corresponding to the role based on the settings of the role that is linked to the identity, wherein the configuration of the presentation layer defines a set of states that is depicted in the role-based digital twin associated with the role. Next, at Fig. 90, box 9012, the processing system determines a set of data sources that provide data corresponding to the set of states, wherein each data source provides one or more respective types of data. At Fig. 90, box 9014 is configuring one or more data structures that are received from the one or more data
SFT-106-A-PCT sources, wherein the one or more data structures are configured to provide data used to populate one or more of the set of states in the role-based digital twin. [01129] In examples, the set of settings for the set of roles may include role-based preference settings. In some examples, the role-based preference settings may be configured based on a set of role-specific templates. In some examples, the set of templates may include at least one of a CEO template, a COO template, a CFO template, a counsel template, a board member template, a CTO template, a chief marketing officer template, an information technology manager template, a chief information officer template, a chief data officer template, an investor template, a customer template, a vendor template, a supplier template, an engineering manager template, a project manager template, an operations manager template, a sales manager template, a salesperson template, a service manager template, a maintenance operator template, and a business development template. [01130] In some examples, the set of settings for the set of roles may include role-based taxonomy settings. In some examples, the taxonomy settings may identify a taxonomy that is used to characterize data that is presented in the role-based digital twin, such that the data is presented in a taxonomy that is linked to the role corresponding to the role-based digital twin. In some examples, the taxonomy may include at least one of a CEO taxonomy, a COO taxonomy, a CFO taxonomy, a counsel taxonomy, a board member taxonomy, a CTO taxonomy, a chief marketing officer taxonomy, an information technology manager taxonomy, a chief information officer taxonomy, a chief data officer taxonomy, an investor taxonomy, a customer taxonomy, a vendor taxonomy, a supplier taxonomy, an engineering manager taxonomy, a project manager taxonomy, an operations manager taxonomy, a sales manager taxonomy, a salesperson taxonomy, a service manager taxonomy, a maintenance operator taxonomy, and a business development taxonomy. [01131] In examples, at least one role of the set of roles may be selected from among a CEO role, a COO role, a CFO role, a counsel role, a board member role, a CTO role, an information technology manager role, a chief information officer role, a chief data officer role, a human resources manager role, an investor role, an engineering manager role, an accountant role, an auditor role, a resource planning role, a public relations manager role, a project manager role, an operations manager role, a research and development role, an engineer role, including but not limited to mechanical engineer, electrical engineer, semiconductor engineer, chemical engineer, computer science engineer, data science engineer, network engineer, or some other type of engineer, and a business development role. In examples, the at least one role may be selected from among a factory manager role, a factory operations role, a factory worker role, a power plant manager role, a power plant operations role, a power plant worker role, an equipment service role, and an equipment maintenance operator role. In examples, the at least one role may be selected
SFT-106-A-PCT from among a chief marketing officer role, a product development role, a supply chain manager role, a product design role, a marketing analyst role, a product manager role, a competitive analyst role, a customer service representative role, a procurement operator, an inbound logistics operator, an outbound logistics operator, a customer role, a supplier role, a vendor role, a demand management role, a marketing manager role, a sales manager role, a service manager role, a demand forecasting role, a retail manager role, a warehouse manager role, a salesperson role, and a distribution center manager role. [01132] Fig. 91 illustrates example embodiments of a method for configuring a digital twin of a workforce. At Fig. 91, box 9100, the method includes representing an enterprise organizational structure in a digital twin of an enterprise. Next, in Fig. 91, at step 9102, the method includes parsing the structure to infer relationships among a set of roles within the organizational structure, the relationships and the roles defining a workforce of the enterprise. At Fig. 91, box 9104, the method includes configuring the presentation layer of a digital twin to represent the enterprise as a set of workforces having a set of attributes and relationships. In examples, the digital twin integrates with an enterprise resource planning system that operates on a data structure representing a set of roles in the enterprise, such that changes in the enterprise resource planning system are automatically reflected in the digital twin. In examples, the organizational structure includes hierarchical components. In examples, the hierarchical components may be embodied in a graph data structure. In examples, the workforce may be a factory operations workforce. In examples, the workforce may be a plant operations workforce. In examples, the workforce may be a resource extraction operations workforce. In examples, at least one workforce role may be selected from among a CEO role, a COO role, a CFO role, a counsel role, a board member role, a CTO role, an information technology manager role, a chief information officer role, a chief data officer role, an investor role, an engineering manager role, a project manager role, an operations manager role, and a business development role. In examples, the digital twin provides a recommendation for configuration of the workforce. Quantum computing [01133] FIG. 92 illustrates an example quantum computing system 9200 according to some embodiments of the present disclosure that may be used in conjunction with the transportation system, as described herein. In embodiments, the quantum computing system 9200 provides a framework for delivering a set of quantum computing services to one or more quantum computing clients, including clients within the transportation system, including clients associated with one or more vehicles 110, which may include various mechanical, electrical, and software components and systems, such as a powertrain 113, a suspension system 117, a steering system, a braking system, a fuel system, a charging system, seats 128, a combustion engine, an electric vehicle drive
SFT-106-A-PCT train, a transmission 119, a gear set, and the like. The vehicle may have a vehicle user interface 123, which may include a set of interfaces that include a steering system, buttons, levers, touch screen interfaces, audio interfaces, and the like as described throughout this disclosure. In some embodiments, the quantum computing system 9200 framework may be at least partially replicated in respective quantum computing clients. In these embodiments, an individual client may include some or all of the capabilities of the quantum computing system 9200, whereby the quantum computing system 9200 is adapted for the specific functions performed by the subsystems of the quantum computing client. Additionally, or alternatively, in some embodiments, the quantum computing system 9200 may be implemented as a set of microservices, such that different quantum computing clients may leverage the quantum computing system 9200 via one or more APIs exposed to the quantum computing clients. In these embodiments, the quantum computing system 9200 may be configured to perform various types of quantum computing services that may be adapted for different quantum computing clients. In either of these configurations, a quantum computing client may provide a request to the quantum computing system 9200, whereby the request is to perform a specific task (e.g., an optimization). In response, the quantum computing system 9200 executes the requested task and returns a response to the quantum computing client. [01134] Referring to FIG. 92, in some embodiments, the quantum computing system 9200 may include a quantum adapted services library 9202, a quantum general services library 9204, a quantum data services library 9206, a quantum computing engine library 9208, a quantum computing configuration service 9210, a quantum computing execution system 9212, and quantum computing API interface 9214. [01135] In embodiments, the quantum adapted services library 9202 includes transportation optimization 9201, design optimization 9203, smart contract configuration 9205, risk management 9207, fraud detection 9209, prediction 9211, cryptography 9213, and transfer pricing 9215. [01136] In embodiments, the quantum general services library 9204 includes quantum continual learning system 9232, neural network utilization 9217, resource queue management 9219, quantum location-aware data caching 9221, traditional computing optimization 9223, quantum superposition 9225, quantum teleportation 9227, quantum entanglement 9229, system resource application 9231, and quantum principle analysis 9233. In embodiments, the quantum computing execution system 9212 includes asset allocation module 9249 and quantum state output module 9250. [01137] In embodiments, the quantum data services library 9206 includes quantum input filtering service 9224, quantum output filtering 9226, quantum application filtering 9235, and quantum database engine 9228. In embodiments, the quantum computing process modules 9218 include qubits 9236, adiabatic quantum computer 9237, trapped-ion computer module 9222, on-way
SFT-106-A-PCT quantum computer 9238, and quantum annealing module 9220. In embodiments, the quantum computing engine configurations 9216 include quantum circuit modeling 9239, analog configurations 9240, digital configurations 9241, quantum Turing machine 9242, conventional computing 9243, hybrid computing 9244, and quantum error corrected 9245. [01138] In embodiments, the quantum computing engine library 9208 includes quantum computing engine configurations 9216 and quantum computing process modules 9218 based on various supported quantum models. In embodiments, the quantum computing system 9200 may support many different quantum models, including, but not limited to, the quantum circuit model, quantum Turing machine, adiabatic quantum computer, spintronic computing system (such as using spin-orbit coupling to generate spin-polarized electronic states in non-magnetic solids, such as ones using diamond materials), one-way quantum computer, quantum annealing, and various quantum cellular automata. Under the quantum circuit model, quantum circuits may be based on the quantum bit, or "qubit", which is somewhat analogous to the bit in classical computation. Qubits may be in a 1 or 0 quantum state or they may be in a superposition of the 1 and 0 states. However, when qubits are measured, the result of the measurement is always either a 1 or a 0 quantum state. The probabilities related to these two outcomes depend on the quantum state that the qubits were in immediately before the measurement. Computation is performed by manipulating qubits with quantum logic gates, which are somewhat analogous to classical logic gates. [01139] In embodiments, the quantum computing system 9200 may be physically implemented using an analog approach or a digital approach. Analog approaches may include, but are not limited to, quantum simulation, quantum annealing, and adiabatic quantum computation. In embodiments, digital quantum computers use quantum logic gates for computation. Both analog and digital approaches may use quantum bits, or qubits. [01140] In embodiments, the quantum computing system 9200 includes a quantum annealing module 9220, wherein the quantum annealing module may be configured to find the global minimum or maximum of a given objective function over a given set of candidate solutions (e.g., candidate states) using quantum fluctuations. As used herein, quantum annealing may refer to a meta-procedure for finding a procedure that identifies an absolute minimum or maximum, such as a size, length, cost, time, distance or other measure, from within a possibly very large, but finite, set of possible solutions using quantum fluctuation-based computation instead of classical computation. The quantum annealing module 9220 may be leveraged for problems where the search space is discrete (e.g., combinatorial optimization problems) with many local minima, such as finding the ground state of a spin glass or solving the traveling salesman problem.
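By way of a non-limiting illustration only, the following Python sketch approximates the kind of discrete optimization described above using classical simulated annealing over a small QUBO (quadratic unconstrained binary optimization) objective; it is not the quantum annealing module 9220 itself, and the function names, parameters, and toy problem are hypothetical.

```python
# Minimal classical sketch (not the quantum annealing module 9220): a simulated-
# annealing search over a small QUBO objective, illustrating the kind of discrete
# optimization problem with many local minima that quantum annealing targets.
import math
import random

def qubo_energy(x, Q):
    """Energy of binary vector x under QUBO matrix Q: E = x^T Q x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def anneal(Q, steps=5000, t_start=2.0, t_end=0.01, seed=0):
    """Search for a low-energy bit string; returns (best_x, best_energy)."""
    rng = random.Random(seed)
    n = len(Q)
    x = [rng.randint(0, 1) for _ in range(n)]
    best_x, best_e = x[:], qubo_energy(x, Q)
    e = best_e
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling
        i = rng.randrange(n)
        x[i] ^= 1                                          # propose a single bit flip
        e_new = qubo_energy(x, Q)
        if e_new <= e or rng.random() < math.exp((e - e_new) / t):
            e = e_new                                      # accept the move
            if e < best_e:
                best_x, best_e = x[:], e
        else:
            x[i] ^= 1                                      # reject: undo the flip
    return best_x, best_e

if __name__ == "__main__":
    # Toy 4-variable QUBO; low-energy assignments avoid adjacent 1s.
    Q = [[-1, 2, 0, 0], [0, -1, 2, 0], [0, 0, -1, 2], [0, 0, 0, -1]]
    print(anneal(Q))
```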
SFT-106-A-PCT [01141] In embodiments, the quantum annealing module 9220 starts from a quantum-mechanical superposition of all possible states (candidate states) with equal weights. The quantum annealing module 9220 may then evolve, such as following the time-dependent Schrödinger equation, a natural quantum-mechanical evolution of systems (e.g., physical systems, logical systems, or the like). In embodiments, the amplitudes of all candidate states change, realizing quantum parallelism according to the time-dependent strength of the transverse field, which causes quantum tunneling between states. If the rate of change of the transverse field is slow enough, the quantum annealing module 9220 may stay close to the ground state of the instantaneous Hamiltonian. If the rate of change of the transverse field is accelerated, the quantum annealing module 9220 may leave the ground state temporarily but produce a higher likelihood of concluding in the ground state of the final problem energy state or Hamiltonian. [01142] In embodiments, the quantum computing system 9200 may include arbitrarily large numbers of qubits and may transport ions to spatially distinct locations in an array of ion traps, building large, entangled states via photonically connected networks of remotely entangled ion chains. [01143] In some implementations, the quantum computing system 9200 includes a trapped ion computer module 9222, which may be a quantum computer that applies trapped ions to solve complex problems. Trapped ion computer module 9222 may have low quantum decoherence and may be able to construct large solution states. Ions, or charged atomic particles, may be confined and suspended in free space using electromagnetic fields. Qubits are stored in stable electronic states of each ion, and quantum information may be transferred through the collective quantized motion of the ions in a shared trap (interacting through the Coulomb force). Lasers may be applied to induce coupling between the qubit states (for single-qubit operations) or coupling between the internal qubit states and the external motional states (for entanglement between qubits). [01144] In some embodiments of the invention, a traditional computer, including a processor, memory, and a graphical user interface (GUI), may be used for designing, compiling, and providing output from the execution and the quantum computing system 9200 may be used for executing the machine language instructions. In some embodiments of the invention, the quantum computing system 9200 may be simulated by a computer program executed by the traditional computer. In such embodiments, a superposition of states of the quantum computing system 9200 can be prepared based on input from the initial conditions. Since the initialization operation available in a quantum computer can only initialize a qubit to either the |0> or |1> state, initialization to a superposition of states is physically unrealistic. For simulation purposes, however, it may be useful to bypass the initialization process and initialize the quantum computing service QNTM1114 directly.
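As a non-limiting illustration of the simulation approach described above, the following Python sketch uses a two-element state vector to show how a simulated qubit may be initialized to a basis state, placed in superposition with a Hadamard gate, or, for simulation purposes only, initialized directly to a superposition; the names and values are hypothetical and do not form part of the disclosed quantum computing system 9200.

```python
# Minimal state-vector sketch of simulating a qubit on a classical computer.
# A physical quantum computer initializes qubits to |0> or |1>; a simulation can
# instead start directly in an arbitrary superposition, as discussed above.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

def init_basis(bit=0):
    """Physically realistic initialization: |0> or |1>."""
    state = np.zeros(2, dtype=complex)
    state[bit] = 1.0
    return state

def init_superposition(alpha, beta):
    """Simulation-only shortcut: start directly in alpha|0> + beta|1>."""
    state = np.array([alpha, beta], dtype=complex)
    return state / np.linalg.norm(state)

def measure_probs(state):
    """Probabilities of observing 0 or 1 when the qubit is measured."""
    return np.abs(state) ** 2

if __name__ == "__main__":
    print(measure_probs(H @ init_basis(0)))          # ~[0.5, 0.5]
    print(measure_probs(init_superposition(1, 1j)))  # [0.5, 0.5]
```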
SFT-106-A-PCT [01145] In some embodiments, the quantum computing system 9200 provides various quantum data services, including quantum input filtering, quantum output filtering, quantum application filtering, and a quantum database engine. [01146] In embodiments, the quantum computing system 9200 may include a quantum input filtering service 9224. In embodiments, quantum input filtering service 9224 may be configured to select whether to run a model on the quantum computing system 9200 or to run the model on a classic computing system. In some embodiments, quantum input filtering service 9224 may filter data for later modeling on a classic computer. In embodiments, the quantum computing system 9200 may provide input to traditional compute platforms while filtering out unnecessary information from flowing into distributed systems. In some embodiments, the platform 9200 may trust through filtered specified experiences for intelligent agents. [01147] In embodiments, a system in the system of systems may include a model or system for automatically determining, based on a set of inputs, whether to deploy quantum computational or quantum algorithmic resources to an activity, whether to deploy traditional computational resources and algorithms, or whether to apply a hybrid or combination of them. In embodiments, inputs to a model or automation system may include demand information, supply information, financial data, energy cost information, capital costs for computational resources, development costs (such as for algorithms), energy costs, operational costs (including labor and other costs), performance information on available resources (quantum and traditional), and any of the many other data sets that may be used to simulate (such as using any of a wide variety of simulation techniques described herein and/or in the documents incorporated herein by refence) and/or predict the difference in outcome between a quantum-optimized result and a non-quantum-optimized result. A machine learned model (including in a DPANN system) may be trained, such as by deep learning on outcomes or by a data set from human expert decisions, to determine what set of resources to deploy given the input data for a given request. The model may itself be deployed on quantum computational resources and/or may use quantum algorithms, such as quantum annealing, to determine whether, where and when to use quantum systems, conventional systems, and/or hybrids or combinations. [01148] In some embodiments of the invention, the quantum computing system 9200 may include a quantum output filtering service 9226. In embodiments, the quantum output filtering service 9226 may be configured to select a solution from solutions of multiple neural networks. For example, multiple neural networks may be configured to generate solutions to a specific problem and the quantum output filtering service 9226 may select the best solution from the set of solutions. [01149] In some embodiments, the quantum computing system 9200 connects and directs a neural network development or selection process. In this embodiment, the quantum computing system
SFT-106-A-PCT 9200 may directly program the weights of a neural network such that the neural network gives the desired outputs. This quantum-programmed neural network may then operate without the oversight of the quantum computing system 9200 but will still be operating within the expected parameters of the desired computational engine. [01150] In embodiments, the quantum computing system 9200 includes a quantum database engine 9228. In embodiments, the quantum database engine 9228 is configured with in-database quantum algorithm execution. In embodiments, a quantum query language may be employed to query the quantum database engine 9228. In some embodiments, the quantum database engine may have an embedded policy engine 9230 for prioritization and/or allocation of quantum workflows, including prioritization of query workloads, such as based on overall priority as well as the comparative advantage of using quantum computing resources versus others. In embodiments, quantum database engine 9228 may assist with the recognition of entities by establishing a single identity for that is valid across interactions and touchpoints. The quantum database engine 9228 may be configured to perform optimization of data matching and intelligent traditional compute optimization to match individual data elements. The quantum computing system 9200 may include a quantum data obfuscation system for obfuscating data. [01151] The quantum computing system 9200 may include, but is not limited to, analog quantum computers, digital computers, and/or error-corrected quantum computers. Analog quantum computers may directly manipulate the interactions between qubits without breaking these actions into primitive gate operations. In embodiments, quantum computers that may run analog machines include, but are not limited to, quantum annealers, adiabatic quantum computers, and direct quantum simulators. The digital computers may operate by carrying out an algorithm of interest using primitive gate operations on physical qubits. Error-corrected quantum computers may refer to a version of gate-based quantum computers made more robust through the deployment of quantum error correction (QEC), which enables noisy physical qubits to emulate stable logical qubits so that the computer behaves reliably for any computation. Further, quantum information products may include, but are not limited to, computing power, quantum predictions, and quantum inventions. [01152] In some embodiments, the quantum computing system 9200 is configured as an engine that may be used to optimize traditional computers, integrate data from multiple sources into a decision-making process, and the like. The data integration process may involve real-time capture and management of interaction data by a wide range of tracking capabilities, both directly and indirectly related to value chain network activities. In embodiments, the quantum computing system 9200 may be configured to accept cookies, email addresses and other contact data, social media feeds, news feeds, event and transaction log data (including transaction events, network
SFT-106-A-PCT events, computational events, and many others), event streams, results of web crawling, distributed ledger information (including blockchain updates and state information), results from distributed or federated queries of data sources, streams of data from chat rooms and discussion forums, and many others. [01153] In embodiments, the quantum computing system 9200 includes a quantum register having a plurality of qubits. Further, the quantum computing system 9200 may include a quantum control system for implementing the fundamental operations on each of the qubits in the quantum register and a control processor for coordinating the operations required. [01154] In embodiments, the quantum computing system 9200 is configured to optimize the pricing of a set of goods or services. In embodiments, the quantum computing system 9200 may utilize quantum annealing to provide optimized pricing. In embodiments, the quantum computing system 9200 may use q-bit based computational methods to optimize pricing. [01155] In embodiments, the quantum computing system 9200 is configured to automatically discover smart contract configuration opportunities. Automated discovery of smart contract configuration opportunities may be based on published APIs to marketplaces and machine learning (e.g., by robotic process automation (RPA) of stakeholder, asset, and transaction types. [01156] In embodiments, quantum-established or other blockchain-enabled smart contracts enable frequent transactions occurring among a network of parties, and manual or duplicative tasks are performed by counterparties for each transaction. The quantum-established or other blockchain acts as a shared database to provide a secure, single source of truth, and smart contracts automate approvals, calculations, and other transacting activities that are prone to lag and error. Smart contracts may use software code to automate tasks, and in some embodiments, this software code may include quantum code that enables extremely sophisticated optimized results. [01157] In embodiments, the quantum computing system 9200 or other system in the system of systems may include a quantum-enabled or other risk identification module that is configured to perform risk identification and/or mitigation. The steps that may be taken by the risk identification module may include, but are not limited to, risk identification, impact assessment, and the like. In some embodiments, the risk identification module determines a risk type from a set of risk types. In embodiments, risks may include, but are not limited to, preventable, strategic, and external risks. Preventable risks may refer to risks that come from within and that can usually be managed on a rule-based level, such as employing operational procedures monitoring and employee and manager guidance and instruction. Strategy risks may refer to those risks that are taken on voluntarily to achieve greater rewards. External risks may refer to those risks that originate outside and are not in the businesses’ control (such as natural disasters). External risks are not preventable or desirable. In embodiments, the risk identification module can determine a predicted cost for many categories
SFT-106-A-PCT of risk. The risk identification module may perform a calculation of current and potential impact on an overall risk profile. In embodiments, the risk identification module may determine the probability and significance of certain events. Additionally, or alternatively, the risk identification module may be configured to anticipate events. [01158] In embodiments, the quantum computing system 9200 or other system of the platform 9200 is configured for graph clustering analysis for anomaly and fraud detection. [01159] In some embodiments, the quantum computing system 9200 includes a quantum prediction module, which is configured to generate predictions. Furthermore, the quantum prediction module may construct classical prediction engines to further generate predictions, reducing the need for ongoing quantum calculation costs, which, can be substantial compared to traditional computers. [01160] In embodiments, the quantum computing system 9200 may include a quantum principal component analysis (QPCA) algorithm that may process input vector data if the covariance matrix of the data is efficiently obtainable as a density matrix, under specific assumptions about the vectors given in the quantum mechanical form. It may be assumed that the user has quantum access to the training vector data in a quantum memory. Further, it may be assumed that each training vector is stored in the quantum memory in terms of its difference from the class means. These QPCA algorithms can then be applied to provide for dimension reduction using the calculational benefits of a quantum method. [01161] In embodiments, the quantum computing system 9200 is configured for graph clustering analysis for certified randomness for proof-of-stake blockchains. Quantum cryptographic schemes may make use of quantum mechanics in their designs, which enables such schemes to rely on presumably unbreakable laws of physics for their security. The quantum cryptography schemes may be information-theoretically secure such that their security is not based on any non- fundamental assumptions. In the design of blockchain systems, information-theoretic security is not proven. Rather, classical blockchain technology typically relies on security arguments that make assumptions about the limitations of attackers’ resources. [01162] In embodiments, the quantum computing system 9200 is configured for detecting adversarial systems, such as adversarial neural networks, including adversarial convolutional neural networks. For example, the quantum computing system 9200 or other system of the platform 9200 may be configured to detect fake trading patterns. [01163] In embodiments, the quantum computing system 9200 includes a quantum continual learning (QCL) system 9232, wherein the QCL system 9232 learns continuously and adaptively about the external world, enabling the autonomous incremental development of complex skills and knowledge by updating a quantum model to account for different tasks and data distributions. The
SFT-106-A-PCT QCL system 9232 operates on a realistic time scale where data and/or tasks become available only during operation. Previous quantum states can be superimposed into the quantum engine to provide the capacity for QCL. Because the QCL system 9232 is not constrained to a finite number of variables that can be processed deterministically, it can continuously adapt to future states, producing a dynamic continual learning capability. The QCL system 9232 may have applications where data distributions stay relatively static, but where data is continuously being received. For example, the QCL system 9232 may be used in quantum recommendation applications or quantum anomaly detection systems where data is continuously being received and where the quantum model is continuously refined to provide for various outcomes, predictions, and the like. QCL enables asynchronous alternate training of tasks and only updates the quantum model on the real- time data available from one or more streaming sources at a particular moment. [01164] In embodiments, the QCL system 9232 operates in a complex environment in which the target data keeps changing based on a hidden variable that is not controlled. In embodiments, the QCL system 9232 can scale in terms of intelligence while processing increasing amounts of data and while maintaining a realistic number of quantum states. The QCL system 9232 applies quantum methods to drastically reduce the requirement for storage of historic data while allowing the execution of continuous computations to provide for detail-driven optimal results. In embodiments, a QCL system 9232 is configured for unsupervised streaming perception data since it continually updates the quantum model with new available data. [01165] In embodiments, QCL system 9232 enables multi-modal-multi-task quantum learning. The QCL system 9232 is not constrained to a single stream of perception data but allows for many streams of perception data from different sensors and input modalities. In embodiments, the QCL system 9232 can solve multiple tasks by duplicating the quantum state and executing computations on the duplicate quantum environment. A key advantage to QCL is that the quantum model does not need to be retrained on historic data, as the superposition state holds information relating to all prior inputs. Multi-modal and multi-task quantum learning enhance quantum optimization since it endows quantum machines with reasoning skills through the application of vast amounts of state information. [01166] In embodiments, the quantum computing system 9200 supports quantum superposition, or the ability of a set of states to be overlaid into a single quantum environment. [01167] In embodiments, the quantum computing system 9200 supports quantum teleportation. For example, information may be passed between photons on chipsets even if the photons are not physically linked. [01168] In embodiments, the quantum computing system 9200 may include a quantum transfer pricing system. Quantum transfer pricing allows for the establishment of prices for the goods
SFT-106-A-PCT and/or services exchanged between subsidiaries, affiliates, or commonly controlled companies that are part of a larger enterprise and may be used to provide tax savings for corporations. In embodiments, solving a transfer pricing problem involves testing the elasticities of each system in the system of systems with a set of tests. In these embodiments, the testing may be done in periodic batches and then may be iterated. As described herein, transfer pricing may refer to the price that one division in a company charges another division in that company for goods and services. [01169] In embodiments, the quantum transfer pricing system consolidates all financial data related to transfer pricing on an ongoing basis throughout the year for all entities of an organization wherein the consolidation involves applying quantum entanglement to overlay data into a single quantum state. In embodiments, the financial data may include profit data, loss data, data from intercompany invoices (potentially including quantities and prices), and the like. [01170] In embodiments, the quantum transfer pricing system may interface with a reporting system that reports segmented profit and loss, transaction matrices, tax optimization results, and the like based on superposition data. In embodiments, the quantum transfer pricing system automatically generates forecast calculations and assesses the expected local profits for any set of quantum states. [01171] In embodiments, the quantum transfer pricing system may integrate with a simulation system for performing simulations. Suggested optimal values for new product prices can be discussed cross-border via integrated quantum workflows and quantum teleportation communicated states. [01172] In embodiments, quantum transfer pricing may be used to proactively control the distribution of profits within a multi-national enterprise (MNE), for example, during the course of a calendar year, enabling the entities to achieve arms-length profit ranges for each type of transaction. [01173] In embodiments, the QCL system 9232 may use a number of methods to calculate quantum transfer pricing, including the quantum comparable uncontrolled price (QCUP) method, the quantum cost plus percent method (QCPM), the quantum resale price method (QRPM), the quantum transaction net margin method (QTNM), and the quantum profit-split method. [01174] The QCUP method may apply quantum calculations to find comparable transactions made between related and unrelated organizations, potentially through the sharing of quantum superposition data. By comparing the price of goods and/or services in an intercompany transaction with the price used by independent parties through the application of a quantum comparison engine, a benchmark price may be determined. [01175] The QCPM method may compare the gross profit to the cost of sales, thus measuring the cost-plus mark-up (the actual profit earned from the products). Once this mark-up is determined,
SFT-106-A-PCT it should be equal to what a third party would make for a comparable transaction in a comparable context with similar external market conditions. In embodiments, the quantum engine may simulate the external market conditions. [01176] The QRPM method looks at groups of transactions rather than individual transactions and is based on the gross margin or difference between the price at which a product is purchased and the price at which it is sold to a third party. In embodiments, the quantum engine may be applied to calculate the price differences and to record the transactions in the superposition system. [01177] The QTNM method is based on the net profit of a controlled transaction rather than comparable external market pricing. The calculation of the net profit is accomplished through a quantum engine that can consider a wide variety of factors and solve optimally for the product price. The net profit may then be compared with the net profit of independent enterprises, potentially using quantum teleportation. [01178] The quantum profit-split method may be used when two related companies work on the same business venture, but separately. In these applications, the quantum transfer pricing is based on profit. The quantum profit-split method applies quantum calculations to determine how the profit associated with a particular transaction would have been divided between the independent parties involved. [01179] In embodiments, the quantum computing system 9200 may leverage one or more artificial neural networks to fulfill the request of a quantum computing client. For example, the quantum computing system 9200 may leverage a set of artificial neural networks to identify patterns in images (e.g., using image data from a liquid lens system), perform binary matrix factorization, perform topical content targeting, perform similarity-based clustering, perform collaborative filtering, perform opportunity mining, or the like. [01180] In embodiments, the system of systems may include a hybrid computing allocation system for prioritization and allocation of quantum computing resources and traditional computing resources. In embodiments, the prioritization and allocation of quantum computing resources and traditional computing resources may be measure-based (e.g., measuring the extent of the advantage of the quantum resource relative to other available resources), cost-based, optimality-based, speed-based, impact-based, or the like. In some embodiments, the hybrid computing allocation system is configured to perform time-division multiplexing between the quantum computing system 9200 and a traditional computing system. In embodiments, the hybrid computing allocation system may automatically track and report on the allocation of computational resources, the availability of computational resources, the cost of computational resources, and the like.
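By way of a non-limiting illustration, the following Python sketch shows one hypothetical way a hybrid computing allocation system could score a task and route it to quantum, traditional, or hybrid resources based on estimated advantage, cost, and urgency; the thresholds, field names, and routing rule are illustrative assumptions only and are not the disclosed allocation system.

```python
# Illustrative sketch only: a simple rule for routing a task to quantum,
# traditional, or hybrid resources based on estimated advantage and cost.
from dataclasses import dataclass

@dataclass
class TaskEstimate:
    quantum_speedup: float     # expected runtime ratio (classical / quantum)
    quantum_cost: float        # estimated cost of quantum execution, arbitrary units
    classical_cost: float      # estimated cost of classical execution, arbitrary units
    deadline_critical: bool    # whether speed dominates the decision

def allocate(task: TaskEstimate) -> str:
    """Return 'quantum', 'classical', or 'hybrid' for the given estimates."""
    advantage = task.quantum_speedup * task.classical_cost / max(task.quantum_cost, 1e-9)
    if task.deadline_critical and task.quantum_speedup > 1.0:
        return "quantum"
    if advantage > 2.0:
        return "quantum"
    if advantage > 1.0:
        return "hybrid"        # e.g., quantum subroutine inside a classical workflow
    return "classical"

if __name__ == "__main__":
    print(allocate(TaskEstimate(quantum_speedup=10.0, quantum_cost=5.0,
                                classical_cost=2.0, deadline_critical=False)))
```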
SFT-106-A-PCT [01181] In embodiments, the quantum computing system 9200 may be leveraged for queue optimization for utilization of quantum computing resources, including context-based queue optimizations. [01182] In embodiments, the quantum computing system 9200 may support quantum- computation-aware location-based data caching. [01183] In embodiments, the quantum computing system 9200 may be leveraged for optimization of various system resources in the system of systems, including the optimization of quantum computing resources, traditional computing resources, energy resources, human resources, robotic fleet resources, smart container fleet resources, I/O bandwidth, storage resources, network bandwidth, attention resources, or the like. [01184] The quantum computing system 9200 may be implemented where a complete range of capabilities are available to or as part of any configured service. Configured quantum computing services may be configured with subsets of these capabilities to perform specific predefined function, produce newly defined functions, or various combinations of both. [01185] FIG. 93 illustrates quantum computing service request handling 9300 according to some embodiments of the present disclosure. In embodiments, the request handling 9300 includes inputs 9301, applicability analysis 9303, quantum computing configuration 9305, quantum computing execution 9307, and quantum computing configuration development 9309. In embodiments, inputs 9301 include external data requests 9306, directed quantum request 9302, and general quantum request 9304. In embodiments, applicability analysis 9303 includes quantum set library 9308 and quantum matrix analysis 9310. In embodiments, quantum computing configuration 9305 includes resource and priority analysis 9312 and quantum computing configuration 9314. In embodiments, quantum computing execution 9307 includes asset allocation and execution 9316 and quantum state output 9318. In embodiments, quantum computing configuration development 9309 includes quantum instance training 9320, quantum instance training data 9322, and alternate configuration analysis 9324. [01186] A directed quantum computing request 9302 may come from one or more quantum-aware devices or stack of devices, where the request is for known application configured with specific quantum instance(s), quantum computing engine(s), or other quantum computing resources, and where data associated with the request may be preprocessed or otherwise optimized for use with quantum computing. [01187] A general quantum computing request 9304 may come from any system in the system of systems or configured service, where the requestor has determined that quantum computing resources may provide additional value or other improved outcomes. Improved outcomes may also be suggested by the quantum computing service in association with some form of monitoring and
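As a non-limiting illustration of the request handling flow of Fig. 93, the following Python sketch shows how an incoming request might be checked against a quantum set library and either routed for execution on a known configuration or handed off to configuration development; the class names, library contents, and return values are hypothetical assumptions, not the disclosed implementation.

```python
# Hypothetical sketch of the Fig. 93 flow: applicability analysis against a
# library of known quantum configurations, then execution or configuration
# development. All names and structures are illustrative only.
from dataclasses import dataclass, field

@dataclass
class QuantumRequest:
    kind: str                  # "directed" or "general"
    operation: str             # e.g., "route_optimization"
    payload: dict = field(default_factory=dict)

class QuantumSetLibrary:
    """Stand-in for a library of known quantum instances and engines."""
    def __init__(self):
        self._configs = {"route_optimization": "annealing_v1"}

    def lookup(self, operation: str):
        return self._configs.get(operation)

def handle(request: QuantumRequest, library: QuantumSetLibrary) -> str:
    """Applicability analysis -> configuration -> execution (or training)."""
    config = library.lookup(request.operation)
    if config is None:
        # No known configuration: hand off to instance training / alternate analysis.
        return f"queued for configuration development: {request.operation}"
    # Resource and priority analysis would run here before execution.
    return f"executing {request.operation} on configuration {config}"

if __name__ == "__main__":
    lib = QuantumSetLibrary()
    print(handle(QuantumRequest("directed", "route_optimization"), lib))
    print(handle(QuantumRequest("general", "fraud_detection"), lib))
```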
SFT-106-A-PCT analysis. For a general quantum computing request 9304, input data may not be structured or formatted as necessary for quantum computing. [01188] In embodiments, external data requests 9306 may include any available data that may be necessary for training new quantum instances. The sources of such requests could be public data, sensors, ERP systems, and many others. [01189] Incoming operating requests and associated data may be analyzed using a standardized approach that identifies one or more possible sets of known quantum instances, quantum computing engines, or other quantum computing resources that may be applied to perform the requested operation(s). Potential existing sets may be identified in the quantum set library 9308. [01190] In embodiments, the quantum computing system 9200 includes a quantum computing configuration service 9210. The quantum computing configuration service may work alone or with the intelligence service 9234 to select a best available configuration using a resource and priority analysis that also includes the priority of the requestor. The quantum computing configuration service may provide a solution (YES) or determine that a new configuration is required (NO). [01191] In embodiments, the quantum computing configuration service 9210 includes intelligence service 9234, resource and priority module 9246, quantum set configuration module 9247, and library management module 9248. [01192] In one example, the requested set of quantum computing services may not exist in the quantum set library 9308. In this example, one or more new quantum instances must be developed (trained) with the intelligence service 9234 using available data. In embodiments, alternate configurations may be developed with assistance from the intelligence service 9234 to identify alternate ways to provide all or some of the requested quantum computing services until appropriate resources become available. For example, a quantum/traditional hybrid model may be possible that provides the requested service, but at a slower rate. [01193] In embodiments, alternate configurations may be developed with assistance from the intelligence service 9234 to identify alternate and possibly temporary ways to provide all or some of the requested quantum computing services. For example, a hybrid quantum/traditional model may be possible that provides the requested service, but at a slower rate. This may also include a feedback learning loop to adjust services in real time or to improve stored library elements. [01194] When a quantum computing configuration has been identified and is available, it is allocated and programmed for execution and delivery of one or more quantum states (solutions). Biology-based systems [01195] FIG. 94 shows a thalamus service 9400 and a set of input sensors streaming data from various sources across a system 9402, including a transportation system, as described herein, with its centrally managed data sources 9404. In embodiments, the thalamus service 9400 filters the data streaming into
the control system 9402 such that the control system is never overwhelmed by the total volume of information. In embodiments, the thalamus service 9400 provides an information suppression mechanism for information flows within the system. This mechanism monitors all data streams and strips away irrelevant data streams by ensuring that the maximum data flows from all input sensors are always constrained. Input sensors may include, but are not limited to, sensors associated with one or more vehicles 110, which may include various mechanical, electrical, and software components and systems, such as a powertrain 113, a suspension system 117, a steering system, a braking system, a fuel system, a charging system, seats 128, a combustion engine, an electric vehicle drive train, a transmission 119, a gear set, and the like. The vehicle may have a vehicle user interface 123, which may include a set of interfaces that include a steering system, buttons, levers, touch screen interfaces, audio interfaces, and the like as described throughout this disclosure.
[01196] The thalamus service 9400 may be a gateway for all communication that responds to the prioritization of the control system 9402. The control system 9402 may decide to change the prioritization of the data streamed from the thalamus service 9400, for example, during a known fire in an isolated area, and the event may direct the thalamus service 9400 to continue to provide flame sensor information despite the fact that the majority of this data is not unusual. The thalamus service 9400 may be an integral part of the overall system communication framework.
[01197] In embodiments, the control system 9402 includes the thalamus service 9400, intelligence service 9407, quantum computing service 9426, data interface(s) 9427, and system subsystems 9428. In embodiments, the data sources 9404 include analyses 9422, databases 9423, sensors 9424, and reports 9425.
[01198] In embodiments, the thalamus service 9400 includes an intake management system 9406. The intake management system 9406 may be configured to receive and process multiple large datasets by converting them into data streams that are sized and organized for subsequent use by a central control system 9402 operating within one or more systems. For example, a robot may include vision and sensing systems that are used by its central control system 9402 to identify and move through an environment in real time. The intake management system 9406 can facilitate robot decision-making by parsing, filtering, classifying, or otherwise reducing the size and increasing the utility of multiple large datasets that would otherwise overwhelm the central control system 9402. In embodiments, the intake management system 9406 may include an intake controller 9408 that works with an intelligence system 9436 to evaluate incoming data and take actions based on evaluation results. Evaluations and actions may include specific instruction sets received by the thalamus service 9400, for example, the use of a set of specific compression and prioritization tools stipulated within a "Networking" library module. In another example, thalamus service inputs may direct the use of specific filtering and suppression techniques. In a third
example, thalamus service inputs may stipulate data filtering associated with an area of interest such as a certain type of financial transaction. The intake management system is also configured to recognize and manage datasets that are in a vectorized format such as PMCP, where they may be passed directly to central control, or alternatively deconstructed and processed separately. The intake management system 9406 may include a learning module 9430 that receives data from external sources that enables improvement and creation of application and data management library modules. In some cases, the intake management system may request external data to augment existing datasets.
[01199] In embodiments, the intake management system 9406 includes intake learning module 9430, intake controller 9408, intake application library 9432, intake data management system 9434, intelligence system 9436, and configured thalamus parameters 9438. In embodiments, the intake application library 9432 includes networking 9440 and security 9442. In embodiments, the intake data management system 9434 includes prioritizing 9450, area focus 9452, formatting 9454, filtering 9456, suppression 9458, and combining 9460.
[01200] In embodiments, the control system 9402 may direct the thalamus service 9400 to alter its filtering to provide more input from a set of specific sources. This indication for more input is handled by the thalamus service 9400 by suppressing other information flows so as to constrain the total data flow to within a volume the central control system can handle.
[01201] The thalamus service 9400 can operate by suppressing data based on several different factors, and in embodiments, the default factor may be the unusualness of the data. Determining this unusualness involves constant monitoring of all input sensors and evaluating how unusual the data is.
[01202] In some embodiments, the thalamus service 9400 may suppress data based on geospatial factors. The thalamus service 9400 may be aware of the geospatial location of all sensors and is able to look for unusual patterns in data based on geospatial context and suppress data accordingly.
[01203] In some embodiments, the thalamus service 9400 may suppress data based on temporal factors. Data can be suppressed temporally, for example, if the cadence of the data can be reduced such that the overall data stream is filtered to a level that can be handled by the central processing unit.
[01204] In some embodiments, the thalamus service 9400 may suppress data based on contextual factors. In embodiments, context-based filtering is a filtering event in which the thalamus service 9400 is aware of some context-based event. In this context, the filtering is made to suppress information flows not relating to the data from the event.
[01205] In embodiments, the control system 9402 can override the thalamus filtering and decide to focus on a completely different area for any specific reason.
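As a purely illustrative aid to the suppression behaviors described in paragraphs [01200] through [01205], the following minimal Python sketch shows one way an intake filter might rank incoming streams by unusualness, honor a control-system priority override, and suppress everything that exceeds a fixed volume budget. The class and field names (StreamSample, ThalamusFilter, budget_bytes, priority_override) are hypothetical and are not recited elements of the disclosed system.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class StreamSample:
        source_id: str        # e.g., "flame_sensor_12"
        payload_size: int     # bytes in this sample
        unusualness: float    # 0.0 (routine) to 1.0 (highly unusual)

    @dataclass
    class ThalamusFilter:
        budget_bytes: int     # volume per cycle the central control system can handle
        priority_override: Dict[str, float] = field(default_factory=dict)

        def select(self, samples: List[StreamSample]) -> List[StreamSample]:
            # Rank by unusualness plus any control-system boost, then admit greedily
            # until the volume budget is exhausted; the remainder is suppressed.
            ranked = sorted(
                samples,
                key=lambda s: s.unusualness + self.priority_override.get(s.source_id, 0.0),
                reverse=True,
            )
            admitted, used = [], 0
            for sample in ranked:
                if used + sample.payload_size <= self.budget_bytes:
                    admitted.append(sample)
                    used += sample.payload_size
            return admitted

    # Example: during a known fire event, the control system boosts flame sensors
    # so that their routine data is still admitted within the budget.
    fire_filter = ThalamusFilter(budget_bytes=10_000, priority_override={"flame_sensor_12": 1.0})

The budget-and-ranking approach in this sketch reflects the constraint described above: total admitted volume stays bounded regardless of how many streams are attached, and overrides reshape priorities without lifting the bound.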
[01206] In embodiments, the system may include a vector module. In embodiments, the vector module may be used to convert data to a vectorized format. In many examples, the conversion of a long sequence of oftentimes similar numbers into a vector, which may include short-term future predictions, makes the communication both smaller in size and forward looking in nature. In embodiments, forecast methods may include: moving average; weighted moving average; Kalman filtering; exponential smoothing; autoregressive moving average (ARMA) (forecasts depend on past values of the variable being forecast, and on past prediction errors); autoregressive integrated moving average (ARIMA) (ARMA on the period-to-period change in the forecasted variable); extrapolation; linear prediction; trend estimation (predicting the variable as a linear or polynomial function of time); growth curve (e.g., statistics); and recurrent neural network.
[01207] In embodiments, the system may include a predictive model communication protocol (PMCP) system to support vector-based predictive models and a predictive model communication protocol (PMCP). Under the PMCP protocol, instead of traditional streams where individual data items are transmitted, vectors representing how the data is changing, or what the forecast trend in the data is, are communicated. The PMCP system may transmit actual model parameters to receiving units such that edge devices can apply the vector-based predictive models to determine future states. For example, each automated device in a network could train a regression model or a neural network, constantly fitting the data streams to current input data. All automated devices leveraging the PMCP system would be able to react in advance of events actually happening, rather than waiting for depletion of inventory for an item, for example, to occur. Continuing the example, the stateless automated device can react to the forecast future state and make the necessary adjustments, such as ordering more of the item.
[01208] In embodiments, the PMCP system includes a PMCP device interface 9410. In embodiments, the PMCP device interface 9410 relates to PMCP API 9411, intelligence system 9412, PMCP controller 9413, classification 9414, behavior analysis 9415, prediction 9416, augmentation 9417, networking module 9418, security module 9419, ETL interface 9420, and PMCP database 9421.
[01209] In embodiments, the PMCP system enables communicating vectorized information and algorithms that allow vectorized information to be processed to refine the known information regarding a set of probability-based states. For example, the PMCP system may support communicating the vectorized information gathered at each point of a sensor reading but also adding algorithms that allow the information to be processed. Applied in an environment with large numbers of sensors with different accuracies and reliabilities, the probabilistic vector-based mechanism of the PMCP system allows large numbers of, if not all, data streams to combine to produce refined models representing the current state, past states and likely future states of goods.
Approximation methods may include importance sampling, and the resulting algorithm is known as a particle filter, condensation algorithm, or Monte Carlo localization.
[01210] In embodiments, the vector-based communication of the PMCP system allows future security events to be anticipated, for example, by simple edge node devices that are running in a semi-autonomous way. The edge devices may be responsible for building a set of forecast models showing trends in the data. The parameters of this set of forecast models may be transmitted using the PMCP system.
[01211] Security systems are constantly looking for vectors showing change in state, as unusual events tend to trigger multiple vectors to show unusual patterns. In a security setting, seeing multiple simultaneous unusual vectors may trigger escalation and a response by, for example, the control system. In addition, one of the major areas of communication security concern is the protection of stored data; in a vector-based system, data does not need to be stored, and so the risk of data loss is simply removed.
[01212] In embodiments, PMCP data can be directly stored in a queryable database where the actual data is reconstructed dynamically in response to a query. In some embodiments, the PMCP data streams can be used to recreate the fine-grained data so they become part of an Extract, Transform, and Load (ETL) process.
[01213] In embodiments where there are edge devices with very limited capacities, additional edge communication devices can be added to convert the data into PMCP format. For example, to protect distributed medical equipment from hacking attempts, many manufacturers will choose to not connect the device to any kind of network. To overcome this limitation, the medical equipment may be monitored using sensors, such as cameras, sound monitors, voltage detectors for power usage, chemical sniffers, and the like. Functional unit learning and other data techniques may be used to determine the actual usage of the medical equipment while it remains detached from the network.
[01214] Communication using vectorized data allows for a constant view of likely future states. This allows the future state to be communicated, allowing various entities to respond ahead of future state requirements without needing access to the fine-grained data.
[01215] In embodiments, the PMCP protocol can be used to communicate relevant information about production levels and future trends in production. This PMCP data feed, with its built-in data obfuscation, allows real contextual information about production levels to be shared with consumers, regulators, and other entities without requiring sensitive data to be shared. For example, when choosing to purchase a new car, if there is an upcoming shortage of red paint, then the consumer could be encouraged to choose a different color in order to maintain a desired delivery time. PMCP and vector data enable simple data-informed interactive systems that users can apply
without having to build enormously complex big data engines. As an example, an upstream manufacturer has an enormously complex task of coordinating many downstream consumption points. Through the use of PMCP, the manufacturer is able to provide real information to consumers without the need to store detailed data and build complex models.
[01216] In embodiments, edge device units may communicate via the PMCP system to show direction of movement and likely future positions. For example, a moving robot can communicate its likely track of future movement.
[01217] In embodiments, the PMCP system enables visual representations of vector-based data (e.g., via a user interface), highlighting areas of concern without the need to process enormous volumes of data. The representation allows for the display of many monitored vector inputs. The user interface can then display information relating to the key items of interest, specifically vectors showing areas of unusual or troublesome movement. This mechanism allows sophisticated models that are built at the edge devices and edge nodes to feed into end user communications in a visually informative way.
[01218] Functional units produce a constant stream of "boring" data. By changing from producing data to being monitored for problems, issues with the logistical modules are highlighted without the need for scrutiny of fine-grained data. In embodiments, the vectorizing process could constantly manage a predictive model showing future state. In the context of maintenance, these changes to the parameters in the predictive model are in and of themselves predictors of change in operational parameters, potentially indicating the need for maintenance. In embodiments, functional areas are not always designed to be connected, but by allowing for an external device to virtually monitor devices, functional areas that do not allow for connectivity can become part of the information flow in the goods. This concept extends to allow functional areas that have limited connectivity to be monitored effectively by embellishing their data streams with vectorized monitored information. Placing an automated device in the proximity of the functional unit that has limited or no connectivity allows capture of information from the devices without the requirement of connectivity. There is also potential to add training data capture functional units for these unconnected or limitedly connected functional areas. These training data capture functional units are typically quite expensive and can provide high quality monitoring data, which is used as an input into the proximity edge monitoring device to provide data for supervised learning algorithms.
[01219] Oftentimes, locations are laden with electrical interference, causing fundamental challenges with communications. The traditional approach of streaming all the fine-grained data is dependent on the completeness of the data stream. For example, if an edge device were to go offline for 10 minutes, the streaming data and its information would be lost. With vectorized
communication, the offline unit continues to refine the predictive model until the moment when it reconnects, which allows the updated model to be transmitted via the PMCP system.
[01220] In embodiments, systems and devices may be based on the PMCP protocol. For example, cameras and vision systems (e.g., liquid lens systems), user devices, sensors, robots, smart containers, and the like may use PMCP and/or vector-based communication. By using vector-based cameras, for example, only information relating to the movement of items is transmitted. This reduces the data volume and by its nature filters information about static items, showing only the changes in the images and focusing the data communication on elements of change. The overall shift in communication to communication of change is similar to how the human process of sight functions, where stationary items are not even communicated to the higher levels of the brain.
[01221] Radio Frequency Identification allows for massive volumes of mobile tags to be tracked in real-time. In embodiments, the movement of the tags may be communicated as vector information via the PMCP protocol, as this form of communication is naturally suited to handling information regarding the location of a tag within the goods. Adding the ability to show the future state of the location using predictive models that can use paths of prior movement allows the goods to change the fundamental communication mechanism to one where units consuming data streams are consuming information about the likely future state of the goods. In embodiments, each tagged item may be represented as a probability-based location matrix showing the likely probability of the tagged item being at a position in space. The communication of movement shows the transformation of the location probability matrix to a new set of probabilities. This probabilistic locational overview provides for constant modeling of areas of likely intersection of moving units and allows for refinement of the probabilistic view of the location of items. Moving to a vector-based probability matrix allows units to constantly handle the inherent uncertainty in the measurement of status of various items, entities, and the like. In embodiments, status includes, but is not limited to, location, temperature, movement and power consumption.
[01222] In embodiments, continuous connectivity is not required for continuous monitoring of sensor inputs in a PMCP-based communication system. For example, a mobile robotic device with a plurality of sensors will continue to build models and predictions of data streams while disconnected from the network, and upon reconnection, the updated models are communicated. Furthermore, other systems or devices that use input from the monitored system or device can apply the best known, typically last communicated, vector predictions to continue to maintain a probabilistic understanding of the states of the goods.
[01223] FIG. 95 is a simplified thalamus service workflow 9500 according to some embodiments. In the example provided, the intake controller 9408 receives sensor, external system, and process data 9502 and input from preconfigured PMCP devices with local processing 9504. The decision
9506 related to PMCP proceeds to decision 9508 when the decision is "no" and proceeds to decision 9510 when the decision is "yes." If the data is not reduced as decided in decision 9508, the workflow proceeds to intake management system 9406. If the data is reduced according to decision 9508, the workflow 9500 proceeds to decision 9510. When the decision 9510 is yes regarding a PMCP consumer, the workflow 9500 proceeds to PMCP device interface 9512. When the decision 9510 is no regarding a PMCP consumer, the workflow 9500 proceeds to an ETL (extract, transform, load) process from PMCP database 9514. The output relates to downstream system of systems data consumer processing 9516.
Dual Process Artificial Neural Network
[01224] In embodiments, the transportation system includes a dual process artificial neural network (DPANN) system 9500. The DPANN system 9500 includes an artificial neural network (ANN) having behaviors and operational processes (such as decision-making) that are products of a training system and a retraining system. The training system is configured to perform automatic, trained execution of ANN operations. The retraining system performs effortful, analytical, intentional retraining of the ANN, such as based on one or more relevant aspects of the ANN, such as memory, one or more input data sets (including time information with respect to elements in such data sets), one or more goals or objectives (including ones that may vary dynamically, such as periodically and/or based on contextual changes, such as ones relating to the usage context of the ANN), and/or others. In cases involving memory-based retraining, the memory may include original/historical training data and refined training data. The DPANN system 9500 includes a dual process learning function (DPLF) configured to manage and perform an ongoing data retention process. The DPLF (including, where applicable, a memory management process) facilitates retraining and refining of behavior of the ANN. The DPLF provides a framework by which the ANN creates outputs such as predictions, classifications, recommendations, conclusions and/or other outputs based on historic inputs, new inputs, and new outputs (including outputs configured for specific use cases, including ones determined by parameters of the context of utilization, which may include performance parameters such as latency parameters, accuracy parameters, consistency parameters, bandwidth utilization parameters, processing capacity utilization parameters, prioritization parameters, energy utilization parameters, and many others).
[01225] In embodiments, the DPANN system 9500 associated with the transportation system may store training data, thereby allowing for constant retraining based on results of decisions, predictions, and/or other operations of the ANN, as well as allowing for analysis of training data upon the outputs of the ANN. The management of entities stored in the memory allows the construction and execution of new models, such as ones that may be processed, executed or otherwise performed by or under management of the training system. The DPANN system 9500
uses instances of the memory to validate actions (e.g., in a manner similar to the thinking of a biological neural network, including retrospective or self-reflective thinking about whether actions that were undertaken under a given situation were optimal) and perform training of the ANN, including training that intentionally feeds the ANN with appropriate sets of memories (i.e., ones that produce favorable outcomes given the performance requirements for the ANN).
[01226] In embodiments, the DPLF may be or include the continued process of retention of one or more training datasets and/or memories stored in the memory over time. The DPLF thereby allows the ANN to apply existing neural functions and draw upon sets of past events (including ones that are intentionally varied and/or curated for distinct purposes), such as to frame understanding of and behavior within present, recent, and/or new scenarios, including in simulations, during training processes, and in fully operational deployments of the ANN. The DPLF may provide the ANN with a framework by which the ANN may analyze, evaluate, and/or manage data, such as data related to the past, present and future. As such, the DPLF plays a crucial role in training and retraining the ANN via the training system and the retraining system.
[01227] In embodiments, the DPLF is configured to perform a dual-process operation to manage existing training processes and is also configured to manage and/or perform new training processes, i.e., retraining processes. In embodiments, each instance of the ANN is trained via the training system and configured to be retrained via the retraining system. The ANN encodes training and/or retraining datasets, stores the datasets, and retrieves the datasets during both training via the training system and retraining via the retraining system. The DPANN system 9500 may recognize whether a dataset (the term dataset in this context optionally including various subsets, supersets, combinations, permutations, elements, metadata, augmentations, or the like, relative to a base dataset used for training or retraining), storage activity, processing operation and/or output, has characteristics that natively favor the training system versus the retraining system based on its respective inputs, processing (e.g., based on its structure, type, models, operations, execution environment, resource utilization, or the like) and/or outcomes (including outcome types, performance requirements (including contextual or dynamic requirements), and the like). For example, the DPANN system 9500 may determine that poor performance of the training system on a classification task may indicate a novel problem for which the training of the ANN was not adequate (e.g., in type of data set, nature of input models and/or feedback, quantity of training data, quality of tagging or labeling, quality of supervision, or the like), for which the processing operations of the ANN are not well-suited (e.g., where they are prone to known vulnerabilities due to the type of neural network used, the type of models used, etc.), and that may be solved by engaging the retraining system to retrain the model to teach the model to learn to solve the new classification problem (e.g., by feeding it many more labeled instances of correctly classified
SFT-106-A-PCT items). With periodic or continuous evaluation of the performance of the ANN, the DPANN system may subsequently determine that highly stable performance of the ANN (such as where only small improvements of the ANN occur over many iterations of retraining by the retraining system) indicates readiness for the training system to replace the retraining system (or be weighted more favorably where both are involved). Over longer periods of time, cycles of varying performance may emerge, such as where a series of novel problems emerge, such that the retraining system of the DPANN is serially engaged, as needed, to retrain the ANN and/or to augment the ANN by providing a second source of outputs (which may be fused or combined with ANN outputs to provide a single result (with various weightings across them), or may be provided in parallel, such as enabling comparison, selection, averaging, or context- or situation-specific application of the respective outputs). [01228] In embodiments, the ANN is configured to learn new functions in conjunction with the collection of data according to the dual-process training of the ANN via the training system and the retraining system. The DPANN system 9500 performs analysis of the ANN via the training system and performs initial training of the ANN such that the ANN gains new internal functions (or internal functions are subtracted or modified, such as where existing functions are not contributing to favorable outcomes). After the initial training, the DPANN system 9500 performs retraining of the ANN via the retraining system. To perform the retraining, the retraining system evaluates the memory and historic processing of the ANN to construct targeted DPLF processes for retraining. The DPLF processes may be specific to identified scenarios. The ANN processes can run in parallel with the DPLF processes. By way of example, the ANN may function to operate a particular make and model of a self-driving car after the initial training by the training system. The DPANN system 9500 may perform retraining of the functions of the ANN via the retraining system, such as to allow the ANN to operate a different make and model of car (such as one with different cameras, accelerometers and other sensors, different physical characteristics, different performance requirements, and the like), or even a different kind of vehicle, such as a bicycle or a spaceship. [01229] In embodiments, as quality of outputs and/or operations of the ANN improves, and as long as the performance requirements and the context of utilization for the ANN remain fairly stable, performing the dual-process training process can become a decreasingly demanding process. As such, the DPANN system 9500 may determine that fewer neurons of the ANN are required to perform operations and/or processes of the ANN, that performance monitoring can be less intensive (such as with longer intervals between performance checks), and/or that the retraining is no longer necessary (at least for a period of time, such as until a long-term maintenance period arrives and/or until there are significant shifts in context of utilization). As the ANN continues to
improve upon existing functions and/or add new functions via the dual-process training process, the ANN may perform other, at times more "intellectually-demanding" (e.g., retraining intensive) tasks simultaneously. For example, utilizing dual process-learned knowledge of a function or process being trained, the ANN can solve an unrelated complex problem or make a retraining decision simultaneously. The retraining may include supervision, such as where an agent (e.g., human supervisor or intelligent agent) directs the ANN to a retraining objective (e.g., "master this new function") and provides a set of training tasks and feedback functions (such as supervisory grading) for the retraining. In embodiments, the ANN can be used to organize the supervision, training and retraining of other dual process-trained ANNs, to seed such training or retraining, or the like.
[01230] In embodiments, one or more behaviors and operational processes (such as decision-making) of the ANN may be products of training and retraining processes facilitated by the training system and the retraining system, respectively. The training system may be configured to perform automatic training of the ANN, such as by continuously adding additional instances of training data as it is collected by or from various data sources. The retraining system may be configured to perform effortful, analytical, intentional retraining of the ANN, such as based on memory (e.g., stored training data or refined training data) and/or optionally based on reasoning or other factors. For example, in a deployment management context, the training system may be associated with a standard response by the ANN, while the retraining system may implement DPLF retraining and/or network adaptation of the ANN. In some cases, retraining of the ANN beyond the factory, or "out-of-the-box," training level may involve more than retraining by the retraining system. Successful adjustment of the ANN by one or more network adaptations may be dependent on the operation of one or more network adjustments of the training system.
[01231] In embodiments, the training system may facilitate fast operation of and training of the ANN by applying existing neural functions of the ANN based on training of the ANN with previous datasets. Standard operational activities of the ANN that may draw heavily on the training system may include one or more of the methods, processes, workflows, systems, or the like described throughout this disclosure and the documents incorporated herein, such as, without limitation: defined functions within networking (such as discovering available networks and connections, establishing connections in networks, provisioning network bandwidth among devices and systems, routing data within networks, steering traffic to available network paths, load balancing across networking resources, and many others); recognition and classification (such as of images, text, symbols, objects, video content, music and other audio content, speech content, spoken words, and many others); prediction of states and events (such as prediction of failure modes of machines or systems, prediction of events within workflows, predictions of behavior in shopping
SFT-106-A-PCT and other activities, and many others); control (such as controlling autonomous or semi- autonomous systems, automated agents (such as automated call-center operations, chat bots, and the like) and others); and/or optimization and recommendation (such as for products, content, decisions, and many others). ANNs may also be suitable for training datasets for scenarios that only require output. The standard operational activities may not require the ANN to actively analyze what is being asked of the ANN beyond operating on well-defined data inputs, to calculate well-defined outputs for well-defined use cases. The operations of the training system and/or the retraining system may be based on one or more historic data training datasets and may use the parameters of the historic data training datasets to calculate results based on new input values and may be performed with small or no alterations to the ANN or its input types. In embodiments, an instance of the training system can be trained to classify whether the ANN is capable of performing well in a given situation, such as by recognizing whether an image or sound being classified by the ANN is of a type that has historically been classified with a high accuracy (e.g., above a threshold). [01232] In embodiments, network adaptation of the ANN by one or both of the training system and the retraining system may include a number of defined network functions, knowledge, and intuition-like behavior of the ANN when subjected to new input values. In such embodiments, the retraining system may apply the new input values to the DPLF system to adjust the functional response of the ANN, thereby performing retraining of the ANN. The DPANN system 9500 may determine that retraining the ANN via network adjustment is necessary when, for example, without limitation, functional neural networks are assigned activities and assignments that require the ANN to provide a solution to a novel problem, engage in network adaptation or other higher-order cognitive activity, apply a concept outside of the domain in which the DPANN was originally designed, support a different context of deployment (such as where the use case, performance requirements, available resources, or other factors have changed), or the like. The ANN can be trained to recognize where the retraining system is needed, such as by training the ANN to recognize poor performance of the training system, high variability of input data sets relative to the historical data sets used to train the training system, novel functional or performance requirements, dynamic changes in the use case or context, or other factors. The ANN may apply reasoning to assess performance and provide feedback to the retraining system. The ANN may be trained and/or retrained to perform intuitive functions, optionally including by a combinatorial or re-combinatorial process (e.g., including genetic programming wherein inputs (e.g., data sources), processes/functions (e.g., neural network types and structures), feedback, and outputs, or elements thereof, are arranged in various permutations and combinations and the ANN is tested in association with each (whether in simulations or live deployments), such as in a series of rounds, or evolutionary steps, to promote favorable variants until a preferred ANN, or preferred set of
ANNs is identified for a given scenario, use case, or set of requirements). This may include generating a set of input "ideas" (e.g., combinations of different conclusions about cause-and-effect in a diagnostic process) for processing by the retraining system and subsequent training and/or by an explicit reasoning process, such as a Bayesian reasoning process, a casuistic or conditional reasoning process, a deductive reasoning process, an inductive reasoning process, or others (including combinations of the above) as described in this disclosure or the documents incorporated herein by reference.
[01233] Referring to Fig. 96, in embodiments, the DPLF may perform an encoding process 9600 to process datasets into a stored form for future use, such as retraining of the ANN by the retraining system. The encoding process enables datasets to be taken in, understood, and altered by the DPLF to better support storage in and usage from the memory. The DPLF may apply current functional knowledge and/or reasoning to consolidate new input values. In the example provided, data is taken in as data input 9602. The memory can include short-term memory (STM) 9604, long-term memory (LTM) 9606, or a combination thereof. The datasets may be stored in one or both of the STM and the LTM. The STM may be implemented by the application of specialized behaviors inside the ANN (such as a recurrent neural network, which may be gated or un-gated, or long short-term memory neural networks). The LTM may be implemented by storing scenarios, associated data, and/or unprocessed data that can be applied to the discovery of new scenarios. The encoding process may include processing and/or storing, for example, visual encoding data (e.g., processed through a Convolutional Neural Network); acoustic sensor encoding data (e.g., how something sounds); speech encoding data (e.g., processed through a deep neural network (DNN), optionally including for phoneme recognition); semantic encoding data of words, such as to determine semantic meaning, e.g., by using a Hidden Markov Model (HMM); and/or movement and/or tactile encoding data (such as operation on vibration/accelerometer sensor data, touch sensor data, positional or geolocation data, and the like). While datasets may enter the DPLF system through one of these modes, the form in which the datasets are stored may differ from an original form of the datasets and may pass through neural processing engines to be encoded into a compressed and/or context-relevant format. For example, an unsupervised instance of the ANN can be used to encode the historic data into a compressed format.
[01234] In embodiments, the encoded datasets are retained within the DPLF system. Encoded datasets are first stored in short-term DPLF, i.e., STM. For example, sensor datasets may be primarily stored in STM, and may be kept in STM through constant repetition. The datasets stored in the STM are active and function as a kind of immediate response to new input values. The DPANN system 9500 may remove datasets from STM in response to changes in data streams due to, for example, running out of space in STM as new data is imported, processed and/or stored. For
SFT-106-A-PCT example, it is viable for short-term DPLF to only last between 15 and 30 seconds. STM may only store small amounts of data typically embedded inside the ANN. [01235] In embodiments, the DPANN system 9500 may measure attention based on utilization of the training system, of the DPANN system 9500 as a whole, and/or the like, such as by consuming various indicators of attention to and/or utilization of outputs from the ANN and transmitting such indicators to the ANN in response (similar to a “moment of recognition” in the brain where attention passes over something and the cognitive system says “aha!”). In embodiments, attention can be measured by the sheer amount of the activity of one or both of the systems on the data stream. In embodiments, a system using output from the ANN can explicitly indicate attention, such as by an operator directing the ANN to pay attention to a particular activity (e.g., to respond to a diagnosed problem, among many other possibilities). The DPANN system 9500 may manage data inputs to facilitate measures of attention, such as by prompting and/or calculating greater attention to data that has high inherent variability from historical patterns (e.g., in rates of change, departure from norm, etc.), data indicative of high variability in historical performance (such as data having similar characteristics to data sets involved in situations where the ANN performed poorly in training), or the like. [01236] In embodiments, the DPANN system 9500 may retain encoded datasets within the DPLF system according to and/or as part of one or more storage processes. The DPLF system may store the encoded datasets in LTM as necessary after the encoded datasets have been stored in STM and determined to be no longer necessary and/or low priority for a current operation of the ANN, training process, retraining process, etc. The LTM may be implemented by storing scenarios, and the DPANN system 9500 may apply associated data and/or unprocessed data to the discovery of new scenarios. For example, data from certain processed data streams, such as semantically encoded datasets, may be primarily stored in LTM. The LTM may also store image (and sensor) datasets in encoded form, among many other examples. [01237] In embodiments, the LTM may have relatively high storage capacity, and datasets stored within LTM may, in some scenarios, be effectively stored indefinitely. The DPANN system 9500 may be configured to remove datasets from the LTM, such as by passing LTM data through a series of memory structures that have increasingly long retrieval periods or increasingly high threshold requirements to trigger utilization (similar to where a biological brain “thinks very hard” to find precedent to deal with a challenging problem), thereby providing increased salience of more recent or more frequently used memories while retaining the ability to retrieve (with more time/effort) older memories when the situation justifies more comprehensive memory utilization. As such, the DPANN system 9500 may arrange datasets stored in the LTM on a timeline, such as by storing the older memories (measured by time of origination and/or latest time of utilization) on a separate
and/or slower system, by penalizing older memories by imposing artificial delays in retrieval thereof, and/or by imposing threshold requirements before utilization (such as indicators of high demand for improved results). Additionally or alternatively, LTM may be clustered according to other categorization protocols, such as by topic. For example, all memories proximal in time to a periodically recognized person may be clustered for retrieval together, and/or all memories that were related to a scenario may be clustered for retrieval together.
[01238] In embodiments, the DPANN system 9500 may modularize and link LTM datasets, such as in a catalog, a hierarchy, a cluster, a knowledge graph (directed/acyclic or having conditional logic), or the like, such as to facilitate search for relevant memories. For example, all memory modules that have instances involving a person, a topic, an item, a process, or a linkage of n-tuples of such things (e.g., all memory modules that involve a selected pair of entities) may be linked. The DPANN system 9500 may select sub-graphs of the knowledge graph for the DPLF to implement in one or more domain-specific and/or task-specific uses, such as training a model to predict robotic or human agent behavior by using memories that relate to a particular set of robotic or human agents, and/or similar robotic or human agents. The DPLF system may cache frequently used modules for different speed and/or probability of utilization. High value modules (e.g., ones with high-quality outcomes, performance characteristics, or the like) can be used for other functions, such as selection/training of STM keep/forget processes.
[01239] In embodiments, the DPANN system 9500 may modularize and link LTM datasets, such as in various ways noted above, to facilitate search for relevant memories. For example, memory modules that have instances involving a person, a topic, an item, a process, a linkage of n-tuples of such things (such as all memory modules that involve a selected pair of entities), or all memories associated with a scenario, etc., may be linked and searched. The DPANN system 9500 may select subsets of the scenario (e.g., sub-graphs of a knowledge graph) for the DPLF for a domain-specific and/or task-specific use, such as training a model to predict robotic or human agent behavior by using memories that relate to a particular set of robotic or human agents and/or similar robotic or human agents. Frequently used modules or scenarios can be cached for different speed/probability of utilization, or other performance characteristics. High value modules or scenarios (ones where high-quality outcomes result) can be used for other functions, such as selection/training of STM keep/forget processes, among others.
[01240] In embodiments, the DPANN system 9500 may perform LTM planning, such as to find a procedural course of action for a declaratively described system to reach its goals while optimizing overall performance measures. The DPANN system 9500 may perform LTM planning when, for example, a problem can be described in a declarative way, the DPANN system 9500 has domain knowledge that should not be ignored, there is a structure to a problem that makes the problem
difficult for pure learning techniques, and/or the ANN needs to be trained and/or retrained to be able to explain a particular course of action taken by the DPANN system 9500. In embodiments, the DPANN system 9500 may be applied to a plan recognition problem, i.e., the inverse of a planning problem: instead of a goal state, one is given a set of possible goals, and the objective in plan recognition is to find out which goal was being achieved and how.
[01241] In embodiments, the DPANN system 9500 may facilitate LTM scenario planning by users to develop long-term plans. For example, LTM scenario planning for risk management use cases may place added emphasis on identifying extreme or unusual, yet possible, risks and opportunities that are not usually considered in daily operations, such as ones that are outside a bell curve or normal distribution, but that in fact occur with greater-than-anticipated frequency in "long tail" or "fat tail" situations, such as involving information or market pricing processes, among many others. LTM scenario planning may involve analyzing relationships between forces (such as social, technical, economic, environmental, and/or political trends) in order to explain the current situation, and/or may include providing scenarios for potential future states.
[01242] In embodiments, the DPANN system 9500 may facilitate LTM scenario planning for predicting and anticipating possible alternative futures along with the ability to respond to the predicted states. The LTM planning may be induced from expert domain knowledge or projected from current scenarios, because many scenarios (such as ones involving results of combinatorial processes that result in new entities or behaviors) have never yet occurred and thus cannot be projected by probabilistic means that rely entirely on historical distributions. The DPANN system 9500 may prepare the application of the LTM to generate many different scenarios, presenting a variety of possible futures, both expected and surprising, to the DPLM. This may be facilitated or augmented by genetic programming and reasoning techniques as noted above, among others.
[01243] In embodiments, the DPANN system 9500 may implement LTM scenario planning to facilitate transforming risk management into a plan recognition problem and apply the DPLF to generate potential solutions. LTM scenario induction addresses several challenges inherent to forecast planning. LTM scenario induction may be applicable when, for example, models that are used for forecasting have inconsistent, missing, or unreliable observations; when it is possible to generate not just one but many future plans; and/or when LTM domain knowledge can be captured and encoded to improve forecasting (e.g., where domain experts tend to outperform available computational models). LTM scenarios can be focused on applying LTM scenario planning for risk management. LTM scenario planning may provide situational awareness of relevant risk drivers by detecting emerging storylines. In addition, LTM scenario planning can generate future scenarios that allow the DPLM, or operators, to reason about, and plan for, contingencies and opportunities in the future.
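As an illustrative, non-limiting sketch of the long-term memory management described in the preceding paragraphs, the Python fragment below clusters memory modules by topic, scores them with a recency-and-use salience function so that older memories are penalized but remain retrievable, and returns the top matches for a query. The names LTMStore, MemoryModule, and retrieve are invented for this sketch and do not correspond to recited system elements.

    import time
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class MemoryModule:
        topics: List[str]        # e.g., ["agent_42", "loading_dock"]
        payload: dict            # encoded scenario data
        last_used: float = field(default_factory=time.time)
        uses: int = 0

    class LTMStore:
        def __init__(self, half_life_s: float = 86_400.0):
            self.half_life_s = half_life_s   # controls how quickly salience decays
            self.modules: List[MemoryModule] = []

        def add(self, module: MemoryModule) -> None:
            self.modules.append(module)

        def _salience(self, m: MemoryModule, now: float) -> float:
            # Recently or frequently used memories score higher; older ones
            # are penalized but remain retrievable with enough search "effort".
            age = now - m.last_used
            recency = 0.5 ** (age / self.half_life_s)
            return recency * (1 + m.uses)

        def retrieve(self, topic: str, k: int = 3) -> List[MemoryModule]:
            # Retrieval by association: filter to the topic cluster, then rank.
            now = time.time()
            candidates = [m for m in self.modules if topic in m.topics]
            candidates.sort(key=lambda m: self._salience(m, now), reverse=True)
            for m in candidates[:k]:
                m.uses += 1
                m.last_used = now
            return candidates[:k]

The half-life decay in this sketch stands in for the timeline-based penalties and threshold requirements described above; a deployment could equally place stale modules on slower storage or gate them behind explicit high-demand indicators.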
SFT-106-A-PCT [01244] In embodiments, the DPANN system 9500 may be configured to perform a retrieval process via the DPLF to access stored datasets of the ANN. The retrieval process may determine how well the ANN performs with regard to assignments designed to test recall. For example, the ANN may be trained to perform a controlled vehicle parking operation, whereby the autonomous vehicle returns to a designated spot, or the exit, by associating a prior visit via retrieval of data stored in the LTM. The datasets stored in the STM and the LTM may be retrieved by differing processes. The datasets stored in the STM may be retrieved in response to specific input and/or by order in which the datasets are stored, e.g., by a sequential list of numbers. The datasets stored in the LTM may be retrieved through association and/or matching of events to historic activities, e.g., through complex associations and indexing of large datasets. [01245] In embodiments, the DPANN system 9500 may implement scenario monitoring as at least a part of the retrieval process. A scenario may provide context for contextual decision-making processes. In embodiments, scenarios may involve explicit reasoning (such as cause-and-effect reasoning, Bayesian, casuistic, conditional logic, or the like, or combinations thereof) the output of which declares what LTM-stored data is retrieved (e.g., a timeline of events being evaluated and other timelines involving events that potentially follow a similar cause-and-effect pattern). For example, diagnosis of a failure of a machine or workflow may retrieve historical sensor data as well as LTM data on various failure modes of that type of machine or workflow (and/or a similar process involving a diagnosis of a problem state or condition, recognition of an event or behavior, a failure mode (e.g., a financial failure, contract breach, or the like), or many others). [01246] Referring to Fig.97, in embodiments, the transportation methods and systems described herein may include AI capabilities, convergence technology stack capabilities and software-defined vehicle (SDV) modules 9700, including a governance layer 9702. The governance layer 9702 AI capabilities may include, but are not limited to, embedded policy and governance. Convergence technology stack examples of the governance layer 9702 include, but are not limited to, policy automation, regulatory compliance automation, and reporting automation. SDV module examples of the governance layer 9702 include, but are not limited to, automated governance of vehicle operations. [01247] In embodiments, the transportation methods and systems described herein may include an enterprise layer 9704. The enterprise layer 9704 AI capabilities may include, but are not limited to, contextual simulation and forecasting. Convergence technology stack examples of the enterprise layer 9704 include, but are not limited to, executive digital twins for vehicle fleet operations, vehicle digital twins for design and simulation, and enterprise access layer for fleet transactions. SDV module examples of the enterprise layer 9704 include, but are not limited to, SDV fleet management. [01248] In embodiments, the transportation methods and systems described herein may include an offering layer 9706. The offering layer 9706 AI capabilities may include, but are not limited to, expert
SFT-106-A-PCT systems and generative AI. Convergence technology stack examples of the offering layer 9706 include, but are not limited to, location-based offering systems, advertising and marketplace integration and customer-facing vehicle digital twins. SDV module examples of the offering layer 9706 include, but are not limited to, digital platform for rider UX and vehicle management. [01249] In embodiments, the transportation methods and systems described herein may include a transaction layer 9708. The transaction layer 9708 AI capabilities may include, but are not limited to, discovery, generation and optimization. Convergence technology stack examples of the transaction layer 9708 include, but are not limited to, user profiling and targeting, smart contract configuration for in- vehicle offers and automated transaction orchestration. SDV module examples of the transaction layer 9708 include, but are not limited to, in-vehicle content and advertising platform. [01250] In embodiments, the transportation methods and systems described herein may include an operations layer 9710. The operations layer 9710 AI capabilities may include, but are not limited to, routing, control, optimization, and generation. Convergence technology stack examples of the operations layer 9710 include, but are not limited to, automated rider satisfaction monitoring, location of resources for mobility demand, and vehicle drivetrain optimization. SDV module examples of the operations layer 9710 include, but are not limited to, converged, AI-based fleet orchestration and optimization. [01251] In embodiments, the transportation methods and systems described herein may include a network layer 9712. The network layer 9712 AI capabilities may include, but are not limited to, adaptive networking. Convergence technology stack examples of the network layer 9712 include, but are not limited to, energy-aware edge and cloud, cellular, WiFi, ORAN, Bluetooth, and vehicle-aware internet of things. SDV module examples of the network layer 9712 include, but are not limited to, V2X networking. [01252] In embodiments, the transportation methods and systems described herein may include a data layer 9714. The data layer 9714 AI capabilities may include, but are not limited to, sensor and data fusion. Convergence technology stack examples of the data layer 9714 include, but are not limited to, sensor and energy ops data, market, environmental and alternative data, APIs, SOA & distributed data. SDV module examples of the data layer 9714 include, but are not limited to, context aware sensor fusion to inform analytics and AI. [01253] In embodiments, the transportation methods and systems described herein may include a resource layer 9716. The resource layer 9716 AI capabilities may include but are not limited to resource optimization. Convergence technology stack examples of the resource layer 9716 include, but are not limited to, charging infrastructure optimization, vehicle energy efficiency optimization, and attention resource management. SDV module examples of the resource layer 9716 include, but are not limited to, optimization of energy for vehicle fleet management.
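As a hypothetical illustration only, the layered arrangement summarized in the paragraphs above could be captured as ordered configuration data, with each layer listing a sample of its AI capabilities, convergence technology stack examples, and SDV modules drawn from the descriptions above. The StackLayer structure and field names below are invented for this sketch and are not recited elements of the disclosure; the entries are abbreviated.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class StackLayer:
        name: str
        ai_capabilities: List[str]
        convergence_examples: List[str]
        sdv_modules: List[str]

    # Hypothetical encoding of the layers described above (abbreviated).
    SDV_STACK = [
        StackLayer("governance", ["embedded policy and governance"],
                   ["policy automation", "regulatory compliance automation"],
                   ["automated governance of vehicle operations"]),
        StackLayer("enterprise", ["contextual simulation and forecasting"],
                   ["executive digital twins for vehicle fleet operations"],
                   ["SDV fleet management"]),
        StackLayer("offering", ["expert systems and generative AI"],
                   ["location-based offering systems"],
                   ["digital platform for rider UX and vehicle management"]),
        StackLayer("transaction", ["discovery, generation and optimization"],
                   ["smart contract configuration for in-vehicle offers"],
                   ["in-vehicle content and advertising platform"]),
        StackLayer("operations", ["routing, control, optimization, and generation"],
                   ["automated rider satisfaction monitoring"],
                   ["converged, AI-based fleet orchestration and optimization"]),
        StackLayer("network", ["adaptive networking"],
                   ["energy-aware edge and cloud"],
                   ["V2X networking"]),
        StackLayer("data", ["sensor and data fusion"],
                   ["market, environmental and alternative data"],
                   ["context aware sensor fusion to inform analytics and AI"]),
        StackLayer("resource", ["resource optimization"],
                   ["charging infrastructure optimization"],
                   ["optimization of energy for vehicle fleet management"]),
    ]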
SFT-106-A-PCT [01254] Referring to Fig.98, in embodiments, the transportation methods and systems described herein may include software defined vehicle modules 9800 including, but not limited to, automated governance of vehicle operations 9802. In an example, automated governance of vehicle operations modules 9802 may address how increasingly networked vehicle fleets can be governed by policy automation that keeps in step with ever-shifting regulatory frameworks that apply to transportation entities and operations. [01255] In embodiments, the transportation methods and systems described herein may include software defined vehicle modules 9800 including, but not limited to, software defined vehicle fleet management 9804. In an example, software defined vehicle fleet management modules 9804 may use AI and automation to enable fleet-level management and operation of vehicles based on sensor data from vehicles and transportation networks. [01256] In embodiments, the transportation methods and systems described herein may include software defined vehicle modules 9800 including, but not limited to, digital platform for rider UX and vehicle management 9806. In an example, digital platform for rider UX and vehicle management modules 9806 may enable AI optimization of features across all aspects of the in- vehicle user experience and all aspects of the vehicle drivetrain during all phases of the vehicle life cycle, from initial design to operation. [01257] In embodiments, the transportation methods and systems described herein may include software defined vehicle modules 9800 including, but not limited to, in-vehicle content and advertising platform 9808. In an example, in-vehicle content and advertising platform modules 9808 may enable targeted delivery of localized, route-relevant content and advertising to vehicle operators and passengers. [01258] In embodiments, the transportation methods and systems described herein may include software defined vehicle modules 9800 including, but not limited to, converged, AI-based fleet orchestration and optimization 9810. In an example, converged, AI-based fleet orchestration and optimization modules 9810 may enable automated orchestration of vehicle fleets and supporting infrastructure, such as charging networks, based on environmental and marketplace conditions, as well as sensor data from vehicles, infrastructure, and riders. [01259] In embodiments, the transportation methods and systems described herein may include software defined vehicle modules 9800 including, but not limited to, V2X networking 9812. In an example, V2X networking modules 9812 may enable high QoS, adaptive communication across vehicle networks, using the latest generations of networking technologies and other advanced adaptive networking capabilities. [01260] In embodiments, the transportation methods and systems described herein may include software defined vehicle modules 9800 including, but not limited to, context aware sensor fusion
SFT-106-A-PCT to inform analytics and AI 9814. In an example, context aware sensor fusion to inform analytics and AI modules 9814 may fuse data from disparate vehicle sensor and other transportation-relevant data sources to facilitate AI classification, prediction, and optimization for rider satisfaction, vehicle optimization, and other use cases. [01261] In embodiments, the transportation methods and systems described herein may include software defined vehicle modules 9800 including, but not limited to, optimization of energy for vehicle fleet management 9816. In an example, optimization of energy for vehicle fleet management modules 9816 may provide for automated optimization of energy, networking and other resources for a vehicle or a fleet based on environmental, marketplace, vehicle, or other data. [01262] In embodiments, a software-defined vehicle may be configured for mitigating rider seat fatigue. In an example, the vehicle is equipped with a plurality of seat sensors configured for detecting physical parameters indicative of rider fatigue, such as pressure points, posture, and movement. These sensors generate real-time data that is analyzed by a generative AI engine. This engine may create personalized seat adjustment profiles by fusing the sensor data with rider preference data to construct a comprehensive model of rider comfort, such as with a digital twin. By employing advanced machine learning algorithms, the AI engine identifies patterns of rider discomfort and dynamically suggests ergonomic changes to the seating arrangement. It may generate real-time recommendations for micro-adjustments to seat positions, thereby redistributing pressure and improving circulation for the rider. [01263] In embodiments, to ensure the security of the transmitted sensor data, an encryption module may be integrated within the vehicle, safeguarding the communication between the seat sensors, a VCU (vehicle control unit), and the generative AI engine. The VCU itself is fortified with a secure access control system, limiting modifications of the generative AI engine to authorized personnel and enabling detection and response to cybersecurity threats by initiating protective protocols. The vehicle seating system is virtually represented by a digital twin, which the generative AI engine utilizes to simulate and evaluate the effectiveness of fatigue mitigation strategies. This digital twin is capable of real-time updates with sensor data to accurately reflect the current state of the seating system and rider fatigue levels and may also be used for virtual testing of potential new seat materials and designs aimed at fatigue reduction. [01264] In embodiments, the software-defined vehicle may feature a user-friendly interface that displays the AI-generated seat adjustment recommendations and allows the rider to provide feedback. This interface may include a haptic feedback mechanism to alert the rider about the need for a change in seating position, enhancing the interactive experience. Additionally, the interface may be designed to integrate seamlessly with a mobile application that monitors seating patterns
SFT-106-A-PCT and offers personalized fatigue mitigation advice based on output from the generative AI engine, promoting a comfortable and safe journey for the rider. [01265] In embodiments, a software-defined vehicle may be equipped with a monitoring system that continuously observes the driver's interactions with vehicle controls and navigation systems for cognitive engagement and routing behavior analysis. This system may be designed to identify patterns in driving behavior that may indicate routine and potentially cognitively under-stimulating activities. Upon detecting such patterns, the vehicle's processor and memory, executing specialized software instructions, initiate the generation of cognitive challenges. These challenges are tailored to engage the driver's cognitive functions, such as memory recall, spatial navigation, and executive decision-making. For instance, the vehicle may present route deviation prompts that encourage the driver to navigate familiar routes without step-by-step navigation assistance, thereby stimulating the driver's spatial awareness and problem-solving skills. [01266] In embodiments, a generative AI module within the vehicle may be configured to create a variety of cognitive tasks that stimulate key brain functions. This AI module may utilize a comprehensive database of information about the driver, including past driving experiences, to generate challenges that are both engaging and appropriate for the driver's cognitive abilities. The complexity of these challenges is dynamically adjusted by the AI based on real-time analysis of the driver's interactions with the challenges. For example, if the driver consistently performs well on certain types of tasks, the AI may increase the difficulty to provide a continuous cognitive stimulus. [01267] In embodiments, the vehicle may include a gamification module and driver incentivization. For example, the gamification module may assign points and rewards for the successful completion of cognitive challenges, thereby introducing a competitive and rewarding element to the driving experience. The gamification module may include features such as a leaderboard, which fosters a competitive environment by allowing drivers to compare their performance with historical data or against other drivers. For example, rewards for completing challenges could range from virtual badges to unlocking new vehicle features, or even personalized messages of encouragement, all aimed at promoting regular cognitive engagement. [01268] In embodiments, the vehicle may be designed to adapt an internal environment to create optimal conditions for cognitive function. The generative AI engine, in conjunction with environmental control systems, adjusts parameters such as cabin lighting and temperature based on factors like the time of day and the detected state of the driver. For example, the AI may increase cabin brightness during early morning drives to enhance alertness. Additionally, the vehicle may include an emergency intervention protocol that activates if the driver does not respond to cognitive challenges, potentially indicating an acute cognitive impairment. The emergency intervention
SFT-106-A-PCT protocol may take measures such as safely pulling the vehicle over and alerting emergency services. [01269] In embodiments, recognizing the importance of physical activity in maintaining cognitive health, the software-defined vehicle may suggest periodic breaks for physical exercises during long journeys. The generative AI engine may propose a selection of exercises tailored to the driver's physical capabilities and preferences. Furthermore, the vehicle may feature a natural language processing module that enables the driver to interact with the system and the cognitive challenges using voice commands. This hands-free interaction ensures that the driver remains focused on the road while engaging with the cognitive tasks. [01270] In embodiments, a vehicle maintenance system may utilize a data processing unit for an analysis of a comprehensive set of data that may include environmental conditions, user behavioral patterns, and vehicle diagnostic information. In an example, the environmental data encompasses variables such as weather patterns, daylight hours, and seasonal changes, which can influence a mood of a user and receptivity to vehicle maintenance activities. For instance, during winter months with shorter daylight hours, users may be less inclined to schedule maintenance due to the potential impact of seasonal affective disorder on their mood states. The emotional state detection module within the system may be engineered to discern patterns in emotional states of the user by analyzing the comprehensive data set. This module may infer mood states associated with different seasons to optimize user engagement with maintenance activities. For example, the system may detect that a user is more likely to respond positively to maintenance reminders during spring, a season sometimes associated with renewal and taking action. [01271] In embodiments, user behavioral data may be a component of the analysis. The user behavioral data may include historical maintenance records, social media activity, and vehicle usage patterns. The emotional state detection module may leverage this data to identify periods when the user is historically more engaged in maintenance activities. For example, these periods could be times when the user has consistently scheduled maintenance in the past or periods of increased vehicle usage when maintenance becomes more pertinent. [01272] In embodiments, a scheduling module of the system may employ generative AI algorithms to predict the most favorable times within a maintenance window to present maintenance recommendations to the user. This prediction is based on the user's emotional state patterns and historical engagement with maintenance activities. The generative AI algorithms may be, for example, similar to the AI system or the expert system/artificial intelligence features described herein, which process a wide range of parameters and inputs to inform decision-making processes. [01273] In embodiments, a communication module uses generative AI to craft personalized maintenance notifications. These notifications are designed to resonate with the user's current
SFT-106-A-PCT mood and are seasonally adjusted to increase the likelihood of a positive response. For example, the generative AI may utilize AI systems like the hybrid neural network to process data to optimize interactions and responses. [01274] In embodiments, the vehicle maintenance system may include a user feedback interface that allows for the collection of user responses to maintenance notifications. This feedback may be used by the scheduling module to refine and adjust future maintenance recommendations, ensuring a continuous improvement loop. The system's ability to delay maintenance recommendations during seasons associated with lower maintenance interest by the user demonstrates dynamic adaptability, as discussed herein. [01275] In embodiments, a refueling planning system may integrate an emotional state system designed to predict changes in a user's emotional state in response to refueling decisions. For example, the refueling planning system may be a configuration of the artificial intelligence systems described herein. In embodiments, the emotional state system utilizes neural network configurations to detect and optimize for favorable emotional states of the user during the refueling process. [01276] In embodiments, a fuel status system identifies a current fuel status of the vehicle and predicts future refueling requirements for a trip. In the context of the refueling planning system, refueling includes electrical storage charging and combustion fuel refilling. [01277] In embodiments, a refueling recommendation engine generates a refueling plan that aligns with the user's emotional preferences and the vehicle's refueling requirements. In embodiments, the refueling recommendation engine optimizes parameters for a charging plan and uses neural networks to optimize vehicle operational states. For example, the refueling planning engine may use predictive models to generate refueling plans that consider both combustion fuel refilling and electrical energy storage refueling. [01278] In embodiments, a generative AI module associated with the refueling recommendation engine simulates potential refueling and charging scenarios to generate the refueling plan. For example, the generative AI module may generate creative content and recommendations based on a set of inputs to simulate and evaluate various refueling scenarios to devise an optimal plan. [01279] In embodiments, a digital twin may refer to a virtual representation that simulates vehicle operation and predicts future refueling needs. In embodiments, a user interface with graphical elements represents the impact of refueling options on emotional states. For example, the user interface may include negotiation systems and reward-negotiating systems. In embodiments, the user interface in the refueling planning system provides visual and interactive elements to engage the user and collect feedback. In embodiments, the refueling recommendation engine may analyze social media data to identify refueling locations to achieve positive emotional feedback.
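By way of a non-limiting illustration of the refueling planning described above, the following Python sketch shows one way a refueling recommendation engine might weigh a predicted energy need for a trip against a score that stands in for the predicted emotional response to a candidate stop. The data structure, field names, weights, and thresholds are hypothetical assumptions introduced only for illustration; they do not define the emotional state system or the fuel status system.

from dataclasses import dataclass

@dataclass
class RefuelingStop:
    # Hypothetical description of a candidate charging or refilling stop.
    name: str
    detour_minutes: float    # added travel time to reach the stop
    price_per_unit: float    # price per kWh or per gallon, depending on the vehicle
    amenity_score: float     # 0..1 proxy for the predicted positive emotional response

def energy_needed(trip_distance_km: float, consumption_per_km: float,
                  current_level: float, reserve: float = 0.1) -> float:
    """Energy (kWh or liters) to add so the trip finishes with a reserve margin."""
    required = trip_distance_km * consumption_per_km * (1.0 + reserve)
    return max(0.0, required - current_level)

def recommend_stop(stops, needed: float, mood_weight: float = 0.4):
    """Rank candidate stops; a larger mood_weight favors stops the rider is predicted
    to enjoy, while smaller values favor cost and detour time."""
    if needed <= 0 or not stops:
        return None
    def score(stop: RefuelingStop) -> float:
        cost_term = stop.price_per_unit * needed + stop.detour_minutes
        return mood_weight * stop.amenity_score * 100.0 - (1.0 - mood_weight) * cost_term
    return max(stops, key=score)

# Usage sketch:
# best = recommend_stop([RefuelingStop("Station A", 5.0, 0.32, 0.8),
#                        RefuelingStop("Station B", 2.0, 0.41, 0.3)],
#                       energy_needed(250.0, 0.18, 20.0))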
SFT-106-A-PCT [01280] In embodiments, any mobile platform can be fitted with sensors in the seats or with sensors that watch riders while they travel in the mobile platform. In examples, the sensors can be in the seats and experience the moving weight of the riders as the mobile platform moves along its journey. In examples, the sensors can watch the riders relative to fixed points in the mobile platform and determine the amount of motion experienced by the riders to estimate their motion in the seats. In examples, the sensors in the seats can be combined with sensors in the vehicle to determine occupant motion. [01281] In embodiments, the sensors in the seats and/or around the mobile platform can keep track of how many hours a rider has spent in the seat. In examples, the sensors can estimate pressure points on the riders. In examples, the sensors can detect various forms of movement. In examples, the sensors can estimate road vibrations. [01282] In embodiments, the sensors can facilitate creation of and/or input for a digital twin that can detail the experience of one or more riders and how the journey affects the body. In examples, the transportation system can be onboard and/or accessible through a cloud facility that can connect to onboard and/or relevant mobile-connected devices and can host the digital twin. In examples, the digital twin can be used to take into account a complete (or at least more complete) journey so that exposure to vibrations on one journey on a given day can be determined to be cumulative and added to exposure from another journey in another vehicle or other mobile platform. [01283] In embodiments, the transportation system can then make recommendations as to various sources of exercise, stretching, body manipulation, or the like to mitigate any less than beneficial effects of one or more journeys as determined by the sensors that sense the riders in the seats on their journeys. [01284] In examples, the recommendations can include stretching, with specific stretching recommendations based on hours in the car, truck, or other mobile platform. [01285] In embodiments, there are several examples of suitable seat sensors that can determine riders' movement in seats on which to base a stretching or exercise routine. In examples, pressure sensors can measure the distribution of weight and pressure on the seat to detect movements and shifts in posture, including whether a rider is sitting, leaning, or shifting position. In examples, accelerometers can detect changes in acceleration, which can be used to infer movements and vibrations. By way of these examples, placing accelerometers within the seat can help identify when a rider is shifting, getting in or out of the seat, or making sudden movements. In examples, strain gauges can measure the deformation or strain on the seat caused by a rider's movement. By way of these examples, the strain gauges can detect bending or twisting of the seat, indicating changes in posture. In examples, capacitive sensors can detect changes in capacitance caused by the proximity or touch of a rider. By way of these examples, these sensors are often used in touchscreens and can also be integrated into seats to detect when a rider is sitting or moving. In
SFT-106-A-PCT examples, ultrasonic sensors use sound waves to detect the distance between the sensor and an object (in this case, the rider). When placed under the seat, these sensors can detect changes in distance as the rider moves. In examples, infrared sensors can detect the presence and movement of a rider based on changes in infrared radiation. They are commonly used in occupancy detection and can be applied to seats to determine when a rider is present or moving. In examples, flex sensors are often thin, flexible strips that change resistance when bent. Placing them within the seat material can help detect when a rider is shifting or adjusting their position. In examples, piezoelectric sensors can generate an electric charge in response to mechanical stress. Placed within the seat structure, they can detect vibrations caused by a rider's movements. In examples, optical sensors use light to detect changes in position or movement. Placed within the seat, they can identify when a rider is sitting, leaning, or changing position. In examples, magnetic sensors can detect changes in magnetic fields caused by the presence or movement of a rider. These sensors can be integrated into the seat structure to monitor changes in position. In examples, flex capacitive sensors can combine the principles of capacitive and flex sensors, allowing them to detect both touch and movement in a seat. In examples, force sensing resistors (FSRs) change resistance based on the amount of force applied to them. Placing FSRs within the seat can detect shifts in weight and pressure. In examples, gyroscope sensors can detect rotational movements and changes in orientation. When integrated into a seat, they can help identify when a rider is adjusting their sitting position. It will be appreciated in light of the disclosure that these sensors can be used individually or in combination to accurately detect and interpret riders' movements in seats for various applications such as automotive seats, office chairs, gaming chairs, boating chairs, and the like. The choice of sensors can depend on the specific requirements of the application and the level of precision needed to capture the desired movements. It will be appreciated in light of the disclosure that in marine, off-road, military applications, and the like, further ruggedization of the sensors and other components can be provided to ensure durability within the given deployment. [01286] In embodiments, the exercises that can be suggested can include deep breathing, which begins by taking deep breaths to help relax your body and reduce any tension. Inhale deeply through your nose, expanding your chest and abdomen, and then exhale slowly through your mouth. In examples, these instructions can be delivered at the vehicle or delivered to your home, place of residence, hotel, boat, or the like after the one or more journeys in the one or more mobile platforms. In examples, the exercises can include neck stretches that can include gently tilting your head to one side, bringing your ear towards your shoulder. Hold for about 15-20 seconds, then switch to the other side. You can also do forward and backward head nods to release neck tension. In examples, the exercises can include shoulder rolls that include rolling your shoulders in a circular motion, first forward and then backward. This can help release tension in your upper back
SFT-106-A-PCT and shoulders. In examples, the exercises can include a seated spinal twist that can include sitting upright and crossing one leg over the other at a 90-degree angle. Place the opposite elbow on the outside of the bent knee and gently twist your torso in that direction. Hold for about 15-20 seconds and then switch sides. In examples, the exercises can include a seated hamstring stretch that includes extending one leg straight out in front of you and flexing your foot. Lean forward slightly from your hips, keeping your back straight, and reach towards your toes. Hold the stretch for about 15-20 seconds on each leg. In examples, the exercises can include ankle circles that include lifting one foot off the ground while seated and rotate your ankle in circular motions. This helps improve blood circulation and reduce stiffness in your ankles. In examples, the exercises can include a seated cat-cow stretch that includes sitting on the edge of your seat with your feet flat on the floor. Place your hands on your knees. As you inhale, arch your back and look up (cow pose), and as you exhale, round your back and tuck your chin towards your chest (cat pose). In examples, the exercises can include a quad stretch that can include standing up and holding onto something for balance. Bend one knee and grab your ankle behind you. Gently pull your foot towards your glutes to stretch your quadriceps. Hold for about 15-20 seconds on each leg. In examples, the exercises can include calf raises that can include standing with your feet hip-width apart. Slowly rise up onto your toes, lifting your heels off the ground. Then lower them back down. Repeat this motion for about 10-15 repetitions to stretch your calf muscles. In examples, the exercises can include a seated forward fold that can include sitting on the edge of a chair with your legs extended in front of you. Hinge forward from your hips, reaching your hands towards your feet. Keep your back straight and hold the stretch for about 15-20 seconds. [01287] In embodiments, mobile platforms in which beneficial functions of the system can be deployed can include car, buses, off-road vehicles, trucks, motorcycles, bicycles, horseback riding, boat rides, airplane rides, trains, helicopters, ski lifts, amusement park rides, and the like. [01288] After riding a bumpy form of transportation and feeling a bit uncomfortable or stiff, a rider might want to perform some exercises to help alleviate any discomfort. By way of these examples, the instructions for exercises can be sent to mobile apps, email, text message, QR code, printed handout, website link, local audio guide, in-app notification based on one or more user selected apps, a kiosk or information booth, social media, and/or virtual assistants. In embodiments, the exercises can be done in one or more locations such as at home, outdoors, a gym, an office, a hotel room, a community center, a professional yoga or pilates studio, any beach or lake, any quiet park, and the like. It will be appreciated in light of the disclosure that the exercises can be done in various settings depending on preferences and the resources available. In embodiments, exercises can be completed in whole or portions that can or may not be carried over to following days. In examples, massage can be integrated into the suggestions so as to not always rely on individual stretching.
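By way of a non-limiting illustration of the seat-sensor-driven recommendations described above, the following Python sketch accumulates seat time and a simple vibration-exposure proxy across journeys in a rider-level digital twin record, and selects stretch suggestions from the examples above once hypothetical thresholds are crossed. The field names, thresholds, and the exposure formula are illustrative assumptions rather than required implementations.

from dataclasses import dataclass, field

@dataclass
class JourneySegment:
    # Hypothetical per-journey measurements derived from the seat sensors described above.
    hours_in_seat: float
    rms_vibration_g: float   # root-mean-square acceleration from seat accelerometers
    posture_shifts: int      # shifts detected by pressure, flex, or capacitive sensors

@dataclass
class RiderExposureTwin:
    # Simplified rider-level digital twin accumulating exposure across journeys,
    # possibly taken in different vehicles or other mobile platforms.
    segments: list = field(default_factory=list)

    def add_segment(self, segment: JourneySegment) -> None:
        self.segments.append(segment)

    @property
    def total_hours(self) -> float:
        return sum(s.hours_in_seat for s in self.segments)

    @property
    def vibration_dose(self) -> float:
        # Simple dose proxy: exposure grows with both magnitude and duration.
        return sum(s.rms_vibration_g * s.hours_in_seat for s in self.segments)

def suggest_recovery(twin: RiderExposureTwin) -> list:
    """Select stretch or exercise suggestions once hypothetical exposure limits are crossed."""
    suggestions = []
    if twin.total_hours >= 3.0:
        suggestions += ["neck stretches", "shoulder rolls", "seated spinal twist"]
    if twin.vibration_dose >= 1.5:
        suggestions += ["seated hamstring stretch", "calf raises", "deep breathing"]
    return suggestions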
SFT-106-A-PCT [01289] While only a few embodiments of the disclosure have been shown and described, it will be obvious to those skilled in the art that many changes and modifications may be made thereunto without departing from the spirit and scope of the disclosure as described in the following claims. All patent applications and patents, both foreign and domestic, and all other publications referenced herein are incorporated herein in their entireties to the full extent permitted by law. [01290] It will be appreciated in light of the disclosure that the routine use of navigation aids such as turn-by-turn guidance or other regular prompts during a trip can contribute, along with other factors, to reduced neural capacity in drivers in the long term. It can be shown that such routine use can eventually lead to drivers becoming overly dependent on their navigation systems, potentially diminishing their memory and increasing the likelihood of developing other neurological conditions. It will further be appreciated in light of the disclosure that limited route guidance or a purposeful lack of route guidance can convey many of the benefits of conventional navigation systems without the aforementioned disadvantages. Rather than providing turn-by-turn guidance in the form of visual and auditory prompts to help a driver adhere to a predetermined route, a more limited form of assistance can be provided using conventional navigation systems. Such systems are deployed in vehicles, on smartphones, via wearables, or through other types of mobile devices. The following are two example use cases that describe user-selectable levels of navigation prompts and their implied effects on driver behavior and route optimization. [01291] Use-case Example 1: Route deviation [01292] In an example, a trip begins with the driver or other user inputting one or more destinations into their navigation system. User-selectable graphics are then presented, which could include a map, a list of turn-by-turn directions, or other route visualization. Rather than presenting prompts in advance of each turn or step of the route, reactive prompts are presented after a driver deviates from the predetermined route. These prompts can provide varying levels of assistance depending on user preference. For example:
• Warning – (e.g., “You’ve deviated from your selected route.”)
• Warning + Offer for Minimal Assistance – (e.g., “You’ve deviated from your selected route. Would you like help returning to your route?”)
• Warning + Offer for Full Assistance – (e.g., “You’ve deviated from your selected route. Would you like turn-by-turn directions for the rest of your trip?”)
[01293] Use-case Example 2: Changing conditions [01294] In an example, a trip begins with the driver or other user inputting one or more destinations into their navigation system. User-selectable graphics are then presented, which could include a map, a list of turn-by-turn directions, or other route visualization. Rather than presenting prompts in advance of each turn or step of the route, reactive prompts are presented after the navigation
SFT-106-A-PCT system receives updated traffic, weather, road closure, or other condition information about the originally selected route. These prompts can provide varying levels of assistance depending on user preference and based on the severity of delays or other conditions that arise during a trip:
• Warning – (e.g., “A crash ahead has caused an estimated delay of x minutes.”)
• Warning + Offer for Minimal Assistance – (e.g., “A crash ahead has caused an estimated delay of x minutes on your selected route. Divert to Highway X to avoid this delay.”)
• Warning + Offer for Full Assistance – (e.g., “An accident ahead has caused an estimated delay of x minutes on your selected route. Would you like to use a new route with turn-by-turn directions?”)
[01295] These use cases are intended to be simplified examples of limited route guidance. Other events that could cause delays to a selected route or make it impassable may also be used as triggers for navigation system intervention. Additionally, more granular levels of intervention and subsequent assistance by the navigation system can also be included. [01296] With the proliferation of voice command capabilities in vehicle infotainment systems and smartphones, user preference selection and responses to auditory (voice) prompts as described in these example use cases can typically be safely performed by a driver during a trip. [01297] Gamification may also be applied to navigation systems using limited route guidance. For example, the navigation system can keep track of which destinations or routes are new for each driver along with how many times they have driven each route or to each destination. The navigation system can award points to the driver proportional to the complexity and length of the route and inversely proportional to the amount of navigation assistance they require combined with how many times they’ve driven that route or to that destination. That is, a new destination or route successfully reached by a driver on the first attempt with no navigation assistance is rewarded with maximum points (see the illustrative sketch below). [01298] Waypoints may be added along the journey to put further gamification points in play. Waypoints can also be added along the journey to facilitate the decision whether to add navigation aids or to remove them. In those examples, the navigation aids can be increased as the overall distance from not-yet-visited waypoints increases. Waypoints from previous trips can also be added as further data points in gamification and can be noted to increase the mental acuity of the driver. [01299] In embodiments, the transportation methods and systems described herein may include the use of a generative artificial intelligence engine (GAIE) within a vehicle's ecosystem. The GAIE may be designed to generate data, predictions, and responses based on a multitude of inputs from vehicle sensors, user inputs, and external data sources. The GAIE is capable of learning and adapting over time, improving its functionality and the vehicle's performance and user experience.
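Returning to the gamification of limited route guidance described above, the following non-limiting Python sketch expresses the point award as a quantity that grows with route length and complexity and shrinks as the amount of navigation assistance used and the driver's familiarity with the route increase. The base constant and the particular scaling are hypothetical choices for illustration only.

def route_points(length_km: float,
                 complexity: float,        # e.g., normalized turn/intersection density
                 assistance_prompts: int,  # reactive prompts the driver accepted
                 times_driven_before: int,
                 base: float = 100.0) -> float:
    """Points grow with route length and complexity and shrink as assistance
    and prior familiarity with the route increase."""
    difficulty = length_km * (1.0 + complexity)
    penalty = 1.0 + assistance_prompts + times_driven_before
    return base * difficulty / penalty

# A new route completed with no assistance earns the maximum award for that route:
# route_points(12.0, 0.6, assistance_prompts=0, times_driven_before=0)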
SFT-106-A-PCT [01300] In embodiments, the GAIE may be integrated with various transportation and vehicle systems, as described herein, including, but not limited to: powertrain control - the GAIE may, in an example, optimize fuel efficiency and power output by analyzing driving patterns, road conditions, and real-time engine data, and/or generate novel engine tuning profiles for different driving conditions; autonomous driving features - the GAIE may, in an example, enhance autonomous driving capabilities by generating predictive models of traffic patterns, pedestrian behavior, and environmental conditions and/or simulate and learn from virtual driving scenarios to improve decision-making algorithms; infotainment and user interface - the GAIE may, in an example, personalize a user’s experience by learning driver preferences and generating custom infotainment content and/or predict and suggest routes, music, or cabin settings based on the driver's habits and current context; maintenance and diagnostics - the GAIE may, in an example, predict vehicle maintenance needs by analyzing sensor data and historical maintenance records and/or generate maintenance schedules and proactive measures to prevent breakdowns; safety systems - the GAIE may, in an example, improve vehicle safety by generating real-time responses to potential hazards, such as deploying airbags optimally or adjusting suspension settings for better control during an emergency maneuver. [01301] In embodiments, the GAIE may receive data from an array of sources, including, but not limited to, onboard sensors (e.g., LIDAR, cameras, accelerometers), user input (e.g., touch inputs, voice commands), external data sources (e.g., GPS, traffic reports, weather forecasts), and vehicle maintenance records and historical data. In an example, the GAIE may process such data, as described herein, to learn and adapt its functions, including using a feedback loop to continuously improve its predictions and outputs based on the outcomes of its previous actions. [01302] In embodiments, users can interact with the GAIE through, for example, voice commands, touch interfaces, or through a dedicated app. The GAIE may generate responses and take actions based on user requests, such as adjusting the vehicle's interior environment or planning a trip itinerary. [01303] In embodiments, the GAIE may employ a variety of machine learning algorithms to process data and generate outputs. The algorithms may include, but are not limited to, convolutional neural networks (CNNs), for example used for processing visual data from cameras and LIDAR sensors, and the like. In an example, CNNs may assist the GAIE in object detection, classification, and environmental understanding crucial for autonomous driving and safety systems; recurrent neural networks (RNNs) for example utilized for temporal data analysis and in predicting time-series events such as traffic flow or driver behavior patterns over time; generative adversarial networks (GANs) for example, employed for generating new data points and/or used in the GAIE to simulate driving scenarios, design new engine tuning maps, and create personalized infotainment content; reinforcement learning (RL) for example using an algorithm to assist the GAIE in learning optimal actions through trial and error and/or to improve decision-making in autonomous driving and to optimize powertrain control for better fuel
SFT-106-A-PCT efficiency and performance; genetic algorithms (GAs) for example used for optimization problems, such as finding the best route for a trip considering current traffic conditions, or optimizing the layout of sensors and components within the vehicle for cost and performance. [01304] In embodiments, the GAIE may use a plurality of data sources to inform its learning and decision-making processes including, but not limited to, onboard sensors, including LIDAR, cameras, accelerometers, gyroscopes, thermometers, and pressure sensors. Data from such sensors may provide real-time information about the vehicle's environment and internal state; user input, for example, touch commands on the vehicle's display, voice commands captured by onboard microphones, and settings preferences set by the user may be fed into the GAIE to tailor the driving experience to individual preferences; external data sources may be used for comprehensive situational awareness, for example data sources such as GPS for location tracking, real-time traffic report systems, weather forecast services, and even social media or event databases for contextual recommendations; vehicle maintenance records and historical data may be used, including in aggregated form, to predict future maintenance needs and to learn from past incidents; vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications may be used, including data received from other vehicles and infrastructure elements, such as traffic lights and road sensors, to enhance the GAIE's situational awareness and decision-making capabilities. [01305] In embodiments, the GAIE may be designed with adaptive learning capabilities, allowing it to refine its algorithms based on feedback including, but not limited to, supervised learning - the GAIE may, in an example, use labeled data to learn the correct output for given inputs for tasks like image recognition and predictive maintenance; unsupervised learning - the GAIE may, in an example, cluster and interpret unlabeled data to find patterns or anomalies for identifying novel driving conditions or diagnosing unexpected vehicle behavior; semi-supervised learning - the GAIE may, in an example, leverage a combination of labeled and unlabeled data, for example when there is a limited amount of labeled data available; reinforcement learning feedback loops - the GAIE may, in an example, use reinforcement learning to take actions and receive feedback in the form of rewards or penalties, allowing it to learn optimal strategies over time. [01306] In example embodiments, a generative artificial intelligence engine (GAIE) may be combined with a machine learning system in a transaction environment. Input to the GAIE may include images, video, audio, text, programmatic code, data, and the like. Outputs from a GAIE may include structured and organized prose, images, video, audio content, software/programming source code, formatted data (e.g., arrays), algorithms, definitions, context-specific structures (e.g., smart contracts, transaction platform configuration data sets, and the like), machine language-based data (e.g., API-formatted content), and the like. For GAIE instances in which the models are designed to process text data, the GAIE may interface to other programmatic systems (such as
SFT-106-A-PCT traditional machine learning engines) to process other forms of data into text data. In example embodiments, the other programmatic systems, including systems executing machine learning algorithms, may produce text-based content (optionally at volume) that may be consumed by the GAIE. For example, consider another such system building a series of one thousand text-based observations on the other-formatted data; this may be a useful input for a GAIE model to learn and process (e.g., summarize) into text-formatted output information. In example embodiments, an interface between the GAIE and its combined machine learning system may be extended to include a dialogue between the systems, where the GAIE includes and/or accesses a capability to ask the machine learning system specific questions to facilitate the refining of its knowledge. For example, the dialogue capability may include a request of the machine learning system to provide an assessment of current market trading positions. In another example, the dialogue capability may encode numeric outputs from the machine learning engine into text (e.g., words, such as high, medium, low) that may be input for interpretation by the GAIE. [01307] In example embodiments, the data processed by a GAIE may include one or more types of content. For example, a GAIE may receive, as input, data that represents one or more natural-language expressions, single- or multidimensional shapes or models, real-world and/or virtual scene representations, LIDAR point-cloud representations, sensor inputs and/or outputs, vehicle and/or machine telemetry, geographic maps, authentication credentials, financial transactions, smart contracts, processing directives and/or resources such as shaders, device configurations such as HDL specifications for programming FPGAs, databases and/or database structural definitions, or the like, including metadata associated with any such data types. Input to the GAIE may also include data that represents one or more features of another machine learning model, such as a configuration (e.g., model type, parameters, and/or hyperparameters), input, internal state (e.g., weights and biases of at least a portion of the model), and/or output of the other machine learning model. These and other forms of content may be received as various forms of data. For example, a natural-language expression received as input by a GAIE could be encoded as one or more of: encoded text, an image of a writing, a sound recording of human speech, a video of an individual exhibiting sign language, an encoding according to a machine learning model embedding, or the like, or any combination thereof. In example embodiments, an input received and processed by the GAIE can include an internal state of the GAIE, such as a partial result of a partial processing of an input, or a set of weights and/or biases of the GAIE as a result of prior processing (e.g., an internal state of a recurrent neural network (RNN)). [01308] In some embodiments, the data and/or content received and processed by a GAIE originates from one or more individuals, such as a person speaking a natural-language expression. In some embodiments, the data and/or content received and processed by a GAIE originates from
SFT-106-A-PCT one or more natural sources, such as patterns formed by nature. In some embodiments, the data and/or content received and processed by a GAIE originates from one or more other devices, such as another machine learning model executing on another device, or from another component of the same device executing the GAIE, such as output of another machine learning model executing on the same device executing the GAIE, or a sensor in an Internet-of-Things (IoT) and/or cloud architecture. In some embodiments, the data and/or content received and processed by a GAIE is artificially synthesized, such as synthetic data generated by an algorithm to augment a training data set. In some embodiments, the data and/or content received and processed by a GAIE is generated by the same GAIE, such as an internal state of the GAIE in response to previous and/or concurrent processing, or a previous output of the GAIE in the manner of a recurrent neural network (RNN). [01309] In some embodiments, at least some or part of the data and/or content received and processed by a GAIE is also used to train the GAIE. For example, a variational GAIE could be trained on an input and a corresponding acceptable output, and could later receive the same input in order to output one or more variations of the acceptable output. In some embodiments, at least some or part of the data and/or content received and processed by a GAIE is different than data and/or content that was used to train the GAIE. In some such embodiments, the data and/or content received and processed by the GAIE is different than but similar to the data and/or content that was used to train the GAIE, such as new inputs that exhibit a statistical distribution of features similar to that of the training data. In some such embodiments, the data and/or content received and processed by the GAIE is different than and dissimilar to the data and/or content that was used to train the GAIE, such as new inputs that exhibit a significantly different statistical distribution of features than the training data. In scenarios that involve dissimilar inputs, one or more first outputs of the GAIE in response to a new input may be compared to one or more second outputs of the GAIE in response to inputs of the training data set to determine whether the first outputs and the second outputs are consistent. The GAIE may request and/or receive additional training based on the new inputs and corresponding acceptable outputs. In scenarios that involve dissimilar inputs, the GAIE may present an alert and/or description that indicates how the new inputs and/or corresponding outputs differ from previously received inputs and/or corresponding outputs. [01310] In example embodiments, the output of a GAIE may include one or more types of content. For example, a GAIE may generate, as output, data that represents one or more natural-language expressions, single- or multidimensional shapes or models, real-world and/or virtual scene representations, LIDAR point-cloud representations, sensor inputs and/or outputs, vehicle and/or machine telemetry, geographic maps, authentication credentials, financial transactions, smart contracts, processing directives and/or resources such as shaders, device configurations such as HDL specifications for programming FPGAs, databases and/or database structural definitions, or
SFT-106-A-PCT the like, including metadata associated with any such data types. Output of the GAIE may also include data that represents one or more features of another machine learning model, such as a configuration (e.g., model type, parameters, and/or hyperparameters), input, internal state (e.g., weights and biases of at least a portion of the model), and/or output of the other machine learning model. These and other forms of content may be generated by the GAIE as various forms of data. For example, a natural-language expression generated as output by the GAIE could be encoded as one or more of: encoded text, an image of a writing, a sound recording of human speech, a video of an individual exhibiting sign language, an encoding according to a machine learning model embedding, or the like, or any combination thereof. In example embodiments, an output of the GAIE can include an internal state of the GAIE, such as a partial result of a partial processing of an input, or a set of weights and/or biases of the GAIE as a result of prior processing (e.g., an internal state of a recurrent neural network (RNN)). [01311] In example embodiments, a language-based dialogue-enabled GAIE may be configured to produce (e.g., write) new machine learning models that may process various types of data to provide new and extended text input for processing by the GAIE. In example embodiments, humans may observe and interact with this ongoing dialogue between the two systems. In example embodiments, the dialogue is initiated by an expression of a conversation partner (e.g., a human or another device), and the GAIE generates one or more expressions that are responsive to the expression of the conversation partner. In example embodiments, the GAIE generates an expression to initiate the dialogue, and further responds to one or more expressions of the conversation partner in response to the initiating expression. In example embodiments, the ongoing dialogue occurs in a turn-taking manner, wherein each of the conversation partner and the GAIE generates an expression based on a previous expression of the other of the conversation partner and the GAIE. In example embodiments, the ongoing dialogue occurs extemporaneously, with each of the conversation partner and the GAIE generating expressions irrespective of a timing and/or sequential ordering of previous and/or concurrent expressions of the conversation partner and/or the GAIE.
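By way of a non-limiting illustration of the dialogue between a GAIE and a companion machine learning system described above, the following Python sketch alternates turns between the two systems and encodes the machine learning system's numeric output into coarse text (e.g., high, medium, low) before it is returned to the GAIE. The two callables gaie_respond and ml_assess are hypothetical stand-ins for the underlying models, and the thresholds are illustrative assumptions.

def encode_to_text(value: float, low: float = 0.33, high: float = 0.66) -> str:
    """Map a numeric score in [0, 1] onto a coarse textual category."""
    if value >= high:
        return "high"
    return "medium" if value >= low else "low"

def dialogue_turns(gaie_respond, ml_assess, opening_prompt: str, turns: int = 3):
    """Alternate turns: GAIE text -> ML numeric assessment -> encoded text -> GAIE."""
    transcript = []
    prompt = opening_prompt
    for _ in range(turns):
        question = gaie_respond(prompt)                  # GAIE formulates its next question
        score = ml_assess(question)                      # ML system returns a numeric output
        answer = "assessment: " + encode_to_text(score)  # numeric output encoded as words
        transcript.append((question, answer))
        prompt = answer                                  # encoded answer becomes the next GAIE input
    return transcript

# Usage sketch with trivial stand-ins for the two models:
# dialogue_turns(lambda p: "What is the current trading position given " + p + "?",
#                lambda q: 0.72,
#                "market open")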
SFT-106-A-PCT conversation with a second conversation partner) and/or consecutively (e.g., the GAIE concurrently engages in a first conversation with a first conversation partner, followed by a second conversation with a second conversation partner). Such sub-conversations may involve the same or similar topics or expressions (e.g., the GAIE may present the same or similar conversation-initiating expression to each of a plurality of conversation partners, and may concurrently engage each of the plurality of conversation partners in a separate conversation on the same or similar topic). Such sub-conversations may involve different topics or expressions (e.g., the GAIE may present different conversation-initiating expressions to each of a plurality of conversation partners, and may concurrently engage each of the plurality of conversation partners in a separate conversation on different topics). In example embodiments, a first conversation among a first subset of the GAIE and conversation partners may be related to a second conversation among a second subset of the GAIE and conversation partners (e.g., the second subset may engage in a second conversation based on content of the first conversation among a first subgroup). [01313] In example embodiments, one or more of the GAIE and the conversation partner may embody one or more roles. For example, the GAIE may generate expressions based on a role of a conversation starter, a conversation responder, a teacher, a student, a supervisor, a peer, a subordinate, a team member, an independent observer, a researcher, a particular character in a story, an advisor, a caregiver, a therapist, an ally or enabler of a conversation partner, or a competitor or opponent of a conversation partner (e.g., a “devil’s advocate” that presents opposing and/or alternative viewpoints to a belief or argument of a conversation partner). In example embodiments, at least one of the one or more conversation partners embodies one or more of the aforementioned roles or other roles. In example embodiments, a role of a GAIE is relative to a role of a conversation partner (e.g., the GAIE may embody a superior, peer, or subordinate role with respect to a role of a conversation partner). In example embodiments, a role of a GAIE in a first conversation among a first subset of the GAIE and a plurality of conversation partners may be the same as or similar to a role of a GAIE in a second conversation among a first subset of the GAIE and the plurality of conversation partners. In example embodiments, a role of a GAIE in a first conversation among a first subset of the GAIE and a plurality of conversation partners may differ from a role of a GAIE in a second conversation among a first subset of the GAIE and the plurality of conversation partners (e.g., the GAIE may embody a role of a teacher in a first conversation and a role of a student in a second conversation). In example embodiments, a role of a GAIE in a conversation may change over time (e.g., the GAIE may first embody a role of a student in a conversation, and may later change to a role of a teacher in the same conversation). In example embodiments, a GAIE may embody two or more roles in a conversation (e.g., the GAIE may exhibit two personalities in a conversation that respectively represent one of two characters in a
SFT-106-A-PCT story). In example embodiments, a GAIE generates expressions between two or more roles in a conversation (e.g., the GAIE may generate a dialogue between each of two characters in a story). In example embodiments, a GAIE may engage in each of multiple conversations in a same or similar modality (e.g., engaging in multiple text-based conversations concurrently). In example embodiments, a GAIE may engage in each of multiple conversations in different modalities (e.g., engaging in a first conversation via text and a second conversation via voice). [01314] In example embodiments, a GAIE participating in a conversation is associated with an avatar (e.g., a name, color, image, two- or three-dimensional model, voice, or the like). Expressions generated by the GAIE may be presented as if originating from the GAIE (e.g., in the voice associated with the GAIE, or in a speech bubble that is displayed near a visual position of a GAIE in a virtual or augmented-reality environment). In example embodiments, an avatar of a GAIE may be based on a role of the GAIE (e.g., a GAIE embodying a role of a teacher may be associated with an avatar depicting a teacher). In example embodiments, an avatar of a GAIE may be included in a real-world actor, such as a robot in a real-world environment such as a stage performance. [01315] In example embodiments, a GAIE may include generative pretrained transformer elements that may be configured as a language model designed to understand various types of input and produce chat commands for a chat-type interface system. These commands may include software development tasks, API calls, and the like. In example embodiments, such a language model may include input functions that support receiving images, including video, to build textual output, functions, and additional questions that may be injected into the dialogue between the two systems in the dialogue embodiment described above. In example embodiments, this multimodal support may allow for contextual analysis of images and other media formats. In an example, users/customers may upload images or other media into a GAIE enabled platform. Based on aspects of a corresponding input prompt, a multi-modal GAIE may be configured for use in a valuation workflow to identify both macro and micro attributes and their correlated effects on valuation from a plurality of perspectives. In this example, photographs/images of an old car may be input along with a valuation-related prompt. In response, the GAIE may identify one or more typical values based on detected attributes of the car, such as the make/model, etc. The GAIE may further take into account finer details in the image to suggest potential value-altering metrics. In one example, a finer detail in the image such as damaged body panels may reduce the car value below a typical value. In another example, a finer detail in the image that shows a marking consistent with a limited production run may increase the valuation. [01316] In example embodiments, such a GAIE may facilitate workflow orchestration for a process that uses a conversational, generative AI agent and another AI-supported process in an orchestrated sequence. In example embodiments, a GAIE may generate, perform, maintain, and/or
SFT-106-A-PCT supervise one or more workflows in a robotic process automation (RPA) environment. For example, a GAIE may be trained to monitor expressions and/or actions of an individual during interaction with other individuals, and may generate similar expressions and/or perform similar actions during similar interactions between the GAIE and other individuals. In some such scenarios, the GAIE passively observes the individual during the interactions with other individuals and self-trains to behave similarly to the individual in similar interactions with other individuals. In some such scenarios, the individual actively trains and/or teaches the GAIE to generate expressions and/or actions (e.g., by creating and/or performing example or pedagogical interactions with the GAIE), and based on the training and/or teaching, the GAIE behaves similarly during subsequent interactions between the GAIE and other individuals. In example embodiments, the GAIE is trained and/or taught by an individual to perform a behavior while interacting with individuals, and subsequently performs the behavior while interacting with the same individual who provided the training and/or teaching. [01317] In example embodiments, a pretrained GAIE system may have a smart contract analysis engine that determines one or more features of a smart contract that is under consideration by a user. The GAIE may further have a conversation engine that explains the features of the smart contract to the user, including summarizing contents of smart contracts. [01318] In example embodiments, a GAIE may be pre-trained to perform prompt generation based on a data story or a plurality of sources across systems. Example generated prompts may include instructing and/or requesting the pre-trained GAIE to tell a story about a journey of a product, a business relationship, an event, a service provider, a smart container fleet, a robotic fleet, and the like. [01319] In example embodiments, a GAIE may be pre-trained to generate multi-modal and/or multi-media data stories. In an example of in-cabin push-narration of a maintenance/repair sequence of events and locations based on a sensor reading, a sensor detects a drop in oil pressure while an automobile is in use. A data story narrative may be created based on datasets such as 1) the historical oil pressure readings for the specific car; 2) data about oil pressure issues with the make/model/year of the specific car (e.g., service reports, recalls, prevalence of given issues, points of failure, etc.); 3) data regarding remediation of potential issues (timing and types of recommended remediation or repair); 4) data regarding the location of organizations qualified to inspect or repair the diagnosed issue that might be causing the drop in oil pressure, based on proximity to the current location of the car, cost, quality of customer reviews, certifications of employees, and so on. This may be presented to the driver or rider of the vehicle automatically in the form of a paragraph narration similar to “The oil pressure of the vehicle may be X% lower than the recommended level. This may be caused by X, Y or Z. Based on the reading from the Z sensor, it appears that the most
SFT-106-A-PCT likely cause of the oil pressure drop may be X. There are two known points of failure that could potentially cause this pressure drop, and there are 5 service facilities within 5 miles of the car’s current location that are qualified to evaluate and repair this problem. Of these, Shop 1 has the highest customer satisfaction, Shop 2 has the lowest part cost and Shop 3 has the highest number of certified employees for your make and model. The phone number for Shop 1 is… Please indicate if you would like to initiate a call to arrange an appointment for the car to be inspected…” [01320] In example embodiments, the GAIE may receive a plot or outcome of the story, and may generate content that is consistent with the plot or that produces the outcome. In example embodiments, the GAIE may generate a plot or outcome of the story, and may also generate content that is consistent with the GAIE-generated plot or outcome of the story. In example embodiments, the GAIE may receive a world or environment of a story, and may generate content that occurs within the given world or environment. In example embodiments, the GAIE may generate a world or environment of a story, and may also generate content that occurs within the GAIE-generated world or environment. In example embodiments, the GAIE may receive a character or event to be included in a story, and may generate content that includes the given character or event in the story. In example embodiments, the GAIE may generate a character or event to be included in a story, and may also generate content that includes the GAIE-generated character or event in the story. In example embodiments, the GAIE may generate a world, environment, character, event, or the like “from scratch” (e.g., based on randomized inputs). In example embodiments, the GAIE may generate a world, environment, character, event, or the like based on a given world, environment, character, event, or the like (e.g., a story that is based on a real-world public figure or event). [01321] In example embodiments, the GAIE may receive a first story and may generate a second story that is related to the first story. For example, the GAIE may generate a second story that is an alternative retelling of the first story (e.g., a second story that includes a retelling of the first story from a perspective of a different character than a narrating character of the first story). The GAIE may generate a second story that occurs in a same or similar world or environment as the first story, or a different world or environment that is related to a world or environment of the first story. The GAIE may generate a second story that features a character or event of the first story, or a different character or event that is related to a character or event of the first story. [01322] In example embodiments, the GAIE may generate a story from the perspective of a narrator or independent observer of the story (e.g., a third-person story). In example embodiments, the GAIE may generate a story from the perspective of a character or point of view within the story (e.g., a first-person story), including a character generated and/or embodied by the GAIE. In example embodiments, the GAIE may generate a story from the perspective of a listener or audience member to whom the story is presented (e.g., a second-person story). In example
SFT-106-A-PCT embodiments, the GAIE may generate a story from multiple perspectives, such as a first part of a story generated from a perspective of a first character, a second part of the story generated from a perspective of a second character, and a third part of a story generated from a perspective of a narrator. In example embodiments, the GAIE may generate a story involving a sequence of two or more events (e.g., a story that involves two or more events observed by a character). In example embodiments, the GAIE may generate a story involving an event that is portrayed from multiple perspectives (e.g., a story that describes an event from a perspective of a first character, and that also describes the same event from a perspective of a second character). [01323] In example embodiments, a GAIE may generate a static story that remains the same upon retelling. In example embodiments, the GAIE may generate a dynamic story that changes upon retelling (e.g., adding more detail to a story upon each retelling). In example embodiments, a GAIE may change a story based on an input of a user (e.g., based on a choice of outcomes selected by one or more receivers of the story). In example embodiments, a GAIE may generate a story based on one or more inputs received from one or more receivers of the story (e.g., based on a prompt of a user, such as a request to create a story that includes a certain event specified by the user). In example embodiments, a GAIE may receive feedback from a receiver about a story (e.g., an expression of pleasure, displeasure, approval, disapproval, delight, dissatisfaction, confusion, or the like regarding a character, event, or property of the story), and the GAIE may update the story based on the feedback (e.g., adding, removing, or clarifying an event in the story, or switching a perspective of an event from a first character in the story to a second character in the story). [01324] In example embodiments, a GAIE may be trained by loading data (such as structured and un-structured data that may be dominated by numerical or non-text values) to the GAIE. Examples of such training data may include one or more database schemas. Techniques for curation and integration of purpose-specific data, including curation of models as inputs to a GAIE may include curating domain-specific data, data and model discovery. [01325] Candidate areas of innovation enabled by and/or associated with GAIE advances may include user behavior models (optionally with feedback and personalization), group clustering and similarity, personality typing, governance of inputs and process, explaining the basis of GAIE knowledge and proof points, genetic programming with feedback functions, intelligent agents, voice assistants and other user experiences, transactional agents (counterparty discovery and negotiation), agents that deal with other agents, opportunity miners, automated discovery of opportunities for agent generation and application, user interfaces that adapt to the user and context, hybrid content generation, collaboration units of humans and generative AI, purpose-specific data integration, a selected set of data sources, curation of data as models as input to generative AI, and the like.
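By way of non-limiting illustration, the following Python sketch shows one possible way to curate structured, schema-described data into prompt/response pairs of the kind contemplated in paragraph [01324]; the function name, table layout, and pair format are hypothetical examples only and are not part of the disclosed system.

    # Illustrative sketch only: converting schema-described, numeric-heavy records into
    # prompt/response pairs for curating domain-specific GAIE training data ([01324]).
    # All names and formats are hypothetical.
    import json

    def schema_to_training_pairs(table_name, schema, rows):
        """Turn a database schema plus sample rows into text prompt/response pairs."""
        pairs = []
        # Describe the schema itself so the model learns the structure of the data.
        schema_text = ", ".join(f"{col} ({dtype})" for col, dtype in schema.items())
        pairs.append({
            "prompt": f"Describe the columns of the {table_name} table.",
            "response": f"The {table_name} table has the columns: {schema_text}.",
        })
        # Describe individual records so numeric values gain textual context.
        for row in rows:
            row_text = ", ".join(f"{col}={row[col]}" for col in schema)
            pairs.append({
                "prompt": f"Summarize this {table_name} record: {row_text}",
                "response": f"A {table_name} record where " +
                            "; ".join(f"{col} is {row[col]}" for col in schema) + ".",
            })
        return pairs

    if __name__ == "__main__":
        schema = {"vehicle_id": "TEXT", "oil_pressure_psi": "REAL", "odometer_miles": "INTEGER"}
        rows = [{"vehicle_id": "V-100", "oil_pressure_psi": 27.5, "odometer_miles": 48210}]
        print(json.dumps(schema_to_training_pairs("telemetry", schema, rows), indent=2))

Pairs produced in this way could then be mixed with unstructured text during fine-tuning or in-context pre-training of a GAIE instance.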
SFT-106-A-PCT [01326] In embodiments of a GAIE-enabled system, such as one for robotic process automation, the GAIE system may summarize a set of actions being subjected to robotic automation and describe context for the actions, such as, “I found these properties as fitting your criteria because of the following features. Which ones are most attractive?” In this way, a process automation system enabled with GAIE may solicit feedback for faster feedback-based training. [01327] In example embodiments, emerging capabilities of GAIE technology may greatly improve upon earlier versions in terms of, for example, integration of domain-specific knowledge (e.g., math) with a chat interface. Further emerging capabilities may include being better informed about and for processing prompts of complex topics. Yet further, knowledge organization is becoming much improved as GAIE systems evolve. In example embodiments, updated GAIEs may correctly answer a prompt asking about today’s date, whereas prior versions may answer that today’s date (e.g., the current date) may be the date on which the GAIE was last trained. [01328] In example embodiments, a context pretrained (e.g., subject matter focused) GAIE may provide better personalization than a base GAIE instance. In general, while a base GAIE, if explicitly informed of details of the user may attempt to personalize its responses, a subject matter focused or other pre-trained GAIE may be configured with and/or with access to structured information about users (e.g., determined based on user identification and/or prompt-based clues, and the like) to provide inherent, latent context for a dialogue that includes user personalized responses. [01329] In example embodiments, a GAIE is configured to support interpretability and/or explainability of its outputs. In example embodiments, a GAIE provides, along with an output, a description of a basis of the output, such as an explanation of the reason for generating this particular output in response to an input. In example embodiments, a GAIE provides, along with an output, a description of an internal state of the GAIE that resulted in the output, such as a set of variational parameters of a variational encoder that were processed in combination with an input to produce an output, and/or an internal state of the GAIE due to a previous processing of the GAIE that resulted in the output (e.g., similar to a recurrent neural network (RNN)). In example embodiments, a GAIE provides, along with an output, an indication of one or more subsets of features of an input that are particularly associated with the output (e.g., in a GAIE that outputs a caption or summary of an image, the GAIE can also identify the particular portions or elements of the image that are associated with the caption or portions of the summary). [01330] In example embodiments, an advanced GAIE, such as one pretrained for subject matter specific operation, may be trained for improved epistemology, to help determine evidence of the content that it represents as facts in responses that it provides. One example of improved epistemology may include citing sources of knowledge pertinent to facts in a response as a step
SFT-106-A-PCT toward proof of facts of a response – essentially a way of the GAIE “showing its work,” or at least where its work originates. In example embodiments, a GAIE generates output based on information received from one or more external sources (e.g., one or more messages in a message set, or one or more websites on the Internet), and the GAIE indicates one or more portions of the information that are associated with the output (e.g., one or more websites on the Internet that provided information that is included in the output of the GAIE).
[01331] An advanced GAIE as described and envisioned herein may maintain contextual awareness across chat (user-prompt/GAIE-response) interactions. Maintaining contextual awareness may help avoid the GAIE beginning each chat session from scratch, with no context as to prior chats with the same user. Maintaining contextual awareness may also enable picking up and resuming a conversation from earlier interactions between the GAIE and a user. Yet further, maintaining contextual awareness and awareness of the passage of time between interaction sessions may facilitate adapting responses to prompts in a later resumed chat session based on trained knowledge of the intervening passage of time and/or changing circumstances. In an example, a GAIE may determine that a deadline described in an earlier chat has expired, that a consequential intervening event has occurred (your home-town team lost the big game), and the like. Further, contextual awareness across time-separate chat sessions may be highly valuable when being employed for projects that may have real-world physical constraints on time (e.g., smart contract negotiation may involve human evaluation, discussion, and decision making that may take time based, for example, on other priorities seeking involvement of the human). This may determine the difference between treating each conversation as individual/compartmentalized/isolated, and treating ongoing, time-separated conversations as resumable, optionally as if (almost) no time had passed. In example embodiments, a GAIE may be configured with a contextualization module that maintains some notion of conversation sessions and interconnections that may be referred to (e.g., a conversation from yesterday) for details and continuity. This contextualization may further enable avoiding repeating responses, making it more efficient to reference a previous conversation. Yet further, a contextualization module may provide context to the GAIE of other conversations between the user and the system, between other users and the system, and the like.
[01332] In such a contextually maintained instance, a context-enabled GAIE may provide a response regarding forecasted weather that references an earlier period of time. In an example, a context-enabled GAIE may provide a weather-related response such as, “On Monday, we discussed the weather, you asked if you would need an umbrella on Wednesday, and I answered ‘probably not’ based on the forecast at that time. I need to inform you that the updated weather forecast indicates that rain may be more likely on Wednesday, so you probably will need an umbrella.”
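By way of non-limiting illustration, the following Python sketch outlines one possible shape for the contextualization module described in paragraph [01331]; the class name, storage scheme, and prompt-prefix format are assumptions made for the example and are not drawn from the disclosure.

    # Illustrative sketch only: a minimal contextualization module that retains prior
    # conversation sessions per user so a later session can resume with awareness of
    # elapsed time ([01331]-[01332]). All names are hypothetical.
    import time
    from collections import defaultdict

    class ContextualizationModule:
        def __init__(self):
            # Map of user id -> list of (timestamp, role, text) tuples across sessions.
            self._history = defaultdict(list)

        def record(self, user_id, role, text):
            self._history[user_id].append((time.time(), role, text))

        def build_context_prefix(self, user_id, max_turns=5):
            """Return a prompt prefix summarizing recent turns and the time gap."""
            turns = self._history[user_id]
            if not turns:
                return "This is the first conversation with this user."
            elapsed_hours = (time.time() - turns[-1][0]) / 3600.0
            recent = "\n".join(f"{role}: {text}" for _, role, text in turns[-max_turns:])
            return (f"About {elapsed_hours:.1f} hours have passed since the last exchange. "
                    f"Recent conversation:\n{recent}\n"
                    "Continue the conversation with this prior context in mind.")

    if __name__ == "__main__":
        ctx = ContextualizationModule()
        ctx.record("user-1", "user", "Will I need an umbrella on Wednesday?")
        ctx.record("user-1", "assistant", "Probably not, based on the current forecast.")
        print(ctx.build_context_prefix("user-1"))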
SFT-106-A-PCT [01333] Other capabilities of emerging GAIE systems may include adapting a GAIE to the generation and operation of digital avatars. In example embodiments, digital avatars may be programmed with their own visual representations. To accomplish greater similarity between an avatar and its owner based on visual and audio interpretation of users, a GAIE training and/or pre-training data set may require information about body language and nonverbal cues, such as gaze, posture, speech pitch and volume, and the like.
[01334] Emerging GAIE systems may include determining and adapting responses with variations and nuances based on, for example, user activities. A user’s physical disposition may influence content production by a GAIE (e.g., presenting different cues) based on whether the user is sitting, walking, driving, exercising, and the like. Further, a GAIE system may adapt responses to prompts based on variations and nuances of real-life interactions versus voice interfaces versus virtual reality. Other aspects that may impact GAIE responding to prompts may include variations and nuances of different cultures, demographics, and the like. Yet further, in example embodiments, methods and systems for advanced GAIE training and operation may include recognition of higher-level communication features of users (humor, sarcasm, dishonesty, double entendre, etc.) and user emotional state, for example.
[01335] In example embodiments, methods and systems for enhancing GAIE platforms, such as those described herein, may include configuring a GAIE to participate in multi-user dialogue, where strict turn-taking interaction with one person might be difficult in a group setting, and where the context of who may be speaking to whom matters for each expression. The more fluid multi-user conversational structure (vs. a turn-taking structure) may indicate that advances to a GAIE may include developing understanding of: social interactions and cues, such as to whom each expression may be directed; group dynamics (e.g., who may be the group leader?) and interpersonal relationships; the notion of threaded discussions with branches; concurrent discussions between various sub-groups of a group; when to chime in with input so as to avoid interrupting other users; some notion about conversational balance, to avoid dominating the conversation; and tact: users’ sensitivity about personal information, and when it may and may not be shared in a group setting based on context, relationships with other users, and the like.
[01336] Independent of whether interactions are one-on-one or multi-user, it is envisioned that a GAIE may be adapted to evolve beyond a turn-taking paradigm. In an example, a GAIE may currently create media (images, music, video, and the like) based on a user prompt (that itself may be one or more types of media), and may refine the created media based on user interactions, such as changing the content in certain ways or extending the boundaries of an image with more content that may be consistent with the existing content (e.g., outpainting). A more sophisticated version of generative AI may flexibly and continuously adapt its generated content to contextual user input
SFT-106-A-PCT and interactions. In an example, generated media may be adapted by the GAIE in response to user interaction with the generated media content, such as in response to allowing a user to virtually walk around inside the content to interact with and/or react to content items. Such a media-adapting GAIE may generate new content or update the content based on the user input/content virtual interactions. Yet further, to facilitate a user virtually interacting immersively with generated content, details about the user may be considered part of the criteria for newly generating and/or updating the media.
[01337] In example embodiments, a media-output enabled GAIE without user immersive interaction and feedback may generate media (e.g., a first image) based on a prompt in which a user specifies a theme for a story. The user may then specify a series of scenes that follow, and the GAIE generates an image for each scene, leading to a storyboard series for the story.
[01338] When a media-output enabled GAIE is teamed with user immersive capabilities, the user may control, for example, an avatar that may walk around within the scene and interact with generated media objects. Based, for example, on an order and manner with which the user traverses the scene and interacts with the objects, the generative algorithm may generate new content (e.g., the user looks at a particular painting on the wall of a gallery and then opens the curtains of a window). Outside the window may be an entire world that may be consistent with the particular painting that the user viewed. If the user chooses to move the avatar into that world, the painting on the wall updates to reflect the user’s interactions.
[01339] In another example of immersive user-generated media content engagement, a user may request a science fiction story. In addition to generating a story based on tropes that are generally relevant to science fiction, the GAIE may include tropes that are likely familiar to the user, such as based on the user’s age, culture, other interests, etc. (such as science fiction versions of characters that are well-known in the oeuvre of myth and literature to which the user belongs). In some cases, the algorithm may even include individuals in the created story that are analogous to celebrities or public figures in the user’s culture or generation, or even the user’s own friends and acquaintances.
[01340] In example embodiments, a superintelligence system may be based on a pre-trained GAIE that facilitates automated discovery of relevant domain-specific knowledge and examples. The superintelligence system may further use a pre-trained advanced GAIE to leverage domain-specific examples to generate content. Yet further, the superintelligence system may include a genetic programming capability to create novel variation. In example embodiments, a superintelligence system may further include feedback systems (e.g., collaborative filtering and automated outcome tracking) to prune variation to favorable outcomes (financial, personalization, group targeting, and the like).
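By way of non-limiting illustration, the following Python sketch shows one possible generate-score-prune loop of the kind outlined in paragraph [01340], applied to a toy itinerary; the mutation operator, scoring function, and outcome tracker are hypothetical stand-ins for the genetic programming and feedback systems referenced above.

    # Illustrative sketch only: candidate variants are produced, scored by a feedback
    # function (e.g., tracked outcomes), and pruned toward favorable outcomes ([01340]).
    import random

    def mutate(itinerary):
        """Create a novel variation by swapping two waypoints (a toy 'genetic' operator)."""
        variant = itinerary[:]
        i, j = random.sample(range(len(variant)), 2)
        variant[i], variant[j] = variant[j], variant[i]
        return variant

    def feedback_score(itinerary, outcome_tracker):
        """Score a variant using tracked outcomes, weighting earlier stops more heavily."""
        return sum(outcome_tracker.get(stop, 0.0) / (1 + i)
                   for i, stop in enumerate(itinerary))

    def evolve(seed, outcome_tracker, generations=20, population=8, keep=3):
        candidates = [seed]
        for _ in range(generations):
            candidates += [mutate(random.choice(candidates)) for _ in range(population)]
            candidates.sort(key=lambda c: feedback_score(c, outcome_tracker), reverse=True)
            candidates = candidates[:keep]  # prune variation to favorable outcomes
        return candidates[0]

    if __name__ == "__main__":
        tracker = {"museum": 0.9, "diner": 0.4, "overlook": 0.8, "mall": 0.1}
        print(evolve(["mall", "diner", "museum", "overlook"], tracker))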
SFT-106-A-PCT [01341] In example embodiments, an adapted GAIE system may improve rider satisfaction through, for example, generative itineraries that facilitate automated discovery of relevant domain-specific knowledge and examples. Such an AI system may facilitate user discovery of a series of available routes to a known destination (e.g., where a tourist’s next hotel reservation takes place). Such a GAIE system may discover and express to the rider content for potential waypoints on those routes, including dining, shopping, and tourism opportunities, as well as friends or potential group activities (a lesson, a hike, an exercise class).
[01342] In example embodiments, an adapted GAIE system may improve rider satisfaction by leveraging domain-specific examples to generate content. In an example, the GAIE may generate an itinerary for a day of travel from a current location (or a planned location) to the destination, including recommendations and ratings, photographs, time windows, and contingency options (if this goes quickly, you may add a side trip to that).
[01343] In example embodiments, an adapted GAIE system may improve rider satisfaction through use of genetic programming to create and present novel variation, such as by generating a variety of new itineraries and novel ways of presenting them (e.g., with a variety of styles of graphic art, introduction of humor, introduction of deep historical background (e.g., coverage of historical/indigenous people, and the like)). Novel ways of presenting may include gamifying the content of the itinerary, ascribing rewards for hypermiling the itinerary, ascribing rewards for completion of tasks related to experiencing it, offering a scavenger hunt, setting up a contest for the best photograph or funniest photograph, and the like. Other aspects of use of an adapted GAIE for novel presentation may include competing for the best addition of content for the next version of the itinerary, generating legacy content that memorializes the experience, and generating varied legacy content elements for sharing to different audiences including parents, travel companions, and friends, emphasizing specific areas of interest of the intended audience (e.g., deep content on birds for a birder, etc.). An adapted GAIE system may improve rider satisfaction through use of genetic programming plus feedback systems (e.g., collaborative filtering and automated outcome tracking) to prune variation to favorable outcomes (financial, personalization, group targeting, etc.), tracking financial impact on local businesses (spending), tracking user satisfaction as reported, tracking user satisfaction as indicated by physiological monitoring, tracking time at each destination as an indicator of enjoyment, adjusting future itineraries based on feedback functions, and the like.
[01344] In example embodiments, other features may include an in-cabin voice interface for processing of information regarding objects in proximity of a current location; allowing querying of the transportation-pretrained GAIE by voice for everything from physical entities in proximity to the vehicle to the history of the area, average cost of housing, employment data, crime data,
SFT-106-A-PCT recommended route based on criteria specified by vehicle occupant (“take me through NYC on a route that maximizes the prior residences of well-known artists”), and the like. The GAIE system may interface with a vehicle navigation system to map a route. In this regard, an adapted GAIE system may improve rider satisfaction through deployment of a voice interface for constructing a route based upon a criterion provided by a vehicle occupant. [01345] In example embodiments, a transportation subject matter based GAIE system may interact with a machine learning and/or AI system trained using a corpus of data relating to a defined geographic location. This system may include an in-cabin voice interface for accessing and querying the machine learning and/or AI system. The system may be configured with a processor for automatically selecting from the corpus those data that relate to a current location datum associated with a vehicle. Yet further, a navigation system interface for facilitating presentation (for presenting) of a route to a vehicle occupant, wherein the route may be selected based on a query datum provided by the vehicle occupant and the resulting output of the machine learning/AI system. [01346] In example embodiments, a GAIE system may include a tool to query maintenance records for models of vehicles to offer a user a prospective view of the likely timing of failure and points of failure in a particular model (e.g., struts to first fail after 38 months of daily use and/or X miles); offer comparisons to other models; integrate location data into the forecast based on areas traveled, types of roads, amount of traffic, and the like. In example embodiments, a GAIE system may be adapted and/or pretrained for use in a transportation domain and may predict points of failure in a vehicle make/model based upon a criterion related to vehicle usage, and the like. [01347] Another candidate use of a transportation-domain focused GAIE may include establishing a conversational dialogue and/or interaction with a passenger and/or driver about a topic of the user’s choice, simply to maintain passenger and/or driver alertness on long routes. In example embodiments, a system may include an individual alertness sensor that determines an alertness of an individual in a vehicle, and a conversation engine that engages the individual in a conversation based on the alertness of the individual. [01348] In example embodiments, deployment of a transportation subject matter configured GAIE may include dialogue-driven trip planning. In example embodiments, a user prompts the GAIE to plan a trip from origin to destination with stops for food and fuel along the way. In response, the GAIE may provide some initial suggestions. The user may interact via dialogue to make explicit changes and/or prompt the GAIE to provide another/alternate suggestion. For example, the user may say “I’d rather stop at a restaurant than get fast food,” or “I’d prefer not to use this highway because there may be a lot of traffic,” or “I’d like to stop for a break every 60-90 minutes.” In response to this dialogue prompt, the GAIE may update the suggestion based on the user’s specific
SFT-106-A-PCT feedback; this may be much more fluid and natural than conventional, hard-coded travel planning and adjustment. In example embodiments, a system having a route planning engine may determine a suggested route for travel of a user, and a conversation engine may adjust the suggested route based on a conversation with the user about the suggested route. [01349] In example embodiments, deployment of a transportation subject matter configured GAIE may facilitate defusing driver frustration, such as by settling down the user through calming conversation. [01350] In example embodiments, deployment of a configured GAIE may facilitate dialogue- driven knowledge discovery about a travel condition. In an example, a high number of vehicles are stopping or performing erratically at a particular intersection. A prompted conversation may be initiated between the GAIE and drivers (perhaps just past the location of the anomaly) in which the rider may ask what was happening there and why the other vehicles were behaving as they did. This type of conversational interface may be much more fluid than conventional, facts-only information gathering processes (e.g., searching for traffic issues and the like). A more fluid conversational interaction may ultimately determine that there was an accident at the location. However, the dialogue may lead to the GAIE presenting a range of different candidate actions and/or outcomes, such as advising other users who are approaching the location, contacting first responders based on the nature of the problem, and the like. In example embodiments, a system may include a travel anomaly detector that detects a location associated with an anomalous behavior of an operator of a vehicle, and a conversation engine that determines information about the anomaly based on a conversation with the operator of the vehicle. In embodiments, the system may be GAIE-based. The GAIE may be adapted and/or pre-trained for use in a transportation deployment environment. [01351] In example embodiments, a GAIE platform may include an interactive recommendation engine for nearby or en route options related to entertainment, dining, scenic viewing, and the like. A voice enabled user interface for a GAIE may include a recommendation engine that may be configured to provide recommendations for dining, entertainment, or scenic viewing destinations wherein the destinations are within a prescribed distance from the desired travel route. [01352] In example embodiments, an adapted GAIE deployment may include conversational interfaces for personal assistants, such as voice-based interaction with drivers; this by itself may be a vast improvement over known interfaces, (e.g., visual / tactile input with some very simple and low-spec voice interfaces). [01353] A pre-trained GAIE may overcome problems with current transportation system voice interfaces that may be limited in bandwidth to communicate with a driver, due to a driver’s fluctuating attention span and degree of perceived safety. In example embodiments, when a driver
SFT-106-A-PCT may be driving on a familiar highway with cruise control engaged, their ability to listen, speak, and interact may be much higher than when they are driving (a) somewhere new and unfamiliar, (b) in circumstances that require fast reflexes and many decisions in a short period of time, such as stop-and-go traffic, (c) in poor weather conditions, and/or (d) in circumstances where vehicle passengers are loud or engaging in distracting behaviors. Therefore, a voice interface for a pre-trained GAIE may be adapted to be selective as to: (1) what information to present (e.g., prioritizing based on subject matter: emergency information vs. driving-related information vs. social media updates) and (2) when to present information (e.g., don’t talk to the user while they are driving in tense circumstances, such as approaching a highway exit ramp; wait for a moment when the user has available attention, such as waiting at a traffic light; or ask the user if now is a good time to discuss).
[01354] In example embodiments, a GAIE may be pre-trained for use by and/or in cooperative operation with a digital twin engine, such as an instance of an executive digital twin and the like. In an exemplary deployment, a GAIE may interact with a digital twin to provide a narrative about a topic of the digital twin to a viewer. In this example, the digital twin may interact with the GAIE (e.g., through an API and the like) to generate a narrative summary for a CEO and a detailed narrative for a CFO.
[01355] Executive digital twins may be configured for a particular role or user. Therefore, a GAIE system with a digital twin interface may improve executive digital twin capabilities by curating the data for, and populating content for consumption by, executive digital twins for different roles. In an example, a GAIE may receive information about the executive digital twin as well as about the intended human being represented by the executive digital twin (e.g., the role of the user). The GAIE may determine a degree of narrative detail for each executive digital twin. This may be based on generic executive digital twin/user role criteria and/or refined through interaction with a particular user for the executive digital twin. In example embodiments, a CEO with a tech focus may receive a more “in-depth” narrative relating to tech or R&D, whereas a CEO with a financial background may end up receiving narratives that are more focused on financial analysis but less granular on tech-related features.
[01356] In example embodiments, a GAIE system that interacts with a digital twin engine (e.g., an executive digital twin instance and/or engine) may determine, of the potential universe of content on which it is trained, what may be relevant and what may be noise or unrelated for the specific narrative topic, the target human consumer, and the like. Based on this relevance determination, the GAIE system may generate the output data based on the relevant data and the determined degree of detail.
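By way of non-limiting illustration, the following Python sketch shows one possible way to map a user role to a degree of narrative detail and to filter relevant content before prompting a GAIE, consistent with paragraphs [01355]-[01356]; the role labels, detail levels, and relevance tags are hypothetical.

    # Illustrative sketch only: selecting a degree of narrative detail and filtering
    # relevant content for an executive digital twin based on the role of the
    # represented user ([01355]-[01356]). All labels are hypothetical.
    ROLE_DETAIL = {
        "CEO": {"detail": "summary", "focus": {"strategy", "finance", "technology"}},
        "CFO": {"detail": "detailed", "focus": {"finance"}},
        "CTO": {"detail": "detailed", "focus": {"technology", "r_and_d"}},
    }

    def build_narrative_request(role, content_items):
        """Select relevant items for the role and assemble a GAIE prompt specification."""
        profile = ROLE_DETAIL.get(role, {"detail": "summary", "focus": set()})
        relevant = [item for item in content_items
                    if item["topic"] in profile["focus"]] or content_items
        return {
            "detail_level": profile["detail"],
            "sources": [item["id"] for item in relevant],
            "prompt": (f"Generate a {profile['detail']} narrative for the {role} "
                       "covering: " + ", ".join(item["title"] for item in relevant)),
        }

    if __name__ == "__main__":
        items = [
            {"id": 1, "topic": "finance", "title": "Quarterly revenue trend"},
            {"id": 2, "topic": "r_and_d", "title": "Battery prototype results"},
        ]
        print(build_narrative_request("CFO", items))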
SFT-106-A-PCT [01357] Further, the GAIE system may also select real time data sources to connect to a target / requesting executive digital twin. The GAIE may further configure consumption pipelines for those sources on the spot (e.g., data source identification, data requests for identified data sources, API configuration, and the like). Therefore, in this example the GAIE system would be identifying data sources and connecting them to an executive digital twin instance/engine. [01358] An example use case may include an executive digital twin that has access to full financial data from a previous timeframe (e.g., a previous year/quarter/month, and the like). The executive digital twin may enable access by the GAIE to all of this data. The GAIE may determine a degree of detail of the data for the intended viewer (e.g., target consumer of a narrative regarding a topic captured in the full financial data). [01359] In the case of a target consumer/view having a role of CEO, the GAIE may determine that the narrative for the CEO will include key insights but not full details. The GAIE may then generate a narrative of the top insights for a target time-frame (e.g., a current quarter) from at least the received data. [01360] A pre-trained GAIE may be used to generate, manage, and/or manipulate digital twins, such as by describing attributes of a digital twin, describing interactions with other digital twins or environments, describing simulations, using digital twin simulation data to generate content, enabling context-adaptive executive digital twins, facilitating development of narratives about ongoing, real time operations, tuned to the preferred conversation style of a user represented by a digital twin, and the like. In example embodiments, a context-adaptive executive digital twin integrated with a generative conversational AI system may be configured to generate a set of narratives about operations of an enterprise based on an input data set of real-time sensor data from the operations of the enterprise. The digital twin (or human user) may prompt the GAIE and/or conversational AI system to compare financials with real-time sensor data. [01361] A GAIE may be adapted (e.g., pre-trained) to facilitate enhancement of AI training data associated with a digital twin application. In example embodiments, a method may include using an AI conversational agent to create synthetic training data. [01362] Further in association with digital twin technology, a GAIE may be adapted for summarizing highly granular data for consumption by an executive digital twin. In this regard, an executive digital twin system may include an intelligent agent that receives a set of customization features from a user (e.g., an executive represented by the digital twin) that include a role of the user within an organization. The intelligent agent may also determine a respective granularity level of a report based on the customization features. In example embodiments, the set of customization features include granularity designations for different types of reports. Yet further, the intelligent agent determines the granularity level of a report based on the role of the user within an
SFT-106-A-PCT organization. Further, the subject matter of the report may be generated based on the role of the user within the organization.
[01363] In example embodiments, a speech-based user interface for customizing a level of specificity for generating executive digital twin reports may be operatively coupled to a customized GAIE that processes the speech into a set of report instructions (and optionally report content) based on aspects of the user(s). An example of a speech-based request that may be processed as described may include, “I’d like an executive-summary level report on predictive maintenance” or “I’d like a detailed report on competitor analysis.” The speech-based user interface may respond to such a request by directing a corresponding executive digital twin system to feed a specificity level for parameters to a generative AI engine (e.g., GAIE) as additional input along with the data. In this example, IoT data from manufacturing facilities may be used in predictive maintenance. A response to a prompt regarding predictive maintenance may be customized with a level of specificity based on target report consumer role(s), such as for an operations-based role. A level of specificity may include what the costs are, when the maintenance is needed by, what the predicted downtime may be, how to offset and/or time the maintenance activity, and the like. For a financial-based role, specificity levels may be adapted to address what the disruption may do to the bottom line in the short term; how it impacts our supply; what the disruption may do to our market share; whether it will impact our stock price; and the like.
[01364] When a digital twin may be used to model an individual, a fine-tuned GAIE may be used to coordinate the digital twin with the human for improved fidelity (e.g., when the human behaves or reacts differently than the digital twin predicts, a GAIE may initiate a dialogue with the user to determine why, and the results may be used to update the digital twin model for the individual). Instead of having a human expert occasionally participate in automated digital twin model training (e.g., to correct errors or provide new examples, and the like), a corresponding GAIE may occasionally query the user to solicit more information to update the digital twin model of the individual. As an example, a system may include a digital twin that models an individual, and may further include a conversation engine that facilitates determining an update of the digital twin based on a conversation with the individual that is associated with a difference between an action of the individual and a corresponding action prediction by the digital twin.
[01365] In example embodiments, a GAIE system may be configured for use in an automated manufacturing environment. In one example, a user may prepare a descriptive prompt of a desired product to have it 3D printed. The GAIE system may generate a 3D printing set of instructions, such as a configuration of an automated 3D printing machine and a rendering indicative of a result of the 3D printing machine following the instructions. In another example, a user may include a
SFT-106-A-PCT photo/video of product as a prompt along with a request for instructions to 3D print an improved version, such as “I want this bike but I want different tires and I want it to be red.” [01366] Another exemplary use of a pre-trained GAIE may include using user behavioral data to generate guiding recommendations for energy conservation, usage shifting, and the like. In particular, a recommendation system for energy conservation, usage shifting, or optimization may include an integrated generative, conversational AI system that adapts generated output based on user behavior from a user behavior data set. [01367] In example embodiments, an adapted GAIE may facilitate management of energy resources. An energy resource management system may be enhanced to provide advanced intelligence (e.g., superintelligence) to plan, manage, and/or govern DERs and energy generation, storage, consumption, and transmission facilities. Elements of a superintelligent energy management system may include automated discovery of relevant domain-specific knowledge and examples, generative AI to leverage domain-specific examples to generate content, genetic programming to create novel variation, feedback systems (e.g., collaborative filtering and automated outcome tracking) to prune variation to favorable outcomes (financial, personalization, group targeting, etc., etc.), and the like. In an example, a superintelligent AI-enabled management system may be configured to manage a plurality of systems of an energy edge platform via automated discovery, generative AI, genetic programming, and feedback systems. [01368] In example embodiments, a GAIE may be adapted (e.g., trained, pre-trained, and the like) for the field of patents to generate patent claims responsive to being provided a patent disclosure. An enabled GAIE may receive patent claims as a prompt and may generate a supportive patent disclosure therefrom. In example embodiments, an enabled GAIE may be trained to understand a patent structure and a claim structure for a plurality of jurisdictions. [01369] In example embodiments, a GAIE may be pretrained (e.g., finetuned) with a private instance of an enterprise’s intellectual property data (e.g., products, business goals, competitive considerations, core inventive ideas, and the like). In example embodiments, a private instance of enterprise data for patent generation may be configured (e.g., as prompt-response pairs) for finetuning the GAIE instance. [01370] Beyond patent disclosure and figure preparation, a GAIE may be fine-tuned to generate figures, disclosure from figures, claims from figures, office action responses, evidence of use (EOU) for patent monetizing, preparing a matrix of patent claims across a portfolio, high level landscape search strings, enhancement of search strings, and the like. Finetuning may include preparation of prompt-response sets for a range of IP-related actions, such as patent claim assertion, infringement analysis and discovery, claim (term) acceptance and/or rejection, estimate of claim scope broadness, claim quality, and the like. In example embodiments, an IP-tuned GAIE may be
SFT-106-A-PCT pre-trained with information from proceedings related to infringement cases to understand the likelihood of infringement, and the like. [01371] GAIE training and IP-integration may facilitate elaboration of broadly stated inventive concepts into disclosure that reflects robust enablement and/or support. In an example, an outline may be an input prompt for the purposes of drafting a patent application (e.g., disclosure, figures, summary, abstract, and optionally claims). A generated result may become a portion of a subsequent prompt along with a description of the general theme, category, focus area and/or other categorization or classification of innovation. In an example, one may describe a transaction environment processing platform and ask for examples of a technical implementation, system, and/or method design, such as: “In the context of a transaction environment processing platform as previously described, what types of hardware and software might be used to implement a governance engine for the transaction environment?” [01372] Regarding an intellectual property (e.g., patent) monetization-focused development process, a GAIE may facilitate predicting, from a market development view, which domains to select and which categories within domains to emphasize based on the ability to determine where business may be shifting over a longer time (e.g., beyond short-term trends). This may include analyzing historical data and current data for one or more IP domains, optionally in near-real time. An IP-monetization-focused GAIE may tie historical and/or current data to investments and actions having occurred in the IP world for, among other things, patent sales and licensing. An IP- monetizing trained GAIE may also develop particular leads and domain categories with the highest probability of success based on previous sales and/or licensing and/or where the market may be heading. There may be risk in making these decisions but using a trained GAIE may lower this risk so that these decisions become more predictable in the future, especially with company data increasing and likely accessible through various channels.. [01373] A GAIE may be configured, trained, and/or fine-tuned for a range of functions, including, for example, ingestion of proprietary data, determination of a route, determination of an outcome, approval of release/access to data, making a prediction, pattern recognition, and the like. Yet another example application of a fine-tuned GAIE may include layering of voice and visual commands that may be graduated in sound, volume, or spacing similar to flight avionics, thereby generating scripts for voice over of data and/or presentation material. This may enable the development of synthetic speech technology that generates lifelike (AI-generated) voices for podcasts, slideshows, and professional presentations. This may mitigate needs for hiring a voice artist or using any complex recording equipment (e.g., background noise separation, dubbing, and the like).
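By way of non-limiting illustration, the following Python sketch shows one possible way to package a private set of enterprise IP documents into prompt/response pairs for fine-tuning, as contemplated in paragraphs [01369]-[01370]; the JSONL layout, field names, and example records are assumptions made for the sketch.

    # Illustrative sketch only: building prompt/response fine-tuning records for an
    # IP-tuned GAIE instance from a private enterprise document set ([01369]-[01370]).
    # File layout and field names are hypothetical.
    import json

    def build_finetuning_records(ip_documents):
        """Yield prompt/response pairs for several IP-related actions per document."""
        for doc in ip_documents:
            # Claim drafting from a disclosure.
            yield {"prompt": "Draft an independent claim for this disclosure:\n" + doc["disclosure"],
                   "response": doc["claim"]}
            # Summarization of the inventive concept.
            yield {"prompt": "Summarize the core inventive idea:\n" + doc["disclosure"],
                   "response": doc["summary"]}

    def write_jsonl(records, path):
        with open(path, "w", encoding="utf-8") as handle:
            for record in records:
                handle.write(json.dumps(record) + "\n")

    if __name__ == "__main__":
        docs = [{
            "disclosure": "A vehicle digital twin that narrates maintenance needs to a rider.",
            "claim": "A system comprising a digital twin configured to narrate a maintenance state.",
            "summary": "In-cabin narration of a vehicle maintenance state via a digital twin.",
        }]
        write_jsonl(build_finetuning_records(docs), "ip_finetune.jsonl")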
SFT-106-A-PCT [01374] In example embodiments, GAIE systems may be configured for facilitating news delivery from NPC-type avatars to adapt current “clickbait” content to conversationally conveyed world news/happenings. In this example, a metaverse environment may include a news-based GAIE conversation agent configured to conversationally inform users of recent events.
[01375] Further, in the context of metaverse technology, a generative AI conversational agent may be configured to populate the metaverse.
[01376] Yet further, within a context of metaverse technology, a GAIE system may be enabled to augment training data for a customized conversational agent with real-time sensor data sets through collecting information from real-world sensors. In an example, a training data augmentation system may be configured for augmenting training of a conversational agent with data from a real-time sensor data set. Further, a metaverse-associated GAIE system may facilitate augmenting training data for a customized conversational agent with process outcome data. A training data augmentation system may be configured for augmenting training of a conversational agent with process outcome data from a process outcome data set, user behavior data, and the like. In example embodiments, a training data augmentation system based on a GAIE may be enabled (e.g., pre-trained) for augmenting training of a conversational agent with user behavior data from a user behavior data set.
[01377] In example embodiments, application of fine-tuned GAIE systems in the field of governance may facilitate advances in automation of governance, such as governing use of copyrighted material. GAIE-based governance systems may further enhance governing of AI training, such as governing conversational AI training data sets for bias and error, governing conversational AI for contextual appropriateness and other stylistic requirements, and the like. A fine-tuned GAIE system may further improve governing secrecy, such as a progression of what elements of secret, proprietary or confidential information are allowed based on a depth of conversation. Governance may further apply to individuals. Therefore, a governance fine-tuned GAIE system may enhance and/or automate determining a measure of trustworthiness of a user that may be interacting with a generative conversational AI system. Further, a governance fine-tuned GAIE system may enrich governance for a generative AI system, such as determining a measure of trustworthiness of a generative conversational AI system. In general, governance use cases may be expanded further in light of GAIE topic-targeting training capabilities.
[01378] A fine-tuned GAIE system may play a role in systematic risk identification, management, and opportunity mining. GAIE-based risk identification systems may respond to risk-related prompts, such as “What else might we know and should be paying attention to?” by curating data sets and automating the processes of identification of systemic risks, identifying a set of likely
SFT-106-A-PCT scenarios and the risks and opportunities arising from those scenarios, identifying paths for resolution and recommending resolutions. [01379] Yet another area of risk identification and/or management may involve security concerns with GAIE systems that are configured to generate computer executable code. At the least relying on computers to write computer code raises questions about what security measures are effective and what measures are able to be circumvented by the AI. [01380] A further area of risk identification, management and/or opportunity harvesting may apply to copyright material. Automated computer code generation may inadvertently introduce copyrighted material, such as algorithms. A risk-finetuned copyright GAIE may assist in detecting candidate copyright violations in any programmatic code, including machine generated code. [01381] Risk identification of visual training sets (e.g., images, graphs, and the like) may be enhanced by a fine-tuned GAIE that can process these visual training data sets for authenticity indicators that are coded as non-visual data. This may be similar to tail voltage devices providing messages on the end of sine waves. Visual training sets may be coded with non-visual indicators of authenticity that may be detectable by a fine-tuned GAIE. [01382] Yet another risk-identification related area includes fraud detection. Integrating customer fraud reporting and questioning into pretraining data may enrich holistic scoring, which may comprise a composite score that bridges customer evidence, transactions, and environmental trends. In an example, an AI based fraud detection system may integrate customer fraud reports and questioning into a training/query data set to produce a holistic scoring system, utilizing a composite score that combines customer evidence, transaction data, and environmental trends to provide a comprehensive approach to fraud detection. [01383] Imaging applications may benefit from fine-tuned GAIE systems. In example embodiments, optical content (e.g., screen shots and the like) may be processed by machine vision systems so that the GAIE may describe a scene in the optical content using a generative conversational AI agent. In example embodiments, a GAIE may be configured as a first AI/NN sub-system in a Dual Process Artificial Neural Network (DPANN) architecture. Such a DPANN architecture may include, as a second NN sub-system, a formal logic-based and/or fuzzy-based system. Together these DPANN systems may implement learning processes, model management, and the like. In example embodiments, a DPANN architecture may include features that describe building and managing large scale models. [01384] Referring to Fig. 99, a platform for the application of generative AI 9900 may include a robust task-agnostic next-token prediction artificial intelligence model 9902 that operates to predict a next token given a set of inputs encoded as embedded tokens. A robust task-agnostic next-token prediction AI engine 9902 may include deep learning models, which use multi-layered neural
SFT-106-A-PCT networks to process, analyze, and make predictions with complex data, such as language. An objective of the robust next-token prediction AI engine 9902 may include data science modeling through, among other things, use of topic-specific embeddings, attention mechanisms, and decoder-only transformer models. Capabilities of such an engine 9902 may include a pre-training capability to facilitate configuring next-token prediction for specific subject matter (e.g., marketplace item valuation), a tokenizing capability to facilitate converting complex terms into actionable tokens (e.g., converting compound chemical names into fundamental elements), access to distributed training (e.g., data-parallel training and/or model-parallel training, and the like), few-shot learning to reduce training demand for updates, such as new business intelligence data, and the like. In general, the next-token prediction AI engine 9902 may combine large language modeling techniques and decoder-only transformer models to generate powerful foundation models for next-token prediction AI content generation.
[01385] In example embodiments, the next-token prediction AI engine 9902 may be structured with a machine learning (sparse Multi-Layer Perceptron) architecture configured to sparsely activate conditional computation using, for example, mixture-of-experts (MoE) techniques. A machine learning architecture may be configured with expert modules that may be used to process inputs and a gating function that may facilitate assigning expert modules to process portion(s) of input tokens. A machine learning architecture may further include a combination of deterministic routing of input tokens to expert modules and learned routing that uses a portion of input tokens to predict the expert modules for a set of input tokens (a non-limiting illustrative sketch of such routing appears following paragraph [01393] below).
[01386] A GAIE 9900 may be trained to operate within a domain, such as written language, computer programming language, subject matter-specific domains (e.g., a software orchestrated marketplace domain), and the like, to generate content (constructs) that complies with rules of the domain. In general, a GAIE may generate content for any topic for which the GAIE is trained. So, for example, a GAIE may be trained on a topic of pig farmers and may therefore generate language-based descriptions, images, contracts, breeding guidance, textual output, and the like for any of a potentially wide range of pig farmer sub-topics.
[01387] Adapting a generative AI engine for subject matter-specific applications may include pretraining a next-token prediction AI model-based system through the use of, for example, in-context (e.g., application, domain, topic-specific) examples that are responsive to a corresponding prompt. While the next-token predictive capabilities of the underlying next-token prediction AI engine may remain unaffected by this pre-training, subject matter-specific pre-trained instances may be developed/deployed.
[01388] In example embodiments, a platform for the application of generative AI 9900 may include a set of subject matter-specific pretrained examples and prompts 9904. This set 9904 may
SFT-106-A-PCT be configured by analyzing (e.g., by a human expert and/or computer-based expert and/or digital twin) information that characterizes various aspects of the domain to generate example prompts and preferred and/or correct responses. Pretraining may also include training the next-token prediction AI engine 9902 by sampling some text (e.g., prompt/response sets) from the set of subject matter-specific pretrained examples and prompts 9904 and training it to predict a next word, object, and/or term. Pretraining may also include sampling some images, contracts, architectures, and the like to predict a next token. These prompt-response sub-sets may facilitate pre-training the prediction AI engine 9902 for predicting a next token (e.g., word, object, image element, and the like) for various aspects.
[01389] When an instance is implemented for textual generation, such a GAIE instance may be referred to as a natural language generation system that constructs words (e.g., from sub-word tokens), sentences, and paragraphs for a target subject and/or domain.
[01390] In example embodiments, real-world instances of the platform 9900 may require ongoing updates to facilitate the platform 9900 being responsive as aspects of a domain (e.g., a business entity in the domain) change, such as when business goals change, new products are released, competitors merge, new markets emerge, and the like. In this regard, training the platform 9900 with in-context prompts and examples may be automated and repeated as new data is released for an enterprise to prevent snapshot-in-time data aging-based errors. The platform for the application of generative AI 9900 may include an ongoing pre-training module 9928 that processes new and updated content into prompt and/or response sets and interactively iterates through rounds of pre-training. New and updated data and/or information may regularly be found in various subject matter specific information sets, such as: a dataset of medical records (e.g., to assist with medical diagnoses), a dataset of legal documents and court decisions (e.g., to provide legal advice), a release of a new product (e.g., images of the product), or a financial dataset such as SEC filings or analyst reports. In example embodiments, uses of the platform 9900 may include applying the pre-training and optimizing techniques to a range of different domains (e.g., medical diagnosis, business operation, marketplace operation, and the like) to produce a fine-tuned, domain-specific token-predictive engine, including ongoing refinement through (daily) in-context pretraining.
[01391] In example embodiments, an ongoing pre-training module 9928 may work with the next-token prediction AI engine 9902 to update a set of subject matter specific tokens that may be maintained in a subject matter specific instance token storage facility 9908. This subject matter specific instance token storage facility 9908 may be referenced by a subject matter specific instance of the next-token prediction AI engine 9902 during an operational mode (e.g., when processing inputs / prompts). In example embodiments, the platform 9900 may include a plurality of sets of
SFT-106-A-PCT subject matter specific tokens that may be maintained by corresponding ongoing pre-training modules 9928. [01392] Training, however, may not ensure that the responses to prompts are correct every time. In general, a business entity is likely to be less interested in a tool that provides answers that are probably right and may differ from time to time. A product that can provide accurate responses (e.g., including taking actions) based on what the end-user wants vastly increases the potential use cases and product value. A high level of accuracy and integration with operational systems may enable such a tool to go beyond just generating new content to be more productive; through integration with workflows, it may facilitate automating workflow actions. In this regard, the platform for the application of generative AI 9900 may also include a pre-training optimizing module 9906 that may work cooperatively with the ongoing pre-training module 9928 to further refine accuracy of responses to prompts for a domain. The pre-training optimizing circuit 9906 may facilitate improved accuracy of in-context responses, task-specific fine-tuning, and for sparse model variants of the platform 9900, enrich few-shot learning capabilities. In example embodiments, fine tuning may further benefit the platform by reducing bias that may be present in the training data. This may be essential to ensure subject matter specific jargon is adapted as training data changes (e.g., in the digital marketing/promotional space, ensure that “influencer” is replaced with “creator”). Further, a pre-training optimizing engine 9906 may provide a wider range of prompts and responses based on user preferences (e.g., speaking styles) to enrich the platform’s ability to provide user-centric responses. In example embodiments, user-centric responses may include fine tuning the platform 9900 for different roles in an organization. As an example, when a user in a financial planning role inquires about a business development topic, responses may be directed toward the financial planning role (e.g., as compared to a customer/client inquiry about that topic). [01393] A platform for the application of generative AI 9900 may be used to produce text-based content for a multi-national entity with employees who speak different languages. While the platform 9900 may be trained (and pre-trained) to operate interactively in a plurality of languages, generating automated content may benefit from use of a neural machine translation module 9910. In example embodiments, a portion of the entity in a first jurisdiction may produce content in a first language and resulting recurring generated output (e.g., types of reports and the like) may be generated in the first language. However, employees who speak a second language may benefit from the type of report when translated into the employee’s native language. Therefore, associating the neural machine translation module 9910 with the platform may prove valuable while reducing compute demand for the platform 9900.
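By way of non-limiting illustration, the following Python sketch shows a toy version of the learned (gating-based) portion of the mixture-of-experts routing referenced in paragraph [01385]; the dimensions, expert count, and random weights are placeholders, and the deterministic-routing component mentioned in that paragraph is omitted for brevity.

    # Illustrative sketch only: a gating function sparsely assigns each input token
    # embedding to a small number of expert modules and mixes their outputs ([01385]).
    import numpy as np

    rng = np.random.default_rng(0)
    NUM_EXPERTS, DIM, TOP_K = 4, 8, 2

    # Each "expert" is a simple linear module; a learned gate scores experts per token.
    expert_weights = [rng.standard_normal((DIM, DIM)) for _ in range(NUM_EXPERTS)]
    gate_weights = rng.standard_normal((DIM, NUM_EXPERTS))

    def softmax(x):
        z = np.exp(x - x.max(axis=-1, keepdims=True))
        return z / z.sum(axis=-1, keepdims=True)

    def moe_layer(tokens):
        """Route each token embedding to its top-k experts and mix their outputs."""
        gate_scores = softmax(tokens @ gate_weights)            # (n_tokens, NUM_EXPERTS)
        outputs = np.zeros_like(tokens)
        for t, token in enumerate(tokens):
            top_experts = np.argsort(gate_scores[t])[-TOP_K:]   # sparse activation
            weights = gate_scores[t, top_experts]
            weights = weights / weights.sum()
            for w, e in zip(weights, top_experts):
                outputs[t] += w * (token @ expert_weights[e])
        return outputs

    if __name__ == "__main__":
        token_embeddings = rng.standard_normal((3, DIM))
        print(moe_layer(token_embeddings).shape)  # (3, 8)

Because only the top-k experts are evaluated per token, most expert parameters stay inactive for any given input, which is the conditional, sparse activation property the paragraph describes.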
SFT-106-A-PCT [01394] Emerging next-token prediction AI systems feature increasingly adaptable next-token prediction capabilities. These capabilities may be further adapted to assist in closed problem set solution prediction, such as allocation of resources, deployment of a robotic fleet, and the like. To achieve greater prediction capabilities, a subject matter specific next-token prediction AI-based engine, such as the platform for the application of generative AI 9900, may include a solution-predictive engine 9912 that leverages next-token (e.g., next word) predictive capabilities to predict a most-likely solution to a closed solution-set problem. This may be accomplished optionally through use of sets of problem domain-specific pre-training prompts and examples. Such examples may be adapted for different user preferences. In example embodiments, each user in a closed problem set environment may generate prompts and responses that may enable the platform 9900 to respond to the user based on the user’s inquiry style. Alternatively, the solution prediction engine 9912 may adapt a user’s prompt and/or configure a prompt based on user preferences to attempt to deliver responses that are consistent with a user’s preferences (e.g., engineering-based responses for an engineer role-user and legal-based responses for a lawyer).
[01395] For more complex analysis and decision making/predicting, a formal logic-based AI system 9914 may be incorporated into and/or be referenced by the subject matter specific platform 9900.
[01396] Further, the basic concepts of next-token prediction of a generative AI engine, such as the platform for subject matter-based application of generative AI 9900, may be applied to analyzed expressions of images, audio (e.g., encoded text), video (e.g., sequences of related images), programmatic code (domain-specific text with readily understood rules), and the like. Therefore, a next-token prediction AI platform (e.g., platform 9900) may further include an image/video analysis engine 9916 (optionally NN-based) that adds a spatial aspect to the next-token predictive capabilities of a next-token prediction AI system. Images used for training may include 3D CAD images (for a domain that includes physical devices such as vehicles), radiologic images (for a medical analysis domain), business performance graphs, schematics, and the like. In example embodiments, aspects of the underlying task-agnostic next-token prediction AI engine 9902 may be adapted (e.g., different embeddings, neural network structures, and the like) for different input formats, such as images, temporal-spatial content, and the like.
[01397] The platform 9900 may further include an expert review and approval portal 9918 through which an expert (e.g., human / digital twin, and the like) can review, edit, and approve generated content. Examples include review and adaptation by a subject matter specific data story expert, a data scientist, and the like. The expert review and approval portal 9918 may operate cooperatively with, for example, the pre-training optimizing module 9906 that may receive and analyze expert
SFT-106-A-PCT feedback (e.g., edits to the content and the like) for opportunities to further optimize the platform 9900. [01398] The platform 9900 may further include a training data generation facility 9920 that may generate natural language prompts, such as subject matter specific prompts that may be applied by, for example, the pre-training optimizing engine 9906 to increase platform response accuracy and/or efficiency while fine tuning a subject matter specific instance. [01399] In example embodiments, the platform 9900 may further be configured to access a corpus of domain and/or problem relevant content as a step in responding to a prompt. In example embodiments, the platform may be pre-trained on the content of the corpus. While the content of the corpus may not be directly included in the response, such as if it provides a level of detail beyond what the platform 9900 has been trained to provide in a response, it may be cited in the response to facilitate identifying and expressing sources from which a response is derived. These external source references may be handled via a citation module 9922. [01400] Business decisions are often context-based. Understanding both the context for a decision and aspects and/or assumptions of the decision process may prove highly valuable for evaluating, for example, competing decisions and/or recommendations. Context may include both tangible and intangible factors. An intangible factor may include historical interactions between parties involved in the evaluation process, for example. A decision process may include not only assumptions on which a decision or recommendation is based, but also criteria by which tangible factors are processed, evaluated, analyzed, and the like. To provide such context for generated output of the platform 9900, an interpretability engine 9924 may be incorporated into and/or be accessible to the platform 9900. An objective of use of the interpretability engine 9924 may be to generate additional content that reflects context for, among other things, how the next-token prediction AI instance operates and/or generates a corresponding output. [01401] In example embodiments, the next-token predictive capabilities of a next-token prediction AI engine 9902 may be utilized for developing a set of emergent data science predictive and/or interpretive skills. While such a platform may be trained directly on various data sets, context for elements and results in such data sets may be a rich source of complementary training data. By associating data elements with descriptions thereof, the platform 9900 may gain data science capabilities, such as to group by or pivot categorical sums, infer feature importance, derive correlations, predict unseen test cases, and the like. In this regard, a data science emergent skill development system 9926 may be utilized by the platform to further enhance subject matter specific applicability and utility. [01402] In embodiments, the transportation methods and systems described herein may include systems and methods for processing graph data, for example to represent transportation networks, using machine
SFT-106-A-PCT learning algorithms to analyze and optimize vehicle routing, traffic flow, and network efficiency. Such systems and methods may be integrated into vehicle navigation systems and transportation management infrastructures to provide real-time, adaptive routing suggestions and traffic predictions. In embodiments, graph data may be used to represent transportation networks, where nodes correspond to intersections, points of interest, or other significant locations, and edges represent the pathways, such as roads or transit lines, connecting these nodes. Attributes such as travel time, distance, traffic conditions, and road type may be further associated with the edges. In embodiments, a plurality of machine learning algorithms may process graph data and provide actionable insights. For example, graph neural networks (GNNs) may be used to capture the spatial structure of the transportation network and to predict traffic conditions or travel times for each edge in the graph; reinforcement learning (RL) may be used to optimize routing decisions in real time, learning from past experiences to improve future route suggestions; clustering algorithms may be used to segment the graph data into clusters based on traffic patterns or geographic features to identify areas of congestion or to optimize traffic signal timings; and time-series forecasting models may be employed to predict future traffic conditions based on historical data, enabling the system to anticipate and react to changing traffic patterns. [01403] In various embodiments, one or more techniques involve the processing of graph data using one or more machine learning algorithms. In some such embodiments, the one or more machine learning algorithms include one or more graph neural networks (GNNs). The following discussion provides an overview of graph data and graph neural networks. [01404] In a graph data set, a set of nodes is interconnected by one or more edges that respectively represent a relationship among two or more connected nodes. In many graph data sets, each edge connects two nodes. In other graph data sets that represent hypergraphs, a hyperedge can connect three or more nodes. In various graph data sets, each of the one or more edges is directed or undirected. An undirected edge represents a relationship that relates two or more nodes without any particular ordering of the related nodes. A first undirected relationship that connects a first node N1 and a second node N2 may be equivalent to a second undirected relationship that also connects the first node N1 and the second node N2. In some such graphs, the relationship represents a group to which the two or more related nodes belong. In some such graphs, the relationship represents an undirected and/or omnidirectional connection between two or more nodes. For example, in a graph representing a geographic region, each node may represent a city, and each edge may represent a road that connects two or more cities and that can be traveled in either direction. By contrast, a directed edge includes a direction of the relationship between a first node and a second node. For example, in a graph representing a genealogy or lineage, each node represents a person, and each edge connects a parent to a child. A first directed edge that connects
SFT-106-A-PCT a first node N1 to a second node N2 is not equivalent to a second directed edge that connects the second node N2 to the first node N1. Some graph data sets include one or more unidirectional edges, that is, an edge with one direction among two or more connected nodes. Some graph data sets include one or more multidirectional edges, that is, an edge with two or more directions among the two or more connected nodes. Some graph data sets may include one or more undirected edges, one or more unidirectional edges, and/or one or more multidirectional edges. For example, in a graph representing a geographic region, each node may represent a city; one or more unidirectional edges may represent a one-way road that connects a first city to a second city and can only be traveled from the first city to the second city; and one or more bidirectional or undirected edges that represent a bidirectional road between the first city and the second city that can be traveled in either direction. Some graph data may include, for two or more nodes, a plurality of edges that interconnect the two or more nodes. For example, a graph data set representing a collection of devices may include nodes that respectively correspond to each device of the collection and edges that respectively correspond to an instance of communication and/or interaction among two or more of the devices. In such a graph data set, a particular subset of two or more devices may engage in a plurality, including a multitude, of instances of communication and/or interaction, and may therefore be connected by a plurality, including a multitude, of edges. [01405] Some directed and/or undirected graph data sets may include one or more cycles. For example, in a graph representing a social network, a first edge E1 may connect a first node N1 (representing a first person) and a second node N2 (representing a second person) to represent a relationship between the first person and the second person. A second edge E2 may connect the second node N2 and a third node N3 (representing a third person) to represent a relationship between the second person and the third person. A third edge E3 may connect the third node N3 and the first node N1 to represent a relationship between the third person and the first person. Such cycles can occur in undirected graphs (e.g., edges in a social network graph that indicate mutual relationships among two or more individuals), directed graphs (e.g., edges in a social network graph that indicate that a first person is influenced by a second person, a second person is influenced by a third person, and a third person is influenced by the first person), and/or hypergraphs (e.g., cycles of relationships among three or more clusters that respectively include three or more nodes). Some cyclic graphs may include one or more cycles that are interlinked (e.g., one or more nodes and/or edges that are included in two or more cycles). Other directed and/or undirected graph data sets may be acyclic (e.g., graphs in which nodes are strictly arranged according to a top-down hierarchy). Still other directed and/or undirected graph data sets may be partially acyclic (e.g., mostly acyclic) but may include one or more cycles among one or more subsets of nodes and/or edges.
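By way of a non-limiting illustration of the graph structures described above, the following Python sketch stores nodes connected by directed or undirected edges and checks whether a directed cycle is present; the Graph class, its methods, and the example city names are hypothetical and are not elements recited in this disclosure.

```python
# Minimal sketch of a graph data set: nodes connected by directed or undirected
# edges, with a simple depth-first check for directed cycles. Illustrative only.
from collections import defaultdict


class Graph:
    def __init__(self):
        self.nodes = set()
        self.adj = defaultdict(set)  # node -> set of reachable neighbor nodes

    def add_edge(self, n1, n2, directed=False):
        """Add an edge; an undirected edge is stored as two reachable directions."""
        self.nodes.update((n1, n2))
        self.adj[n1].add(n2)
        if not directed:
            self.adj[n2].add(n1)

    def has_cycle_directed(self):
        """Detect a cycle by depth-first search over directed reachability."""
        WHITE, GRAY, BLACK = 0, 1, 2
        color = {n: WHITE for n in self.nodes}

        def visit(n):
            color[n] = GRAY
            for m in self.adj[n]:
                if color[m] == GRAY or (color[m] == WHITE and visit(m)):
                    return True
            color[n] = BLACK
            return False

        return any(color[n] == WHITE and visit(n) for n in self.nodes)


# Example: three one-way roads among three cities form a directed cycle.
g = Graph()
g.add_edge("A", "B", directed=True)
g.add_edge("B", "C", directed=True)
g.add_edge("C", "A", directed=True)
print(g.has_cycle_directed())  # True
```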
SFT-106-A-PCT [01406] In some graph data sets, one or more nodes include one or more node properties. For example, in a graph representing a geographic area, each node may represent a city, and each node may include one or more node properties that correspond to one or more properties of the city, such as a size, a population, or a latitude and/or longitude coordinate. Each node property may be of various types, including (without limitation) a Boolean value, an integer, a floating-point number, a set of numbers such as a vector, a string, or the like. In some graph data sets, one or more nodes do not include a node property. For example, in a graph data set representing a set of particles, each particle may be identical to each other particle, and there may be no specific data that distinguishes any particle from any other particle. Thus, the nodes of the graph data set may not include any node properties. [01407] In some graph data sets, one or more edges include one or more edge properties. For example, in a graph representing a geographic area, each edge may represent a road, and each edge may include one or more edge properties that correspond to one or more properties of the road, such as a distance, a number of lanes, a direction, a speed limit, a volume of traffic, a start latitude and/or longitude coordinate, and/or an ending latitude and/or longitude coordinate. In some graph data sets, a direction of an edge may be represented as an edge property. Alternatively or additionally, in some graph data sets, a direction of an edge may be represented separately from one or more edge properties. In some graph data sets, one or more edges do not include an edge property. For example, in a graph data set representing a line drawing of a set of points, each edge may represent a line connecting two points, and the edges may be significant only due to connecting two points. Thus, the edges of the graph data set may not include any edge properties. [01408] In some graph data sets, the graph includes one or more graph properties. Such graph properties may be global graph properties that correspond to one or more properties of the entire graph. For example, in a graph data set representing a geographic region, the graph may include graph properties such as a total number of nodes and/or cities, a two-dimensional or three-dimensional area represented by the graph, and/or a latitude and/or longitude of a center of the graph. Such graph properties may be global graph properties that correspond to one or more properties of all of the nodes of the graph. For example, in a graph data set representing a geographic region, the graph may include graph properties such as an average population size of the cities represented by the nodes and/or an average connectedness of each city to other cities included in the graph. [01409] Some graph data sets include a single set of data that includes all nodes and all edges. For example, a graph representing a geographic region may include a set of nodes that represent all cities in the geographic region. Some other graph data sets include one or more subgraphs, wherein each subgraph includes a subset of the nodes of the graph and/or a subset of the edges of the graph.
SFT-106-A-PCT For example, a graph representing a geographic region may include a number of subgraphs, each representing a subregion of the geographic region, and the edges that interconnect the cities within each subregion. As another example, a graph representing a geographic region may include a first subgraph representing cities (e.g., groups of people over a threshold population size and/or population density) and a second subgraph representing towns (e.g., groups of people under the threshold population size and/or population density). In some graph data sets, each node and/or each edge belongs exclusively to one subgraph. In some graph data sets, at least one node and/or at least one edge can belong to two or more subgraphs. For example, in a graph representing a geographic region that includes a number of subgraphs respectively representing different geographic subregion, each node representing a city may be exclusively included in one subgraph, while each edge may interconnect two or more cities within one subgraph (i.e., within one subregion) or may interconnect a first city in a first subgraph (i.e., within a first subregion) and a second city in a second subgraph (i.e., within a second subregion). [01410] Graph neural networks can include features and/or functionality that are the same as or similar to the features and/or functionality of other neural networks. For example, graph neural networks include one or more neurons arranged in various configurations. Each neuron receives one or more inputs from the graph data set or another neuron, evaluates the one or more inputs (e.g., via an activation function), and generates one or more outputs that are delivered to one or more other neurons and/or as an output of the graph neural network. Examples of activation functions that can be included in various neurons of the graph neural network include (without limitation) a Heaviside or unit step activation function, a linear activation function, a rectified linear unit (ReLU) activation function, a logistic activation function, a tanh activation function, a hyperbolic activation function, or the like. [01411] As an example, some graph neural networks include only a single neuron, or only a single layer of neurons that is configured to receive graph data as input and to provide graph data as output of the graph neural network. Some graph neural networks are arranged in a series of two or more layers, wherein input is received by neurons included in a first layer. The output of one or more neurons included in the first layer is delivered, as input, to one or more neurons included in a second layer. For example, each neuron in the first layer may include one or more synapses that respectively interconnect the neuron to one or more neurons of the second layer. In many graph neural networks, each neuron N1 of a preceding layer L1 is connected to each neuron N2 of a following layer by a synapse that includes a weight W. Neuron N2 receives, as input, the output of the neuron N1 multiplied by the weight of the synapse connecting neuron N1 and neuron N2. In many neural networks, layer L1 includes a bias B, which is added to the product of the output of neuron N1 and the weight W of the synapse connecting neuron N1 and neuron N2. As a result, the
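By way of a non-limiting illustration of node properties, edge properties, and subgraphs as described above, the following Python sketch attaches properties to nodes and edges and extracts a subgraph of nodes above a population threshold; all names and values are hypothetical and illustrative only.

```python
# Illustrative sketch (assumed data, not from the disclosure): nodes and edges
# carrying properties, plus extraction of a subgraph by a node property,
# mirroring the city/town subgraph example above.

nodes = {
    "Springfield": {"population": 450_000},
    "Shelbyville": {"population": 120_000},
    "Ogdenville":  {"population": 8_000},
}

edges = [
    # (from, to, edge properties)
    ("Springfield", "Shelbyville", {"distance_km": 35, "lanes": 4}),
    ("Shelbyville", "Ogdenville",  {"distance_km": 60, "lanes": 2}),
]


def subgraph_by_population(min_pop):
    """Return the nodes above a population threshold and the edges among them."""
    kept = {n: p for n, p in nodes.items() if p["population"] >= min_pop}
    kept_edges = [(a, b, p) for a, b, p in edges if a in kept and b in kept]
    return kept, kept_edges


cities, city_roads = subgraph_by_population(100_000)
print(sorted(cities))   # ['Shelbyville', 'Springfield']
print(len(city_roads))  # 1
```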
SFT-106-A-PCT input to neuron N2 includes the sum of the bias B of layer L1 and the product of the output of neuron N1 and the weight W of the synapse connecting neuron N1 and neuron N2. The output of the neurons included in the second layer can be provided as an output of the graph neural network and/or as input to one or more neurons included in a third layer. Each layer of the graph neural network may include a same number of neurons as a preceding and/or following layer of the graph neural network, or a different number of neurons as preceding and/or following layer of the graph neural network. [01412] As another example, some graph neural networks include one or more layers that perform particular functions on the output of neurons of another layer, such as a pooling layer that performs a pooling operation (e.g., a minimum, a maximum, or an average) of the outputs of one or more neurons, and that generates output that is received by one or more other neurons (e.g., one or more neurons in a following layer of the graph neural network) and/or as an output of the graph neural network. For example, some graph neural networks (e.g., graph convolution networks) include one or more convolutional layers, each of which performs a convolution operation to an output of neurons of a preceding layer of the graph neural network. [01413] As another example, some graph neural networks include memory based on an internal state, wherein the processing of a first input data set causes the graph neural network to generate and/or alter an internal state, and the internal state resulting from the processing of one or more earlier input data sets affects the processing of second and later input data sets. That is, the internal state retains a memory of some aspects of earlier processing that contribute to later processing of the graph neural network. Examples of graph neural networks that include memory features and/or stateful features include graph neural networks featuring one or more gated recurrence units (GRUs) and/or one or more long-short-term-memory (LSTM) cells. [01414] As another example, some graph neural networks feature recurrent and/or reentrant properties. For example, at least a portion of output of the graph neural network during a first processing is included as input to the graph neural network during a second or later processing, and/or at least a portion of an output from a layer is provided as input to the same layer or a preceding layer of the graph neural network. As another example, in some graph neural networks, an output of a neuron is also received as input by the same neuron during a same processing of an input and/or a subsequent processing of an input. The output of the neuron may be evaluated (e.g., weighted, such as decayed) before being provided to the neuron as input. As another example, some graph neural networks may include one or more skip connections, in which at least a portion of an output of a first layer is provided as input to a third layer without being processed by a second layer. That is, the output of the first layer is provided as input both to the second layer (which generates a second layer output) and to the third layer. In some such graph neural networks, the
SFT-106-A-PCT third layer receives, as input, either the output of the first layer or the output of the second layer. That is, the third layer multiplexes between the output of the first layer and the output of the second layer. Alternatively or additionally, in some such graph neural networks, the third layer receives, as input, both the output of the first layer and the output of the second layer (e.g., as a concatenation of the output vectors to generate the input vector for the third layer), and/or an aggregation of the output of the first layer and the output of the second layer (e.g., a sum or average of the output of the first layer and the output of the second layer). Examples of graph neural networks that include one or more skip connections include jump knowledge networks and highway graph neural networks (highway GNNs). [01415] As another example, some graph neural networks include two or more subnetworks (e.g., two or more graph neural networks that are configured to process graph data concurrently and/or consecutively). Some graph neural networks include, or are included in, an ensemble of two or more neural networks of the same, similar, or different types (e.g., a graph neural network that outputs data that is processed by a non-graph neural network, Gaussian classifier, random forest, or the like). For example, a random graph forest may include a multitude of graph neural networks, each configured to receive at least a portion of an input graph data set and to generate an output based on a different feature set, different architectures, and/or different forms of processing. The outputs of respective graphs of the random graph forest may be combined in various ways (e.g., a selection of an output based on a minimization and/or maximization of an objective function, or a sum and/or averaging of the outputs) to generate an output of the random graph forest. [01416] In these and other graph neural networks, the number of layers and the configuration of each layer of the graph neural network (e.g., the number of neurons and the activation function used by each neuron of each layer) can be referred to as hyperparameters of the graph neural network that are determined upon generation of the graph neural network. The weights of node synapses and/or the biases of the layers can be referred to as parameters of the graph neural network that are learned through a training or retraining process. Further explanation and/or examples of various concepts of other types of neural networks that can also apply to graph neural networks, and additional concepts that apply to other types of neural networks that can also be included in graph neural networks, are presented elsewhere in this disclosure and/or will be known to or appreciated by persons of ordinary skill in the art. [01417] Unlike other types of neural networks, graph neural networks are configured to receive, process, generate, and/or transform one or more graph data sets. Some graph neural networks are configured to receive data representing and/or derived from a graph data set, such as an input vector that includes data representing one or more nodes of the graph (optionally including one or more node properties of one or more nodes), one or more edges of the graph (optionally including one
SFT-106-A-PCT or more edge properties of one or more edges), and/or one or more graph properties of the graph. Some graph neural networks are configured to receive an input vector comprising all of the data of a graph data set (e.g., all of the data representing all nodes, all edges, and the graph). Some graph neural networks are configured to receive an input vector comprising only a portion of the data of a graph data set (e.g., only a subset of the nodes of the graph and/or only a subset of the edges of the graph). For example, some graph data sets include a number of subgraphs, and the input vector to the graph neural network includes the data for all of the nodes and/or all of the edges included in one subgraph of the graph. The entire graph can be processed by processing (e.g., concurrently and/or consecutively) each subgraph and combining the output resulting from the processing of each subgraph. As another example, a graph data set representing a set of users of a social network may be processed by a graph neural network that receives, as input, a subset of nodes that correspond to the most influential users of the social network (e.g., those having more than a threshold number of social network connections) and a subset of edges that interconnect the nodes representing those users. Some graph neural networks are configured to receive, as input, data derived from a graph data set. For example, a graph data set representing a social network may be processed by a graph neural network that receives, as input, data associated with messages exchanged among users of the social network, and provides, as output, an analysis of the messages. Some graph neural networks are configured to receive, as input, non-graph data (e.g., an input vector including coordinates of roads and/or cities in a geographic region) and generate graph data as output (e.g., a graph including nodes that represent the cities, and edges that represent roads interconnecting the nodes representing the cities). [01418] Some graph neural networks are configured to process input data as graph data. As an example, some graph neural networks are configured to receive, as input, data that represents each of one or more nodes of a graph data set and one or more edges that respectively interconnect two or more nodes of the graph data set. The graph neural network may process a state of each node and/or edge of the input graph data in order to generate an updated state of the node and/or edge. The term “message passing” refers to evaluating and updating the state of a node N or an edge E of a graph based on the states of one or more neighboring nodes N and/or connecting edges E. For example, for each node N1, the graph neural network may evaluate the state of node N1 and/or states of a set of nodes N that are connected to node N1 by at least one edge (e.g., a neighborhood of nodes that includes N1) and may determine an updated state of node N1 based on the state of the node N1 and/or the states of the neighboring nodes N. As another example, for each node N1, the graph neural network may evaluate the state of the node N1 and/or the states of a set of edges E that connect node N1 to one or more other nodes of the graph, and may determine an updated state of node N1 based on the state of the node N1 and/or the states of the edges E. As yet another
SFT-106-A-PCT example, for each edge E1 of the input graph, the graph neural network may evaluate a state of the edge E1 and/or the states of a set of nodes N of the graph that are connected to the edge E1 and may determine an updated state of edge E1 based on the state of the edge E1 and/or the states of the connected nodes N. As yet another example, for each edge E1 of the input graph that connects a set of nodes N of the graph, the graph neural network may evaluate the state of the edge E1 and the states of the set of edges E that are also connected to at least one of the set of nodes N and may determine an updated state of edge E1 based on the state of the edge E1 and/or the states of the other edges. In these and other scenarios, each node N and/or each edge E is evaluated and updated based on a collection of “messages” corresponding to the states of neighboring nodes N and/or connecting edges E. [01419] In some graph neural networks, each node N1 is updated based on a neighborhood of size 1, including only the states of the edges E that are directly connected to node N1 and/or the states of the other nodes N that are directly connected to node N1 by an edge. In some other graph neural networks, each node N1 is updated based on a neighborhood of a size S greater than 1, including the states of other nodes N that are within S edge connections of node N1 and/or edges E that are connected to any such nodes N. In some graph neural networks, each edge E1 is updated based on a neighborhood of size 1, including only the states of the nodes N that edge E1 connects and/or the edges E that are also connected to the nodes N that edge E1 connects. In some other graph neural networks, each edge E1 is updated based on a neighborhood of a size S greater than 1, including the states of other nodes N that are within S edge connections of edge E1 and/or the set of edges E that are connected to any such nodes N. In some graph neural networks with a neighborhood of size greater than 1, one or more first layers of neurons process each node and/or edge based on the nodes and/or edges within a neighborhood of size 1; a second one or more following layers of neurons further process each node and/or edge based on the nodes and/or edges within a neighborhood of size 2; and so on. That is, the first one or more layers update the state of each node and/or edge based on the states of the directly connected nodes and/or edges, and each following one or more layers further updates the state of each node and/or edge additionally based on the states of indirectly connected nodes and/or edges that are one or more further connections away. [01420] In some graph neural networks, the states of nodes N and/or edges E are evaluated and updated concurrently (e.g., the graph neural network may evaluate the features relevant to each node N and/or each edge E to determine an update, and may do so for all nodes N and/or all edges E, before applying the updates to update the internal states of each node N and/or each edge E). In some graph neural networks, the states of nodes N and/or edges E are evaluated and updated consecutively (e.g., the graph neural network may evaluate the features relevant to a first node N1 and update the state of node N1 before evaluating the features relevant to a second node N2 and
SFT-106-A-PCT updating the state of node N2). In some graph neural networks, the states of the nodes N and/or the edges E are consecutively evaluated and updated according to a sequential order (e.g., the graph neural network first evaluates and updates a state of a first node N1 that is of a high priority, and then evaluates and updates a state of a second node N2 that is of a lower priority than N1). In some graph neural networks, a state of a node N2 is evaluated after updating a state of a node N1 and, further, based on the updated state of node N1. In some graph neural networks, a state of an edge E2 is evaluated after updating a state of an edge E1 and, further, based on the updated state of the edge E1. In some graph neural networks, the states of nodes N are concurrently evaluated and updated, and then the states of edges E are concurrently evaluated and updated. In some graph neural networks, the states of edges E are concurrently evaluated and updated, and then the states of nodes N are concurrently evaluated and updated. These variations in the order of updating the nodes N and/or edges E can be variously combined with the previously discussed variations in the processing of neighborhoods. For example, a graph neural network may include a first one or more layers that are configured to evaluate and concurrently update the states of all nodes and edges within a neighborhood of size 1, followed by a second one or more layers that are configured to evaluate and concurrently update the states of all nodes and edges within a neighborhood of size 2. Another graph neural network may include a first one or more layers that are configured to evaluate and concurrently update the states of all nodes within a neighborhood of size 1, followed by a second one or more layers that are configured to evaluate and concurrently update the states of all nodes within a neighborhood of size 2, further followed by one or more layers that are configured to update the states of all edges within a neighborhood of size 1 or more. [01421] Some graph neural networks are configured to evaluate and/or update one or more node properties of one or more nodes of a graph data set. For example, a graph representing a social network may include nodes that represent people, and a graph neural network may evaluate the nodes and/or edges of the graph to predict one or more node properties that correspond to attributes of the person, such as a type of the person, an age of the person, or an opinion of the person. Some graph neural networks are configured to evaluate and/or update one or more edge properties of one or more edges of a graph data set. For example, a graph representing a social network may include nodes that represent people and edges that represent relationships between people, and a graph neural network may evaluate the nodes and/or edges of the graph to predict one or more edge properties that correspond to attributes of a relationship among two or more people, such as a type of the relationship, a strength of a relationship, or a recency of the relationship. Some graph neural networks are configured to evaluate and/or update
SFT-106-A-PCT one or more graph properties of the graph data set. For example, a graph representing a social network may include nodes that represent people and edges that represent relationships between people, and a graph neural network may evaluate the nodes and/or edges of the graph to predict a feature of a social group to which all of the people belong, such as a common interest or a common demographic trait that is shared by many of the people of the social network. [01422] Some graph neural networks are configured to generate graph data as output. The generated graph data may include one or more nodes (optionally including one or more node properties), one or more edges (optionally including one or more edge properties), and/or one or more graph properties. The generated graph data may be based on input graph data. Some graph neural networks may be configured to receive at least a portion of a graph data set as input, and may generate, as output, modified graph data. As an example, the input graph data set may include a number of nodes and a number of edges interconnecting the nodes, and in the output graph data set generated by the graph neural network, each of the nodes and/or edges of the graph may have been updated based on one or more nodes and/or one or more edges of the input graph data. For example, an input graph data set may represent a social network including nodes representing people and edges representing relationships between people. A graph neural network may be configured to receive at least a portion of the input graph data set, and may output an adjusted graph data set, wherein a state of at least one of the nodes and/or at least one of the edges is updated based on the processing of the input data set. For example, various edges representing relationships may be updated to include additional data (e.g., edge properties) to represent an updated relationship between two people represented by nodes. Various nodes may be updated to include additional data (e.g., node properties) to represent updated information about corresponding people based on the relationships. Various graph properties of the at least a portion of the graph data set may be updated based on the updated edges and/or nodes, e.g., a new common interest that is shared among many of the people in the social network. [01423] Some graph neural networks may be configured to output graph data that includes one or more newly discovered nodes based on the input graph data set. For example, an input graph data set representing travel events may include edges that represent routes of travelers and nodes that represent locations of interest. A graph neural network may receive the input graph data set, and based on processing of the routes of the travelers, may output an updated graph data set that includes a new node that represents a new location of interest (e.g., a destination of a large number of recent travelers). The output of the graph neural network may include, for one or more new or existing nodes, one or more new or updated node properties (e.g., a classification of the location of interest based on the travel routes). Alternatively or additionally, some graph neural networks may be configured to output graph data that excludes one or more existing nodes of an input graph
SFT-106-A-PCT data set. For example, based on processing the input data set representing routes of travelers, a graph neural network may output an updated graph data set that excludes one of the nodes of the input graph data set representing a location that is no longer a location of interest (e.g., a destination that travelers no longer visit). [01424] Some graph neural networks may be configured to output graph data that includes one or more newly discovered edges based on the input graph data set. For example, an input graph data set may represent a social network including nodes that represent people and edges that represent connections between people. A graph neural network may receive the input graph data set, and based on processing of the people and connections, may output an updated graph data set that includes a new connection between two people (e.g., a likely relationship based on shared traits and/or mutual relationships with a number of other people representing a social circle). The output of the graph neural network may include, for one or more new or existing edges, one or more new or updated edge properties (e.g., a classification of a relationship between two or more people). Alternatively or additionally, some graph neural networks may be configured to output graph data that excludes one or more existing edges of an input graph data set. For example, based on processing the input data set representing a social network, a graph neural network may output an updated graph data set that excludes one or more of the edges of the input data set representing a relationship that no longer exists (e.g., a lost connection based on a splitting of a social circle). [01425] Some graph neural networks may output graph data that is based on data that does not represent an input graph data set. For example, a graph neural network may be configured to receive non-graph data, such as lists of travel routes of drivers, and may generate and output a graph data set including nodes that represent locations of interest and edges that interconnect the locations of interest. Conversely, some graph neural networks may receive input that includes at least a portion of a graph data set and that outputs non-graph data based on the input graph data. For example, a graph neural network may be configured to receive input including graph data, such as a graph of a social network including nodes that represent people and edges that represent connections, and to output non-graph data based on analyses of the input graph data, such as statistics about the people represented in the social network and activity occurring therein. [01426] GRAPH NEURAL NETWORKS - PROPERTIES [01427] Graph neural networks, including (without limitation) those described above, may be subject to various properties and/or considerations of design and/or operation. These considerations may affect their architecture, processing, implementation, deployment, efficiency, and/or performance.
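By way of a non-limiting illustration of the message-passing updates discussed above, the following Python sketch performs a single neighborhood-aggregation step over a small undirected graph; the random weight matrices stand in for parameters that a trained graph neural network would learn, and the function name message_passing_step is hypothetical.

```python
# A minimal, assumption-laden sketch of one round of "message passing": each
# node's state is updated from its own state and the mean of its one-hop
# neighbors' states, passed through placeholder linear maps and a ReLU.
import numpy as np

rng = np.random.default_rng(0)

# Node states: 4 nodes, each with a 3-dimensional feature vector.
X = rng.normal(size=(4, 3))

# Adjacency matrix for an undirected 4-node graph (1 = edge present).
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

W_self, W_neigh = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))


def message_passing_step(X, A, W_self, W_neigh):
    """Update every node from its own state and the mean of its neighbors' states."""
    degree = A.sum(axis=1, keepdims=True)
    neighbor_mean = (A @ X) / np.maximum(degree, 1.0)
    return np.maximum(X @ W_self + neighbor_mean @ W_neigh, 0.0)  # ReLU activation


X1 = message_passing_step(X, A, W_self, W_neigh)   # one-hop neighborhood
X2 = message_passing_step(X1, A, W_self, W_neigh)  # stacking steps widens the neighborhood
print(X2.shape)  # (4, 3)
```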
SFT-106-A-PCT [01428] As previously discussed, graph neural networks may include edges with varying directionality, such as undirected edges (e.g., edges that represent distances between pairs of nodes that represent cities in a graph that represents a region), unidirectional edges (e.g., edges that represent parent/child relationships among nodes that represent people in a graph that represents a genealogy or lineage), and/or multidirectional edges (e.g., bidirectional edges that represent bidirectional roads between nodes that represent cities in a graph that represents a region). In some graph data sets, all of the edges have a same directionality (e.g., all edges are undirected). A graph neural network can be configured to receive an input vector corresponding to the input data set and to process the edges according to the uniform directionality of the edges (e.g., processing undirected edges without regard to the order in which the nodes are represented as being connected to the edge). Other graph data sets may include edges with different directionality (e.g., in a graph that represents a region, edges can represent roads between nodes that represent cities, and each edge can be either unidirectional to represent a one-way road or bidirectional to represent a two- way road). A graph neural network can be configured to receive an input vector corresponding to the input data set and to process the edges according to the distinct directionality of each edge (e.g., processing a unidirectional edge in a different manner than a bidirectional edge). As one such example, the graph neural network can interpret a bidirectional edge connecting two nodes N1, N2 as a first unidirectional edge that connects node N1 to N2 and a second unidirectional edge that connected node N2 to node N1. The pair of unidirectional edges can share various edge properties and/or can be evaluated and/or updated in a same or similar manner (e.g., for a pair of unidirectional edges corresponding to a bidirectional road, the graph neural network can process data representing a weather condition in a same or similar manner to both unidirectional edges associated with the bidirectional road). [01429] As previously discussed, some graph neural networks are configured to process nodes according to a “message passing” paradigm, in which the evaluation of each node N1 is based on the states and/or evaluations of other nodes within a neighborhood of the node N1 and/or the edges that connect the node N1 to other nodes in the neighborhood of the node N1. That is, the state of each node in the neighborhood of the node N1 and/or the state of each edge that connects N1 to other nodes of the neighborhood serves as a “message” that informs the evaluation and/or updating of the state of node N1 by the graph neural network. Alternatively or additionally, the evaluation of each edge E1 is based on the states and/or evaluations of other edges within a neighborhood of the edge E1. That is, the state of each node connected by edge E1, and, optionally, the states of other nodes connected to those nodes and/or other edges in such connections, serves as a “message” that informs the evaluation and/or updating of the state of edge E1 by the graph neural network. In each case, the size of the neighborhood can vary; for example, the graph neural network can
SFT-106-A-PCT evaluate each node according to a one-hop neighborhood or a multi-hop neighborhood. Graph neural networks that perform multi-hop neighborhood evaluation can include multiple layers, where a first one or more layers are configured to process a first hop between a node N1 and a one- hop neighborhood including its directly connected neighbors and/or directly connected edges, and a second one or more layers following the first one or more layers are configured to process a second hop between the nodes and/or edges of the one-hop neighborhood and additional nodes and/or edges that are directly connected to the nodes and/or edges of the one-hop neighborhood. In this manner, each node N1 is first evaluated and/or updated based on a message passing among the one-hop neighborhood, and is then evaluated and/or updated based on additional messages within the two-hop neighborhood, etc. Other architectures of graph neural networks may perform multi-hop neighborhood evaluation in other ways, e.g., by processing individual clusters of nodes and/or edges to perform message passing among the nodes and/or edges of each cluster, and then performing additional message passing between clusters to update nodes and/or edges of each cluster based on the nodes and/or edges of one or more neighboring clusters. [01430] In some scenarios, a graph may include nodes and/or edges that are stored, represented, and/or provided as input that is not subject to any particular order (e.g., nodes representing points in a line drawing may not have any node properties, and may therefore be represented in arbitrarily different orders in the input graph data set). In such scenarios, a multitude of semantically equivalent input graph data sets may be logically equivalent to one another. That is, a first representation of a graph may include the nodes and/or edges in a particular order, while a second representation of the same graph may include the same nodes and/or edges in a different order. While both representations of the graph are logically equivalent, the different ordering in which the nodes and/or edges are provided as input to the graph neural network may cause the graph neural network to provide different output. In other scenarios, a graph comprising a set of nodes and a set of interconnecting edges may be organized, stored, and/or represented in a particular order. For example, the nodes may be ordered according to a property of the nodes, and/or edges may be ordered according to a property of the edges (e.g., in a social network, nodes representing people may be ordered according to the alphabetical order of their names, and edges representing relationships may be ordered according to the alphabetical order of the names of the related people). In such scenarios, changes to the order and/or the selected subsets of graph data may result in different input data sets that represent the same or similar (e.g., logically equivalent) graphs. Due to the manner in which a graph neural network processes the input graph data set, logically equivalent input graph data sets may result in different and logically distinct output data. [01431] In such scenarios, it may be undesirable for the graph neural network to generate different output for different but logically equivalent representations. That is, it may be desirable for the
SFT-106-A-PCT graph neural network to provide the same or equivalent output for different but logically equivalent representations of a graph. Graph neural networks that exhibit this property can be referred to as “permutation invariant,” that is, capable of providing output that does not vary across permutations in the representation of the input graph data set. A variety of techniques may be used to achieve, improve, and/or promote permutation invariance. Some such techniques involve changing representations of the input data set. For example, before processing an input graph data set, the graph neural network may reorder the input data set (e.g., by reordering the units of an input vector) such that nodes and edges are represented in a consistent order. As one such example, an input graph data set may include nodes that represent cities, and the input graph data set may include the nodes and/or edges in varying orders. Prior to processing the input graph data set, the graph neural network may reorder the nodes based on latitude and longitude coordinates of the cities, and the edges can similarly be reordered based on the latitude and longitude coordinates of the nodes connected by each edge. Thus, any representation of the graph including nodes that represent the same set of cities is processed in a similar manner. Similar reordering may involve various node properties and/or edge properties, including (without limitation) an alphabetic ordering of names in a graph including nodes that represent people, a chronological ordering of dates in a graph including nodes that represent events, a numeric ordering of content-based hashcodes in a graph including nodes that represent objects, and/or a numeric ordering of identifiers in a graph including nodes that possess unique numeric identifiers. Other techniques for achieving, improving, and/or promoting permutation invariance involve transforming an input graph data set into a different, permutation-invariant representation that is provided as input to and processed by the graph neural network. For example, a graph data set representing a two-dimensional image or a three- dimensional point cloud may include nodes that represent pixels and edges that represent spatial relationships (e.g., distances and/or orientations) between respective pairs of pixels of the image or respective pairs of points in the point cloud. Different orderings of the pixels and/or points may result in differently ordered, but logically equivalent, graph data sets for a particular image or point cloud. Instead of processing the graph data sets as input, a graph neural network may be configured to convert the input graph data set into a spectral representation, e.g., based on a spectral decomposition of a Laplacian L of the input graph data set. Instead of encoding information about individual pixels and/or points, the spectral representation instead encodes spectral components of the input graph data sets. The spectral components can be ordered in various ways (e.g., by frequency and/or polynomial order) to generate a permutation-invariant input vector, and the processing of the permutation-invariant input vector by a graph neural network may result in invariant (e.g., identical or at least similar) output of the graph neural network for various permutations of the input graph data set.
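By way of a non-limiting illustration of the reordering technique described above, the following Python sketch sorts nodes into a canonical order (here, by latitude and longitude) before flattening their features into an input vector, so that two differently ordered but logically equivalent representations of the same graph yield an identical input; the data and the function name canonical_input_vector are hypothetical.

```python
# Hedged sketch: canonical node ordering as one route to permutation invariance.
import numpy as np

# Each node: (name, latitude, longitude, population in millions)
ordering_a = [("Bea", 41.9, -87.6, 2.7), ("Ada", 40.7, -74.0, 8.4), ("Cal", 34.1, -118.2, 3.9)]
ordering_b = [("Cal", 34.1, -118.2, 3.9), ("Bea", 41.9, -87.6, 2.7), ("Ada", 40.7, -74.0, 8.4)]


def canonical_input_vector(nodes):
    """Sort nodes by coordinates, then flatten their features into one input vector."""
    ordered = sorted(nodes, key=lambda n: (n[1], n[2]))
    return np.array([feat for _, lat, lon, pop in ordered for feat in (lat, lon, pop)])


# Both orderings of the same graph produce the same canonical input vector.
assert np.array_equal(canonical_input_vector(ordering_a), canonical_input_vector(ordering_b))
print(canonical_input_vector(ordering_a))
```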
SFT-106-A-PCT [01432] Alternatively or additionally, some techniques for achieving, improving, and/or promoting permutation invariance may relate to the structure of the graph neural network. For example, as an alternative or addition to reordering an input graph data set, a graph neural network may include one or more layers of neurons that process an input vector and generate permutation-invariant output. As one such example, a graph neural network may include a pooling layer that receives an input vector (e.g., an input vector corresponding to an input graph data set, and/or an input vector corresponding to an output of one or more previous layers of the graph neural network) and generates output that is pooled over the input, such as a minimum, maximum, or average of the units of the input. Because operations such as a minimum, maximum, and/or average over a data set are permutation-invariant mathematical operations, the graph neural network may therefore exhibit permutation-invariance of output based on the pooling operation for differently ordered but logically equivalent representations of a particular graph data set. As another such example, a graph neural network may include a filtering layer that receives an input vector (e.g., an input vector corresponding to an input graph data set, and/or an input vector corresponding to an output of one or more previous layers of the graph neural network) and generates output that is filtered based on certain permutation-invariant criteria. For example, in a graph representing a social network that includes nodes representing people, a layer of the graph neural network may filter the nodes to limit the input data set based on the top n nodes of the graph neural network that correspond to the most influential people in the social network. Such filtering may be based, e.g., on a count of the edges of each node (i.e., a count of the number of relationships of each person to other people of the social network), or a weighted calculation based on the influence of the nodes to which each node is related and/or the strength of each such relationship. Because such filtering operation are permutation-invariant logical operations, the graph neural network may therefore exhibit permutation-invariance of output based on the filtering operation for differently ordered but logically equivalent representations of the nodes (i.e., people) and edges (i.e., relationships) of the social network. As yet another example, some graph neural networks include an encoding or “bottleneck” layer, in which an output from N neurons of a preceding layer is received as input and processed by a following layer that includes fewer than N neurons. Due to the smaller number of neurons in the following layer, the volume of data that encodes features of the output of the preceding layer is compressed into a smaller volume of data that encodes features of the output of the following layer. This compression of features, based on learned parameters and training of the graph neural network to produce expected outputs, can cause the graph neural network to encode only more significant features of the processed data, and to discard less significant features of the processed data. The reduced-size output of the neurons of the following layer can be referred to as a latent space encoding of the input feature set. For example, whereas an input graph data set may
SFT-106-A-PCT include nodes that correspond to all pixels of an image of a cat, and an output of a previous layer of the graph neural network may include partially processed information about each node (i.e., each pixel) of the image of the cat, the output of the following layer of the graph neural network may include only features that correspond to visually significant features of the cat (e.g., features that correspond to the pixels that represent the distinctively shaped ears, eyes, nose, and mouth of the cat). Thus, the latent space encoding may reduce the processed input of the graph data set into a smaller encoding of nodes that represent significant visual features of the graph data set, and may exclude data about nodes that do not represent significant visual features of the graph data set. Many such graph neural networks include one or more “bottleneck” layers as one or more autoencoder layers, e.g., layers that automatically learn to generate latent space encodings of input data sets. As one such example, deep generative models may be used to generate output graph data that corresponds to various data types (e.g., images, text, video, scene graphs, or the like) based on an encoding, including an autoencoding, of an input such as a prompt or a random seed. Additional techniques for achieving or promoting permutation invariance are presented elsewhere in this disclosure and/or will be known to or appreciated by persons of ordinary skill in the art. [01433] In some scenarios, a graph data set may include a large number of nodes and/or a large number of edges. For example, a graph data set representing a social network may include thousands of nodes that represent people and millions of edges that represent relationships among the people. The size of the graph data set may result in an input vector that is very large (e.g., a very long input vector), and that might require a correspondingly large graph neural network to process (e.g., a graph neural network featuring millions of weights that connect the input graph data set to the nodes of a first layer of the graph neural network). The size of the input data set may require large and perhaps prohibitive computational resources to receive and/or process the graph data set (e.g., large and costly storage and/or processing to store the input graph data set and/or the parameters and/or hyperparameters of the graph neural network, and/or a protracted delay in completing the processing of an input graph data set by the graph neural network). Further, the graph data set may exhibit properties of sparsity that cause a large portion of the input data set to be inconsequential. For example, a graph data set representing a social network may be encoded as a vector of N units respectively representing each node (i.e., each person) followed by a vector of NxN units that respectively represent a potential relationship between each node N1 and each node N2. Edges that represent a multidimensional mapping of connections between nodes (such as an NxN mapping of edges that represent possible connections between nodes) can be referred to as an adjacency matrix. However, in the social network, most people may have only a small number of relationships (i.e., far less than N-1 relationships with all other people of the social network). Thus, in the vector encoding of the input graph data set, a large majority of the NxN units that
SFT-106-A-PCT respectively represent potential relationships between each pair of nodes N1, N2 (i.e., the adjacency matrix) may be negative or empty (representing no relationship), and only a very small minority of the NxN units that respectively represent potential relationships between each pair of nodes N1, N2 may be positive or non-empty (representing a relationship). As another example, a graph data set representing a region may include N nodes representing cities and NxN edges representing possible roads between cities. However, if each city is only directly connected to a small number of neighboring cities, then a large majority of the NxN edges representing possible roads between cities (i.e., the adjacency matrix) may be negative or empty (representing no road connection), and only a very small minority of the NxN units that respectively represent potential roads between each pair of nodes N1, N2 may be positive or non-empty (representing an existing road). In such cases, the sparsity of an input vector representing the graph neural network may inefficiently consume computational resources (e.g., inefficiently applying storage and/or computation to large numbers of negative or empty units of the input vector) and/or may unproductively delay the completion of processing of the input graph data set. [01434] Various techniques can be applied to reduce the sparsity of graph data sets and the processing of such graph data sets by graph neural networks. As a first example, the graph neural network can be pruned to reduce the number of nodes and/or edges included as an input data set (e.g., filtering the nodes of a graph neural network to a small cluster of densely related nodes, such as a small number of highly interrelated nodes that represent the members of a social circle in a social network). As a second example, the graph neural network can be encoded in a way that reduces sparsity. For example, rather than encoding the input graph data set as an adjacency matrix, the graph neural network may be configured to receive an encoding of the input graph data set as an adjacency list, i.e., as a list of edges that respectively connect two or more nodes of the graph. Due to encoding only information about existing edges, an adjacency list can eliminate or at least reduce the encoding of nonexistent edges. As a result, the size of the adjacency list may therefore be much smaller than a size of a corresponding adjacency matrix., and can therefore eliminate or at least reduce the sparsity of the input graph data set. The adjacency list can include edge properties of the edges of the graph data set. The adjacency list can be limited to a particular size (e.g., the top N most influential connections in a social network). The nodes of the input graph data set can be limited based on the edges included in the adjacency list (e.g., excluding any nodes that are not connected to at least one of the edges included in the adjacency list). As yet another example, rather than encoding an entire set of nodes and edges, a graph neural network can be represented as an encoding of the nodes and edges. For example, a graph data set may include nodes that represent pixels of an image and edges that represent spatial representations of the pixels. However, if large areas of the image are inconsequential (e.g., dark, empty, or not associated
SFT-106-A-PCT with any notable objects in a segmented image), then large portions of the nodes and/or edges would be inconsequential. Instead, the image can be reencoded as a frequency-domain representation as coefficients associated with respective frequencies of visual features within the image. The frequency-domain representation may present greater information density than the adjacency matrix of pixels, and therefore may present an input to the graph neural network that encodes the visual features of the input graph data set with reduced sparsity. [01435] Other techniques for eliminating or reducing sparsity, and therefore increasing efficiency, involve the architecture of the graph neural network. For example, the input graph data set may encode edges as an adjacency matrix, and a first layer of the graph neural network may reencode the edges of the input graph data set as an adjacency list for further processing by the graph neural network. As another example, the graph neural network may include a first one or more layers that is configured to process an entirety or at least a large portion of the nodes and/or edges of an input graph data set, followed by a filtering layer that is configured to limit an output of the first one or more layers of the graph neural network. For example, in a graph data set that includes nodes that represent people and edges that represent connections, a first one or more layers may process all of the nodes and/or edges, and a filtering layer can limit the further processing of the output of the first one or more layers to the nodes and/or edges for which the outputs of the first one or more layers are above a threshold (e.g., an influence and/or relationship significance threshold). As still another example, the graph neural network may receive a sparse graph input data set but may only process a portion of the input graph data set (e.g., one or more random sampling of subsets of nodes and/or edges). In some cases, the graph neural network may compare results of the processing of subsets of the input graph data set (e.g., randomly sampled subsets of the nodes and/or edges) and may aggregate such results until the results appear to converge within a confidence threshold. In this manner, the graph neural network may generate an acceptable output within the confidence threshold while avoiding processing an entirety of the sparse input graph data set. Many such techniques for eliminating and/or reducing sparsity are presented elsewhere in this disclosure and/or will be known to or appreciated by persons of ordinary skill in the art. [01436] GRAPH NEURAL NETWORKS – INPUT, PROCESSING, AND OUTPUT [01437] Graph data sets may represent a variety of data types, including (without limitation) maps of geographic regions, including nodes representing cities and edges representing roads that connect two or more cities; social networks, including nodes representing people and edges representing relationships between two or more people; communication networks, including nodes representing people or devices and edges representing communication connections between the nodes or edges; economies, including nodes representing companies and edges representing
SFT-106-A-PCT transactions between two or more companies; molecules, including nodes representing atoms and edges representing bonds between two or more atoms; collections of events, including nodes representing individual events and edges representing causal relationships among two or more events; and periods of time, including nodes representing events and edges representing chronological periods among two or more events. Graph data sets may also represent data types, such as passages of text, including nodes representing words and edges representing relationships among two or more words; images, including nodes representing pixels and edges representing spatial relationships among two or more pixels; object graphs, including nodes representing objects and edges representing dependencies among two or more objects; and three-dimensional spatial maps, including nodes representing three-dimensional objects and edges representing spatial relationships among two or more of the three-dimensional objects. Some graph data sets may include two or more subgraphs. In some such graph data sets, each node and/or each edge is exclusively included in one subgraph. In some other graph data sets, at least one node and/or at least one edge may be included in two or more subgraphs, or in zero subgraphs. Some graph data sets are associated with non-graph data that is also included as input to a graph neural network. For example, a graph neural network that evaluates traffic patterns within a geographic region may receive, as input, both an input graph data set that includes nodes that represent cities and edges that represent roads interconnecting the cities, and also non-graph data representing traffic and/or weather features within the geographic region (e.g., traffic volume estimates and current or forecasted weather conditions that affect the traffic patterns). [01438] As another example, some graph data sets may include an indication of zero or more cycles occurring among the nodes and/or edges of the graph data set. For example, a directed and/or undirected graph data set may include an indication that a particular cycle exists within the graph and includes a particular subset of nodes and/or edges. Alternatively, a directed and/or undirected graph data set may include an indication that the graph is acyclic and does not include any cycles. A graph neural network may be configured to receive, as input, and process a graph data set that includes an indication of zero or more cycles. [01439] As another example, some graph data sets may include nodes for which the edges provide spatial dimensions. As a first example, in a graph representing a geographic region, nodes that represent cities are related by edges that represent distances, wherein the nodes and interrelated edges can form a spatial map of the geographic region. As a second example, in a graph representing a molecule, nodes that represent atoms are related by edges that represent chemical bonds between the atoms, and the arrangement of atoms by the bonds forms a three-dimensional molecular structure. In some such scenarios, the spatial relationships are well-defined by the nodes and edges. In other such scenarios, the spatial relationships can be inferred based on semantic
SFT-106-A-PCT relationships among the nodes and/or edges of the graph data set. For example, in a graph representing a language, nodes that represent words are related by edges that represent semantic relatedness of the words within a high-dimensional language space. A language model can generate an embedding of the words of the language in a multidimensional embedding space, wherein nodes that are close together within the embedding space represent synonyms, closely related concepts, or words that frequently appear together in certain contexts, whereas nodes that are not close together within the embedding space represent unrelated concepts or words that do not commonly appear together in various contexts. A variety of graph embedding models may be applied to this task, including (without limitation) DeepWalk, node2vec, line, and/or GraphSAGE. A graph neural network can be configured to receive, as input, an embedding of a graph data set instead of representations of the nodes and/or edges of the graph data set. Alternatively, a graph neural network can be configured to receive an input graph data set including representations of the nodes and/or edges of the graph data set, generate an embedding based on the input graph data set, and apply further processing to the embedding instead of to the input graph data set. A graph neural network that is configured to process an embedding instead of an input graph data set may exhibit greater permutation invariance (e.g., due to the semantic associations represented by the embedding) and/or increased efficiency due to reduced sparsity of the input. [01440] Some graph data sets include representations of each of one or more nodes and each of one or more edges. Some graph neural networks are configured to receive and process such representations of graph neural networks. For example, the graph neural network may be configured to receive an input vector including an array of data representing each of the one or more nodes followed by an array of data representing each of the one or more edges, either as an adjacency matrix of possible edges between pairs of nodes or an adjacency list of existing edges. The input vector may encode the nodes and/or edges in a particular order (e.g., a priority order of nodes and/or a weight order of edges) or in an unordered manner. Alternatively or additionally, the graph data set may include and/or encode other types of information about each of one or more nodes and/or each of one or more edges of the graph data set. For example, the graph may include a hierarchical organization of nodes and/or edges relative to one another and/or to a fixed reference point. The graph neural network may be configured to receive and process an input graph data set that includes an indication of the arrangement of one or more nodes and/or one or more edges in the hierarchical organization. [01441] As another example, a graph may include an indication of a centrality of one or more nodes and/or edges within the graph (e.g., a graph of a social network including nodes that are ranked based on a centrality of each node to a cluster). The graph neural network may be configured
SFT-106-A-PCT to receive and process an input graph data set that includes an indication of a centrality of one or more nodes and/or one or more edges in the graph. [01442] As another example, a graph may include an indication of a degree of connectivity of one or more nodes and/or edges within the graph (e.g., a graph of a social network including nodes that are ranked according to a count of other nodes to which each node is connected by one or more edges, and/or a degree of significance of a relationship represented by an edge based on the nature of the relationship and/or the degrees of the nodes connected by the edge). The graph neural network may be configured to receive and process an input graph data set that includes an indication of a degree of one or more nodes and/or one or more edges in the graph. [01443] As another example, a graph may include an indication of one or more clusters occurring within the graph. For example, a graph may include a result of a clustering analysis of the graph, e.g., a determination of k clusters within the graph and an identification of the nodes and/or edges that are included in each cluster. The clusters may be determined by a k-means clustering analysis, a Gaussian mixture model of with variable numbers of clusters and variable Gaussian orders, or the like. A graph may include a clustering coefficient of one or more nodes and/or one or more edges (e.g., a measurement of a degree to which at least some of the nodes and/or edges of a subgraph of the graph are clustered based on similarity and/or activity). The graph neural network may be configured to receive and process an input graph data set that includes an indication of a clustering coefficient of one or more nodes and/or one or more edges in the graph or a subgraph thereof. [01444] As another example, a graph may include an indication of a graphlet degree vector that indicates a graphlet that is represented one or more times in the graph. For example, in a graph representing atoms in a regular structure such as a crystal, the graph may include a graphlet degree vector that indicates and/or describes a graphlet representing a recurring atomic structure, and an encoding of the regular structure that indicates each of one or more occurrences of a graphlet, including a location and/or orientation, and/or a count of occurrences of the graphlet. The graph neural network may be configured to receive and process an input graph data set that includes a graphlet degree vector, and, optionally, features of one or more occurrences of a graphlet in the graph and/or a count of the occurrences of the graphlet in the graph. [01445] As another example, a graph may include an indication of one or more paths and/or traversals of one or more nodes and/or one or more edges of the graph, optionally including additional details associated with a path or traversal such as a popularity, frequency, length, difficulty, cost, or the like. For example, in a graph representing a spatial arrangement of nodes, the graph may include a path or traversal of edges that connect a first node to a second node through zero or more other nodes, as well as properties of the path or traversal such as a total length,
SFT-106-A-PCT distance, time, and/or cost. The graph neural network may be configured to receive and process an input graph data set that includes additional details associated with one or more paths or traversals, including an indication (e.g., a list) of the associated nodes and/or edges and a list of one or more properties of the path and/or traversal. [01446] As another example, a graph may include an indication of metrics or properties that relate one or more nodes and/or one or more edges. For example, in a graph including a spatial arrangement of nodes, the graph may include an indication of a shortest distance between two nodes and/or an indication of a set of nodes and/or edges that are common to two nodes. As another example, a graph representing a network of communicating devices may include a routing table of one or more routes that respectively indicate, for a particular node and a particular edge connected to the node, a list of other nodes and/or edges that can be efficiently reached by traversing based on the particular edge. As yet another example, in a graph representing a social network including nodes that represent people, the graph may indicate, for at least one pair of nodes, a measurement of similarity of the nodes based on their node properties, edges, locations in the social network, connections to other nodes, or the like (e.g., a Katz index of node similarity) and/or, for at least one pair of edges, a measurement of similarity of the edges based on their edge properties, connected nodes, locations in the social network, or the like (e.g., a Katz index of edge similarity). The graph neural network may be configured to receive and process an input graph data set that includes one or more metrics or properties that relate one or more nodes and/or one or more edges (e.g., a routing table of routes within the graph, and/or a Katz index that indicates a measurement of similarity among at least two nodes and/or at least two edges). [01447] As another example, a graph may include an indication of various graph properties of the graph (e.g., a graph size, graph density, graph interconnectivity, graph chronological period, graph classification, a count of subgraphs within the graph, or the like). For example, in a graph including two or more subgraphs (e.g., a social network including two or more social circles), the graph data set may include a measurement of a similarity of each subset of at least two subgraphs of the graph. The measurement of the similarity may be determined based on one or more graph kernel methods (e.g., a Gaussian radial basis function that can be applied to the graph to identify one or more clusters of similar nodes that comprise a subgraph). As another example, a graph may include a measurement of similarity with respect to another graph (e.g., an indication of whether a particular social network graph resembles other social network graphs that have been classified as representing a genealogy or lineage, a set of friendships, and/or a set of professional relationships). The graph neural network may be configured to receive and process an input graph data set that includes measurements determined by one or more graph properties (e.g., one or more measurements of similarity of one or more nodes, edges, and/or subgraphs, and/or a measurement
SFT-106-A-PCT of similarity of the graph to other graphs). Further explanation and/or examples of various graph data sets that may be provided as input to graph neural networks are presented elsewhere in this disclosure and/or will be known to or appreciated by persons of ordinary skill in the art. [01448] Graph neural networks may be configured to perform various types of processing over such graph data sets. As previously discussed, a graph neural network can be organized as a series of layers, each of which can include one or more nodes that receive input, apply an activation function, and generate output. The output of each node of a first layer can be multiplied by a weight of a connection between the node and a node of a second layer, and then added to a bias associated with the first layer, to generate an input to the node of the second layer. The graph neural network can include various additional layers that perform other types of processing, including (without limitation) pooling, filtering, and/or latent space encoding operations, memory or stateful features, and recurrent and/or reentrant processing. [01449] Some graph neural networks may perform label propagation among the nodes and/or edges of a graph data set. For example, in an input graph data set, one or more nodes and/or one or more edges may be associated with one or more labels of a label set, while one or more other nodes and/or one or more other edges may not be associated with any labels. A graph neural network may apply a label propagation algorithm (LPA) to assign labels to one or more unlabeled nodes and/or one or more unlabeled edges. For example, the graph neural network may assign a label to an unlabeled node based on labels associated with one or more edges connected to the node, and/or with one or more other nodes that are connected to the node by the one or more edges. The graph neural network may assign a label to an unlabeled edge based on labels associated with one or more nodes connected by the edge, and/or with one or more other edges that are also connected to the nodes connected by the edge. Some graph neural networks may perform label propagation based on a voting, consensus, weighting, and/or scoring determination. For example, a graph neural network may be unable to perform a classification of an unlabeled node and/or unlabeled edge based solely on the node properties and/or edge properties, but may be able to perform the classification based on a further consideration of the labels associated with other nodes and/or edges within a neighborhood of the unlabeled node and/or unlabeled edge. [01450] Some graph neural networks may perform a scoring and/or ranking of nodes and/or edges of a graph data set. As an example, in a graph data set that represents the World Wide Web and that includes nodes that represent web pages and directed edges that represent hyperlinks from linking web pages to linked web pages, a graph neural network may determine one or more scores of each node (i.e., each web page) based on the scores of other nodes that hyperlink to the node. Each score may further be based on the scores of the other nodes that include a directed edge to this node (e.g., the scores of other web pages that hyperlink to this page). Additionally, each score associated with a node may represent a weight of an association between the web page and a particular topic (e.g., a particular topic or keyword that is associated with the web page, hyperlinks, and/or other pages that hyperlink to this web page). In some cases, the scores may be personalized based on the activities of a particular user (e.g., based on the hyperlinks from pages that the user frequently visits). A search engine may use the scores as rankings in order to generate search results for web searches including various topics or keywords (e.g., in response to a web search for a particular search term, present search results that correspond to the nodes with the highest scores associated with the search term, and present the search results in ranked order based on the scores). As another example, for a graph data set representing a social network, a graph neural network may generate a reputation score for each node based on other nodes that are associated with the node and the reputation scores of such other nodes. The scores of the nodes may be used to recommend new connections in the social network (e.g., recommending that a first person connect with a second person, based on a high reputation score of the second person among people who are closely associated with the first person).
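By way of a non-limiting, hypothetical illustration, the following Python sketch applies an iterative, PageRank-style scoring of the kind described above to a small invented hyperlink graph; the page names and the damping factor are assumptions made for the example.

# Hypothetical sketch of the iterative link-based scoring described above,
# in the style of PageRank; the hyperlink graph is invented for illustration.
links = {                    # page -> pages it hyperlinks to
    "home": ["news", "shop"],
    "news": ["home"],
    "shop": ["home", "news"],
    "blog": ["news"],
}
pages = list(links)
damping = 0.85
scores = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):                                   # power iteration
    new_scores = {p: (1 - damping) / len(pages) for p in pages}
    for page, outgoing in links.items():
        share = damping * scores[page] / len(outgoing)
        for target in outgoing:
            new_scores[target] += share               # a link passes score to its target
    scores = new_scores

# Higher-scoring pages are linked to by other well-scored pages and could be
# ranked first in search results for an associated topic or keyword.
print(sorted(scores.items(), key=lambda kv: -kv[1]))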
SFT-106-A-PCT [01451] Some graph neural networks may perform a clustering analysis of the nodes and/or edges of a graph data set. As a first example, in a graph data set representing a social network, a graph neural network may perform a clustering analysis of the nodes representing the people of the social network, based on edges representing relationships among two or more nodes, in order to identify one or more clusters that represent social circles of highly interconnected people within the social network. Based on this clustering analysis, the graph neural network may partition the social network into subgraphs that respectively represent social circles, and may perform further, finer-grained evaluation of each social circle and the people represented by the nodes in each subgraph. As a second example, in a graph data set representing a social network, a graph neural network may perform a clustering analysis of the edges representing the relationships among people of the social network, in order to identify one or more clusters that represent different types of relationships, such as familial relationships, friendships, and professional relationships. Based on this clustering analysis, the graph neural network may partition the social network into subgraphs that respectively represent different types of social networks, and may perform further analysis of relationships among two or more individuals based on the type of relationship associated with the subgraph to which the relationship belongs. In these and other scenarios, in order to perform clustering analysis, a graph neural network may utilize a variety of clustering algorithms. As one such example, a graph neural network may apply spectral clustering techniques, wherein a similarity matrix that represents similarities among nodes and/or edges is evaluated to identify eigenvalues that indicate significant similarity relationships. Based on the similarity matrix, the graph neural network may perform a dimensionality reduction of the graph data set (e.g., reducing
SFT-106-A-PCT the features of the nodes and/or edges that are evaluated to determine clusters in order to focus on features that are highly correlated with and/or indicative of significant similarities). Dimensionality reduction of the graph data set based on the similarity matrix may enable the graph neural network to determine clusters more efficiently and/or rapidly, e.g., by reducing a high-dimensionality graph data set (wherein each node and/or edge is characterized by a multitude of node properties and/or edge properties) into a lower-dimensionality graph data set of a subset of features that are highly correlated with and/or indicative of similarity and clustering. [01452] Some graph neural networks may perform a centrality determination among nodes and/or edges of a graph data set. For example, for a graph data set representing a social network, a graph neural network may evaluate the graph data set to identify a subset of nodes based on a centrality among the edges representing the connections of the social network, e.g., people who are at the center of each of one or more social circles within the social network. Alternatively or additionally, some graph neural networks may perform a “betweenness” determination among the nodes and/or edges of the graph data set. For example, a node may be considered to be “between” two clusters of nodes, such as a member of two or more clusters representing two or more social circles. Such “between” nodes may represent a communication bridge that conducts information between clusters (e.g., a person who can convey ideas and/or influence from a first social circle to a second social circle and vice versa). Some such graph neural networks may perform “betweenness” determinations based on a betweenness centrality measurement, e.g., based on a measurement of a shortest path between all pairs of nodes in the graph data set. As another example, a graph data set may represent a collection of text documents, wherein each node represents a document and each edge represents a relationship between documents (e.g., a unidirectional or bidirectional citation between a first document and a second document). A graph neural network can perform a centrality determination and/or a betweenness determination to determine significant documents within the collection (e.g., a document that is heavily cited by one or more clusters of other documents, and/or a document that includes ideas or associations between the documents of a first cluster and the documents of a second cluster). [01453] Some graph neural networks may perform analyses of structures occurring within a graph neural network. As an example, for a graph data set that represents a social network, a graph neural network may determine a notable sequence of relationships, such as a first relationship between node N1 and node N2 based on a shared interest, a second relationship between node N2 and node N3 based on the same shared interest, and a third relationship between node N3 and node N4 based on the same shared interest. Based on this sequence or chain of relationships, the graph neural network may recommend to a person represented by node N1 some further relationships with the people represented by nodes N3 and N4, due to the combination of shared interests and mutual
SFT-106-A-PCT relationships. In some such cases, a graph neural network may perform such structural analysis based on a traversal algorithm that traverses a sequence of nodes connected by one or more edges, and/or that traverses a sequence of edges connected by one or more nodes. As an example, a graph neural network may perform a random walk within the graph data set, such as starting with a first node (e.g., a first person of a social network) and following a limited set of edges that connect the first node to other nodes. In some cases, the traversal may be random (e.g., traversing from a node based on a random selection among the edges that connect the node to other nodes). In some other cases, the traversal may be weighted (e.g., each edge may include an edge property including a weight that represents a strength of a relationship among two or more nodes, and the traversal may be based on a weighted random selection that preferentially selects higher-weighted connections over lower-weighted connections). In some cases, the traversal can include a restart probability, e.g., a probability of retrying the traversal beginning with the original node or another node, based on a score such as a distance of the traversal with respect to the original node. In these and other cases, the results of a random walk can be used in further analyses and/or activities of the graph neural network (e.g., presenting recommendations for new social connections among the nodes of a social network). [01454] Some graph neural networks may perform an analysis of a graph data set based on an attention model. For example, in a social network, the influence of a particular person P1 may not be determined by the connectedness of person P1 to other people in the social network, but based on a perception of person P1 by other people of the social network as being knowledgeable, skilled, influential, or the like. Thus, a graph neural network may be configured to evaluate a graph data set representing a social network in which nodes represent people and edges represent relationships, but may be unable to determine influence based only on graph concepts such as connectedness of the nodes based on the edges. Rather, the graph neural network might model influence as an attention of each node (i.e., a second person P2 of the social network) upon each other node (e.g., person P1 of the social network). Thus, a particular opinion of person P2 of the social network may depend not only on the connections of person P2 to other people of the social network (including person P1), but also upon the attention that person P2 accords to such other people of the social network (including person P1). That is, even though person P2 is closely connected to certain people of the social network by various edges, the opinion of person P2 may be heavily shaped by person P1 and other people to whom person P2 is only indirectly connected in the social network. As a second such example, in a graph data set that represents traffic flow within a region, an edge E1 (e.g., a first road) may be directly connected to other edges of the graph data set, but an edge property of the edge E1 (e.g., a traffic volume and/or congestion of the road) may be impacted more heavily by edge properties of other edges to which edge E1 is not directly
SFT-106-A-PCT connected (e.g., roads in other parts of the geographic region for which traffic volume and/or congestion is highly determinative of the traffic volume and/or congestion of this road). Thus, in order to predict and/or estimate a traffic volume and/or congestion of a particular road, a graph neural network may evaluate not only the traffic volume and/or congestion of other roads that are directly connected to the particular road, but also other roads for which traffic volume and/or congestion is highly determinative of corresponding conditions of this road. In these and other scenarios, a graph neural network may evaluate a graph data set based on an attention model, in which analyses and updates of the state of nodes and/or edges of the graph data set are based, at least in part, on an attention of each node and/or edge upon other nodes and/or edges of the graph data set. For example, the graph neural network may include an attention layer that determines, for a particular node and/or edge of an input graph data set, which other nodes and/or edges of the input graph data set are likely to be relevant to determining an updated state of the particular node and/or edge. Various attention models may be used by such graph neural networks, including multi-head attention models in which each node and/or edge is related to a plurality of other nodes and/or other edges with varying weighted attention values (e.g., by each of a plurality of attention layers). Multi-head attention models can allow a graph neural network to consider the influences upon a particular node and/or edge of a plurality of other nodes and/or edges, which may (or may not) be further related to one another by the graph structure and/or attention. Based on the attention model and the attention layers included in the graph neural network, the graph neural network can perform a more sophisticated graph analysis that is based on more than the structural relationships of the graph. [01455] Some graph neural networks may be configured to process a graph data set in order to determine, and optionally output, various types of data (e.g., measurements, calculations, inferences, explanations, or the like) that relate to one or more nodes, one or more edges, and/or one or more subgraphs of the input graph data set and/or to the input data graph set as a whole. Some graph neural networks are configured to generate, and optionally output, various types of representations of graph neural networks. For example, the graph neural network may be configured to determine, and optionally output, an output vector including an array of data representing each of the one or more nodes followed by an array of data representing each of the one or more edges, either as an adjacency matrix of possible edges between pairs of nodes or an adjacency list of existing edges. The output vector may encode the nodes and/or edges in a particular order (e.g., a priority order of nodes and/or a weight order of edges, or corresponding to a corresponding order of the nodes and/or edges in the input graph data set) or in an unordered manner. Alternatively or additionally, the graph neural network may be configured to determine, and optionally output, other types of information about each of one or more nodes and/or each of
SFT-106-A-PCT one or more edges of the graph data set. For example, the graph neural network may be configured to determine, and optionally output, a hierarchical organization of nodes and/or edges relative to one another and/or to a fixed reference point. Alternatively or additionally, the graph neural network may be configured to determine, and optionally output, an output graph data set that includes an indication of the arrangement of one or more nodes and/or one or more edges in the hierarchical organization. [01456] As another example, a graph neural network may be configured to determine, and optionally output, an indication of a centrality of one or more nodes and/or edges within the input graph data set (e.g., a graph of a social network including nodes that are ranked based on a centrality of each node to a cluster). Alternatively or additionally, the graph neural network may be configured to determine, and optionally output, an output graph data set that includes an indication of a centrality of one or more nodes and/or one or more edges in the graph. [01457] As another example, a graph neural network may be configured to determine, and optionally output, an indication of a degree of connectivity of one or more nodes and/or edges of an input graph data set (e.g., a graph of a social network including nodes that are ranked according to a count of other nodes to which each node is connected by one or more edges, and/or a degree of significance of a relationship represented by an edge based on the nature of the relationship and/or the degrees of the nodes connected by the edge). Alternatively or additionally, the graph neural network may be configured to determine, and optionally output, an output graph data set that includes an indication of a degree of one or more nodes and/or one or more edges in the output graph data set. [01458] As another example, a graph neural network may be configured to detect, identify, and/or analyze one or more clusters occurring within an input graph data set. For example, a graph neural network may be configured to perform a clustering analysis of an input graph data set to determine, and optionally output, a determination of k clusters within the input graph data set and an identification of the nodes and/or edges that are included in each cluster. The graph neural network may be configured to determine clusters based on a k-means clustering analysis, a Gaussian mixture model of with variable numbers of clusters and variable Gaussian orders, or the like. The graph neural network may be configured to determine, and optionally output, an indication of a clustering coefficient of one or more nodes and/or one or more edges of an input graph data set (e.g., a measurement of a degree to which at least some of the nodes and/or edges of a subgraph of the graph are clustered based on similarity and/or activity). Alternatively or additionally, the graph neural network may be configured to determine, and optionally output, an output graph data set that includes an indication of one or more clusters including one or more nodes and/or one or more edges in the output graph data set or a subgraph thereof (e.g., a result of a k¬-means clustering
SFT-106-A-PCT analysis of an output graph data set, a Gaussian mixture model of an output graph data set, and/or one or more clustering coefficients of an output graph data set). [01459] As another example, a graph neural network may be configured to determine, and optionally output, an indication of a graphlet degree vector that indicates a graphlet that is represented one or more times in an input graph data set. For example, for a graph representing atoms in a regular structure such as a crystal, the graph neural network may be configured to determine, and optionally output, a graphlet degree vector that indicates and/or describes a graphlet representing a recurring atomic structure, and an encoding of the regular structure that indicates each of one or more occurrences of a graphlet, including a location and/or orientation, and/or a count of occurrences of the graphlet in the input graph data set. Alternatively or additionally, the graph neural network may be configured to determine, and optionally output, an output graph data set that includes a graphlet degree vector, and, optionally, features of one or more occurrences of a graphlet in the output graph data set and/or a count of the occurrences of the graphlet in the output graph data set. [01460] As another example, a graph neural network may be configured to determine, and optionally output, an indication of one or more paths and/or traversals of one or more nodes and/or one or more edges of the input graph data set, optionally including additional details associated with a path or traversal such as a popularity, frequency, length, difficulty, cost, or the like. For example, for an input graph data set representing a spatial arrangement of nodes, the graph neural network may be configured to determine, and optionally output, a path or traversal of edges that connect a first node to a second node through zero or more other nodes of the input graph data set, as well as properties of the path or traversal such as a total length, distance, time, and/or cost. Alternatively or additionally, the graph neural network may be configured to determine, and optionally output, an output graph data set that includes additional details associated with one or more paths or traversals, including an indication (e.g., a list) of the associated nodes and/or edges of the output graph data set and a list of one or more properties of each such path and/or traversal. [01461] As another example, a graph neural network may be configured to determine, and optionally output, an indication of metrics or properties that relate one or more nodes and/or one or more edges of an input graph data set. For example, for an input graph data set including a spatial arrangement of nodes, the graph neural network may be configured to determine, and optionally output, an indication of a shortest distance between two nodes and/or an indication of a set of nodes and/or edges that are common to two nodes of the input graph data set. As another example, for an input graph data set representing a network of communicating devices, the graph neural network may be configured to determine, and optionally output, a routing table of one or more routes that respectively indicate, for a particular node of the input graph data set and a
SFT-106-A-PCT particular edge connected to the node, a list of other nodes and/or edges of the input graph data set that can be efficiently reached by traversing based on the particular edge. As yet another example, for an input graph data set representing a social network including nodes that represent people, the graph neural network may be configured to determine, and optionally output, an indication for at least one pair of nodes of a measurement of similarity of the nodes of the input graph data set based on their node properties, edges, locations in the social network, connections to other nodes, or the like (e.g., a Katz index of node similarity) and/or, for at least one pair of edges of the input graph data set, a measurement of similarity of the edges based on their edge properties, connected nodes, locations in the social network, or the like (e.g., a Katz index of edge similarity). Alternatively or additionally, the graph neural network may be configured to determine, and optionally output, an output graph data set that includes one or more metrics or properties that relate one or more nodes and/or one or more edges (e.g., a routing table of routes within the graph, and/or a Katz index that indicates a measurement of similarity among at least two nodes and/or at least two edges). [01462] As another example, a graph neural network may be configured to determine, and optionally output, an indication of various graph properties of an input graph data set (e.g., a graph size, graph density, graph interconnectivity, graph chronological period, graph classification, a count of subgraphs within the graph, or the like). For example, for an input graph data set including two or more subgraphs (e.g., a social network including two or more social circles), the graph neural network may be configured to determine, and optionally output, a measurement of a similarity of each subset of at least two subgraphs of the input graph data set. The measurement of the similarity may be determined based on one or more graph kernel methods (e.g., a Gaussian radial basis function that can be applied to the input graph data set to identify one or more clusters of similar nodes that comprise a subgraph). As another example, a graph neural network may be configured to determine, and optionally output, a measurement of similarity of an input graph data set with respect to another graph data set (e.g., an indication of whether a particular social network graph resembles other social network graphs that have been classified as representing a genealogy or lineage, a set of friendships, and/or a set of professional relationships). Alternatively or additionally, the graph neural network may be configured to determine, and optionally output, an output graph data set that includes measurements determined by one or more graph properties of an output graph data set (e.g., one or more measurements of similarity of one or more nodes, edges, and/or subgraphs, and/or a measurement of similarity of the output graph data set to the input graph data set and/or other graph data sets). Further explanation and/or examples of various types of processing that graph neural networks can determine, and optionally output, for various input graph data sets and/or output graph data sets are presented elsewhere in this disclosure and/or will be known to or appreciated by persons of ordinary skill in the art.
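By way of a non-limiting, hypothetical illustration, the following Python sketch computes two of the relational metrics discussed above over a small invented graph: a shortest-path distance found by breadth-first search, and a truncated, Katz-style similarity that sums decayed counts of walks between two nodes; the decay factor and the maximum walk length are assumptions made for the example.

# Hypothetical sketch of two relational metrics: a shortest-path distance found
# by breadth-first search, and a truncated Katz-style similarity between nodes.
from collections import deque

graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}   # invented adjacency list

def shortest_distance(src, dst):
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None                                         # unreachable

def katz_similarity(src, dst, beta=0.1, max_len=4):
    # Count walks of each length by expanding neighbour counts step by step,
    # and accumulate them with an exponentially decaying weight beta**length.
    total, frontier = 0.0, {src: 1}
    for length in range(1, max_len + 1):
        nxt = {}
        for node, count in frontier.items():
            for neighbour in graph[node]:
                nxt[neighbour] = nxt.get(neighbour, 0) + count
        total += (beta ** length) * nxt.get(dst, 0)
        frontier = nxt
    return total

print(shortest_distance(0, 3), katz_similarity(0, 3))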
SFT-106-A-PCT [01463] Graph neural networks may be configured to generate various forms of output that correspond to various tasks. For example, graph neural networks can generate output that represents node-level predictions that relate to one or more nodes of an input graph data set. The node-level predictions can include a discovery of a new node that was not included in the input graph data set. For example, in a graph data set including edges that represent travel of individuals in a region, the nodes can represent points of interest, and the graph neural network can discover a new node that corresponds to a new point of interest. The node-level predictions can include an exclusion of a node that is included in the input graph data set. For example, in a graph data set including edges that represent travel of individuals in a region, the nodes can represent points of interest, and the graph neural network can exclude an existing node that no longer represents a point of interest. The node-level predictions can include a classification of a node that is included in the input graph data set, or of a newly discovered node that was not included in the input graph data set (e.g., a classification of the node as being of a node type selected from a set of node types, as being associated with one or more labels of a classification label set, and/or as belonging to zero or more subgraphs of the graph data set). For example, in a graph data set representing locations within a geographic region, the graph neural network can generate a prediction of a classification of a location of interest as one or more particular types of locations of interest (e.g., a source of food, a source of fuel, a lodging location, and/or a tourist destination). The node-level predictions can include an identification of a node from among the nodes of the input graph data set based on various features, or of a newly discovered node. For example, in a graph data set representing a social network and including nodes that represent people, the graph neural network can identify a particular node that corresponds to a particular person, such as an influential person of the social network. The node-level predictions can include a determination and/or updating of one or more node properties of one or more existing and/or newly discovered nodes, such as a prediction of a demographic feature, opinion, or interest of a node representing a person in a social network.
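By way of a non-limiting, hypothetical illustration, the following Python sketch shows a node-level prediction in miniature: an unlabeled node's property (here, a person's interest) is predicted from the labels of connected nodes, standing in for the richer prediction a trained graph neural network would produce; the social graph and labels are invented.

# Hypothetical sketch of a node-level prediction: predicting an interest label
# for an unlabeled person from the labels of connected people; data invented.
from collections import Counter

connections = {
    "ana": ["ben", "dev"],
    "ben": ["ana", "dev"],
    "cai": ["dev"],
    "dev": ["ana", "ben", "cai"],
}
interests = {"ana": "cycling", "ben": "cycling", "cai": "chess"}   # "dev" is unlabeled

def predict_interest(person):
    # Tally the interests of directly connected, labeled neighbours and
    # predict the most common one for the unlabeled node.
    votes = Counter(interests[n] for n in connections[person] if n in interests)
    return votes.most_common(1)[0][0] if votes else None

print(predict_interest("dev"))   # -> "cycling"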
SFT-106-A-PCT [01464] As another example, graph neural networks can generate output that represents edge-level predictions that relate to one or more edges of an input graph data set. The edge-level predictions can include a discovery of a new edge that was not included in the input graph data set. For example, in a graph data set representing a social network that includes nodes that represent people, a graph neural network can output a prediction (e.g., a recommendation) of a relationship between two nodes that correspond to two people in a small social circle of highly interconnected people. The edge-level predictions can include an exclusion of an edge that is included in the input graph data set. For example, in a graph data set representing a social network that includes nodes that represent people, a graph neural network can output a prediction of a no-longer-existing edge that corresponds to a relationship that no longer exists (e.g., a lost connection based on a splitting of a social circle). The edge-level predictions can include a classification of an edge that is included in the input graph data set, or of a newly discovered edge that was not included in the input graph data set (e.g., a classification of an edge as being of an edge type selected from a set of edge types, as being associated with one or more labels of a classification label set, and/or as belonging to zero or more subgraphs of the graph data set). For example, in a graph data set representing a social network, a graph neural network can generate a predicted classification of an edge representing a relationship between two people as being of one or more relationship types (e.g., a familial relationship, a friendship, or a professional relationship). The edge-level predictions can include an identification of an edge from among the edges of the input graph data set based on various features, or of a newly discovered edge. For example, in a graph data set representing a social network and including edges that represent relationships, the graph neural network can identify a particular edge that corresponds to a potential relationship to be recommended to the associated people, such as two people of the social network who are not yet connected but who share common personal or professional interests. The edge-level predictions can include a determination and/or updating of one or more edge properties of one or more existing and/or newly discovered edges, such as a prediction of a demographic feature, opinion, or interest that serves as the basis for a relationship between two people of the social network.
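By way of a non-limiting, hypothetical illustration, the following Python sketch shows an edge-level prediction in miniature: candidate edges between not-yet-connected people are scored by a simple neighbourhood-overlap heuristic that stands in for the score a trained graph neural network would assign to a potential relationship; the social graph is invented.

# Hypothetical sketch of an edge-level prediction (link recommendation): a
# common-neighbour overlap score stands in for a learned edge score; data invented.
friends = {
    "ana":  {"ben", "cai", "dev"},
    "ben":  {"ana", "cai"},
    "cai":  {"ana", "ben", "dev"},
    "dev":  {"ana", "cai", "erin"},
    "erin": {"dev"},
}

def candidate_edges():
    people = sorted(friends)
    for i, a in enumerate(people):
        for b in people[i + 1:]:
            if b not in friends[a]:                 # only pairs not yet connected
                yield a, b

def overlap_score(a, b):
    shared = friends[a] & friends[b]
    combined = friends[a] | friends[b]
    return len(shared) / len(combined) if combined else 0.0

# Recommend the not-yet-existing edge with the strongest neighbourhood overlap.
best = max(candidate_edges(), key=lambda pair: overlap_score(*pair))
print(best, overlap_score(*best))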
SFT-106-A-PCT [01465] As another example, graph neural networks can generate output that represents graph-level predictions that relate to one or more graph properties of the input graph data set. The graph-level predictions can include a discovery of a new graph property that was not associated with the input graph data set. For example, in a graph data set representing a social network that includes nodes that represent people and edges that represent relationships, a graph neural network can output a prediction of a demographic trait, opinion, or interest that is common or popular among the people of the social network, or a relationship behavior that is exhibited in the relationships among the people of the social network. The graph-level predictions can include an exclusion of a graph property that was associated with the input graph data set. For example, in a graph data set representing a social network that includes a graph property based on a shared interest, a graph neural network can output a prediction that the interest no longer appears to be common and/or popular among the people of the social network, or of a relationship behavior that is no longer exhibited among the relationships of the people of the social network. The graph-level predictions can include a classification of the input graph data set (e.g., a classification of the graph data set, or at least a portion thereof, as being associated with one or more labels of a classification label set). For example, in a graph data set representing a social network, a graph neural network can generate a predicted classification of the graph as representing a familial social network, a friendship social network, and/or a professional social network. The graph-level predictions can include an identification of one or more subgraphs of the graph based on common features of the nodes and/or edges included in the subgraph. For example, in a graph data set representing a social network, the graph neural network can identify subgraphs that correspond to various social circles of highly interconnected people. The graph-level predictions can include a determination and/or updating of one or more graph properties of the graph, such as an updating of a frequency of communication and/or a strength of relationships among the people of a social network. [01466] As another example, graph neural networks can perform graph-to-graph translation by receiving an input graph data set and generating output that represents a different graph data set. For example, a graph neural network can receive an input graph data set and can generate an output graph data set that includes one or more newly discovered nodes and/or edges; an exclusion of one or more nodes and/or edges; a classification of one or more nodes and/or edges; an identification of one or more nodes and/or edges; and/or an update of one or more node properties, edge properties, and/or graph properties. A graph neural network can receive an input graph data set and can generate an output graph data set that shares various similarities with the input graph data set. For example, a graph neural network can receive, as input, a first graph representing a first geographic region (e.g., a real geographic region) and can generate, as output, a second graph representing a different geographic region (e.g., a fictitious geographic region) that shares similarities with the first graph and that has some dissimilarities with respect to the first graph. A graph neural network can receive, as input, an input graph data set and can generate, as output, a subgraph of the input graph data set. A graph neural network can receive, as input, an input graph data set and can generate, as output, an expanded graph including a first subgraph corresponding to the input graph data set and a second subgraph that is newly generated. A graph neural network can receive, as input, a first graph that corresponds to a first time and can generate, as output, a second graph that corresponds to a different time than the first time. For example, the graph neural network can receive, as input, a graph data set that corresponds to a state of a geographic region at a current time, and can generate, as output, a graph data set that predicts the state of the geographic region at a past time or a future time. [01467] As another example, graph neural networks can generate graphs from non-graph input data. For example, a graph neural network can receive, as input, locations of travelers within a geographic region over a period of time, and can generate, as output, graph data that includes one or more nodes that represent points of interest among the travelers and edges that represent paths between the points of interest (e.g., roads that connect the points of interest). As another example, a graph neural network can receive, as input, a description of a graph (e.g., a natural-language description of a geographic location) and can generate, as output, graph data that corresponds to the description of the graph (e.g., a graph of a region that includes one or more nodes representing
SFT-106-A-PCT locations and one or more edges representing roads that interconnect the locations). The graph neural network may receive both graph data and non-graph data (e.g., a graph representing a social network and an indication of a particular person in the social network) and can generate, as output, graph data based on the input (e.g., a subgraph of the people who consider the identified person to be influential). [01468] As another example, graph neural networks can receive an input graph data set and can generate, as output, non-graph data. For example, a graph neural network can receive, as input, a graph representing a social network including nodes that represent people and edges that represent relationships, and can generate, as output, one or more metrics of the social network (e.g., an average number of connections among the people of the social network, an identification of a person of high influence within the social network, or a description of a relationship behavior that commonly occurs within the social network). As another example, a graph neural network can receive, as input, a graph representing a geographic region including nodes that represent locations and edges that represent roads connecting the locations, and can generate, as output, one or more predictions and/or measurements of traffic within the geographic region. The graph neural network may receive both graph data and non-graph data (e.g., a graph representing a social network and an indication of a particular person in the social network) and can generate, as output, non-graph data based on the input (e.g., a summary and/or prediction of the social behaviors of the identified person). For example, a graph neural network that evaluates traffic patterns within a geographic region may process, and optionally output, both an output graph data set that includes nodes that represent cities and edges that represent roads interconnecting the cities, and also non-graph output data representing predictions and/or inferences of traffic and/or weather features within the geographic region (e.g., traffic volume estimates and current or forecasted weather conditions that affect the traffic patterns). [01469] As another example, some graph neural networks may be configured to determine, and optionally output, an indication of zero or more cycles occurring among the nodes and/or edges of an input graph data set. For example, for a directed and/or undirected input graph data set, a graph neural network may determine, and optionally output, an indication that a particular cycle exists within the input graph data set and includes a particular subset of nodes and/or edges. Alternatively, for a directed and/or undirected graph data set, a graph neural network may determine, and optionally output, an indication that the graph is acyclic and does not include any cycles. A graph neural network may be configured to determine, and optionally output, an output graph data set that includes an indication of zero or more cycles. [01470] As another example, graph neural networks can receive an input graph data set and can generate, as output, an interpretation and/or explanation of the input graph data set. For example,
SFT-106-A-PCT a graph neural network can receive, as input, a graph representing a collection of devices, including nodes that respectively represent a device and edges that respectively represent an instance of communication and/or interaction among two or more devices. The graph neural network can generate, as output, an interpretation and/or explanation of the communications and/or interactions represented in the graph, such as an explanation of a set of interactions as being part of a collective and/or collaborative effort among the two or more devices and/or a related series of interactions that are associated with a particular activity. The explanation and/or interpretation may include, for example, a classification of one or more nodes, edges, patterns of activity, and/or the graph; a natural-language summary or narrative explanation of one or more nodes, edges, patterns of activity, and/or the graph; a data set that characterizes one or more nodes, edges, patterns of activity, and/or the graph; and/or a presentation (e.g., a static or motion visualization) of one or more nodes, edges, patterns of activity, and/or the graph. As one such example, a graph neural network may identify, within an input graph data set, one or more subgraphs (e.g., one or more clusters of related nodes and/or edges), and may output an interpretation and/or explanation of the subgraph (e.g., a description of the set of features that characterize the subgraph or cluster). As another example, a graph neural network may generate a visualization of a subgraph of an input graph data set, wherein the visualization depicts, highlights, and/or illustrates a structure and/or an anomalous feature of the subgraph. Some such graph neural networks may be configured to generate interpretations and/or explanations of any input graph data set, e.g., based on an identification of features of an input data set that inform such interpretations and/or explanations, such as clusters, outliers, or determinations of apparent structure and/or data relationships. Other such graph neural networks may be configured to generate domain-specific interpretations and/or explanations of domain-specific graph data sets. For example, a graph neural network may be configured to analyze a graph data set representing a social network to identify both a subset of the social network corresponding to an influential cluster of people of the social network and also an interpretation and/or explanation of why this cluster of people appears to be influential within the social network. Graph neural networks can generate interpretations and/or explanations using a variety of techniques, including “white-box” analysis techniques that can be applied to various properties of graph data sets and components thereof. Examples of graph neural networks that include instance-level explanations based on gradients and/or features include, without limitation, Guided BP, class activation mapping (CAM), and GradCAM. Examples of graph neural networks that include instance-level explanations based on perturbations include, without limitation, GNNExplainer, PGExplainer, ZORRO, and GraphMask. Examples of graph neural networks that include instance-level explanations based on decomposition include, without limitation, layer-wise relevance propagation (LRP), Excitation BP, and GNN LRP. Examples of graph neural networks
SFT-106-A-PCT that include instance-level explanations based on surrogate analysis include, without limitation, GraphLIME, RelEX, and PGMExplainer. Examples of graph neural networks that include model- level explanations include XGNN. Further explanation and/or examples of various interpretable and/or explainable features of graph data sets or components thereof that may be generated by graph neural networks are presented elsewhere in this disclosure and/or will be known to or appreciated by persons of ordinary skill in the art. [01471] Graph neural networks – architectures and frameworks [01472] Graph neural networks may be designed and/or organized according to various architectures. For example, a multilayer graph neural network may include a number of layers, each layer including a number of neurons. In each layer of the graph neural network, the neurons may be configured to receive, as input, at least a portion of an input data set (e.g., an input graph data set) and/or at least a portion of an output of at least one neuron of one or more layers of the graph neural network. Additionally, in each layer of the graph neural network, the neurons may be configured to generate, as output, at least a portion of an output data set of the graph neural network (e.g., an output graph data set of graph neural network) and/or at least a portion of an input to at least one neuron of one or more layers of the graph neural network. [01473] In some graph neural networks, an architecture of the graph neural network is based on the input to the graph neural network. For example, a fixed-size graph of N nodes and E edges interconnecting the nodes may be received and processed by a graph neural network that includes an input layer featuring N neurons respectively configured to receive input from one of the N nodes and/or E neurons respectively configured to receive input from one of the E edges. A graph including an adjacency list having a maximum of E edges may be received and processed by a graph neural network that includes an input layer featuring E neurons respectively configured to receive and process one of the E edges represented in the adjacency list. A graph including two subgraphs may be received and processed by a graph neural network that includes an input layer featuring a first set of neurons that are configured to process the nodes and/or edges of the first subgraph and a second set of neurons that are configured to process the nodes and/or edges of the second subgraph. In some graph neural networks, an architecture of the graph neural network may be based on non-graph input data that is received and processed by the graph neural network. For example, a graph neural network may be configured to receive, as input, a description of a graph (e.g., a number of nodes and/or edges and one or more properties of the graph). The graph neural network may be further configured to generate a graph corresponding to the description, and to process and optionally output the graph according to various graph neural network processing techniques.
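By way of non-limiting illustration of an architecture that is based on the input graph, the following Python sketch (using the PyTorch library; the class name FixedSizeGraphNet and all variable names are illustrative assumptions rather than elements of any particular framework) shows one way in which the width of an input layer may be derived from the N nodes of a fixed-size input graph, with a single round of neighbor aggregation over the adjacency matrix preceding the layered processing described above.

# Illustrative sketch only: a tiny graph neural network whose input layer size is
# derived from the number of nodes N in the input graph. Assumes PyTorch is
# available; class and variable names are hypothetical.
import torch
import torch.nn as nn

class FixedSizeGraphNet(nn.Module):
    def __init__(self, num_nodes: int, hidden: int = 16, out_dim: int = 1):
        super().__init__()
        # One input neuron per node of the fixed-size input graph.
        self.input_layer = nn.Linear(num_nodes, hidden)
        self.output_layer = nn.Linear(hidden, out_dim)

    def forward(self, node_values: torch.Tensor, adjacency: torch.Tensor):
        # One round of neighbor aggregation (message passing) over the adjacency
        # matrix, followed by the fixed-size layers.
        aggregated = adjacency @ node_values          # each node receives neighbor values
        h = torch.relu(self.input_layer(aggregated))  # input layer sized to N nodes
        return self.output_layer(h)                   # e.g., a non-graph scalar output

# Example: a 4-node graph with a ring of undirected edges.
N = 4
adjacency = torch.tensor([[0., 1., 0., 1.],
                          [1., 0., 1., 0.],
                          [0., 1., 0., 1.],
                          [1., 0., 1., 0.]])
node_values = torch.rand(N)
model = FixedSizeGraphNet(num_nodes=N)
print(model(node_values, adjacency))

In this sketch, the adjacency matrix supplies the message-passing structure, and the width of the input layer is determined directly by the number of nodes in the input graph data set.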
SFT-106-A-PCT [01474] In some graph neural networks, an architecture of the graph neural network is based on an output of the graph neural network. For example, a graph neural network may be configured to determine, and optionally output, a fixed-size output graph data set including N nodes and E edges. The graph neural network may therefore include an output layer featuring N neurons respectively configured to generate output corresponding to one of the N nodes and/or E neurons respectively configured to generate output corresponding to one of the E edges. A graph neural network may be configured to determine, and optionally output, an adjacency list having a maximum of E edges. The graph neural network may therefore include an output layer featuring E neurons that respectively generate output corresponding to one of the E edges represented in the adjacency list. A graph neural network may be configured to determine, and optionally output, an output graph data set including two subgraphs. The graph neural network may therefore include an output layer featuring a first set of neurons that are configured to generate output corresponding to the nodes and/or edges of the first subgraph and a second set of neurons that are configured to generate output corresponding to the nodes and/or edges of the second subgraph. In some graph neural networks, an architecture of the graph neural network may be based on non-graph output data that is determined, and optionally output, by the graph neural network. For example, a graph neural network may be configured to determine, and optionally output, a description of an input graph data set and/or an output graph data set (e.g., a number of nodes and/or edges and one or more properties of the input graph data set and/or the output graph data set), according to various graph neural network processing techniques. [01475] In some graph neural networks, an architecture of the graph neural network may be based on a directionality of one or more edges included in an input data set and/or an output data set. For example, an input graph data set including an undirected edge that connects a first node N1 and a second node N2 may be received and processed by a graph neural network including a first neuron NN1 and a second neuron NN2 that are bidirectionally connected to one another, such that message passing can occur from the first neuron NN1 to the second neuron NN2 and, concurrently or consecutively, from the second neuron NN2 to the first neuron NN1. An input graph data set including a unidirectional edge that connects a first node N1 to a second node N2 may be received and processed by a graph neural network including a first neuron NN1 (e.g., a neuron in a first layer of a feed-forward graph neural network) that is unidirectionally connected to a second neuron NN2 (e.g., a neuron in a second layer of a feed-forward graph neural network), such that message passing can occur from the first neuron NN1 to the second neuron NN2 but not from the second neuron NN2 to the first neuron NN1. An input graph data set including an edge that connects three or more nodes may be received and processed by a graph neural network in which three or more corresponding neurons are interconnected.
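As a further non-limiting illustration of the directionality considerations described above, the following Python sketch (using the PyTorch library; the function names adjacency_from_edges and message_pass are illustrative assumptions) shows how an undirected edge may permit message passing in both directions between the corresponding neurons, while a directed edge permits message passing in only one direction.

# Illustrative sketch only: directed vs. undirected edges and the resulting
# direction(s) of message passing.
import torch

def adjacency_from_edges(num_nodes, edges, directed):
    # Builds an adjacency matrix; an undirected edge produces symmetric entries,
    # so messages can pass in both directions, while a directed edge produces a
    # single entry, so messages pass only from source to target.
    A = torch.zeros(num_nodes, num_nodes)
    for src, dst in edges:
        A[dst, src] = 1.0          # a message from src flows to dst
        if not directed:
            A[src, dst] = 1.0      # and, for undirected edges, from dst back to src
    return A

def message_pass(A, node_features):
    # One round of message passing: each node sums the features of the nodes
    # from which it can receive messages.
    return A @ node_features

node_features = torch.tensor([[1.0], [2.0], [3.0]])
undirected = adjacency_from_edges(3, [(0, 1), (1, 2)], directed=False)
directed = adjacency_from_edges(3, [(0, 1), (1, 2)], directed=True)
print(message_pass(undirected, node_features))  # node 0 also hears from node 1
print(message_pass(directed, node_features))    # node 0 receives nothing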
SFT-106-A-PCT [01476] Some graph neural networks may be configured to receive and process an input graph data set including a homogeneous set of nodes and/or a homogeneous set of edges. For example, a first neuron of the graph neural network that corresponds to a first node and/or edge of the input graph data set may include a same or similar number of inputs, a same or similar activation function, and/or a same or similar number of outputs as a second neuron of the graph neural network that corresponds to a second node and/or edge of the input graph data set. [01477] Some graph neural networks may be configured to receive and process an input graph data set including a heterogeneous set of nodes and/or a heterogeneous set of edges. For example, different nodes of an input graph data set may be associated with different labels that respectively indicate different classifications of the nodes, and/or different edges of the input graph data set may be associated with different labels that respectively indicate different classifications of the edges. An architecture of the graph neural network may exhibit variations corresponding to the heterogeneity of the nodes and/or edges. For example, a first neuron of the graph neural network that corresponds to a first node and/or edge of the input graph data set that is associated with a first label or classification may include a different number of inputs, a different activation function, and/or a different number of outputs as a second neuron of the graph neural network that corresponds to a second node and/or edge of the input graph data set that is associated with a second label or classification. As another example, a graph neural network may include a first layer that receives and processes, as input, a first portion of an input data set that includes a first subset of neurons and/or edges that are associated with a first label or classification, and a second layer that receives and processes, as input, a second portion of an input data set that includes a second subset of neurons and/or edges that are associated with a second label or classification. The first layer and the second layer may be processed concurrently or consecutively. The first layer and the second layer may be processed independently (e.g., each layer providing a different portion of an output graph data set). Alternatively, the first layer and the second layer may be processed together (e.g., an output of the first layer may be additionally provided as input to the second layer, and/or an output of the second layer may be additionally provided as input to the first layer). [01478] Some graph neural networks may include an architecture that is based on one or more node properties of one or more nodes of an input graph data set, one or more edge properties of one or more edges of the input graph data set, and/or one or more graph properties of the input graph data set. As an example, in some input graph data sets, one or more nodes may include a node property indicating a weight of the node (e.g., an indication of a centrality and/or betweenness of a node among at least a portion of the nodes of the input graph data set). The graph neural network may include a neuron that corresponds to the node, wherein one or more weights of synapses that connect the neuron to other neurons of the graph neural network is based on the
SFT-106-A-PCT weight of the node. As another example, in some input graph data sets, one or more edges may include an edge property indicating a weight of the edge (e.g., an indication of a significance and/or priority of a relationship among two or more nodes of the input graph data set). The graph neural network may include two or more neurons that are connected by a synapse, wherein a weight of the synapse connecting the two or more neurons is based on a weight of an edge of the input graph data set. Examples of node-based graph neural networks include, without limitation, GraphSAGE, PinSAGE, and VR-GCN. Examples of layer-based graph neural networks include, without limitation, FastGCN and LADIES. Examples of subgraph-based graph neural networks include, without limitation, ClusterGCN and GraphSAINT. [01479] Some graph neural networks may be configured to receive and process fixed input graph data sets, wherein a number and arrangement of nodes and edges of an input data set that is received and processed by the graph neural network does not vary for different instances of processing the input data set. The architecture of such graph neural networks may be configured based on the invariance of the input graph data set. For example, the graph neural network may feature a fixed number and/or arrangement of neurons and/or layers, wherein the fixed architecture of the graph neural network corresponds to the fixed nature of the input graph data set. [01480] Some graph neural networks may be configured to receive and process dynamic input graph data sets, wherein a number and arrangement of nodes and edges of an input data set that is received and processed by the graph neural network during a first instance of processing can differ from a number and arrangement of nodes and edges of an input data set that is received and processed by the graph neural network during a second instance of processing. As an example, a graph neural network may be configured to perform node and/or edge discovery of an input graph data set and to generate, as output, an output graph data set that includes at least one more node and/or at least one more edge than the input graph data set. Further, the graph neural network may be configured to receive the output graph data set from a first processing as input for a second processing, wherein a number of nodes and/or edges received as input during the second processing is greater than a corresponding number of nodes and/or edges received as input during the first processing. In such cases, an architecture of such graph neural networks may be fixed, but may be configured to receive and process a variety of different input graph data sets (e.g., input graph data sets with a variable number of nodes and/or connections). For example, the graph neural network may include an input layer featuring N input neurons, each corresponding to a node of an input graph data set. Such a graph neural network may be configured to use the fixed architecture to receive and process input graph data sets featuring a variable number of nodes up to, but not exceeding, N. For example, in order to receive and process an input graph data set featuring fewer than N nodes, the graph neural network may activate only a number of input neurons of the input
SFT-106-A-PCT layer that correspond to the number of nodes in the input graph data set, and to deactivate remaining neurons of the input layer that do not correspond to a node of the input graph data set (e.g., refraining from processing the remaining neurons, and/or processing the neurons but zeroing the weights of the synapses that connect the neurons to other neurons of the graph neural network). As another example, the graph neural network may perform a first processing of a first input graph data set including N nodes, and, accordingly, may deactivate one or more neurons of the input layer. The graph neural network may then perform a second processing of a second input graph data set including more than N nodes (e.g., an output of the first processing may include an output graph data set that includes one or more newly discovered nodes). During the second processing, the graph neural network may activate one or more of the previously deactivated neurons of the input layer in order to receive and process input from the additional nodes of the second input graph data set. For example, the graph neural network may enable or reenable the processing of one or more neurons of the input layer, and/or may reset (e.g., restore and/or initialize) the weights of one or more synapses that connect one or more neurons of the input layer to other neurons of the graph neural network. In some cases, an architecture of such graph neural networks may be dynamic, and may change in correspondence with a dynamic nature of the input graph data set. For example, a graph neural network may include an input layer with a variable number of neurons, and may select, adapt, and/or change the number of neurons in the input layer based on a dynamic property of an input graph data set (e.g., a number of nodes and/or edges in the input graph data set). Such graph neural networks may generate new neurons of the input layer (e.g., initializing and/or selecting weights of the synapses of the new neurons, such as copying the weights from the synapses of other neurons of the input layer) based on a larger number of nodes and/or edges of an input graph data set to be received and processed as input. Alternatively or additionally, such graph neural networks may be configured to eliminate and/or merge neurons of the input layer (e.g., initializing and/or selecting weights of the new neurons) based on a smaller number of nodes and/or edges of an input graph data set to be received and processed as input. [01481] In some graph neural networks, an architecture of the neural network may be selected and/or adapted based on a topology of one or more input graph data sets and/or output graph data sets. For example, a bipartite input graph data set may include two more subgraphs, and a graph neural network may include two or more distinct subsets of neurons that are respectively configured to receive and process data associated with the nodes and/or edges included in one of the subgraphs. As another example, a multigraph input graph data set may include a plurality of edges connecting two or more nodes. For example, a graph representing a social network may include various types of edges that represent various types of relationships (e.g., familial relationships, friendships, and/or professional relationships), and two or more nodes may be
SFT-106-A-PCT connected by a plurality of edges (e.g., a first edge indicating a friendship among the two or more nodes and a second edge indicating a professional relationship among the two or more nodes). An architecture of the graph neural network may correspond to the multigraph nature of the input graph data set. For example, a graph neural network may include two or more distinct subsets of neurons that are respectively configured to receive and process data associated with a subset of edges of the input graph data set that are of a particular edge type (e.g., a first subset of neurons that is configured to receive and process nodes connected by edges that represent friendships, and a second subset of neurons that is configured to receive and process nodes connected by edges representing professional relationships). As yet another example, an input hypergraph data set may include one or more hyperedges that interconnect three or more nodes. An architecture of a graph neural network that is configured to receive and process the input hypergraph data set may include one or more neurons with synapses that interconnect to two or more other neurons in correspondence with one or more hyperedges of the input hypergraph data set. [01482] As another example, an architecture of some graph neural networks include one or more layers that perform particular functions on the output of neurons of another layer, such as a pooling layer that performs a pooling operation (e.g., a minimum, a maximum, or an average) of the outputs of one or more neurons, and that generates output that is received by one or more other neurons (e.g., one or more neurons in a following layer of the graph neural network) and/or as an output of the graph neural network. Examples of graph neural networks that include one or more direct pooling layers include, without limitation, SimplePooling, Set2Set, and SortPooling. Examples of graph neural networks that include one or more hierarchical pooling layers include, without limitation, Coarsening, ECC, DiffPool, TopK, gPool, Eigenpooling, and SAGPool. [01483] As another example, some graph neural networks (e.g., graph convolution networks) include one or more convolutional layers, each of which performs a convolution operation to an output of neurons of a preceding layer of the graph neural network. [01484] As another example, an architecture of some graph neural networks include memory based on an internal state, wherein the processing of a first input data set causes the graph neural network to generate and/or alter an internal state, and the internal state resulting from the processing of one or more earlier input data sets affects the processing of second and later input data sets. That is, the internal state retains a memory of some aspects of earlier processing that contribute to later processing of the graph neural network. Examples of graph neural networks that include memory features and/or stateful features include graph neural networks featuring one or more gated recurrence units (GRUs) and/or one or more long-short-term-memory (LSTM) cells. In some graph neural networks, these features may be further adapted to accommodate graph processing, such as
SFT-106-A-PCT gated graph neural networks (GGRUs), tree LSTM networks, graph LSTM networks, and/or sentence LSTM networks. [01485] As another example, an architecture of some graph neural networks includes one or more recurrent and/or reentrant properties. For example, at least a portion of output of the graph neural network during a first processing is included as input to the graph neural network during a second or later processing, and/or at least a portion of an output from a layer is provided as input to the same layer or a preceding layer of the graph neural network. As another example, in some graph neural networks, an output of a neuron is also received as input by the same neuron during a same processing of an input and/or a subsequent processing of an input. The output of the neuron may be evaluated (e.g., weighted, such as decayed) before being provided to the neuron as input. [01486] As another example, an architecture of some graph neural networks includes two or more subnetworks (e.g., two or more graph neural networks that are configured to process graph data concurrently and/or consecutively). Some graph neural networks include, or are included in, an ensemble of two or more neural networks of the same, similar, or different types (e.g., a graph neural network that outputs data that is processed by a non-graph neural network, Gaussian classifier, random forest, or the like). For example, a random graph forest may include a multitude of graph neural networks, each configured to receive at least a portion of an input graph data set and to generate an output based on a different feature set, different architectures, and/or different forms of processing. The outputs of respective graphs of the random graph forest may be combined in various ways (e.g., a selection of an output based on a minimization and/or maximization of an objective function, or a sum and/or averaging of the outputs) to generate an output of the random graph forest. [01487] In some cases, an architecture of a graph neural network may be designed by a user. For example, a user may choose one or more hyperparameters of a graph neural network (e.g., a number of layers, a number of neurons in each layer, an activation function used by at least some neurons, and the like) in order to process an input graph data set. In some cases, the selected one or more hyperparameters may be based on domain-specific knowledge, e.g., a specific data type, internal organization or structure, and/or task associated with an input graph data set. [01488] Alternatively or additionally, in some cases, an architecture of a graph neural network may be selected by an automated process. For example, a hyperparameter search process may determine one or more hyperparameters of a graph neural network based on an analysis of an input graph data set to be received and processed by the graph neural network and/or an analysis of an output graph data set to be generated and provided as output by the graph neural network. The hyperparameter search process may determine various combinations of hyperparameters for variations of the graph neural network (e.g., graph neural networks with different numbers of
SFT-106-A-PCT layers, different numbers of neurons within each layer, graph neural networks including neurons with different activation functions, and/or graph neural networks with different sets of synapses interconnecting the neurons of various layers). The hyperparameter search process may process an input graph data set (e.g., a training input graph data set) using different graph neural networks that correspond to different sets of hyperparameters. The hyperparameter search process may compare the output of the different graph neural networks (e.g., determining a performance measurement for the output of each graph neural network, and comparing the performance measurements of the different graph neural networks) in order to determine and select a graph neural network that generates desirable output (e.g., output that most closely corresponds to a target output associated with the training input graph data set). The hyperparameter search process may discard the other graph neural networks and may use the selected graph neural network to process input graph data sets. In some cases, the hyperparameter search process may iteratively generate and test refined combinations of hyperparameters. For example, after selecting a graph neural network in a first hyperparameter search processing the hyperparameter search process may perform a second hyperparameter search processing by generating additional graph neural networks based on combinations of hyperparameters that are closer to the hyperparameters of the selected graph neural network, and evaluating the output of the additional graph neural networks. In some cases, the hyperparameter search process may perform a grid search over the set of valid hyperparameter combinations. Iterative refinement of the hyperparameters may enable the hyperparameter search process to determine an architecture of a graph neural network that is well-tuned to a particular task (e.g., an architecture of a graph neural network that demonstrates consistently high performance on input graph data sets within a particular domain of data and/or a particular task). In some cases, a hyperparameter search process may communicate with a user to determine combinations of hyperparameters to evaluate and/or to select for the graph neural network. For example, the hyperparameter search process may present, to a user, a result of a first hyperparameter evaluation (e.g., an output of a graph neural network that was selected through a first hyperparameter search processing). Based on an evaluation of the output by the user, the hyperparameter search process may perform a second or further hyperparameter search processing (e.g., choosing a small refinement of the hyperparameters based on a positive response of the user to the output of a selected graph neural network, and/or choosing a larger refinement of the hyperparameters based on a negative response of the user to the output of the selected graph neural network). [01489] As another example, some graph neural networks include architectures based on graph convolutional networks (GCNs), wherein a convolutional layer applies a convolution operation to outputs of one or more filters of a previous filter layer of the graph convolutional network. Graph
SFT-106-A-PCT convolutional networks may include spectral convolutional networks that are configured to receive, as input, a spectral representation of an input graph data set, and to apply processing (including one or more convolutional operations) to various spectral components of the spectral representation of the input graph data set. Examples of spectral convolutional networks include, without limitation, ChebNet and diversified graph convolutional networks (DGCNs). As another example, some graph convolutional networks include architectures based on spatial convolutional networks (SCNs) that are configured to receive, as input, spatial representations of an input graph data set (e.g., spatial information that represents one or more neighborhoods of nodes and/or edges of the input graph data set), and to apply processing (including one or more convolutional operations) to various spatial components of the spatial representation of the input graph data set. Examples of spatial convolutional networks include, without limitation, spatial convolutional neural networks (SCNNs), spatial and/or spatial-temporal GraphSAGE networks, and some deep convolutional neural networks (DCNNs). [01490] Graph neural networks can be generated by a variety of machine learning platforms, frameworks, and/or tools, including, without limitation, PyTorch Geometric, Deep Graph Library, TensorFlow GNN, Graph Nets, Spektral, and Jraph. Frameworks for graph convolutional networks include, without limitation, message passing neural networks (MPNNs), non-local neural networks (NLNNs), mixture model neural networks (MoNet), and Graph Networks (GN). [01491] Further explanation and/or examples of various architectures of graph neural networks, including the design and implementation of architectures of such graph neural networks, are presented elsewhere in this disclosure and/or will be known to or appreciated by persons of ordinary skill in the art. [01492] Graph neural networks – training and performance evaluation [01493] Like other types of neural networks, graph neural networks are typically generated with arbitrarily selected parameters (e.g., synaptic weights that are initially set to randomized values). Also, like other types of neural networks, an initialized graph neural network learns to evaluate input graph data sets through training, in which the parameters of the graph neural network are adjusted to promote desirable processing that produces expected and/or desirable outputs. [01494] The training of graph neural networks may involve one or more training data sets. For graph neural networks that receive and process input graph data sets, the training data may include one or more training input graph data sets. Alternatively or additionally, for graph neural networks that receive and process input non-graph data, the training data may include one or more sets of training non-graph data. [01495] The training data for a graph neural network may be based on authentic input data that was previously collected and/or analyzed, or that was collected and analyzed for the purpose of
SFT-106-A-PCT training the graph neural network. For example, in order to process graphs that represent an industrial environment, the training data may include sensor data that was previously and/or is currently received from one or more sensors associated with the industrial environment. Alternatively or additionally, the training data may include partially and/or fully synthetic data. For example, a first portion of training data may include data derived from an analysis of authentic data; authentic data that has been supplemented with synthetic data (e.g., an image of a real-world scene including an inserted artificial object); authentic data that has been modified by a user (e.g., an image of a real-world scene that has been modified by a user); and/or data generated by one or more algorithms (e.g., other machine learning models and/or simulations of real-world processes). In some cases, the training data set may include both authentic training data and synthetic training data that is based on the authentic training data (e.g., both a real-world image and a modified version of the real-world image that has been adjusted in brightness, contrast, size, resolution, scale, shape, aspect ratio, color depth, or the like). [01496] The training data for a graph neural network may be limited to a selected data domain. For example, training data for a graph neural network that analyzes social networks may include one or more samples of individuals from within one or more selected social networks. In other cases, the training data for a graph neural network may be generated from a variety of data domains. For example, training data for a graph neural network that analyzes geographic data may include one or more samples of locations of interest and interconnecting pathways from natural outdoor geographic regions (e.g., forests), artificial outdoor geographic regions (e.g., road networks), indoor geographic regions (e.g., caves or shopping malls), historic geographic regions (e.g., maps from ancestral eras and/or civilizations), and/or synthetic geographic regions (e.g., geographic maps from videogames). [01497] The training data for a graph neural network may be wholly or partially unlabeled. For example, the training data set for an industrial environment may include sensor measurements collected from the industrial environment, but may not include any data indicating an analysis, classification, metadata, interpolations, extrapolations, interpretation, explanation, and/or user reaction associated with the sensor measurements. Alternatively or additionally, the training data for a graph neural network may be wholly or partially labeled. For example, the training data set for an industrial environment may include sensor measurements collected from the industrial environment, and one or more subsets of sensor measurements may be associated with one or more analyses, classification labels, metadata, interpolations, extrapolations, determinations, interpretations, explanations, and/or user reactions associated with the subset of sensor measurements. Training data may associate labels, metadata, or the like with one or more nodes and/or node properties of a training input graph data set; one or more edges and/or edge properties
SFT-106-A-PCT of a training input graph data set; one or more graph properties of the training input graph data set; and/or one or more portions of non-graph data of a training input data set. In some cases, the labels, data, metadata, or the like associated with at least a portion of a training input data set are selected by one or more users (e.g., a human classification of at least a portion of the training data set). In some cases, the labels, data, metadata, or the like associated with at least a portion of a training input data set are selected by another algorithm (e.g., a simulation or another machine learning model). In some cases, the labels, data, metadata, or the like associated with at least a portion of a training input data set are selected by a cooperation of a human and an algorithm (e.g., a determination by a simulation or another machine learning model that is verified by a reviewing human user). [01498] Graph neural networks can be trained based on one or more training data sets and one or more learning techniques. As an example, some graph neural networks are trained through an unsupervised learning technique. For example, a training input data set may not include any labels, data, metadata, or the like associated with various portions of the training input data set. The graph neural network may be trained to identify patterns arising within the training input data sets. For example, a training input data set may include data that indicates one or more anomalies (e.g., nodes and/or edges that appear to represent outliers in a data distribution of the nodes and/or edges of the graph) and/or distinctive patterns or structures arising in the data (e.g., cycles arising in a directed and/or undirected graph). The graph neural network may be trained to detect such anomalies, patterns, and/or structure in the training input data sets. The results of unsupervised learning of a graph neural network may be evaluated based on an evaluation of the output of the graph neural network (e.g., a confusion matrix that includes determinations of true positive determinations, true negative determinations, false positive determinations, and/or false negative determinations) and/or performance scores (e.g., an F1 performance score based on ratios of true positives, false positives, true negatives, and false negatives). The weights of various parameters of the graph neural network can be automatically adjusted, corrected, refined, or the like, such that subsequent processing of the same input training data set and/or other input training data sets generates improved evaluations and/or performance scores. [01499] As another example, some graph neural networks are trained through a supervised learning technique. For example, a training input data set may associate respective portions (e.g., respective training data samples, such as different training input graph data sets) with one or more labeled outputs that are expected and/or desirable of the trained graph neural network. As an example, a graph neural network may be trained to output a classification of a training input graph data set and/or one or more nodes and/or edges thereof. During a supervised learning process, the training input graph data set may be provided as input to the graph neural network and processed by the
SFT-106-A-PCT graph neural network to generate a predicted classification of a training input graph data set and/or one or more nodes and/or edges thereof. The predicted classifications may be compared with one or more labeled outputs associated with the training input graph data set (e.g., one or more labels associated with an expected and/or desirable classification of the training input graph data set and/or one or more nodes and/or edges thereof). Based on the comparison, the weights of various parameters of the graph neural network can be automatically adjusted, corrected, refined, or the like, such that subsequent processing of the same input training data set and/or other input training data sets generates improved evaluations and/or performance scores (e.g., more accurate predictions of one or more labels associated with an expected and/or desirable classification of the training input graph data set and/or one or more nodes and/or edges thereof). As another example, a graph neural network may be trained to generate, as output, an output graph data set that is based on a processing of a training input graph data set. During a supervised learning process, the training input graph data set may be provided as input to the graph neural network and processed by the graph neural network to generate an output graph data set. The output graph data set generated by the graph neural network may be compared with one or more expected and/or desirable output graph data sets corresponding to the training input graph data set (e.g., one or more output graph data sets that are expected and/or desired as output when the graph neural network processes the training input graph data set). Based on the comparison, the weights of various parameters of the graph neural network can be automatically adjusted, corrected, refined, or the like, such that subsequent processing of the same input training data set and/or other input training data sets generates improved evaluations and/or performance scores (e.g., more desirable and/or expected output graph data sets). [01500] As another example, some graph neural networks are trained through a blended training process that includes both supervised and unsupervised learning. For example, a blended training process may evaluate the performance of a graph neural network in training based on both a comparison of predicted outputs of the graph neural network to expected and/or desirable outputs corresponding to an input training data set, and based on one or more automatically determined performance metrics, such as a confusion matrix and/or F1 scores. Some blended training processes may include a round of supervised learning followed by a round of unsupervised learning, or may perform rounds of training that include both supervised and unsupervised learning techniques (e.g., optionally with different weights and/or performance thresholds associated with the evaluation of the graph neural network and the updating of the parameters). [01501] As another example, some graph neural networks are trained through a semi-supervised learning process. For example, a training data set may include a large number of samples, of which only a small number of samples are labeled (e.g., associated with expected and/or desirable
SFT-106-A-PCT outputs) and a large remainder of the samples are unlabeled (e.g., not associated with expected and/or desirable outputs). The graph neural network may be trained based on the labeled and/or unlabeled training data, and a performance of the graph neural network may be evaluated based on the labels and/or other metrics. In particular, some unlabeled portions of the input training data may be identified as being incorrectly evaluated by the graph neural network (e.g., the graph neural network may generate incorrect outputs such as predictions or classifications, incorrect and/or malformed output graph data sets, or the like). At least a portion of such unlabeled portions of the input training data (e.g., training data samples that appear to be difficult to classify correctly and/or with high confidence) may be submitted to a human reviewer, and the semi-supervised learning process may receive, from the human reviewer, one or more labels that correspond to an expected and/or desirable output of the graph neural network for such portions of the input training data. Training or retraining of the graph neural network may involve the newly labeled portions of the input training data, as well as other portions of the input training data. Semi-supervised learning may enable graph neural networks to be trained based on a smaller degree of human involvement (e.g., a smaller number of labels associated with portions of the input training data set by human reviewers), and may therefore improve a speed, cost, and/or performance of training the graph neural network. [01502] A training of a graph neural network may occur in one or more epochs. For example, for each epoch, the graph neural network may be provided with input comprising each portion of a training data set, and a performance of the graph neural network may be determined based on the output of the graph neural network for each portion of the training data set. Based on the determined performance, one or more parameters of the graph neural network may be updated. For example, weights of the synapses between neurons of the graph neural network may be adjusted such that a performance of the graph neural network over each portion of the training data set improves. During the training of a graph neural network, various techniques may be used to evaluate the performance of the graph neural network. As a first example, outputs of the graph neural network (e.g., output graph data sets and/or predictions, such as classifications of the graph, one or more nodes, and/or one or more edges) may be compared with expected and/or desirable outputs. Differences between the outputs and the expected and/or desirable outputs may be used to determine an entropy and/or loss of the output of the graph neural network as compared with corresponding expected and/or desirable outputs. In some variations, the entropy or loss of the graph neural network determined during or after a current epoch may be compared with an entropy or loss of the graph neural network determined during or after a previous epoch to determine a differential and/or marginal entropy or loss. A negative differential and/or marginal entropy or loss may indicate that the training of the graph neural network is productive (e.g., the performance of
SFT-106-A-PCT the graph neural network improved in the current epoch as compared with a previous epoch). A zero or positive differential and/or marginal entropy or loss may indicate that the training of the graph neural network is unproductive (e.g., the performance of the graph neural network did not improve, or diminished, in the current epoch as compared with a previous epoch). Training of the graph neural network may therefore continue as long as the differential and/or marginal entropy or loss remains negative and, optionally, exceeds a threshold magnitude that indicates significant training progress. [01503] As another example, outputs of the graph neural network (e.g., output graph data sets and/or predictions, such as classifications of the graph, one or more nodes, and/or one or more edges) may be classified as one of a true positive, a false positive, a true negative, or a false negative. The performance of the graph neural network may be evaluated as a confusion matrix, e.g., based on a calculation of the performance over the incidence of true positive, false positive, true negative and false negative outputs. In some cases, the calculation may be weighted based on a risk matrix that applies different weights to each classification of the output. For example, in a graph neural network that generates classifications of graphs that correspond to diagnoses of medical conditions, it may be determined false negatives (e.g., missed diagnoses) are very harmful or costly, while false positives (e.g., misdiagnoses that can be corrected by further evaluation) may be determined to be comparatively harmless. Accordingly, the performance of the graph neural network may be determined based on a weighted calculation over the confusion matrix that more severely penalizes the performance based on false negatives than false positives. [01504] As another example, the training of a graph neural network may involve an improvement of an objective function that serves as a basis for measuring the performance of the graph neural network. For example, the objective function may include (without limitation) a loss minimization, an entropy minimization, a precision maximization, a recall maximization, an error minimization, or a consistency maximization. The objective function may include a comparison of the performance of the graph neural network over various distributions of the input data set (e.g., a minimax optimization, such as minimizing a maximum loss over any portion of the input data set, or a maximin optimization, such as maximizing a minimum loss over any portion of the input data set). In some training scenarios that involve reinforcement learning, the output of a graph neural network may include and/or may be interpreted as a policy, e.g., a set of responses of an agent based on respective conditions. The performance of the graph neural network may be based on various objective functions that evaluate various properties of the generated and/or interpreted policy. For example, in a q-learning reinforcement learning process, the objective function applied to the policy may include a maximization of an action value of each behavior that may be performed in response to various conditions.
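By way of non-limiting illustration of the risk-weighted confusion matrix evaluation described above, the following Python sketch (using the NumPy library; the function name risk_weighted_score and the penalty values are illustrative assumptions) shows one way in which a confusion matrix may be computed for binary outputs of a graph neural network and weighted so that false negatives are penalized more severely than false positives.

# Illustrative sketch only: confusion-matrix evaluation with risk weighting.
import numpy as np

def risk_weighted_score(predicted, actual, fn_penalty=5.0, fp_penalty=1.0):
    # Evaluate binary predictions with a confusion matrix and penalize false
    # negatives (e.g., missed diagnoses) more heavily than false positives.
    # The penalty values are hypothetical and would be drawn from a risk matrix.
    predicted, actual = np.asarray(predicted), np.asarray(actual)
    tp = np.sum((predicted == 1) & (actual == 1))
    tn = np.sum((predicted == 0) & (actual == 0))
    fp = np.sum((predicted == 1) & (actual == 0))
    fn = np.sum((predicted == 0) & (actual == 1))
    confusion = np.array([[tp, fp], [fn, tn]])
    # Higher is better: correct outputs count positively, errors are penalized by risk.
    score = (tp + tn) - fp_penalty * fp - fn_penalty * fn
    return confusion, score

confusion, score = risk_weighted_score(predicted=[1, 0, 1, 0, 0], actual=[1, 1, 0, 0, 0])
print(confusion)
print(score)  # the single false negative dominates the penalty

A score of this form, or a comparable weighted objective function, may then be used as the performance measurement that drives the adjustment of the parameters of the graph neural network during training.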
SFT-106-A-PCT [01505] As another example, the training of graph neural networks may occur concurrently with the hyperparameter search and/or selection. For example, a hyperparameter search process may initially identify a first set of combinations of hyperparameters of graph neural networks to be evaluated using a training data set. Based on each such combination of hyperparameters, a graph neural network may be generated and at least partially trained to determine its performance. Based on the evaluation of the outputs of the graph neural networks corresponding to respective combinations of hyperparameters, the hyperparameter search process may identify a candidate graph neural network with the highest performance. The hyperparameter search process may then generate a second set of combinations of hyperparameters based on the hyperparameters of the candidate graph neural network, and may further (at least partially) train and evaluate the performance of additional graph neural networks based on the second set of combinations of hyperparameters. A comparison of the performance of the additional graph neural networks may cause the hyperparameter search process to retain the candidate graph neural network or to choose a new candidate graph neural network from among the additional graph neural networks. The hyperparameter search process may continue until additional improvements in the performance of candidate graph neural networks are not achievable and/or are below a threshold performance improvement. In this selection process, a variety of performance metrics may be used. As previously discussed, the performance metrics may include an evaluation of the outputs of the graph neural networks (e.g., a loss or entropy, a differential or marginal loss or entropy, a confusion matrix, an F1 score, or the like). Alternatively or additionally, the performance metrics may include other features of the output, such as a consistency of the output of the graph neural network over the distribution of data in the training data set and/or a bias in the performance the output of the graph neural network for selected data distributions of the training data set, and/or a smoothness or oversmoothness of the graph nodes represented in the graph neural network. Alternatively or additionally, the performance metrics may include one or more measurements of computational resource expenditures to perform training and/or inference of input data sets with the graph neural network (e.g., CPU and/or GPU utilization, memory usage, training time and/or complexity, processing latency between receiving input and generating output, or the like). Aggregate performance measurements may be based on a variety of such considerations, and may enable a human designer and/or a hyperparameter search process to perform a selection of a graph neural network based on various performance tradeoffs (e.g., a preference for a first graph neural network that produces high-accuracy, high-consistency, and/or high-confidence results but that requires a large amount of computational resources, time, and/or cost, vs. a preference for a second graph neural network that produces reasonable-accuracy, reasonable-consistency, and/or reasonable- confidence results using a smaller amount of computational resources, time, and/or cost). For
SFT-106-A-PCT example, a measurement of computational resource utilization by a particular graph neural network may correspond to a numeric penalty in various measurement of the performance of the graph neural network (e.g., a loss, entropy, and/or objective function output). [01506] In various forms of graph neural network training based on these and other learning techniques, various training methods can be used to update the parameters of a graph neural network in training and/or to evaluate the performance of a graph neural network in training. For example, optimizers that may be used during the training of graph neural networks may include (without limitation) linear regression; root mean squared propagation (RMSprop); stochastic gradient descent; adaptive stochastic gradient descent (Adagrad); adaptive stochastic gradient descent with adaptive learning (Adadelta); adaptive moment estimation (Adam); Nesterov accelerated adaptive moment estimation (Nadam); Nesterov accelerated gradient and momentum (NAG); Monte Carlo simulations involving various variance reduction techniques, such as control variates; or the like, including variations and/or combinations thereof. Training techniques for particular types of graph neural networks may include optimizers that are specialized for such particular types of graph neural networks (e.g., graph convolutional networks may be trained using a FastGCN optimizer and/or receptive field control (RFC) optimizers). [01507] As further examples, graph neural network training may include a variety of techniques that are also applicable to non-graph machine learning models, including non-graph neural networks. As a first such example, training may occur in batches and/or mini-batches of the training data set, wherein the graph neural network evaluates a batch (e.g., plurality of input data sets) of an input training data set, and the parameters of the graph neural network are updated based on an aggregation of the evaluation of the outputs of the graph neural network for the batch of input data sets. In various training techniques, batches may be selected at random from the training input data set or may be selected in an organized manner, e.g., as various subsets that are representative of one or more data distributions of the training input data set. For example, if the graph neural network in training exhibits good performance over some data distributions of the training input data and poor performance over other data distributions of the training input data, the continued training of the graph neural network may focus on, prioritize, and/or overweight the training based on batches of training input data that reflect the data distributions associated with poor performance. In various training techniques, a batch size of batches of training input data sets may be fixed, or the batch size may vary based on a progress of the training of the graph neural network. [01508] As another example, in various training techniques for graph neural networks, an entire set of training input data may be partitioned into a training data set that is used only to train the graph neural network and update its parameters; a validation data set that is used only to evaluate a prospective and/or in-training graph neural network; and/or a test data set that is used to only
SFT-106-A-PCT evaluate a final performance of the fully trained graph neural network. The partitioning of the training input data may be based on one or more ratios (e.g., a 90/5/5 partitioning of the training input data into a training data set, a validation data set, and a test data set, or a 98/1/1 partitioning of the training input data into a training data set, a validation data set, and a test data set). For example, during an epoch, the performance of the graph neural network may be evaluated based on various portions of the training data set, and the parameters of the graph neural network may be adjusted based on the determined performance. However, continued training and updating of the graph neural network based on the training data set may result in overfitting, e.g., “memorization” of correct outputs that correspond to various portions of the training data set. Due to such overfitting, the performance of the graph neural network in evaluating previously evaluated input data sets may improve, but performance of the graph neural network on previously unevaluated input data sets may decline. Instead, at the conclusion of an epoch, the performance of the graph neural network may be evaluated based on various portions of the validation data set, which is not otherwise used to update the parameters of the graph neural network. Evaluation of the performance of the graph neural network on previously unseen data can indicate that the performance of the graph neural network is genuinely improving (e.g., based on learned principles of data evaluation that apply consistently to both previously seen and previously unseen input data sets), resulting in a continuation of training. Alternatively, evaluation of the performance of the graph neural network on previously unseen data can indicate that the performance of the graph neural network is resulting in overfitting to the training data set (e.g., based on “memorization” of correct outputs for previously seen input data sets that do not inform the correct evaluation of previously unseen input data sets), resulting in a conclusion of training. Such a conclusion may be referred to as “early stopping” of training to reduce overfitting of the graph neural network to the training data set and to preserve the performance of the graph neural network on previously unseen input data sets. [01509] As another example, various training techniques for graph neural networks may include one or more regularization techniques, in which the inputs to the graph neural network and/or the processing of the input are adjusted to reduce overfitting. As a first example, the training of a graph neural network may include a dropout regularization technique, in which some neurons of the graph neural network are disabled for some instances of processing input data sets. In various regularization techniques, neurons to be disabled are selected randomly (e.g., 5% of the neurons during each epoch) and/or can be selected in a sequence (e.g., a round-robin selection of deactivated neurons). The selected neurons may be disabled by refraining from processing the inputs of the neurons and setting the outputs of the selected neurons to zero, and/or by processing the selected neurons but temporarily setting the weights of the synapses of the neurons to zero. As
SFT-106-A-PCT a second example, the training of a graph neural network may include a dropnode and/or dropedge regularization technique, in which portions of an input graph data set that include some nodes and/or some edges of the input graph data set are disabled. In various regularization techniques, nodes and/or edges to be disabled for an instance of processing are selected randomly (e.g., 5% of the nodes and/or edges during each epoch) and/or can be selected in a sequence (e.g., a round-robin selection of deactivated nodes and/or edges). The selected nodes and/or edges may be disabled by refraining from processing portions of the input data set that correspond to the selected nodes and/or edges, and/or by deactivating neurons of an input layer of the graph neural network that are configured to receive input data from the selected nodes and/or edges. As a third example, the performance of a graph neural network may be subjected to various forms of regularization, including L1 (“lasso”) regularization and/or L2 (“ridge”) regularization. These and other forms of regularization may be used, alone or in combination, to reduce overfitting of a graph neural network to an input training data set. For example, regularization may reduce an overweighting of a subset of nodes, edges, and/or neurons in the processing of various input data sets (e.g., by reducing and/or penalizing neurons having synaptic weights with magnitudes that are disproportionately large compared to the synaptic weights of other neurons of the graph neural network). [01510] As another example, various training techniques for graph neural networks may combine a graph neural network with one or more other machine learning models, including one or more other graph neural networks and/or one or more non-graph neural networks. For example, a bootstrap aggregation (“bagging”) training technique involves a determination of a decision tree as an ensemble of machine learning models based on different bootstrap samples of the training input data set. Each machine learning model, including one or more graph neural networks, may be trained based on a random subsample of the training input data set. For a particular input data set, many of the trained machine learning models of the ensemble, including one or more graph neural networks, may present poor or only adequate performance. However, one or a few of the trained machine learning models may generate high-performance output for the particular input data set and others like it (e.g., for input data sets that share one or more properties, such as a select graph property, a select node property, and/or a select edge property). Thus, for any particular input data set, an evaluation of the specific properties of the particular input data set may enable a selection among the available models of the ensemble that may be used to evaluate the particular input data set. That is, a machine learning model (e.g., a graph neural network) that is generally a poorly performing model on most input data sets may exhibit good performance over a small neighborhood of input data sets that includes the particular data set, and may therefore be selected to evaluate the particular data set. Alternatively or additionally, the bootstrap aggregation may
SFT-106-A-PCT involve an evaluation of an input data set by a plurality of machine learning models (optionally including one or more graph neural networks of the ensemble) and a combination of the outputs of the selected machine learning models. In such scenarios, it is possible the individual outputs of the individual machine learning models exhibit poor performance (e.g., incorrect and/or low- confidence classifications of an input data set), but a determination of a consensus over the outputs of the multiple machine learning models may exhibit high performance (e.g., accurate and/or high- confidence classifications of the input data set). [01511] As another example, various training techniques for graph neural networks may include a boosting ensemble technique, in which an output of a first trained machine learning model (e.g., a first graph neural network) is evaluated by a second trained machine learning model (e.g., a second graph neural network) to predict an accuracy and/or confidence of the prediction of the first trained machine learning model. For example, a first trained graph neural network may be evaluated to determine that it generates accurate and/or high-confidence output for a first group of input data sets (e.g., input graph data sets that include a first graph property, a first node property, and/or a first edge property), but inaccurate and/or low-confidence output for a second group of input data sets (e.g., input graph data sets that include a second graph property, a second node property, and/or a second edge property). A particular input data set may initially be processed by the first trained graph neural network to determine a first output (e.g., an output graph neural network or a prediction, such as a classification). A second trained graph neural network may evaluate the input data set and/or the output of the first graph neural network to predict an accuracy and/or confidence of the first graph neural network over input data sets that resemble the particular input data set. If the second trained graph neural network predicts that the output of the first graph neural network is likely to be of high accuracy and/or confidence, then the second trained graph neural network may provide the output of the first trained graph neural network as its output. However, if the second trained graph neural network predicts that the output of the first graph neural network is likely to be of low accuracy and/or confidence, then the second trained graph neural network may adjust, correct, and/or discard the output of the first trained graph neural network, or preferentially select an output of a different machine learning model (e.g., a third trained graph neural network) to be provided as output instead of the output of the first trained graph neural network. In such scenarios, it is possible that the individual outputs of the individual machine learning models exhibit poor performance (e.g., incorrect and/or low-confidence classifications of an input data set), but the review and validation of the output of some machine learning models by other machine learning models may enable a determination of a consensus over the outputs of the multiple machine learning models that exhibits high performance (e.g., accurate and/or high-confidence classifications of the input data set).
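As a concrete illustration of the consensus determination described above, the following Python sketch combines the predictions of several trained ensemble members by confidence-weighted voting. The three member functions and their class labels are hypothetical stand-ins; in practice each member could be a graph neural network or another machine learning model trained on a different bootstrap sample of the training input data set.

```python
# Illustrative sketch only: consensus over an ensemble of trained models.
# Each member is a hypothetical stand-in returning (predicted_class, confidence).
from collections import defaultdict

def member_one(input_data): return ("class_a", 0.55)
def member_two(input_data): return ("class_b", 0.40)
def member_three(input_data): return ("class_a", 0.70)

def ensemble_consensus(members, input_data):
    weighted_votes = defaultdict(float)
    for member in members:
        predicted_class, confidence = member(input_data)
        weighted_votes[predicted_class] += confidence  # confidence-weighted vote
    # The consensus is the class with the largest accumulated weighted vote.
    return max(weighted_votes.items(), key=lambda item: item[1])

print(ensemble_consensus([member_one, member_two, member_three], input_data=None))
```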
SFT-106-A-PCT [01512] As another example, following the conclusion of training a graph neural network, the graph neural network may be deployed for use (e.g., transferred to one or more devices, deployed into a production environment, and/or connected to a source of production input data). The performance of the graph neural network over input data sets may continue to be evaluated and monitored to verify that the graph neural network continues to perform well over various inputs. In some cases, the performance of the graph neural network may change between training and deployment. For example, a distribution of production input data processed by the graph neural network may differ from the distribution of training input data that was used to train the graph neural network. Alternatively or additionally, a distribution of production input data may change over time, e.g., between a time of deploying the graph neural network and a later time after such deployment. Such instances of changes in the performance of a fully trained and deployed graph neural network may be referred to as “drift.” In some such cases, “drift” may be reduced or eliminated by retraining or continuing training of the graph neural network, e.g., using additional training input data that corresponds to an actual or current distribution of the production input data. Alternatively or additionally, “drift” may be reduced or eliminated by training a substitute graph neural network to replace the initially deployed graph neural network. For example, the substitute graph neural network may include a different set of hyperparameters than the initially deployed graph neural network (e.g., additional layers and/or neurons to provide greater learning capacity; additional regularization techniques to reduce overfitting to the training data set; and/or the inclusion of specialized layers, such as pooling, filtering, memory, and/or attention layers). As another example, the initially deployed graph neural network may be added to an ensemble of other machine learning models, optionally including other graph neural networks, to generate improved outputs (e.g., higher-accuracy predictions) based on a consensus determined over the outputs of a number of machine learning models.
[01513] As another example, the training and/or use of graph neural networks may be susceptible to various forms of adversarial attack. For example, in an adversarial attack scenario, a particularly designed and/or selected input to a graph neural network (an “adversarial input,” such as an unusual, malformed, and/or anomalous input) may cause the graph neural network to generate output that is incorrect, inconsistent with other outputs, and/or surprising. As an example, in a form of graph modification adversarial attack that may be referred to as a node injection poisoning adversarial attack (NIPA), one or more nodes of an input graph data set are selected and/or altered to shift an output of the graph neural network based on the adversarial input (e.g., altering a classification and/or prediction of the input graph data set, or altering an output graph data set based on the adversarial input graph data set).
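To make the effect of such graph modification attacks concrete, the following Python sketch measures how far a model output shifts when a single edge is injected into an input graph; a large shift from a small perturbation suggests susceptibility to this class of attack. The toy graph and the score_graph function are hypothetical stand-ins for an input graph data set and a trained graph neural network.

```python
# Illustrative sketch only: sensitivity of a model output to a one-edge perturbation.
# "score_graph" is a hypothetical stand-in for a trained graph neural network.
def score_graph(nodes, edges):
    # Placeholder scoring: a normalized edge count standing in for a real prediction.
    return len(edges) / max(1, len(nodes))

def perturbation_shift(nodes, edges, injected_edge):
    baseline = score_graph(nodes, edges)
    perturbed = score_graph(nodes, edges | {injected_edge})  # inject one edge
    return abs(perturbed - baseline)

nodes = {"a", "b", "c", "d"}
edges = {("a", "b"), ("b", "c")}
print(perturbation_shift(nodes, edges, injected_edge=("a", "d")))
```

As another example, in a form of graph modification adversarial attack that may be referred to as an edge perturbing adversarial attack, one or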
SFT-106-A-PCT more edges of an input graph data set are selected and/or altered to shift an output of the graph neural network based on the adversarial input (e.g., altering a classification and/or prediction of the input graph data set, or altering an output graph data set based on the adversarial input graph data set). As another example, in a training data injection attack, one or more portions of training input data on which a graph neural network is trained are designed and/or altered to alter the training of the graph neural network (e.g., a mislabeling of a particular training data input that causes the graph neural network to misclassify other inputs that correspond to the mislabeled training data input, and/or an injection of data samples into a training data set that alter a data distribution of the training data set upon which the graph neural network is trained). As another example, in a membership inference adversarial attack, properties and/or outputs of a graph data set are evaluated to identify properties of one or more training data inputs on which the graph data set was trained (e.g., an influential property of an input data set that causes the graph data set to select a particular classification for the any input data sets that include the property). As another example, in a property inference adversarial attack, properties and/or outputs of a graph data set are evaluated to identify general properties of training data inputs on which the graph data set was trained (e.g., a distribution of data included in the training data set, which may indicate particular distributions of input data over which the graph neural network was not trained, or over which the graph neural network was incompletely and/or incorrectly trained). As another example, in a model inversion adversarial attack, outputs of a graph neural network are examined to identify properties of corresponding input data sets that cause the graph neural network to generate such outputs. [01514] Based on these and other forms of adversarial attack, the training and/or evaluation of a graph neural network may be adjusted to protect the graph neural network from such adversarial attack. For example, before an input to a graph neural network is processed, the input may be evaluated and/or classified (e.g., by another machine learning model, including another graph neural network) in order to determine whether the input is adversarial. If so, the graph neural network may refrain from processing the adversarial input, may process the adversarial input in more limited conditions (e.g., processing only a portion of the adversarial input, and/or replacing a malformed or anomalous portion of the adversarial input with a corresponding non-malformed and/or non-anomalous portion). As another example, during processing of an input data set, the internal behavior of the graph neural network may be evaluated and/or classified (e.g., by another machine learning model, including another graph neural network) to determine whether the behavior indicates a processing of adversarial input (e.g., unusual neuron activations, unusual outputs of one or more neurons, and/or updates of internal states of memory units). If so, the processing of the adversarial input may be halted and/or an internal state of the graph neural network may be restored to a time before the adversarial input was processed. As another example,
SFT-106-A-PCT before output of a graph neural network is provided in response to an input data set, the output may be examined and/or classified (e.g., by another machine learning model, including another graph neural network) to determine whether it is incorrect, inconsistent with other inputs, and/or surprising. If so, the output of the graph neural network may be discarded and/or altered before being provided in response to the input data set. Further explanation and/or examples of various techniques for training and performance evaluation of graph neural networks are presented elsewhere in this disclosure and/or will be known to or appreciated by persons of ordinary skill in the art. [01515] Graph neural networks – applications [01516] Graph neural networks can be applied to input data sets (including input graph data sets and/or input non-graph data sets) in various applications, and can be configured and/or trained to generate outputs (including output graph data sets and/or output predictions, such as classifications) that are relevant to various tasks within such applications. [01517] For example, in the field of social networking, a graph data set may represent at least a portion of a social network, including nodes that represent people and that are connected by edges that represent relationships among two or more people. The graph data set representing a social network may be provided, as input, to a graph neural network that is configured to receive and process the input graph data set. The graph neural network may generate, as output, an output graph data set. For example, the output graph data set may include one or more new nodes that correspond to one or more newly discovered people within the social network, and/or one or more new edges that correspond to one or more newly discovered relationships that connect two or more people of the social network. The output graph data set may include one or more subgraphs and/or clusters that represent highly interconnected people of the social network, e.g., a social circle. The output graph data set may include a prediction of a recommendation of a relationship among two or more nodes corresponding to two or more people of the social network who share common personal traits, interests, and/or connections to other people. The output graph data set may include a prediction of a classification of a node corresponding to a person of the social network, e.g., a prediction of a personal interest of the person or a demographic trait of the person. The output graph data set may include a prediction of a classification of an edge that connects nodes representing two or more people of the social network, e.g., a prediction of a criminal association among two or more people of the social network. The output graph data set may include a determination of a relationship within the social network based on an attention model, e.g., an identification of a first node corresponding to a first person of the social network that appears to be influential to a second person of the social network represented by a second node of the graph. The output graph data set may include a prediction of a graph property of the graph, e.g., a
SFT-106-A-PCT classification of the social network as one or more types (e.g., a genealogy or familial social network, a friendship social network, and/or a professional relationship social network). [01518] As another example, in the field of pharmaceuticals, a graph data set may represent at least a portion of a molecule (e.g., a protein or a DNA sequence), including nodes that represent atoms of the molecule and that are connected by edges that represent bonds and/or spatial relationships among two or more atoms. The graph data set representing a molecule may be provided, as input, to a graph neural network that is configured to receive and process the input graph data set. The graph neural network may generate, as output, an output graph data set. For example, the output graph data set may include one or more new nodes that correspond to one or more newly discovered atoms that may be added to the molecule, and/or one or more new edges that correspond to one or more newly discovered atoms of the molecule. The output graph data set may include one or more subgraphs and/or clusters that represent highly interconnected subregions of the molecule, such as carbon atoms that form a benzene ring or a binding site for a protein. The output graph data set may include a prediction of a classification of one or more nodes corresponding to one or more atoms of the molecule, e.g., a prediction that a subset of atoms of the molecule include a binding site for an enzyme that may active and/or deactivate a protein. The output graph data set may include a prediction of a classification of an edge that connects nodes representing atoms of the molecule, e.g., a prediction of a chemically reactive bond that can be altered to alter a property of the molecule. The output graph data set may include a prediction of a graph property of the graph, e.g., a prediction of a shape or organization of the molecule, a classification of the molecule as an enzyme, and/or a prediction of a potential side-effect of a drug due to an undesirable interaction with another drug. [01519] As another example, in the field of software, a graph data set may represent at least a portion of a marketplace, including nodes that represent products and that are connected by edges that represent relationships between products. The graph data set representing a marketplace may be provided, as input, to a graph neural network that is configured to receive and process the input graph data set. The graph neural network may generate, as output, an output graph data set. For example, the output graph data set may include one or more new nodes that correspond to one or more newly discovered product, and/or one or more new edges that correspond to one or more newly discovered products. The output graph data set may include one or more subgraphs and/or clusters that represent highly interconnected products (e.g., two or more products that are often purchased and/or used together, or that compete in a particular market sector). The output graph data set may include a prediction of a recommendation of a relationship among two or more nodes corresponding to two or more products. The output graph data set may include a prediction of a classification of a node corresponding to a product, e.g., a prediction of an appeal, value, and/or
SFT-106-A-PCT demand of a product in a particular market segment, such as a particular subset of users. The output graph data set may include a prediction of a classification of an edge that connects nodes representing products, e.g., a prediction of a functional relationship between two or more products. The output graph data set may include a prediction of a graph property of the graph, e.g., a classification of the marketplace as increasing and/or decreasing in terms of supply, demand, size, prognosis, and/or public interest. [01520] As another example, in the field of logistics, a graph data set may represent at least a portion of a supply chain, including nodes that represent locations where resources are generated, manufactured, stored, exchanged, and/or consumed and that are connected by edges that represent means of transport of resources between two or more locations. The graph data set representing a supply chain may be provided, as input, to a graph neural network that is configured to receive and process the input graph data set. The graph neural network may generate, as output, an output graph data set. For example, the output graph data set may include one or more new nodes that correspond to one or more newly discovered location of interest, and/or one or more new edges that correspond to one or more newly discovered locations of interest. The output graph data set may include one or more subgraphs and/or clusters that represent highly interconnected locations of interest, such as locations between which certain resources are frequently transported. The output graph data set may include a prediction of a recommendation of a relationship among two or more nodes corresponding to two or more locations of interest. The output graph data set may include a prediction of a classification of a node corresponding to a location of interest, e.g., a prediction of an availability, supply, demand, value, and/or appeal of a resource in the location of interest. The output graph data set may include a prediction of a classification of an edge that connects nodes representing locations of interest, e.g., a prediction of a volume of utilization of a mode of transport between two locations of interest. The output graph data set may include a prediction of a graph property of the graph, e.g., a classification of a stability of the supply chain based on social, economic, political, and/or environmental changes. [01521] As another example, in the field of energy, a graph data set may represent at least a portion of an energy grid, including nodes that represent energy generators, stores, distributors, and/or consumers, and that are connected by edges that represent relationships among energy generators, stores, distributors, and/or consumers. The graph data set representing an energy grid may be provided, as input, to a graph neural network that is configured to receive and process the input graph data set. The graph neural network may generate, as output, an output graph data set. For example, the output graph data set may include one or more new nodes that correspond to one or more newly discovered energy generators, stores, distributors, and/or consumers, and/or one or more new edges that correspond to one or more newly discovered energy generators, stores,
SFT-106-A-PCT distributors, and/or consumers. The output graph data set may include one or more subgraphs and/or clusters that represent highly interconnected energy generators, stores, distributors, and/or consumers. The output graph data set may include a prediction of a recommendation of a relationship among two or more nodes corresponding to two or more energy generators, stores, distributors, and/or consumers. The output graph data set may include a prediction of a classification of a node corresponding to energy generators, stores, distributors, and/or consumers, e.g., a prediction of a current or future state or property of the energy generator, store, distributor, and/or consumer. The output graph data set may include a prediction of a classification of an edge that connects nodes representing energy generators, stores, distributors, and/or consumers, e.g., a prediction of a transaction between two or more energy generators, stores, distributors, and/or consumers. The output graph data set may include a prediction of a graph property of the graph, e.g., a classification of a stability of the energy grid to sustain energy generation and to support energy demands based on social, economic, political, and/or environmental changes. [01522] As another example, in the field of civil engineering, a graph data set may represent at least a portion of a geographic region, including nodes that represent locations of interest and that are connected by edges that represent roads. The graph data set representing a geographic region may be provided, as input, to a graph neural network that is configured to receive and process the input graph data set. The graph neural network may generate, as output, an output graph data set. For example, the output graph data set may include one or more new nodes that correspond to one or more newly discovered locations of interest, and/or one or more new edges that correspond to one or more newly discovered locations of interest. The output graph data set may include one or more subgraphs and/or clusters that represent highly interconnected locations of interest. The output graph data set may include a prediction of a recommendation of a relationship among two or more nodes corresponding to two or more locations of interest. The output graph data set may include a prediction of a classification of a node corresponding to location of interest, e.g., a prediction of a current or future volume of visitors to a location of interest and/or a volume of traffic at or through the location of interest. The output graph data set may include a prediction of a classification of an edge that connects nodes representing locations of interest, e.g., a prediction of a volume of traffic on a road that connects two or more locations of interest. The output graph data set may include a prediction of a graph property of the graph, e.g., a classification of a sufficiency of a road network of the geographic region to support a current or future volume of traffic. [01523] As another example, in the field of industrial systems, a graph data set may represent at least a portion of an industrial plant, including nodes that represent machines of the industrial plant and that are connected by edges that represent functional relationships among the machines. The
SFT-106-A-PCT graph data set representing the industrial plant may be provided, as input, to a graph neural network that is configured to receive and process the input graph data set. The graph neural network may generate, as output, an output graph data set. For example, the output graph data set may include one or more new nodes that correspond to one or more newly discovered machines, and/or one or more new edges that correspond to one or more newly discovered machines. The output graph data set may include one or more subgraphs and/or clusters that represent highly interconnected machines. The output graph data set may include a prediction of a recommendation of a relationship among two or more nodes corresponding to two or more machines. The output graph data set may include a prediction of a classification of a node corresponding to a machine, e.g., a prediction of a current or future maintenance state of a machine. The output graph data set may include a prediction of a classification of an edge that connects nodes representing machines, e.g., a prediction of a functional relationship between a first machine and a second machine that may significantly impact an efficiency, output, cost, or the like of the industrial plant. The output graph data set may include a prediction of a graph property of the graph, e.g., a classification of the industrial plant as belonging to a particular industry, such as raw material processing, semiconductor fabrication, tool manufacturing, vehicle manufacturing, textile manufacturing, and/or pharmaceuticals manufacturing. The output graph data set may include a prediction of a future and/or optimized state of the industrial plant, e.g., a reorganization of the machines of the industrial plant to optimize machine placement and/or floor planning. [01524] As another example, in the field of cybersecurity, a graph data set may represent at least a portion of a device network, including nodes that represent devices and that are connected by edges that represent communication and/or interactions among two or more devices. The graph data set representing the device network may be provided, as input, to a graph neural network that is configured to receive and process the input graph data set. The graph neural network may generate, as output, an output graph data set. For example, the output graph data set may include one or more new nodes that correspond to one or more newly discovered devices, and/or one or more new edges that correspond to one or more newly discovered devices. The output graph data set may include one or more subgraphs and/or clusters that represent highly interconnected devices. The output graph data set may include a prediction of a recommendation of a relationship among two or more nodes corresponding to two or more devices. The output graph data set may include a prediction of a classification of a node corresponding to a device, e.g., a prediction of a security status of the device as being safe, vulnerable, or corrupted. The output graph data set may include a prediction of an activity occurring among the nodes of the graph data set, e.g., an occurrence of an intrusion or an attack based on anomalous activities represented by the edges of the graph data set. The output graph data set may include a prediction of a classification of an edge that connects
SFT-106-A-PCT nodes representing devices, e.g., a prediction that a particular interaction between two or more devices is associated with a security vulnerability or attack. The output graph data set may include a prediction of a graph property of the graph, e.g., a classification of the set of devices as safe from security flaws or vulnerable to one or more attack mechanisms, such as denial-of-service (DoS) attacks, distributed-denial-of-service (DDoS) attacks, social engineering attacks such as phishing, eavesdropping attacks such as man-in-the-middle attacks, or the like. The output graph data set may include a prediction of a theoretical state of the graph data set, e.g., a security state of the device network in response to a particular type of attack, and/or a security state of the device network based on the inclusion of additional devices in the future. The output graph data set may include a recommendation to modify the graph neural network based on one or more security considerations, e.g., a recommendation to reorganize the device network to reduce susceptibilities to one or more security risks. The output graph data set may include a technique to defend the graph neural network from various types of adversarial attack, e.g., training-time attacks that affect the manner in which the graph neural network learns to evaluate and/or classify the graph data set, one or more nodes, and/or one or more edges. For example, the message passing operations of the graph neural network may be modified to reduce a susceptibility of the graph neural network to adversarial perturbation during training, while preserving the learning capabilities of the graph neural network. [01525] Examples of additional applications of various graph neural networks to various graph data sets include, without limitation: graph mining applications (e.g., graph matching and/or clustering); physics (e.g., physical systems modeling and/or evolution over time); chemistry (e.g., molecular fingerprints and/or chemical reaction predictions); biology (e.g., protein interface predictions, side effects predictions, and/or disease classification); knowledge graphs (e.g., knowledge graph completion and/or knowledge graph alignment); generation (e.g., output graph data set generation that corresponds to an expression, an image, a video, a music sample, or a scene graph); combinatorial optimization; traffic networks (e.g., traffic state prediction); recommendation systems (e.g., user-item interaction predictions and/or social recommendations); economic networks (e.g., stock markets); software and information technology (e.g., software defined networks, AMR graph-to-text tasks, and program verification); text processing (e.g., text classification, sequence labeling, machine translation, relation extraction, event extraction, fact verification, question answering, and/or relational reasoning); and image processing (e.g., social relationship understanding, image classification, visual question answering, object detection, interaction detection, region classification, and/or semantic segmentation). Further examples of applications for processing various graph data sets by various graph neural networks are presented
SFT-106-A-PCT elsewhere in this disclosure and/or will be known to or appreciated by persons of ordinary skill in the art. [01526] Attention [01527] In embodiments, an artificial intelligence system, machine learning model, or the like, of any of the types disclosed herein, may comprise, integrate, link to, or include an attention feature. Attention may be generally described as a determination, among a set of inputs, of the relatedness of each input to the other inputs in the set of inputs. In “self-attention,” the input includes a sequence of elements, and attention is determined between each pair of elements in the sequence. As a first example, the set of inputs includes a sequence of words in a language, and attention is applied to determine, for each word in the sequence, the relatedness of the word to each other word in the sequence. As a second example, an input includes an image comprising a set of pixels, and attention is applied to determine, for each group of pixels in the image, the relatedness of the group of pixels to each other group of pixels in the image. Attention can also be applied between sets of input, wherein attention is determined between each element of a first set of input and each element of a second set of input. For example, the set of inputs can include a first sequence of words in a first language and a second sequence of words in a second language, and attention can be determined to indicate how each word in the first sequence is related to each word in the second sequence. [01528] Fig.100 presents an example of a determination of attention by a machine learning model 10000. In the example of Fig. 100, an input sequence 10008 includes a set of tokens, each representing a word (“The”, “Furry”, “Dog”, “Chased”, “The”, “Cat”). Each token includes an indicator of a position of the token in the sequence. In various embodiments, the tokens of the input sequence 10008 may include complete words, portions of words (e.g., a first token indicating a word root and a second token indicating a modifier of the word root), punctuation, or the like. Some tokens may indicate metadata, such as a start-of-sequence token, an end-of-sequence token, or a null token indicating a padding of the sequence or a mask that hides a token of the sequence. [01529] The input sequence 10008 is processed by a position encoder that determines, for each token, an encoding of the position. In some embodiments, the position encoding may include an ordinal numerical value that indices the ordinal position of each token in the sequence, such as an index beginning at zero or one. In some embodiments, the position encoding may include a relative numerical value that indicates a position of each token in the sequence relative to a fixed position, such as a current word (encoded position 0), an immediately preceding word (encoded position - 1), or an immediately following word (encoded position 1). In some embodiments, the position encoding may include non-integer values and/or multiple values, such as a first index indicating a
SFT-106-A-PCT sine calculation (with a given frequency) of the position of each token and a second index indicating a cosine calculation (with a same or different frequency) of the position of each token. [01530] The input sequence 10008 is also processed by an embedding model. The embedding model determines, for each token in the input sequence 10008, a mapping of the token into a latent space representation of the input (e.g., a latent space representation of a language). The latent space may position each token along a plurality of n dimensions, wherein each dimension represents a distinct type of relationship among the elements of the language. The embedding model clusters the tokens such that related tokens are positioned closer to each other within the latent space. For example, along one dimension of the latent space, the words “Cat” and “Dog” may be positioned close together as being words that describe animals, while also being positioned apart from words that do not describe animals, such as “Baseball” and “School.” Along another dimension of the latent space, the words “Dog” and “Furry” may be positioned close together as words that commonly occur in the context of dogs, while also being positioned apart from words that do not describe dogs, including “Cat.” For each token of the input sequence 10008, the embedding model generates one or more values that indicate the position of the token within the latent space. In some embodiments, the values are encoded as a vector, and the proximity of two tokens within the latent space may be determined based on vector proximity calculations, such as cosine similarity. [01531] Based on the positions encoded by the position encoder and the embeddings determined by the embedding model, a model input 10006 can be generated for the input sequence 10008. As shown in Fig. 100, the model input 10006 includes a query, a set of keys, and a set of values. As an example, the query may include an indicator of a particular token in the input sequence 10008, such as the sixth token (“Cat”). The keys may include the position encodings of respective tokens of the input sequence 10008, as determined by the position encoder, and a corresponding embedding of the respective token as determined by the embedding model. The values may indicate additional data features of the tokens. As an example, the values may indicate, for each token of the input sequence 10008, a determined sentiment (e.g., a ranking between -1, indicating very negative words, and +1, indicating very positive words). In some embodiments, no additional data features are available, and the values are identical to the keys. [01532] The model input 10006 is received and processed by an attention layer. In Fig. 100, the attention layer 10004 first includes a set of fully connected layers: a first fully connected layer processes the query of the model input 10006; a second fully connected layer processes the keys of the model input 10006; and a third fully connected layer processes the values of the model input 10006. Each fully-connected layer includes a bias and a set of weights that adjust the values of the query, key, or value, respectively. The bias and weights of each fully-connected layer are model
SFT-106-A-PCT parameters that are initialized (e.g., to random values) and then incrementally adjusted during training.
[01533] Optionally, in some embodiments, the outputs of the fully-connected layers are further processed by a masking layer. The masking layer removes one or more values from the model input 10006 adjusted by the fully connected layers. As a first example, the masking layer can reduce to zero the values of the key and/or value at a given position, such as a token at a current position to be predicted, or a token at a position following the current position that is to be hidden from the model. As a second example, the masking layer can reduce to zero the values of particular keys and/or values, such as padding values that are provided to adapt the size of the model input 10006 to a size of input that the attention layer 10004 is configured to receive and process. The masking layer thus produces output that is reduced to zero for the indicated tokens (e.g., the current token, future tokens, and/or padding tokens) and that is the same as the input for the remaining tokens.
[01534] Optionally, in some embodiments, the outputs of the masking layer are further processed by a multi-head reshaping layer. The multi-head reshaping layer can reshape an input vector comprising the weighted and/or masked model input 10006 such that subsets of the input can be processed in parallel by different attention heads. As an example, an attention layer 10004 may include two attention heads, and the input can be reshaped such that each attention head is applied to only half of the inputs. The multi-head attention model can enable attention determinations over different subsets of the input (e.g., a first attention head can determine the relatedness of a first token to a first subset of tokens of the input sequence 10008, and a second attention head can determine the relatedness of the same first token to a second subset of tokens of the input sequence 10008). Alternatively or additionally, the multi-head attention model can enable different types of attention determinations among the tokens of the input sequence 10008 (e.g., a first attention head can determine a first type of relatedness of a first token to a subset of tokens of the input sequence 10008, and a second attention head can determine a second type of relatedness of the same first token to the same or different subset of tokens of the input sequence 10008). The multi-head attention model may enable parallel processing of the input sequence 10008 (e.g., the input for each attention head can be processed by a different processing core).
[01535] The attention layer 10004 includes an attention calculation that determines, based on the model input 10006, the attention of a token of the input sequence 10008 with respect to other tokens of the input sequence 10008. In some embodiments, the attention calculation includes an additive attention (“Bahdanau Attention”) calculation, in which attention is determined as a sum of weighted calculations of the distances of the tokens along each dimension of the latent space.
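The following numpy sketch illustrates one common form of the attention calculation, a single-head scaled dot-product attention with an optional mask; the matrix shapes, the causal mask, and the scaling choice used here are illustrative assumptions rather than a definitive implementation of the attention layer 10004.

```python
# Illustrative sketch only: single-head scaled dot-product attention with masking.
import numpy as np

def scaled_dot_product_attention(queries, keys, values, mask=None):
    # queries: (n_q, d), keys: (n_k, d), values: (n_k, d_v)
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)        # pairwise relatedness scores
    if mask is not None:
        scores = np.where(mask, scores, -1e9)     # hide masked (e.g., future) tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ values, weights              # attended values, attention map

rng = np.random.default_rng(0)
q = rng.normal(size=(6, 8))
k = rng.normal(size=(6, 8))
v = rng.normal(size=(6, 8))
causal_mask = np.tril(np.ones((6, 6), dtype=bool))  # each token attends only to earlier tokens
attended, attention = scaled_dot_product_attention(q, k, v, causal_mask)
print(attention.round(2))
```

In some embodiments, the attention calculation includes a dot product determination, as a comparison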
SFT-106-A-PCT of the distances between the vectors of the tokens within the latent space. In some embodiments, the attention calculation is performed over the query, keys, and values of the model input 10006, optionally after processing with a masking layer. In some embodiments, the attention calculation is performed for each of a plurality of attention heads, each of which processes a particular subset of the tokens of the input sequence 10008. [01536] In embodiments that include multi-head reshaping, the output of the attention calculation is further processed by a merge operation that merges the attention calculations for the respective attention heads. In some embodiments, the merge operation includes a concatenation and/or interleaving of the attention calculations of the attention heads. In some embodiments, the merge operation includes an arithmetic operation applied to the attention calculations of the attention heads, such as an arithmetic mean, median, min, and/or max calculation. [01537] The attention layer 10004 outputs, for at least one token of the input sequence 10008, a determination of attention between the token and at least one other token of the input sequence 10008. The output of the attention calculation may include a vector that indicates, for at least one token of the input sequence 10008, the determinations of attention between the token and a set of other tokens of the input sequence 10008. The output of the attention calculation may include a set of vectors that indicate, for respective tokens of the input sequence 10008, the determinations of attention between the respective token and at least one other token of the input sequence 10008. The output of the attention calculation may indicate, for a token of a first sequence, the attention of the token to one or more tokens of a second sequence. As shown in Fig.100, the output of the attention layer 10004 includes pairwise determinations of relatedness between pairs of tokens (e.g., each pair including a current token in an input sequence 10008 and each preceding token in the input sequence 10008). In some embodiments, the pairwise determinations may be further processed. For example, a softmax calculation can be applied to normalize the pairwise attention 10002 determinations based on a desired range of output values (e.g., probability values between 0.0 and 1.0, with a 1.0 sum overall output values). [01538] The attention layer 10004 may be trained by providing sets of training input sequence 10008s and comparing the outputs of the attention layer 10004 with expected outputs. Alternatively or additionally, the attention layer 10004 may be trained by incorporating the attention layer 10004 into a larger model (e.g., a transformer model) and adjusting the parameters of the attention layer 10004 (e.g., the parameters of the fully-connected layers) for a given training input sequence 10008 in order to adjust the output of the attention layer 10004 toward a desired output for the training input sequence 10008. As an example, in a backpropagation training process, the output of the attention layer 10004 is provided as input to a succeeding layer. The output of the model including the attention layer 10004 and the succeeding layer may be compared with a desired output for the
SFT-106-A-PCT training input sequence 10008. Based on this comparison, adjustments of the output of the succeeding layer (e.g., based on an error calculation) may inform a determination of desired adjustments of the input of the succeeding layer, which correspond to adjustments of the output of the attention layer. The adjustments of the output may be achieved by internally adjusting the parameters of the attention layer 10004 (e.g., the weights and/or biases of the fully connected layers shown in Fig. 100) such that the attention layer 10004 subsequently generates output for the training input sequence 10008 that more closely corresponds to the desired input for the succeeding layer. Incremental training over a set of training input sequence 10008s can cause the attention layer 10004 to generate output that corresponds to the desired output for the training input sequence 10008s. As an example, if the input sequence 10008s are sentences in a language and the desired output of the model includes the probabilities of words in the language that could follow a given set of input words, the attention layer 10004 can be incrementally adjusted to indicate the attention (e.g., relatedness) between the next word in the input sequence 10008 and the preceding words in the input sequence 10008. [01539] It is to be appreciated that the attention layer 10004 shown in Fig. 100 presents only one example, and that attention layers may include a variety of variations with respect to the example of Fig.100. For example, attention layers may include, without exception, additional layers or sub- layers that perform one or more of: normalization; randomization; regularization (e.g., dropout); one or more sparsely-connected layers; one or more additional fully-connected layers; additional masking; additional reshaping and/or merging; pooling; sampling; recurrent or reentrant features, such as gated recurrence units (GRUs), long short-term memory (LSTM) units, or the like; and/or alternative layers, such as skip layers. Alternatively or additionally, the architecture of the attention layer 10004 shown in Fig. 100 may vary in numerous respects. For example, masking may be applied to the model input 10006 instead of to the outputs of the fully connected layers. One or more fully-connected layers may be omitted, replaced with a sparsely-connected layer, and/or provided as multiple fully-connected layers, including a sequence of two or more fully-connected layers; or the like. Model parameters (e.g., weights and biases) and/or hyperparameters (e.g., layer counts, sizes, and/or embedded calculations) may be modified and/or replaced with variant parameters and/or hyperparameters. Many such variations may be included in attention layers that are incorporated in a variety of machine learning models to process a variety of types of input sequence 10008s. [01540] Transformer models [01541] In embodiments, an artificial intelligence system, machine learning model, or the like, of any of the types disclosed herein, may comprise, integrate, link to, or include a transformer model, that is, a neural network that learns context and meaning by tracking relationships in a set of
SFT-106-A-PCT sequential data inputs. Transformer models may include one or more attention layers, including (but not limited to) the attention layer 10004 shown in Fig. 100.
[01542] Fig. 101 presents an example of a transformer model. The transformer model of Fig. 101 is based on an encoder-decoder architecture in which an encoder 10110 processes an input sequence 10102 and a decoder processes an output sequence to generate output probabilities. As a first example, the input sequence may include a sequence of words in a first language; the output sequence may include a sequence of words in a second language corresponding to a translation of the input sequence; and the output probabilities may include the probabilities of words in the second language for a particular position in the translation. As a second example, the input sequence may include a sequence of words in a language that represent a query or prompt; the output sequence may include a sequence of words in the same language that represent a response to the query or prompt; and the output probabilities may include the probabilities of words in the same language for a particular position in the response. In some cases, the output sequence includes only the tokens up to a particular position (e.g., the first n-1 tokens of the output sequence), and the output probabilities represent the probabilities of tokens in the language of the output sequence that could follow the output sequence (e.g., the nth token in the output sequence). In some cases, the output sequence includes all of the tokens except the token at a particular position (e.g., all of the tokens except the nth token of the output sequence), and the output probabilities represent the probabilities of tokens in the language of the output sequence that could represent the missing token in the output sequence (e.g., the nth token in the output sequence).
[01543] The encoder 10110 receives an input sequence comprising a set of tokens. The input sequence may be padded to a given length corresponding to a configured input size for the encoder 10110. The input sequence is processed by a position encoder 10106 to encode the positions of the respective tokens of the input sequence. The input sequence is also processed by an embedding model 10104 to determine the embeddings of the tokens of the input sequence 10112. The encoded positions and embeddings are used to generate an encoder model input 10108, including a query (e.g., a position of one or more tokens in the input sequence), a set of keys (e.g., the encoded positions and embeddings for each token of the input sequence), and a set of values (e.g., additional language features of the tokens such as outputs of sentiment analysis). The set of values may be a copy of the set of keys if no additional data features are available. The encoder model input 10108 is processed by a multi-head attention layer, such as an instance of the attention layer 10004 shown in Fig. 100. The multi-head attention layer determines self-attention within the input sequence (e.g., the relatedness of a respective token of the input sequence to each other token of the input sequence). The output of the multi-head attention layer is received and processed by a layer normalization component.
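As an illustration of the position encoder 10106, the following numpy sketch computes a sinusoidal position encoding of the kind described in paragraph [01529], in which alternating dimensions hold sine and cosine calculations of each token position at different frequencies; the model dimensionality and base frequency used here are illustrative assumptions.

```python
# Illustrative sketch only: sinusoidal position encoding for a padded input sequence.
import numpy as np

def sinusoidal_position_encoding(sequence_length, model_dim, base=10000.0):
    positions = np.arange(sequence_length)[:, None]   # (length, 1)
    dims = np.arange(model_dim // 2)[None, :]          # (1, model_dim / 2)
    angles = positions / np.power(base, 2.0 * dims / model_dim)
    encoding = np.zeros((sequence_length, model_dim))
    encoding[:, 0::2] = np.sin(angles)  # even dimensions: sine of the position
    encoding[:, 1::2] = np.cos(angles)  # odd dimensions: cosine of the position
    return encoding

# Example: position encodings for a six-token sequence such as "The Furry Dog Chased The Cat".
print(sinusoidal_position_encoding(sequence_length=6, model_dim=8).round(3))
```

Additionally, a skip layer is provided that passes the encoder model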
SFT-106-A-PCT input 10108 through to the layer normalization component. The layer normalization component combines the output of the multi-head attention layer with the encoder model input 10108 (e.g., via arithmetic mean, median, min, max, addition, multiplication, or the like) and normalizes the combined output to within a desired range. In some embodiments, the encoder includes a sequence of two or more instances of this combination of multi-head attention layers, skip layer, and layer normalization components. The encoder also includes a feed-forward layer (e.g., a fully-connected layer and/or a sparsely-connected layer) including a set of trainable parameters. The output of the feed-forward layer is provided to another layer normalization component, along with the output of the preceding layer normalization component via a skip layer. The encoder outputs an input sequence attention, which indicates, for each of one or more tokens of the input sequence, the relatedness of the token to each other token of the input sequence.
[01544] The decoder 10122 features an architecture that is similar to the encoder, but that includes additional components to incorporate the input sequence attention generated by the encoder. The decoder receives an output sequence 10114 comprising a set of tokens. The output sequence may be padded to a given length corresponding to a configured input size for the decoder. The output sequence is processed by a position encoder to encode the positions of the respective tokens of the output sequence. The output sequence is also processed by an embedding model 10116 to determine the embeddings of the tokens of the output sequence. The encoded positions and embeddings are used to generate a decoder model input 10120, including a query (e.g., a position of one or more tokens in the output sequence), a set of keys (e.g., the encoded positions and embeddings for each token of the output sequence), and a set of values (e.g., additional language features of the tokens such as outputs of sentiment analysis). The set of values may be a copy of the set of keys if no additional data features are available. The decoder model input 10120 is processed by a masked multi-head attention layer, such as an instance of the attention layer 10004 shown in Fig. 100. In addition to determining attention, the masked multi-head attention layer masks the input values of a current token of the output sequence and any tokens of the output sequence that follow the current token. The masked multi-head attention layer determines self-attention within the output sequence (e.g., the relatedness of a respective token of the output sequence to each preceding token of the output sequence). The output of the masked multi-head attention layer is received and processed by a layer normalization component. Additionally, a skip layer is provided that passes the decoder model input 10120 through to the layer normalization component. The layer normalization component combines the output of the masked multi-head attention layer with the decoder model input 10120 (e.g., via arithmetic mean, median, min, max, addition, multiplication, or the like) and normalizes the combined output to within a desired range.
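The skip connection and layer normalization described above can be summarized in a brief numpy sketch: the layer input is added back to the layer output (the skip path) and the combined result is normalized to a consistent range. Combination by addition and normalization to zero mean and unit variance are illustrative choices among the alternatives (mean, median, min, max, multiplication, or the like) noted above.

```python
# Illustrative sketch only: a skip (residual) connection followed by layer normalization.
import numpy as np

def add_and_norm(layer_input, layer_output, epsilon=1e-6):
    combined = layer_input + layer_output       # skip path: add the input back in
    mean = combined.mean(axis=-1, keepdims=True)
    std = combined.std(axis=-1, keepdims=True)
    return (combined - mean) / (std + epsilon)  # normalize each token's vector

rng = np.random.default_rng(1)
token_vectors = rng.normal(size=(6, 8))         # e.g., decoder model input
attention_output = rng.normal(size=(6, 8))      # e.g., masked attention output
normalized = add_and_norm(token_vectors, attention_output)
print(normalized.mean(axis=-1).round(6))        # approximately zero per token after normalization
```

In some embodiments, the decoder includes a sequence of two or more instances of this combination of multi-head attention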
SFT-106-A-PCT layers, skip layer, and layer normalization components. The decoder further includes an encoder- decoder multi-head attention layer that receives both the output of the preceding layer normalization component and the input sequence attention generated by the encoder. The encoder- decoder multi-head attention layer does not determine self-attention within the output sequence, but, rather, determines the attention between the tokens of the output sequence and the corresponding tokens of the input sequence. The output of the encoder-decoder multi-head attention unit is also received and processed by a second layer normalization component. Additionally, a skip layer is provided that passes the input to the encoder-decoder multi-head attention layer through to the second layer normalization component. The second layer normalization component combines the output of the multi-head attention layer with the input to the encoder-decoder multi-head attention unit (e.g., via arithmetic mean, median, min, max, addition, multiplication, or the like) and normalizes the combined output to within a desired range. The decoder also includes a feed-forward layer (e.g., a fully-connected layer and/or a sparsely- connected layer) including a set of trainable parameters. The output of the feed-forward layer is provided to a third layer normalization component, along with the output of the preceding layer normalization component via a skip layer. The output of the decoder is processed by a fully connected layer and a softmax normalization layer 10126 based on a cross-entropy determination. [01545] The output of the softmax normalization layer 10126 includes a set of probabilities for each possible token of a language of the output sequence for the current token. As a first example, the input sequence may include a sequence of words in a first language; the output sequence may include a sequence of words in a second language corresponding to a translation of the input sequence, up to a current (nth) word in the translation; the output probabilities 10128 may include the probabilities of words in the second language for the nth word in the translation. As a second example, the input sequence may include a sequence of words in a language that represent a query or prompt; the output sequence may include a sequence of words in the same language that represent a response to the query or prompt, up to a current (nth) word in the response; and the output probabilities may include the probabilities of words in the language for the nth word in the response. [01546] During training, the transformer model may be provided with a set of input sequences and complete corresponding output sequences. As a first example involving language translation, the transformer model may be provided with a training data set including a first corpus of sentences in a first language and a second corpus of sentences in a second language that respectively correspond to the sentences in the first language. As a second example involving a generative model, the transformer model may be provided with a training data set including a first corpus of queries or prompts in a language and a second corpus of responses in the language that correspond to the
SFT-106-A-PCT respective queries or prompts. For each training data input, a pair of sentences of the first corpus and second corpus are selected. The encoder is provided with the first (input) sentence, and the model is processed to determine the first word in the second (output) sentence. In this case, the output sequence provided to the decoder is completely masked so that the decoder cannot make predictions based on the expected words in the second sentence. The word probabilities determined by the decoder are compared with the actual first word in the output sequence, and backpropagation is applied through the decoder and encoder to increase the likelihood of outputting the expected word. The backpropagation includes adjusting the parameters of the attention layers to increase the attention between the first word and related words of the input sequence. The encoder is then provided again with the first (input) sentence, and the model is processed to determine the second word in the second (output) sentence. In this case, the output sequence provided to the decoder includes the unmasked first word, but masks all words after the first word. The word probabilities determined by the decoder are compared with the actual second word in the output sequence, and backpropagation is applied through the decoder and encoder to increase the likelihood of outputting the expected word. The backpropagation includes adjusting the parameters of the attention layers to increase the attention between the second word, the known first word of the output sequence, and related words of the input sequence. In this manner, the transformer model performs an autoregressive prediction, wherein the output probability of each nth token of the output sequence is based on the input sequence, the previously predicted tokens of the output sequence, and the encoder-decoder attention therebetween. Training continues over the entirety of the first and second corpora to improve the output predictions. [01547] In many cases, the training of the transformer model occurs in batches. For example, the previous (simplified) training example described an incremental training of the transformer model over each corresponding pair of sentences of the first and second corpora, wherein the parameters of the transformer model are adjusted via backpropagation after each instance of processing. In batch training, the input and output sequences are vectorized, as are the layers of the transformer model, such that predictions over each word of the output sequence are predicted in parallel. Backpropagation parameter adjustment is performed for each batch of the training data set, based on the outputs for all of the pairwise inputs of each batch of the training data set. [01548] After training, the transformer model can be used to predict an output sequence based on an input sequence. First, the input sequence is processed by the encoder, while the decoder processes a null output sequence (e.g., an output sequence in which all outputs are initially nulled and/or masked by the masked multi-head attention layer). The output probability of the decoder is used to determine a first token of the output sequence. In some embodiments, the first token is chosen as the token having the highest probability. In other embodiments, the first token is chosen
SFT-106-A-PCT
based on a random sampling over the output probabilities. In either case, the transformer is then applied to the same input sequence and an output sequence including only the determined first token of the output sequence, and the output of the decoder determines the second token of the output sequence. This process continues until reaching an output token cap and/or upon determining, as the output of the decoder, an end-of-sequence token. In this manner, the transformer model is applied over the input sequence to determine, in a serial and autoregressive manner, the tokens of the output sequence.
[01549] It is to be appreciated that the transformer model shown in FIG. 2 presents only one example, and that transformer models may include a variety of variations with respect to the example of FIG. 2. For example, the architecture of the encoder and/or decoder may include, without limitation, additional layers or sub-layers that perform one or more of: normalization; randomization; regularization (e.g., dropout); one or more sparsely-connected layers; one or more additional fully-connected layers; additional masking; additional reshaping and/or merging; pooling; sampling; recurrent or reentrant features, such as gated recurrence units (GRUs), long short-term memory (LSTM) units, or the like; and/or alternative layers, such as skip layers. Alternatively or additionally, the architecture of the encoder and/or decoder shown in FIG. 2 may vary in numerous respects. For example, masking may be applied directly to the output sequence instead of within the multi-head attention models. One or more fully-connected layers may be omitted, replaced with a sparsely-connected layer, and/or provided as multiple fully-connected layers, including a sequence of two or more fully-connected layers, or the like. Model parameters (e.g., weights and biases) and/or hyperparameters (e.g., layer counts, sizes, and/or embedded calculations) may be modified and/or replaced with variant parameters and/or hyperparameters. Many such variations may be included in transformer models to process a variety of types of input and output sequences.
[01550] Transformer models, including the example shown in FIG. 2, may be applied in a variety of circumstances. As an example, transformer models may be trained on and/or configured to process a variety of types of input sequences and/or output sequences. Sequential data inputs and/or outputs can include a wide variety of types described herein, such as strings of text, sequences of sensor data from or about an entity, sequences of steps in a process (e.g., chemical, physical, biological, and many others) or flow (e.g., a human workflow, information technology traffic flow, or physical traffic flow), and sequences of user behavior (e.g., attention to content, clickstream behavior, shopping behavior (digital and real world), and many others). Any of these, and others, can be provided as inputs to train a transformer model, which may be alternatively described herein as a self-attention model, a foundation model, or the like. A range of mathematical self-attention techniques can be applied to detect how data elements in sequential data mutually affect each other
SFT-106-A-PCT
(such as in feed forward, feedback, and other forms of influence and dependency). In various embodiments described herein and in the documents incorporated by reference herein, a set of transformer models may be deployed for a wide range of use cases, including for predictive text applications (e.g., generating a next token of text based on a previous set of tokens, such as for intelligent agent dialog, responses to queries, and the like); for extraction of information (such as extraction of meaningful elements from sensor data, signal data, and the like, such as analog signal data from sensors on machines, wearable devices, infrastructure sensors, edge and IoT devices, and many others); for analysis of human factors, such as emotional response, sentiment, satisfaction, opinion, and the like; for summarizing data (such as providing summaries of text, images, video, sensor data, and many other streams of data of the type collected and processed as described herein); for trend detection, prediction and forecasting (and hence also for anomaly detection, such as fraud in financial transactions), including for a wide range of trends, including health (human, animal, mental, financial, machine condition, and others), performance (wellness, financial, physical, and many others), and many others; for recognition of entities and behaviors (such as objects appearing in video or image data, objects captured in LIDAR and other point-cloud rendering systems, objects located by SLAM systems, and many others); for generation and execution of instructions (e.g., recipes, control instructions, rules, regulations, governance instructions, and many others); and for many other uses.
[01551] In embodiments, an input data set, such as an analog or digital sensor data stream, a body of text, a set of images, a set of structured data (such as data from a graph database or other form of database noted herein), a sequence of blockchain or distributed ledger entries (or other ledger data, such as accounting, financial, health or other data), or a set of signals (of the various types noted herein), is provided in order to train a transformer model. In embodiments, initial training may include a step of facilitating compression of the input data, such as by constraining the size of the transformer neural network and/or its outputs to a dimensionality that is significantly smaller (or less granular, etc.) than that of the input data. By requiring the output of the constrained transformer model to match, within a required metric of fidelity, the input data, the transformer model is caused to generate an “embedding” of the input data into a more compressed, efficient format. A decoding neural network may then be trained to operate on the output of the constrained, embedding transformer model, such that it can reproduce the input data from the output of the constrained model within the required metric, thereby assuring that the data is compressed without losing critical meaning.
[01552] Once the embedding transformer model is so trained, the decoding neural network can be removed and replaced by one or more of a set of use-case driven decoding models, each of which is trained to operate on the output of the embedding model to produce a target outcome, such as
SFT-106-A-PCT
performing any of the use cases noted above to a satisfactory degree. These use-case decoding models can be fine-tuned iteratively over time with feedback from users, outcomes, or the like. Thus, a trained embedding foundation/transformer model, once created, can be used across many different use cases that may benefit from understanding the meaning of the input data set.
[01553] In embodiments, one type of use-case decoder can be trained to allow the embedding transformer model to operate on lower quality data than was originally supplied to train the model. To accomplish this, both low quality and high quality data (such as high granularity sensor data and low granularity sensor data, or high dimensionality signal data and low dimensionality signal data, or noisy acoustic data and filtered acoustic data, or the like) can be simultaneously fed to two or more instances of the trained, embedding transformer model, and a decoder for the instance of low quality data can be trained to generate an output that matches, within a metric of fidelity, the output of the instance of the embedding transformer model that is fed the high quality data. As an example, gap-free analog waveform data from a three-axis vibration sensor on a machine component can be captured simultaneously with less granular data from a single- or two-axis accelerometer on the same component, and a decoder, operating on the output of the instance of the embedding transformer model that takes the single- or two-axis input, can be trained to match (within a tolerance) the output of the instance of the embedding transformer model that takes the more granular data as an input. Once created, the resulting decoder, coupled with the embedding transformer model, serves as a projection transformer model, effectively projecting lower quality data into higher quality data, which can then be used by other decoders to enable use cases. This class of projecting transformer models can be applied to a wide range of use cases where high quality data can be obtained during a training phase (often at higher expense), but lower quality data can be used as an input during a deployment phase (such as where lower quality data is more widely or cheaply available, such as in the case of vibration data noted above). Among other things, these projecting transformer models allow powerful, real-time, low latency use cases for AI even when input data is sparse, noisy, of low dimensionality, or the like.
[01554] In embodiments, feedback from various decoder models can be used to improve instances of an embedding foundational or transformer model. In embodiments, a set of transformer models, a set of decoders, or both, can be arranged in a workflow, which may be directed and acyclic or may include processing loops, to create higher-level use cases that benefit from multiple applications of AI. For example, one model may be used to classify a condition, another used to generate a recommendation, and another used to generate a control instruction, among a huge range of possible embodiments. This may include serial, parallel, iterative, feed forward, feedback and other configurations.
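By way of illustration only, the following minimal sketch shows the serial, autoregressive decoding loop described above, using the publicly available PyTorch nn.Transformer module as a stand-in for the encoder-decoder arrangement of FIG. 2. The model dimensions, vocabulary sizes, and special-token identifiers (bos_id, eos_id) are illustrative assumptions rather than part of the disclosed system, and the model is untrained here, so the generated tokens are arbitrary.

import torch
import torch.nn as nn

class TinySeq2SeqTransformer(nn.Module):
    """Stand-in encoder-decoder transformer (illustrative sizes, not the model of FIG. 2)."""
    def __init__(self, vocab_in=1000, vocab_out=1000, d_model=64, nhead=4):
        super().__init__()
        self.src_emb = nn.Embedding(vocab_in, d_model)
        self.tgt_emb = nn.Embedding(vocab_out, d_model)
        self.core = nn.Transformer(d_model=d_model, nhead=nhead,
                                   num_encoder_layers=2, num_decoder_layers=2,
                                   batch_first=True)
        self.to_vocab = nn.Linear(d_model, vocab_out)  # final fully-connected layer before softmax

    def forward(self, src_ids, tgt_ids):
        # The causal mask plays the role of the masked multi-head attention layer:
        # the decoder cannot attend to output tokens that have not yet been generated.
        tgt_mask = self.core.generate_square_subsequent_mask(tgt_ids.size(1))
        hidden = self.core(self.src_emb(src_ids), self.tgt_emb(tgt_ids), tgt_mask=tgt_mask)
        return self.to_vocab(hidden)  # logits; a softmax over these gives the output probabilities

@torch.no_grad()
def greedy_decode(model, src_ids, bos_id=1, eos_id=2, max_len=20):
    """Serially extend the output sequence, one highest-probability token at a time."""
    out = torch.tensor([[bos_id]])
    for _ in range(max_len):
        logits = model(src_ids, out)              # distribution for the next (nth) token
        next_id = logits[0, -1].argmax().item()   # choose the most probable token
        out = torch.cat([out, torch.tensor([[next_id]])], dim=1)
        if next_id == eos_id:                     # stop at the end-of-sequence token
            break
    return out

model = TinySeq2SeqTransformer()
print(greedy_decode(model, torch.tensor([[5, 6, 7, 8]])))

Sampling from the softmax distribution, rather than taking the argmax, corresponds to the random-sampling alternative for token selection noted above.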
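The following highly simplified sketch illustrates the projection idea described above: a decoder is trained so that embeddings computed from lower quality data match, within a fidelity metric, the embeddings computed from simultaneously captured higher quality data. Small frozen encoders stand in for the two instances of the trained embedding transformer model, and the synthetic data, dimensions, and names are illustrative assumptions only.

import torch
import torch.nn as nn

EMB_DIM = 32

# Frozen stand-ins for the two instances of the trained embedding model (separate
# input widths are used here only because the low- and high-quality captures have
# different channel counts).
embed_low = nn.Sequential(nn.Flatten(), nn.Linear(2 * 128, EMB_DIM))
embed_high = nn.Sequential(nn.Flatten(), nn.Linear(3 * 128, EMB_DIM))
for p in list(embed_low.parameters()) + list(embed_high.parameters()):
    p.requires_grad_(False)

# The projection decoder: maps low-quality embeddings toward the embeddings
# produced from the richer, high-quality capture of the same events.
projector = nn.Sequential(nn.Linear(EMB_DIM, 64), nn.ReLU(), nn.Linear(64, EMB_DIM))
optimizer = torch.optim.Adam(projector.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # the "metric of fidelity" used for matching

for step in range(100):
    low_q = torch.randn(16, 2, 128)    # e.g., two-axis accelerometer windows (synthetic here)
    high_q = torch.randn(16, 3, 128)   # e.g., simultaneous three-axis vibration windows
    with torch.no_grad():
        target = embed_high(high_q)          # embedding of the high-quality data
    pred = projector(embed_low(low_q))       # projected embedding from the low-quality data
    loss = loss_fn(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At deployment only the low-quality input is required:
def project(x):
    return projector(embed_low(x))

In deployment, downstream use-case decoders would operate on project(x) as if it had been computed from the higher quality input, which is the sense in which the coupled encoder and decoder act as a projection transformer model.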
SFT-106-A-PCT [01555] In embodiments, a set of models may be trained to generate instructions for configuration of other models. [01556] In embodiments, transformer models may be deep learning, self-learning, self-organizing, or the like, and may be used for any of the embodiments of self-learning, self-organization, or other self-referential capabilities noted throughout this disclosure or the documents incorporated by reference herein. They may also be supervised, semi-supervised, or the like. Transformer models may be coupled with, integrated with, linked to, or the like, in series, parallel or other more complex workflows, with other AI types, such as other neural network types (e.g., CNNs, RNNs, and others). For example, in embodiments, a transformer model operating on sequential data may be coupled with a model suited to operate on non-sequential data (e.g., for pattern recognition) to achieve a use case. [01557] In embodiments, transformer models discover patterns in large bodies of data by application of a set of mathematical functions, optionally operating in parallel processing configurations, thereby eliminating or reducing the need for human labeling (and thereby greatly expanding the set of available data that can be used to train a model). [01558] Self-attention may be accomplished in a transformer model by introducing a set of positional encoders that tag data elements entering and exiting a neural network and inserting a set of attention units at appropriate places in the encoding and decoding framework of an AI system. The attention units generate a mathematical map of interrelationships among data elements. In embodiments multi-headed attention units are deployed, executing a matrix of equations in parallel to determine the interrelationships. Transformer models, using self-attention, have displayed strong capabilities to provide outputs that are consistent with how humans find patterns and meaning in data. [01559] In embodiments, transformer models may be embodied with very large numbers of parameters (e.g., hundreds of millions, billions, trillions, or more) operating on very large sets of parallel processors. For example, the Megatron-Turing Natural Language Generation Model by NVIDIA and Microsoft is reported to have 530 billion parameters. As noted above, from a foundational model, various use-case specific models (decoders, projections, and the like) can be purpose-built for specific applications. Accordingly, in embodiments a set of transformer models may be deployed using advanced computational techniques and/or processing architectures, such as ones that simplify or converge processors, simplify I/O, and the like. For example, 3D chipset or chiplet architectures may facilitate much higher density, faster computation, making transformer models more cost-effective. Quantum computation may also facilitate massively parallel processing in form factors that are faster, more energy efficient, or the like. Similarly, embodiments may use a tensor-engine GPU chip with a specific transformer engine, such as the NVIDIA H100
SFT-106-A-PCT
Tensor Core GPU. Another example of a transformer model is Google’s switch transformer model, a trillion-parameter model that uses sparsity and a mixture-of-experts architecture to enable gains in performance and reductions in training time.
[01560] As noted above, in embodiments smaller or more constrained transformer models may be trained to generate embeddings, particularly for very complex data sets, such as granular analog data.
[01561] In embodiments, a set of transformer models may be configured to operate on structured data processing systems, such as on results from queries that are directed to a database, results of inputs directed to a set of APIs, or the like. This may facilitate better understanding of what meaning a transformer model is recognizing in a data pattern, which can be critical to ensuring quality (e.g., where a model may, due to flaws in underlying data, generate poor conclusions, such as replicating historical racial bias, missing critical balancing information, failing to understand formal logical constructs, or the like). As noted elsewhere in this disclosure and the documents incorporated herein, governance of AI in general is a need, and the scale and complexity of transformer models likely compounds problems recognized with other neural networks, including their “black box” nature, uncertainty about input quality, and the like. Thus, governance concepts disclosed herein and in documents incorporated by reference should be understood to apply to various embodiments that use transformer models, as with other types of AI. One example is in the training of models, where models may be trained, in embodiments, in various disciplines, optionally similar to the educational frameworks by which humans are trained not just to sense pattern meaning, but also how to test and govern those abilities with formal reasoning and logic, mathematics, probability, and frameworks of ethics and morality.
[01562] Throughout this disclosure and the documents incorporated by reference herein, various embodiments are provided of digital twin platforms and systems that leverage sensors and other information sources, robust connectivity, and intelligence systems to allow users to experience accurate representations of the states and activities of the many entities that are involved in the workflows of an individual, group or enterprise (e.g., a company, business unit, department, government, household, non-profit, or other enterprise). These include, among others, role-based digital twins that can be configured, or that self-configure, how data is collected, stored, processed and/or presented in ways that account for the roles of respective users (e.g., ones directed to financial users, strategic users, operational users, and many others). Digital twins also include adaptive digital twins that leverage intelligence systems, such as artificial intelligence, analytic systems, expert systems, or hybrids involving various permutations and combinations thereof, to adapt how data is collected, stored, processed and/or presented based on context, such as based on the content of the information that is collected and processed. Among these adaptive digital twins, a subset,
SFT-106-A-PCT AI-driven digital twins, may use component artificial intelligence systems at any of the stages of the pipeline of information from a sensor or other basic information source to the presentation of content to a user. Artificial intelligence systems can provide outputs along a spectrum of autonomy, including: a) presenting reports, alerts, classifications, predictions, recommendations, analyses and other outputs for human review and action; b) undertaking human-supervised control of one or more aspects of a workflow, where the artificial intelligence system (acting as a co-pilot, assistant, intelligent agent, or the like to a human user) outputs a prediction and/or a recommendation for an action (which may be embodied in an instruction that can be processed by a system), and a human confirms, or adjusts, the nature of the action before it is implemented; and c) undertaking autonomous control of one or more systems or subsystems, where the artificial intelligence processes relevant information, completes necessary classifications, predictions and other decision-making steps (including where applicable, confirming risk management, governance, self-reflection or other steps) and triggers an action (such as by providing an instruction, control signal, or the like) to a system or component. [01563] Along this spectrum from basic reporting to full autonomy, there is a varying extent to which there is a human being in the decision-making loop. Artificial intelligence systems have enormous promise to automate individual, group and enterprise workflows, providing more reliable, seamless monitoring of ongoing activities, faster response times to unexpected events, and more efficient execution of many types of tasks, among many other benefits. However, the extent to which AI systems can be trusted can vary widely, such as based on: a) the stakes involved (e.g., are there risks to human life, health or property in making a wrong decision?); b) how the AI system was trained (e.g., are there reasons to believe that there is bias in the training data that will be carried forward into the outputs of the AI system); c) how the AI system is governed or supervised (e.g., is in compliance with regulatory, legal or other governance frameworks embedded in the workloads of the system as contemplated in this disclosure and the documents incorporated by reference herein, or is there a need for separate governance or supervision?); d) how the AI system performs, in absolute terms and relative to other available systems, including other AI systems, analytic systems, or human beings (experts, other individuals, groups, or crowds, for example); e) the quality, type and/or availability of data (which can include factoring in the cost of the data); f) the quality, type and/or availability of connectivity (which can include factoring in the cost of the connectivity); g) the quality, type and/or availability of computational resources (which can include factoring in the cost of the computational resources); h) the quality, type and/or availability of other necessary resources (e.g., energy); i) environmental or contextual parameters of the workflow to which the AI system will be applied (such as the complexity of a physical environment, the presence of human beings in proximity, or the like); and other factors. As an example among many,
SFT-106-A-PCT
a well-trained AI system may be highly capable of driving a vehicle down a highway during daylight, doing so more safely than a human being who could be prone to sleepiness or distraction, but if computational, networking, or other resources are uncertain, it may be decided that the AI system should not be trusted to perform the task. Decisions about the extent to which one can trust an artificial intelligence system can be very difficult, and the extent of trust is a major factor in determining whether or not AI systems, and their many benefits, can be unlocked.
[01564] A major benefit of the various digital twin systems disclosed herein is that a digital twin platform, particularly one that is adaptive and self-configuring, can provide an ideal environment for decision making. Good decisions usually need a combination of a) fresh, accurate, relevant information; b) application of some degree of expertise; c) use of judgment to consider tradeoffs of risks; and/or d) leadership to cause implementation (which may encompass, among other things, taking on the consequences when outcomes are unfavorable). Each of these attributes can be supported by the data collection, storage and processing systems, connectivity systems, computational systems and intelligence systems described throughout this disclosure and the documents incorporated by reference herein. For example, a pipeline of IoT and edge devices and high QoS networks (adaptive networking) can pass granular, real-time sensor data about all entities in an operating environment (machines, components, humans, infrastructure, etc.), which manifests in visibility, such as for big data analytics and consulting use cases, as well as data for machine learning. Data architectures (e.g., adaptive sensing and data collection; edge query language; intelligent data storage and processing layers) and advanced computational architectures (hybrid cloud, edge, and quantum computing) can be used to optimize conditions for application of human and machine intelligence. For example, data architectures can adapt visual representations for human cognitive processing (including based on the skillset, role, experience, expertise, or cognitive parameters of a human, such as measured by neurometrics, psychometrics, or other systems) and/or prepare and stage data for artificial intelligence systems (including by cleansing, deduplication, generation of synthetic data, entity resolution, normalization and other processing, generation of various embeddings that use one AI system (e.g., a trained neural network) to transform and/or compress input data into outputs that are suitable for efficient processing by AI systems, and the like). Computational architectures can be adapted to use an appropriate mix of cloud, edge and on-device processing systems (including various chipsets described herein, such as AI chipsets and hybrid chipsets involving AI and other functions integrated together). This can include quantum computing where beneficial. Systems integration, including various configurations of systems, systems-of-systems and the like into platform and infrastructure solutions (including PaaS and IaaS configurations), with various architectures, including service-oriented architectures, microservices architectures, and the like, can integrate all
SFT-106-A-PCT relevant systems across an individual’s or group’s set of activities, an enterprise, or an entire business ecosystem, such that a digital twin can have access to current state and activity parameters for many relevant entities. Data and sensor fusion, such as involving sensor data and many other data sets can assist in tracking or predicting outcomes (environmental data, market data, transaction data and many others). Advanced artificial intelligence techniques can combine with human insight to generate expert systems and models that understand and predict operational states, flows and more complex behaviors reflected in data (big data analytics and advanced artificial intelligence techniques), including compact models that are capable of operating effectively on sparse data and/or constrained computational architectures at the edge, as well as generative AI outputs that provide summaries, creative content, instructions, reports, and many other outputs from various inputs, including text, image, video, audio and multi-modal generative AI systems. AI-driven stakeholder interfaces can adaptively filter, prioritize, and distribute information for planning, simulation, decision making and operational action to appropriate stakeholders across an entire enterprise. [01565] With the above stack of integrated systems, taking the form of the many embodiments disclosed herein and in the document incorporated herein by reference, distribution of control, of the various disparate systems, systems-of-systems, components, platforms, infrastructure elements, and other entities involved in the activities of an individual, group, or various levels and roles of enterprise can be enabled, with consideration being given, as noted above, to factors such as latency, security, safety, operational efficiency, reliability and trust. [01566] The untapped power of the digital twin is its ability to enable collaboration, and to distribute decision making, to the right mix of human beings, artificial intelligence systems and other systems, whether in an enterprise or in the activities of an individual or group. The digital twin allows incremental change, experimentation, and easily reversible implementation, removing fear of closing the loop. For example, a digital twin can include a simulation environment that simulates, based on actual historical and current data, what outcomes most likely ensue from deployment of particular configurations of human and artificial intelligence elements in decision making and control loops. The digital twin can also provide a planning environment for deployment of artificial intelligence systems, as well as the data collection, data processing, networking, connectivity, energy, and computational systems and architectures that support them; for example, an enterprise can, using the digital twin, plan various scenarios for obtaining resources to enable more powerful AI systems, comparing projected outcomes at various resource levels and mixes. The digital twin can enable graceful migration among humans, human-AI mixes (e.g., with agents and co-pilots) and autonomous AI systems, including deploying them in ways that can be rapidly adjusted, or reversed, such as based on changing performance capabilities, contextual and
SFT-106-A-PCT
environmental factors, and events; for example, an AI system might be permitted to control transactions during normal daily operations, but it could be removed from the loop, or provided with greater human supervision, in the case of major shifts in an environment, such as during severe swings in the market, after an environmental catastrophe, or the like. Thus, provided herein is a digital twin having a simulation environment that provides, based on data for the entities, states and activities processed and displayed by the digital twin, a simulation of a set of outcomes predicted to result upon deployment of a set of human and artificial intelligence systems in the decision making or control workflows of an entity. Also provided herein is a digital twin that includes a planning system for deployment of a set of artificial intelligence systems and the resources required to enable the artificial intelligence systems, wherein the planning system provides a prediction of the outcomes of deployment of at least a plurality of distinctive sets of artificial intelligence systems or a plurality of distinctive sets of resource configurations. The resources may include ones for data collection, data processing, networking, connectivity, energy, and computational systems and architectures that support them.
[01567] As noted throughout, this disclosure and the documents incorporated by reference herein disclose the training of machine learning systems in various configurations, including deep learning, supervised learning, semi-supervised learning, and the like. In many such embodiments, a human expert provides input data that is used to train the AI system, such as by tagging data sets to assist in AI classification systems; by producing code, text, images, audio, video and other inputs that are used to train generative AI systems to predict a set of outputs from a prompt; by undertaking activities and behaviors that indicate preferences to train AI systems to generate recommendations; by configuring systems, platforms and architectures that can be used to train AI systems to generate similar configurations and recommendations; or by undertaking decisions in various contexts and environments that are used to train AI systems to produce recommendations, make decisions and/or output control signals in similar contexts and environments, among others. In embodiments, any of the machine learning systems or other artificial intelligence systems disclosed herein or in the documents incorporated by reference herein may be embedded in a digital twin for the purpose of enabling training of the digital twin. This may include designating a set of users (such as domain experts, managers, operational supervisors, executives, or the like) as trainers for the AI system, including based on the role, core competency, expertise, experience, or other aspects of the set of users. In embodiments, this may include designating one set of users to provide input data for initial training of a particular AI system and another set of users (possibly overlapping in part with the first set) to supervise the outputs of the AI system. Thus, disclosed herein is a digital twin system with an interface system for designating a set of users as trainers for the creation of an artificial intelligence system that is represented in the digital twin, wherein the
SFT-106-A-PCT
artificial intelligence system operates on the data that is used to populate the digital twin. Also disclosed herein is a digital twin system with an interface system for designating a set of users as supervisors for an artificial intelligence system that is represented in the digital twin, wherein the artificial intelligence system operates on the data that is used to populate the digital twin.
[01568] In embodiments, provided herein is a digital twin system that displays a set of options for distribution of decision-making authority or control across various systems, systems-of-systems or components across a set of entities and workflows. This may be implemented at the level of an individual component or system, such as by allowing a user to configure, within the digital twin, the conditions under which a human (or group of humans), a combined system (a human and AI co-pilot, for example), or an AI system alone will be designated as the authority for making a decision. This can include permanent settings (“never” or “always”), semi-permanent settings (e.g., ones that are reviewed on a systematic or episodic basis) and dynamic settings (i.e., where contextual conditions, environmental conditions, or other factors are processed in real time to determine what set of entities will be designated as the decision-making authority). In embodiments, decision making authority may be distributed within the digital twin based on a broader decision making framework, such as a hierarchical framework (such as an enterprise hierarchy in which individual workers are organized into groups, with lines of reporting and supervision), a rules-based framework (such as a set of voting rules by which disputes about a decision are resolved), a collaborative framework (such as where prospective alternatives are presented and discussed by a set of contributors seeking a consensus recommendation, optionally involving simulations and planning capabilities noted above), a simulation framework (where, as noted above, the digital twin can simulate outcomes based on historical and real-time data), an enterprise planning framework (which may include integration of the digital twin with enterprise planning systems and dashboards), a competitive framework (such as where alternative options are configured to compete with each other to determine which is superior, optionally involving genetic programming or other evolutionary computing systems, as noted elsewhere in this disclosure), a peer-to-peer framework (e.g., where decisions are negotiated bilaterally or multilaterally among entities involved), an algorithmic framework (such as where decisions are reached by a defined set of stages, possibly involving various other frameworks as hybrids in parallel, in series, in iterative loops, or the like), a principles-based framework (such as where a set of personal, group, or enterprise principles (e.g., ethical principles, values, mission statements, or the like) are codified for use when certain contextual decisions are presented), or others. As an example, many daily operational decisions might be configured to be executed autonomously by AI systems, but if those decisions are determined to have an aggregate impact on an element of the mission of an enterprise (such as a mission to continue to develop and employ a set of individuals), then the digital twin can highlight
SFT-106-A-PCT the need to bring an appropriate set of human decision makers into the loop. Thus, in embodiments, provided herein is a digital twin wherein authority for decision making about what set of entities among human beings, human-AI systems, or autonomous AI systems is distributed within the digital twin based on a decision making framework selected from a hierarchical framework, a rules- based framework, a simulation framework, an enterprise planning framework, an algorithmic framework, a principles-based framework, a collaborative framework, a peer-to-peer framework, or a competitive framework. [01569] In embodiments, a digital twin may display various metrics to facilitate decisions about what sets and configurations of human and AI systems should be selected for a given system, system-of-systems, component, or the like. These metrics may be calculated using the data available to the digital twin through its processing functions (such as real time information about the user, available human workers, resource availability for AI systems, contextual and environmental information, information about states and activities of the entities involved in various workflows and the like) as well as by using external data sources, such as data indicating performance metrics for individuals (which may include the general population or domain experts, groups (which may include expert groups, crowds (including by crowdsourcing as disclosed elsewhere herein)) or for AI systems (including how the systems perform under various resource conditions). Metrics may include indicators of training history, such as educational experience, work experience, and job reviews, among many others, for humans. Metrics may include training data or metadata for artificial intelligence systems, including the size of the training data set, the vintage of the training data set, the configuration of neural networks used in training, the presence or absence of synthetic data in the training data set, metrics indicating the quality of the training data set (including indicators of bias, autocorrelation, heteroskedasticity or other statistical or econometric indicators), metrics about the expertise or capabilities of the human beings used to seed and/or supervise the AI systems, and many others. As an example, two similar generative AI systems may be presented along with indicators of the time period over which they were trained, allowing a user to evaluate whether the AI systems are likely to miss important context (e.g., where training data from a time period outside one of the sets is likely to have a major influence on the outcome) or generate hallucinations (such as where a spike in unusual training data during a training period may have unduly swayed results). Thus, in embodiments, provided herein is a digital twin system that displays a set of training metrics for a set of humans that are available to perform a set of decision-making tasks for a set of entities presented in the digital twin. Also provided herein, in embodiments, is a digital twin system that displays a set of training metrics for a set of hybrid human-AI systems that are available to perform a set of decision-making tasks for a set of entities presented in the digital twin. Also provided herein, in embodiments, is a digital
SFT-106-A-PCT
twin system that displays a set of training metrics for a set of AI systems that are available to perform a set of decision-making tasks for a set of entities presented in the digital twin.
[01570] In embodiments, metrics provided within the digital twin may include performance metrics for one or more sets of human and AI entities. Human performance metrics may include metrics for individuals, experts, and groups (including crowds) of the many types, and collected by the many methods and systems, disclosed herein and in the documents incorporated by reference. For individuals, these may include various cognitive, neurometric, psychometric, emotional, attention and other metrics, including ones disclosed herein and in the documents incorporated herein, including those collected by neurometric measurement systems (e.g., EEG, fMRI and other scanning systems), genomic and transportomics systems, psychometric testing, physiological monitoring systems (including wearable sensors, cameras, IoT systems and many others as disclosed throughout this disclosure), systems for detecting emotional state (e.g., anxiety, distress, anger, calm, and others), systems for measuring attention, and others. Human performance metrics may also include metrics of task performance, including quality metrics, output metrics, job assessment metrics, indicators of specific skills, expertise or competencies (e.g., educational credentials, publications and certifications), indicators of success (e.g., rates of return of a business led by the individual) and many others. Human metrics can include metrics for groups, including crowds; for example, outcome metrics from a crowdsourcing system that accumulates the “wisdom of the crowd” for a set of predictions, ideas, solutions, or the like can be compared to those of individual experts and those of various AI systems (including standalone systems and human-AI combinations). AI performance metrics may include a wide range of metrics, including metrics of predictive accuracy, metrics indicating outcomes from use of the systems (including those accounting for context of use), metrics of computational speed and latency, metrics of resource utilization (e.g., computation, energy, network resources, and the like), and many others. In embodiments, the digital twin may present comparative metrics across humans, human-AI combinations and autonomous AI systems (including multiple options for each), so that a user of the digital twin can select, accounting for context of use, an appropriate configuration. Thus, provided herein, in embodiments, is a digital twin system that displays a set of performance metrics for a set of humans that are available to perform a set of decision-making tasks for a set of entities presented in the digital twin. Also provided herein, in embodiments, is a digital twin system that displays a set of performance metrics for a set of hybrid human-AI systems that are available to perform a set of decision-making tasks for a set of entities presented in the digital twin. Also provided herein, in embodiments, is a digital twin system that displays a set of performance metrics for a set of AI systems that are available to perform a set of decision-making tasks for a set of entities presented in the digital twin. Also provided herein, in embodiments, is a digital twin system
SFT-106-A-PCT that displays a set of comparative performance metrics among one or more of a set of individual humans, a human-led enterprise, a crowd of humans, a human-AI combination system, or a set of AI systems that are available to perform a set of decision-making tasks for a set of entities presented in the digital twin. [01571] It should be noted that an intelligent agent (referred to in some cases herein as an opportunity miner) can be used to discover sets of available sets of humans, human-AI combinations, and/or artificial intelligence systems that may be capable of improving outcomes of one or more systems, systems-of-systems or other entities represented in the digital twin. In embodiments, a user of a digital twin may configure a set of intelligent agents to undertake discovery based on prioritization by the user, such as where a user flags elements of an enterprise workflow that are perceived (or determined by metrics) to be most in need of improvement, or most likely to benefit from artificial intelligence. Thus, a digital twin system may be used as a tool for exploration of applications of artificial intelligence as well as for configuring decisions around deployment of artificial intelligence once discovered. Thus, provided herein is an intelligent agent that automatically discovers sets of available systems, among human systems, combined human- AI systems and standalone artificial intelligence systems that are capable of performing a desired function. Also provided herein is a digital twin system having an embedded intelligent agent system that automatically discovers sets of available systems, among human systems, combined human-AI systems and standalone artificial intelligence systems that are capable of performing a desired function. [01572] In embodiments, a user of a digital twin may configure a set of intelligent agents to undertake discovery based on prioritization by the user, such as where a user flags elements of an enterprise workflow that are perceived (or determined by metrics) to be most in need of improvement, or most likely to benefit from artificial intelligence. [01573] In embodiments, an artificial intelligence system may be trained, such as based on a set of human decisions, a set of outcomes, or the like, and used to configure (or recommend configuration of) deployment of one or more appropriate sets of human, human-AI combinations and autonomous AI systems. This deployment configuration system for artificial intelligence may be integrated or embedded as an enabling service or utility within a digital twin, or it may be a standalone system used to determine what options are to be presented to a user of a digital twin. Thus, provided herein is an intelligent agent that operates as a deployment configuration system to at least one of recommend a set of deployment configuration parameters for or automatically configure a set of deployment parameters for a set of human, human-AI combinations or autonomous AI systems. Also provided herein is a digital twin system having an integrated intelligent agent that operates as a deployment configuration system to at least one of recommend
SFT-106-A-PCT a set of deployment configuration parameters for or automatically configure a set of deployment parameters for a set of human, human-AI combinations or autonomous AI systems. [01574] In embodiments, the various discovery and configuration systems noted above may be based on general metrics of performance (e.g., comparing how humans in general perform relative to artificial intelligence systems at performing particular tasks). In other embodiments, the comparison may be highly contextual and situational, such as comparing the capabilities of specific sets of individuals that will be available at the time of decision making and the capabilities of artificial intelligence systems that will be available, either to operate as standalone systems or to work in human-AI combinations (including the general performance capabilities of the AI systems, and also the available computational, networking, energy and other resources that will be needed to run them). Situational comparison can include contextual factors, such as time of day, workforce availability, market and environmental data, and many others, such that a discovery or configuration system can recommend an appropriate configuration for a particular system, at a given place and time. For example, a moderately expensive AI system that performs well autonomously may be recommended or selected for an overnight control task, where human experts are not likely to be available and where computational, energy, or other resources are less constrained (and less expensive). An intelligent agent, as noted above, may be configured to generate a recommendation, or a configuration, based on multi-factor optimization, using various decision frameworks noted above, including being trained on a set of human decisions and/or on feedback of outcomes from past configuration actions. [01575] Thus, provided herein is an intelligent agent that operates as a deployment configuration system to at least one of recommend a set of deployment configuration parameters for or automatically configure a set of deployment parameters for a set of human, human-AI combinations or autonomous AI systems, wherein a deployment configuration is determined at least in part on a set of situational factors, the situational factors being among one or more of a set of availability factors for human resources or artificial intelligence resources, a set of capability factors for human resources or artificial intelligence resources, a set of resource availability factors for enabling a set of human resources or artificial intelligence resources, or a set of contextual or environmental factors for the system to which the configuration is to be deployed. [01576] In embodiments, as noted throughout this disclosure, smart contracts can be integrated with one or more systems of a digital twin, such that they operate on input data (e.g., detection of various events) and produce outputs that reflect transactional terms embedded in the smart contract. This may include, for example, a set of smart contracts that set liability terms relating to the consequences of the outputs of a system to which the smart contract relates. For example, a smart contract may be configured such that the provider of an AI system embedded in the digital twin
SFT-106-A-PCT (or in a system to which the digital twin has access) agrees to provide indemnification, proof of insurance, acceptance of liability, or the like for some set of consequences of using the AI system. Such terms can also include various limitations (e.g., constraints on permitted use; limitations of liability), exclusions, restrictions, and the like, thus providing, in the digital twin, a mechanism for expressly allocating the consequences of selection of a particular mix of human and AI systems to the appropriate person or organization. By referencing the performance metrics noted above, operating in real time on operational and other data, a decision maker can be aided to make a rational decision about implementation of AI, guided by a more reliable estimation of the expected value of implementing a particular mix of human and AI systems, rather than being governed by emotional factors. Thus, provided herein are methods and systems having a smart contract system embedded in a digital twin that sets terms and conditions for liability resulting from the output of a system that is displayed in the digital twin. The system displayed in the digital twin may be an artificial intelligence system. [01577] As good decisions made within a digital twin environment or other decision-making platform lead to better outcomes, trust in the reliability of machine recommendations increases, which can be tested and validated by outcome metrics. Individuals, groups and enterprises can gradually migrate decision making from centralized authority to the operating environment and from human action through various degrees of supervision to autonomy, with appropriate checks and balances at all stages (automated governance), reversing direction when the situation calls for it. Over time as trust builds in the capabilities of artificial intelligence, or human-AI combinations, to outperform humans alone, an enterprise can increasingly close loops among information technology, operations technology, and artificial intelligence technologies, enabling low-latency, highly efficient, well governed autonomous response to changing conditions at the operational level. Operators that don’t ultimately close the loop will be slower than their competitors to respond to changing conditions, less efficient in their operations, and less effective in their decision making. [01578] Individuals, groups and enterprises can unlock great benefits by properly distributing decision making and control where and when it is most effective throughout an enterprise, enabled by the integrated stack of technologies that enable a digital twin, each enhanced by artificial intelligence as described throughout this disclosure and the documents incorporated by reference herein. The true power of the convergence of information technology and operations technology (IT/OT convergence) can be realized when decision making is optimally distributed across humans and machines, an outcome that is achieved by progressively building trust in intelligence technologies and the people who supervise them. Organizations that progress more rapidly to a more optimal state will have significant competitive advantages in operational efficiency
SFT-106-A-PCT (particularly through closed loop automation) and agility to respond to shifting market and competitive dynamics. [01579] In embodiments, various embodiments of digital twins, intelligent agents for discovery and configuration, decision making frameworks and other methods and systems disclosed herein may be used to facilitate configuration and allocation of decision making across the various resources, entities, activities, operations, transactions, offerings and workflows involved in various vehicle and transportation environments, including software-defined vehicles, that comprise the value chain networks described throughout this disclosure and the documents incorporated by reference herein. [01580] Special-purpose systems include hardware and/or software and may be described in terms of an apparatus, a method, or a computer-readable medium. In various embodiments, functionality may be apportioned differently between software and hardware. For example, some functionality may be implemented by hardware in one embodiment and by software in another embodiment. Further, software may be encoded by hardware structures, and hardware may be defined by software, such as in software-defined networking or software-defined radio. [01581] The methods and/or processes described in the disclosure, and steps associated therewith, may be realized in hardware, software or any combination of hardware and software suitable for a particular application. The hardware may include a general-purpose computer and/or dedicated computing device or specific computing device or particular aspect or component of a specific computing device. The processes may be realized in one or more microprocessors, microcontrollers, embedded microcontrollers, programmable digital signal processors or other programmable devices, along with internal and/or external memory. The processes may also, or instead, be embodied in an application specific integrated circuit, a programmable gate array, programmable array logic, or any other device or combination of devices that may be configured to process electronic signals. It will further be appreciated that one or more of the processes may be realized as a computer executable code capable of being executed on a machine-readable medium. [01582] In this application, including the claims, the term module refers to a special-purpose system. The module may be implemented by one or more special-purpose systems. The one or more special-purpose systems may also implement some or all of the other modules. In this application, including the claims, the term module may be replaced with the terms “controller” or “circuit.” In this application, including the claims, the term platform refers to one or more modules that offer a set of functions. In this application, including the claims, the term system may be used interchangeably with module or with the term special-purpose system.
SFT-106-A-PCT [01583] The special-purpose system may be directed or controlled by an operator. The special- purpose system may be hosted by one or more of assets owned by the operator, assets leased by the operator, and third-party assets. The assets may be referred to as a private, community, or hybrid cloud computing network or cloud computing environment. For example, the special- purpose system may be partially or fully hosted by a third party offering software as a service (SaaS), platform as a service (PaaS), and/or infrastructure as a service (IaaS). The special-purpose system may be implemented using agile development and operations (DevOps) principles. In example embodiments, some or all of the special-purpose system may be implemented in a multiple-environment architecture. For example, the multiple environments may include one or more production environments, one or more integration environments, one or more development environments, etc. [01584] A special-purpose system may be partially or fully implemented using or by a mobile device. Examples of mobile devices include navigation devices, cell phones, smart phones, mobile phones, mobile personal digital assistants, palmtops, netbooks, pagers, electronic book readers, tablets, music players, etc. A special-purpose system may be partially or fully implemented using or by a network device. Examples of network devices include switches, routers, firewalls, gateways, hubs, base stations, access points, repeaters, head-ends, user equipment, cell sites, antennas, towers, etc. [01585] A special-purpose system may be partially or fully implemented using a computer having a variety of form factors and other characteristics. For example, the computer may be characterized as a personal computer, as a server, etc. The computer may be portable, as in the case of a laptop, netbook, etc. The computer may or may not have any output device, such as a monitor, line printer, liquid crystal display (LCD), light emitting diodes (LEDs), etc. The computer may or may not have any input device, such as a keyboard, mouse, touchpad, trackpad, computer vision system, barcode scanner, button array, etc. The computer may run a general-purpose operating system, such as the WINDOWS operating system from Microsoft Corporation, the MACOS operating system from Apple, Inc., or a variant of the LINUX operating system. Examples of servers include a file server, print server, domain server, internet server, intranet server, cloud server, infrastructure-as-a-service server, platform-as-a-service server, web server, secondary server, host server, distributed server, failover server, and backup server. [01586] The term hardware encompasses components such as processing hardware, storage hardware, networking hardware, and other general-purpose and special-purpose components. Note that these are not mutually-exclusive categories. For example, processing hardware may integrate storage hardware and vice versa.
SFT-106-A-PCT
[01587] Examples of a component are integrated circuits (ICs), application specific integrated circuits (ASICs), digital circuit elements, analog circuit elements, combinational logic circuits, gate arrays such as field programmable gate arrays (FPGAs), digital signal processors (DSPs), complex programmable logic devices (CPLDs), etc.
[01588] Multiple components of the hardware may be integrated, such as on a single die, in a single package, or on a single printed circuit board or logic board. For example, multiple components of the hardware may be implemented as a system-on-chip. A component, or a set of integrated components, may be referred to as a chip, chipset, chiplet, or chip stack. Examples of a system-on-chip include a radio frequency (RF) system-on-chip, an artificial intelligence (AI) system-on-chip, a video processing system-on-chip, an organ-on-chip, a quantum algorithm system-on-chip, etc.
[01589] The hardware may integrate and/or receive signals from sensors. The sensors may allow observation and measurement of conditions including temperature, pressure, wear, light, humidity, deformation, expansion, contraction, deflection, bending, stress, strain, load-bearing, shrinkage, power, energy, mass, location, viscosity, liquid flow, chemical/gas presence, sound, and air quality. A sensor may include image and/or video capture in visible and/or non-visible (such as thermal) wavelengths, such as a charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) sensor.
[01590] The methods and systems described herein may be deployed in part or in whole through a machine that executes computer software, program codes, and/or instructions on a processor. The present disclosure may be implemented as a method on the machine, as a system or apparatus as part of or in relation to the machine, or as a computer program product embodied in a computer readable medium executing on one or more of the machines. In example embodiments, the processor may be part of a server, cloud server, client, network infrastructure, mobile computing platform, stationary computing platform, or other computing platforms. A processor may be any kind of computational or processing device capable of executing program instructions, codes, binary instructions and the like, including a central processing unit (CPU), a graphics processing unit (GPU), a logic board, a chip (e.g., a graphics chip, a video processing chip, a data compression chip, or the like), a chipset, a controller, a system-on-chip (e.g., an RF system on chip, an AI system on chip, a video processing system on chip, or others), an integrated circuit, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), an approximate computing processor, a quantum computing processor, a parallel computing processor, a neural network processor, a signal processor, a digital processor, a data processor, an embedded processor, a microprocessor, and a co-processor. The co-processor may provide additional processing functions and/or optimizations, such as for speed or power
consumption. Examples of co-processors include a math co-processor, a graphics co-processor, a communication co-processor, a video co-processor, and an artificial intelligence (AI) co-processor. The processor may be or may include a signal processor, digital processor, data processor, embedded processor, microprocessor, or any variant such as a co-processor (a math co-processor, graphics co-processor, communication co-processor, video co-processor, AI co-processor, and the like) that may directly or indirectly facilitate execution of program code or program instructions stored thereon. In addition, the processor may enable execution of multiple programs, threads, and codes. The threads may be executed simultaneously to enhance the performance of the processor and to facilitate simultaneous operations of the application. By way of implementation, methods, program codes, program instructions and the like described herein may be implemented in one or more threads. The thread may spawn other threads that may have assigned priorities associated with them; the processor may execute these threads based on priority or any other order based on instructions provided in the program code. [01591] The processor, or any machine utilizing one, may include non-transitory memory that stores methods, codes, instructions and programs as described herein and elsewhere. The processor may access a non-transitory storage medium through an interface that may store methods, codes, and instructions as described herein and elsewhere. The storage medium associated with the processor for storing methods, programs, codes, program instructions or other type of instructions capable of being executed by the computing or processing device may include, but is not limited to, one or more of a CD-ROM, DVD, memory, hard disk, flash drive, RAM, ROM, cache, network-attached storage, server-based storage, and the like. The computer software, program codes, and/or instructions may be stored and/or accessed on machine readable media that may include: computer components, devices, and recording media that retain digital data used for computing for some interval of time; semiconductor storage known as random access memory (RAM); mass storage typically for more permanent storage, such as optical discs, forms of magnetic storage like hard disks, tapes, drums, cards and other types; processor registers, cache memory, volatile memory, non-volatile memory; optical storage such as CD, DVD; removable media such as flash memory (e.g., USB sticks or keys), floppy disks, magnetic tape, paper tape, punch cards, standalone RAM disks, Zip drives, removable mass storage, off-line storage, and the like; other computer memory such as dynamic memory, static memory, read/write storage, mutable storage, read only, random access, sequential access, location addressable, file addressable, content addressable, network-attached storage, storage area network, bar codes, magnetic ink, network storage, NVMe-accessible storage, PCIe-connected storage, distributed storage, and the like.
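As a non-limiting sketch of the priority-ordered thread execution described in paragraph [01590], the following Python example spawns worker threads that drain a shared priority queue, handling lower-numbered (higher-priority) tasks first. The task names and priorities are hypothetical assumptions for illustration; operating-system thread priorities and scheduler behavior are outside the scope of this sketch.

    # Sketch only: worker threads servicing tasks in priority order.
    import queue
    import threading

    tasks = queue.PriorityQueue()
    for priority, name in [(2, "log telemetry"), (1, "update digital twin"), (3, "archive data")]:
        tasks.put((priority, name))   # lower number = higher priority

    def worker(worker_id: int) -> None:
        while True:
            try:
                priority, name = tasks.get_nowait()
            except queue.Empty:
                return
            print(f"worker {worker_id} handled priority-{priority} task: {name}")
            tasks.task_done()

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

The same pattern could be expressed with processes, thread pools, or an operating-system scheduler; the priority queue is simply one self-contained way to show priority-based ordering of work.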
[01592] A processor may include one or more cores that may enhance the speed and performance of a multiprocessor. In example embodiments, the processor may be a dual-core processor, a quad-core processor, another chip-level multiprocessor, or the like that combines two or more independent cores on a single die. [01593] The methods and systems described herein may be deployed in part or in whole through a machine that executes computer software on a server, client, firewall, gateway, hub, router, or other such computer and/or networking hardware. The software program may be associated with a server that may include a file server, print server, domain server, internet server, intranet server, cloud server, and other variants such as secondary server, host server, distributed server, and the like. The server may include one or more of memories, processors, computer readable transitory and/or non-transitory media, storage media, ports (physical and virtual), communication devices, and interfaces capable of accessing other servers, clients, machines, and devices through a wired or a wireless medium, and the like. The methods, programs, or codes as described herein and elsewhere may be executed by the server. In addition, other devices required for execution of methods as described in this application may be considered as a part of the infrastructure associated with the server. [01594] The processor may enable execution of multiple threads. These multiple threads may correspond to different programs. In various embodiments, a single program may be implemented as multiple threads by the programmer or may be decomposed into multiple threads by the processing hardware. The threads may be executed simultaneously to enhance the performance of the processor and to facilitate simultaneous operations of the application. A processor may be implemented as a packaged semiconductor die. The die includes one or more processing cores and may include additional functional blocks, such as cache. In various embodiments, the processor may be implemented by multiple dies, which may be combined in a single package or packaged separately. [01595] The networking hardware may include one or more interface circuits. In some examples, the interface circuit(s) may implement wired or wireless interfaces that connect, directly or indirectly, to one or more networks. Examples of networks include a cellular network, a local area network (LAN), a wireless personal area network (WPAN), a metropolitan area network (MAN), and/or a wide area network (WAN). The networks may include one or more of point-to-point and mesh technologies. Data transmitted or received by the networking components may traverse the same or different networks. Networks may be connected to each other over a WAN or point-to-point leased lines using technologies such as Multiprotocol Label Switching (MPLS) and virtual private networks (VPNs).
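As a non-limiting sketch of software entities communicating through a network interface, as contemplated in paragraph [01595], the following Python example opens a TCP socket on the local host and exchanges a short message between a server thread and a client in the same process. The message contents are hypothetical; a real deployment would add framing, authentication, and encryption appropriate to the network in use.

    # Sketch only: a local TCP exchange between a server thread and a client.
    import socket
    import threading

    server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_sock.bind(("127.0.0.1", 0))      # port 0: let the OS choose a free port
    server_sock.listen(1)
    port = server_sock.getsockname()[1]

    def serve_once() -> None:
        conn, _addr = server_sock.accept()
        with conn:
            request = conn.recv(1024)
            conn.sendall(b"ack: " + request)

    server_thread = threading.Thread(target=serve_once)
    server_thread.start()

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
        client.connect(("127.0.0.1", port))
        client.sendall(b"vehicle status request")
        print(client.recv(1024).decode())

    server_thread.join()
    server_sock.close()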
[01596] Examples of cellular networks include GSM, GPRS, 3G, 4G, 5G, LTE, and EVDO. The cellular network may be implemented using a frequency division multiple access (FDMA) network or a code division multiple access (CDMA) network. Examples of LAN standards are Institute of Electrical and Electronics Engineers (IEEE) Standard 802.11-2020 (also known as the WIFI wireless networking standard) and IEEE Standard 802.3-2018 (also known as the ETHERNET wired networking standard). Examples of WPAN standards include IEEE Standard 802.15.4, including the ZIGBEE standard from the ZigBee Alliance. Further examples of WPAN standards include the BLUETOOTH™ wireless networking standard, including Core Specification versions 3.0, 4.0, 4.1, 4.2, 5.0, and 5.1 from the Bluetooth Special Interest Group (SIG). A WAN may also be referred to as a distributed communications system (DCS). One example of a WAN is the internet. [01597] Storage hardware is or includes a computer-readable medium. The term computer-readable medium, as used in this disclosure, encompasses both nonvolatile storage and volatile storage, such as dynamic random access memory (DRAM). The term computer-readable medium only excludes transitory electrical or electromagnetic signals propagating through a medium (such as on a carrier wave). A computer-readable medium in this disclosure is therefore non-transitory, and may also be considered to be tangible. [01598] Examples of storage implemented by the storage hardware include a database (such as a relational database or a NoSQL database), a data store, a data lake, a column store, and a data warehouse. Examples of storage hardware include nonvolatile memory devices, volatile memory devices, magnetic storage media, a storage area network (SAN), network-attached storage (NAS), optical storage media, printed media (such as bar codes and magnetic ink), and paper media (such as punch cards and paper tape). The storage hardware may include cache memory, which may be collocated with or integrated with processing hardware. Storage hardware may have read-only, write-once, or read/write properties. Storage hardware may be random access or sequential access. Storage hardware may be location-addressable, file-addressable, and/or content-addressable. [01599] Examples of nonvolatile memory devices include flash memory (including NAND and NOR technologies), solid state drives (SSDs), an erasable programmable read-only memory device such as an electrically erasable programmable read-only memory (EEPROM) device, and a mask read-only memory (ROM) device. Examples of volatile memory devices include processor registers and random access memory (RAM), such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), synchronous graphics RAM (SGRAM), and video RAM (VRAM). Examples of magnetic storage media include analog magnetic tape, digital magnetic tape, and rotating hard disk drives (HDDs). Examples of optical storage media include a CD (such as a CD-R, CD-RW, or CD-ROM), a DVD, a Blu-ray disc, and an Ultra HD Blu-ray disc.
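As a non-limiting sketch of storage implemented as a relational database, one of the storage forms listed in paragraph [01598], the following Python example writes and queries a few rows in an in-memory SQLite database. The table name, columns, and sensor identifiers are hypothetical examples, not a schema required by the present disclosure.

    # Sketch only: persisting and querying example sensor readings in SQLite.
    import sqlite3

    con = sqlite3.connect(":memory:")        # in-memory database for illustration
    con.execute(
        "CREATE TABLE sensor_reading (sensor_id TEXT, quantity TEXT, value REAL)"
    )
    con.executemany(
        "INSERT INTO sensor_reading VALUES (?, ?, ?)",
        [
            ("temp-01", "temperature_C", 82.5),
            ("press-04", "pressure_kPa", 101.3),
            ("temp-01", "temperature_C", 84.1),
        ],
    )
    con.commit()
    for sensor_id, value in con.execute(
        "SELECT sensor_id, value FROM sensor_reading WHERE quantity = ?",
        ("temperature_C",),
    ):
        print(sensor_id, value)
    con.close()

The same data could equally live in a NoSQL store, a data lake, or a column store; SQLite is used here only because it is self-contained and requires no server.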
SFT-106-A-PCT [01600] Examples of storage implemented by the storage hardware include a distributed ledger, such as a permissioned or permissionless blockchain. Entities recording transactions, such as in a blockchain, may reach consensus using an algorithm such as proof-of-stake, proof-of-work, and proof-of-storage. Elements of the present disclosure may be represented by or encoded as non- fungible tokens (NFTs). Ownership rights related to the non-fungible tokens may be recorded in or referenced by a distributed ledger. Transactions initiated by or relevant to the present disclosure may use one or both of fiat currency and cryptocurrencies, examples of which include bitcoin and ether. Some or all features of hardware may be defined using a language for hardware description, such as IEEE Standard 1364-2005 (commonly called “Verilog”) and IEEE Standard 1076-2008 (commonly called “VHDL”). The hardware description language may be used to manufacture and/or program hardware. [01601] A special-purpose system may be distributed across multiple different software and hardware entities. Communication within a special-purpose system and between special-purpose systems may be performed using networking hardware. The distribution may vary across embodiments and may vary over time. For example, the distribution may vary based on demand, with additional hardware and/or software entities invoked to handle higher demand. In various embodiments, a load balancer may direct requests to one of multiple instantiations of the special purpose system. The hardware and/or software entities may be physically distinct and/or may share some hardware and/or software, such as in a virtualized environment. Multiple hardware entities may be referred to as a server rack, server farm, data center, etc. [01602] Software includes instructions that are machine-readable and/or executable. Instructions may be logically grouped into programs, codes, methods, steps, actions, routines, functions, libraries, objects, classes, etc. Software may be stored by storage hardware or encoded in other hardware. Software encompasses (i) descriptive text to be parsed, such as HTML (hypertext markup language), XML (extensible markup language), and JSON (JavaScript Object Notation), (ii) assembly code, (iii) object code generated from source code by a compiler, (iv) source code for execution by an interpreter, (v) bytecode, (vi) source code for compilation and execution by a just- in-time compiler, etc. As examples only, source code may be written using syntax from languages including C, C++, JavaScript, Java, Python, R, etc. The computer executable code may be created using a structured programming language such as C, an object oriented programming language such as C++, or any other high-level or low-level programming language (including assembly languages, hardware description languages, and database programming languages and technologies) that may be stored, compiled or interpreted to run on one of the devices described in the disclosure, as well as heterogeneous combinations of processors, processor architectures, or combinations of different hardware and software, or any other machine capable of executing
program instructions. Computer software may employ virtualization, virtual machines, containers, container engines (such as the DOCKER container engine), container management tools (such as PORTAINER), and other capabilities. In example embodiments, methods described in the disclosure and combinations thereof may be embodied in computer executable code that, when executing on one or more computing devices, performs the steps thereof. In another aspect, the methods may be embodied in systems that perform the steps thereof and may be distributed across devices in a number of ways, or all of the functionality may be integrated into a dedicated, standalone device or other hardware. In another aspect, the means for performing the steps associated with the processes described in the disclosure may include any of the hardware and/or software described in the disclosure. All such permutations and combinations are intended to fall within the scope of the disclosure. [01603] The elements depicted in the flow chart and block diagrams or any other logical component may be implemented on a machine capable of executing program instructions. Thus, while the foregoing drawings and descriptions set forth functional aspects of the disclosed systems, no particular arrangement of software for implementing these functional aspects should be inferred from these descriptions unless explicitly stated or otherwise clear from the context. Similarly, it will be appreciated that the various steps identified and described in the disclosure may be varied, and that the order of steps may be adapted to particular applications of the techniques disclosed herein. All such variations and modifications are intended to fall within the scope of this disclosure. As such, the depiction and/or description of an order for various steps should not be understood to require a particular order of execution for those steps, unless required by a particular application, or explicitly stated or otherwise clear from the context. [01604] Software also includes data. However, data and instructions are not mutually-exclusive categories. In various embodiments, the instructions may be used as data in one or more operations. As another example, instructions may be derived from data. The functional blocks and flowchart elements in this disclosure serve as software specifications, which can be translated into software by the routine work of a skilled technician or programmer. Software may include and/or rely on firmware, processor microcode, an operating system (OS), a basic input/output system (BIOS), application programming interfaces (APIs), libraries such as dynamic-link libraries (DLLs), device drivers, hypervisors, user applications, background services, background applications, etc. Software includes native applications and web applications. For example, a web application may be served to a device through a browser using hypertext markup language 5th revision (HTML5). [01605] Software may include artificial intelligence systems, which may include machine learning or other computational intelligence. For example, artificial intelligence may include one or more models used for one or more problem domains. When presented with many data features, identification of a subset of features that are relevant to a problem domain may improve prediction
accuracy, reduce storage space, and increase processing speed. This identification may be referred to as feature engineering. Feature engineering may be performed by users or may be performed automatically and only guided by users. In various implementations, a machine learning system may computationally identify relevant features, such as by performing singular value decomposition on the contributions of different features to outputs. [01606] Examples of the models include recurrent neural networks (RNNs) such as long short-term memory (LSTM), deep learning models such as transformers, decision trees, support-vector machines, genetic algorithms, Bayesian networks, and regression analysis. Examples of systems based on a transformer model include bidirectional encoder representations from transformers (BERT) and generative pre-trained transformers (GPT). Training a machine-learning model may include supervised learning (for example, based on labelled input data), unsupervised learning, and reinforcement learning. In various embodiments, a machine-learning model may be pre-trained by its operator or by a third party. Problem domains include nearly any situation where structured data can be collected, and include natural language processing (NLP), computer vision (CV), classification, image recognition, etc. [01607] Some or all of the software may run in a virtual environment rather than directly on hardware. The virtual environment may include a hypervisor, emulator, sandbox, container engine, etc. The software may be built as a virtual machine, a container, etc. Virtualized resources may be controlled using, for example, a DOCKER container platform, a Pivotal Cloud Foundry (PCF) platform, etc. [01608] In a client-server model, some of the software executes on first hardware identified functionally as a server, while other software executes on second hardware identified functionally as a client. The identity of the client and server is not fixed: for some functionality, the first hardware may act as the server while for other functionality, the first hardware may act as the client. In different embodiments and in different scenarios, functionality may be shifted between the client and the server. In one dynamic example, some functionality normally performed by the second hardware is shifted to the first hardware when the second hardware has less capability. In various embodiments, the term “local” may be used in place of “client,” and the term “remote” may be used in place of “server.” [01609] Some or all of the software may be logically partitioned into microservices. Each microservice offers a reduced subset of functionality. In various embodiments, each microservice may be scaled independently depending on load, either by devoting more resources to the microservice or by instantiating more instances of the microservice. In various embodiments, functionality offered by one or more microservices may be combined with each other and/or with other software not adhering to a microservices model.
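As a non-limiting sketch of computationally identifying relevant features with singular value decomposition, as mentioned in paragraph [01605], the following Python example decomposes a small synthetic matrix of features and an output column, then reconstructs each feature's covariance with the output from the SVD factors. The synthetic data and the scoring heuristic are assumptions introduced only for illustration, not a required feature-engineering method.

    # Sketch only: ranking feature relevance to an output via SVD.
    import numpy as np

    rng = np.random.default_rng(0)
    n_samples, n_features = 200, 5
    X = rng.normal(size=(n_samples, n_features))
    # Synthetic output depending mostly on features 0 and 2.
    y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.1 * rng.normal(size=n_samples)

    # Stack features and output, center, and decompose.
    A = np.column_stack([X, y])
    A = A - A.mean(axis=0)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    # Reconstruct each feature's (unnormalized) covariance with the output from
    # the right singular vectors and singular values: sum_k s_k^2 * v_k[j] * v_k[-1].
    scores = np.abs((Vt[:, :n_features] * (s**2 * Vt[:, -1])[:, None]).sum(axis=0))
    print("feature relevance scores:", np.round(scores, 1))  # features 0 and 2 dominate

In practice, comparable feature scores might feed a feature-selection step before training any of the models enumerated in paragraph [01606].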
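As a non-limiting sketch of the client-server and microservice partitioning described in paragraphs [01608] and [01609], the following Python example runs a minimal HTTP service that exposes a single reduced function (reporting a status) and a client that calls it in the same process. The route, payload, and service name are hypothetical; production microservices would typically add service discovery, authentication, and independent deployment and scaling.

    # Sketch only: one tiny HTTP microservice plus a local client call.
    import json
    import threading
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    class StatusHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = json.dumps({"service": "vehicle-status", "state": "nominal"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, format, *args):   # keep the sketch quiet
            pass

    server = HTTPServer(("127.0.0.1", 0), StatusHandler)   # port 0: pick a free port
    threading.Thread(target=server.serve_forever, daemon=True).start()

    with urlopen(f"http://127.0.0.1:{server.server_port}/status") as response:
        print(json.loads(response.read()))

    server.shutdown()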
SFT-106-A-PCT [01610] Some or all of the software may be arranged logically into layers. In a layered architecture, a second layer may be logically placed between a first layer and a third layer. The first layer and the third layer would then generally interact with the second layer and not with each other. In various embodiments, this is not strictly enforced – that is, some direct communication may occur between the first and third layers. FURTHER INFORMATION AND USE OF TERMS [01611] The background description is presented simply for context, and is not necessarily well- understood, routine, or conventional. Further, the background description is not an admission of what does or does not qualify as prior art. In fact, some or all of the background description may be work attributable to the named inventors that is otherwise unknown in the art. [01612] While only a few embodiments of the disclosure have been shown and described, it will be obvious to those skilled in the art that many changes and modifications may be made thereunto without departing from the spirit and scope of the disclosure as described in the following claims. All patent applications and patents, both foreign and domestic, and all other publications referenced herein are incorporated herein in their entireties to the full extent permitted by law. While the disclosure has been disclosed in connection with the preferred embodiments shown and described in detail, various modifications and improvements thereon will become readily apparent to those skilled in the art. Accordingly, the spirit and scope of the disclosure is not to be limited by the foregoing examples, but is to be understood in the broadest sense allowable by law. [01613] The detailed description includes specific examples for illustration only, and not to limit the disclosure or its applicability. The examples are not intended to be an exhaustive list, but instead simply demonstrate possession by the inventors of the full scope of the currently presented and envisioned future claims. Variations, combinations, and equivalents of the examples are within the scope of the disclosure. No language in the specification should be construed as indicating that any non-claimed element is essential or critical to the practice of the disclosure. [01614] While the foregoing written description enables one skilled to make and use what is considered presently to be the best mode thereof, those skilled in the art will understand and appreciate the existence of variations, combinations, and equivalents of the specific embodiment, method, and examples herein. The disclosure should therefore not be limited by the above- described embodiment, method, and examples, but by all embodiments and methods within the scope and spirit of the disclosure. [01615] Each publication referenced in this disclosure, including foreign and domestic patent applications and patents, is hereby incorporated by reference in its entirety as if fully set forth herein.
[01616] Unless otherwise specified, the terms “comprising,” “having,” “with,” “including,” and “containing,” and their variants, are to be construed as open-ended terms (i.e., meaning “including, but not limited to”). [01617] The term “exemplary” simply means “example” and does not indicate a best or preferred example. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate the disclosure and does not pose a limitation on the scope of the disclosure unless otherwise claimed. [01618] The use of the terms “a,” “an,” “the,” and similar referents in the context of describing the disclosure (especially in the context of the following claims) is to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. [01619] Recitations of ranges of values herein are merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. [01620] The phrase “at least one of A, B, and C” should be construed to mean a logical (A OR B OR C), using a non-exclusive logical OR, and should not be construed to mean “at least one of A, at least one of B, and at least one of C.” [01621] The term “set” may include a set with a single member. The term “set” does not necessarily exclude the empty set; in other words, in some circumstances a “set” may have zero elements. The term “non-empty set” may be used to indicate exclusion of the empty set; that is, a non-empty set must have one or more elements. The term “subset” does not necessarily require a proper subset. In other words, a “subset” of a first set may be coextensive with (equal to) the first set. Further, the term “subset” does not necessarily exclude the empty set; in some circumstances a “subset” may have zero elements. [01622] Physical (such as spatial and/or electrical) and functional relationships between elements (for example, between modules, circuit elements, semiconductor layers, etc.) are described using various terms. Unless explicitly described as being “direct,” when a relationship between first and second elements is described, that relationship encompasses both (i) a direct relationship where no other intervening elements are present between the first and second elements and (ii) an indirect relationship where one or more intervening elements are present between the first and second elements. Example relationship terms include “adjoining,” “transmitting,” “receiving,” “connected,” “engaged,” “coupled,” “adjacent,” “next to,” “on top of,” “above,” “below,” “abutting,” and “disposed.”
SFT-106-A-PCT [01623] Although each of the embodiments is described above as having certain features, any one or more of those features described with respect to any embodiment of the disclosure can be implemented in and/or combined with features of any of the other embodiments, even if that combination is not explicitly described. In other words, the described embodiments are not mutually exclusive, and permutations of multiple embodiments remain within the scope of this disclosure. [01624] One or more elements (for example, steps within a method, instructions, actions, or operations) may be executed in a different order (and/or concurrently) without altering the principles of the present disclosure. Unless technically infeasible, elements described as being in series may be implemented partially or fully in parallel. Similarly, unless technically infeasible, elements described as being in parallel may be implemented partially or fully in series. [01625] While the disclosure describes structures corresponding to claimed elements, those elements do not necessarily invoke a means plus function interpretation unless they explicitly use the signifier “means for.” Unless otherwise indicated, recitations of ranges of values are merely intended to serve as a shorthand way of referring individually to each separate value falling within the range, and each separate value is hereby incorporated into the specification as if it were individually recited. [01626] While the drawings divide elements of the disclosure into different functional blocks or action blocks, these divisions are for illustration only. According to the principles of the present disclosure, functionality can be combined in other ways such that some or all functionality from multiple separately-depicted blocks can be implemented in a single functional block; similarly, functionality depicted in a single block may be separated into multiple blocks. Unless explicitly stated as mutually exclusive, features depicted in different drawings can be combined consistent with the principles of the present disclosure. [01627] In the drawings, reference numbers may be reused to identify identical elements or may simply identify elements that implement similar functionality. Numbering or other labeling of instructions or method steps is done for convenient reference, not to indicate a fixed order. In the drawings, the direction of an arrow, as indicated by the arrowhead, generally demonstrates the flow of information (such as data or instructions) that is of interest to the illustration. For example, when element A and element B exchange a variety of information but information transmitted from element A to element B is relevant to the illustration, the arrow may point from element A to element B. This unidirectional arrow does not imply that no other information is transmitted from element B to element A. As one example, for information sent from element A to element B, element B may send requests and/or acknowledgements to element A.