WO2019126755A1 - Creation and classification of training data for machine learning functions - Google Patents

Creation and classification of training data for machine learning functions

Info

Publication number
WO2019126755A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
vessel
training
machine learning
environment
Application number
PCT/US2018/067298
Other languages
English (en)
Inventor
Dickie Andrew MARTIN
Chase John GAUDET
Original Assignee
Fugro N.V.
Application filed by Fugro N.V.
Publication of WO2019126755A1


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/0088 Control characterized by the autonomous decision making process, e.g. artificial intelligence, predefined behaviours
    • G05D 1/02 Control of position or course in two dimensions
    • G05D 1/0206 Control of position or course in two dimensions specially adapted to water vehicles
    • G05D 1/04 Control of altitude or depth
    • G05D 1/048 Control of altitude or depth specially adapted for water vehicles
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G06N 3/086 Learning methods using evolutionary algorithms, e.g. genetic algorithms or genetic programming

Definitions

  • the present technology pertains to computer learning systems, and more specifically to systems and methods for generating and applying training data for autonomous surface/sub-surface vessels.
  • Machine learning is capable of analyzing tremendously large data sets at a scale that continues to increase. Using various machine learning techniques and frameworks, it is possible to analyze data sets to extract patterns and correlations that might otherwise never be noticed under human analysis alone. Using carefully tailored data inputs, a machine learning system can be manipulated to learn a desired operation, function, or pattern. However, this training process is complicated by the fact that the machine learning system’s inner functionality remains largely opaque to the human observer and the fact that the training data set can easily be biased or too small, both of which result in faulty or otherwise insufficient training.
  • Machine learning is of particular interest in the field of controls, particularly autonomous and semi-autonomous controls. While self-driving car technology is becoming increasingly prevalent, corresponding technology for maritime vessel operations has proven to be far more difficult to solve, due at least in part to a number of fundamental operational differences that prevent crossover from the automotive world to the maritime world. Maritime vessel operations are characterized by extremely slow response times to control commands (on the order of minutes as compared to seconds for automobiles) and far more complex vessel dynamics (e.g., multiple deployed payloads or instrumentation packages, combined surface and sub-surface operations, etc.) that pose significant challenges when it comes to predicting future state or other dynamic change.
  • FIG. 1 illustrates an example data collection system of the present disclosure.
  • FIG. 2 illustrates an example single machine learning network training process of the present disclosure.
  • FIG. 3 illustrates an example modular vessel command system and a modular autonomous vessel control system.
  • FIG. 4 illustrates an example trained and deployed machine learning based maritime control system of the present disclosure.
  • FIG. 5 illustrates an example trained and deployed machine learning based maritime control system of the present disclosure.
  • FIG. 6 illustrates an example system for training and autonomous operation of a sub-surface vessel.
  • FIG. 7 illustrates an example of a system for implementing certain aspects of the present technology.
  • autonomous land-going vehicles can be characterized by the fact that they generally experience little in the way of latency between receiving a commanded state or position and achieving the commanded state or position.
  • this latency might be on the order of single-digit seconds for almost all land vehicle operations, including steering, accelerating, braking, parking, etc.
  • autonomous control systems for ocean-going vessels are not only subject to entirely different operational constraints as compared to land vehicles, but must also be designed with fundamentally different control and sensing criteria. As such, it would be highly desirable to provide an autonomous or semi-autonomous control system for ocean-going vessels that fully accounts for the control of any deployed payloads.
  • a computer learning system that can be integrated with sensor, data access, and/or control systems already present on a given vehicle or vessel in order to generate a plurality of training data sets for training one or more machine learning algorithms or networks in the autonomous or semi-autonomous operation of a vehicle or vessel.
  • a vehicle or vessel can include ocean-going vessels, freshwater vessels, sub-surface vessels, and various other forms of floating or displacement watercraft as would be appreciated by one of ordinary skill in the art.
  • aspects of the present disclosure can be applied to additional vehicle types, environments, and control systems and techniques, such that a given vehicle can be provided with machine learning to train and implement one or more obstacle detection and avoidance systems, which can function either independently or as an underlying component of an overall vehicle navigation and control system.
  • the disclosed training data generation and machine learning process can be applied to train vehicles adapted for one or more of a sub-sea environment, a sea environment, a surface environment, a sub-surface environment, an airborne environment, an atmospheric environment, and an outer space environment.
  • both an oil tanker and a robotic vacuum cleaner could be trained to perform a process of obstacle or risk detection to thereby map their surroundings for subsequent navigation.
  • a navigational system can learn the corresponding control and response dynamics for each vehicle type, and based on the combination of the obstacle/risk detection and the learned control and response dynamics, the same system can be conditioned or trained to implement a suitable control and navigation system for both an oil tanker and a robotic vacuum cleaner, or a variety of other locomotion devices, machines, etc.
  • machine learning algorithms or networks can take various forms and implementations as would be appreciated by one of ordinary skill in the art and are not restricted to a singular type or construction.
  • the instant disclosure refers primarily to convolutional neural networks (CNNs), artificial neural networks (ANNs), and long short-term memory networks (LSTMs), although again, it is contemplated that the various machine learning networks as are known in the art may be utilized without departing from the scope of the instant disclosure.
  • machine learning techniques can be broadly viewed as employing feedback techniques to minimize a cost function, such that the cost function minimization causes the machine learning to converge on a final trained state.
  • Minimization and feedback techniques include backward propagation, which evaluates the output of a machine learning system for a given training data input, and uses this evaluation as an input to the next training cycle of the machine learning system. Additional techniques beyond backward propagation can be employed in machine learning and training applications, including evolutionary strategy and genetic algorithms, which can be viewed as likewise performing a cost function minimization in order to train the machine learning system.
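  • As a hedged illustration of the feedback loop described above (not the patent's implementation), the following minimal Python sketch trains a toy linear model by backward propagation: the output is evaluated for a given training input, the cost is computed, and its gradient feeds the next training cycle. The toy model and all names are assumptions made for the example.

```python
# Minimal sketch of training as iterative cost-function minimization
# via backward propagation; the toy linear model is illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))            # training data inputs
w_true = np.array([0.5, -1.2, 2.0, 0.3]) # "desired operation" to be learned
y = X @ w_true                           # desired (labeled) outputs

w = rng.normal(size=4)                   # weights in the untrained state
lr = 0.05                                # learning rate

for step in range(500):
    y_hat = X @ w                        # evaluate output for the training input
    err = y_hat - y
    cost = np.mean(err ** 2)             # cost function to be minimized
    grad = 2 * X.T @ err / len(X)        # backward propagation of the cost
    w -= lr * grad                       # feedback into the next training cycle

print(round(float(cost), 6), w)          # cost converges toward 0, w toward w_true
```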
  • the training data used to train a machine learning system can be adjusted or adapted for various categories of machine learning, including supervised learning, unsupervised learning, semi-supervised learning, reinforcement learning, etc. While general reference is made to autonomous operation or autonomous vessels, this term is not to be construed as limiting and instead is intended to encompass various lesser degrees of autonomy.
  • these lesser degrees of autonomy include remote control, wherein a vessel has supplemental or secondary (but not primary) autonomous decision making capabilities, and supervised autonomy, wherein a vessel has primary autonomous decision making capabilities but is subject to monitoring and intervention by a human operator (e.g., a captain) based on his or her own human judgment.
  • FIG. 1 depicts an example architecture 100 for acquiring and generating training data for use with one or more computer or machine learning networks.
  • architecture 100 illustrates a system and method for generating or constructing training data for the autonomous or semi-autonomous operation of ocean-going vessels such as survey ships or oil field service ships, although it is appreciated that the training data generation may be tailored for other types of vessels without departing from the scope of the instant disclosure.
  • the primary training data inputs are provided by an a priori data source 102, a sensor layer/acquisition system 104, and a metadata generation and classification system 120.
  • these training data inputs flow into training database 190, which contains temporally indexed data 192, non-temporally indexed data 194, and metadata 196.
  • Training database 190 might provide separate logical partitions for one or more of the three datasets 192, 194, and 196, or can store the datasets on a shared partition.
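  • One way the three datasets might be partitioned is sketched below. This is an illustrative assumption, not the patent's schema: table and column names are invented, and SQLite stands in for whatever store training data database 190 actually uses.

```python
# Illustrative sketch of separate logical partitions for datasets 192/194/196.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE temporal_data (        -- temporally indexed data 192
    ts REAL NOT NULL,               -- epoch timestamp
    source TEXT, variable TEXT, value REAL
);
CREATE INDEX ix_temporal_ts ON temporal_data (ts);

CREATE TABLE nontemporal_data (     -- non-temporally indexed data 194
    source TEXT, variable TEXT, value TEXT
);

CREATE TABLE metadata (             -- metadata / labeled examples 196
    deviation_id INTEGER PRIMARY KEY,
    input_conditions TEXT, recorded_output TEXT, reason TEXT
);
""")
db.execute("INSERT INTO temporal_data VALUES (1545264000.0, 'gyro', 'heading_deg', 182.4)")
db.execute("INSERT INTO nontemporal_data VALUES ('vessel_spec', 'engine_count', '2')")
```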
  • a system or apparatus for training data generation can be installed in one or more vessels for generating training data as will be described below.
  • an installed system might include a supervisor UI 140 provided in the control center of the vessel in order to receive inputs from the vessel’s captain or a training data supervisor.
  • One or more of metadata generation and classification system 120 and mission plan 130 can be likewise provided in situ on the vessel alongside supervisor UI 140, or can be provided remotely, such as at a headquarters, operating base, or dock operation associated with the individual or company responsible for the vessel.
  • a priori data source 102 can be provided in situ on the vessel or remotely, as is desired - the nature of a priori data makes it such that the data source 102 can function normally as long as its communicative links and data channels are active.
  • all of the components seen in architecture 100 might be provided as a single unit or otherwise all provided on the vessel.
  • training data database 190 might be unique to just that given vessel, i.e., it contains only training data generated in the course of operating the given vessel with or without payloads.
  • a master training data database (not shown) could then be provided, either remotely (e.g., at a headquarters, operating base, dock operation, etc.) or in the cloud, such that the training data collected in training data database 190 could be streamed to the master training data database as it is generated, or could be offloaded in bulk to the master training data database at some later time after its collection, such as when the vessel returns to port.
  • a priori data, provided in architecture 100 by a priori data source 102, is data that is generally static and can be characterized in advance. As such, a priori data commonly exhibits limited to no temporal dependency (and can thus be stored in training data database 190 as non-temporally indexed data 194) and is oftentimes data which can be obtained without deploying a vessel into the field.
  • a partial listing of the a priori data, pertaining to the operation of an ocean floor survey vessel, that may be collected by a priori data source 102 includes:
  • ROV (Remote Operated Vehicle)
  • Detection capabilities of payload devices, including measurement quality windows (e.g., high quality if instrument is <10m above ocean floor, medium quality if instrument is >10m above ocean floor, low quality if instrument is >50m above ocean floor), pay-in time, time to configure for storage on vessel, time to configure for deployment from vessel, running time, etc.
  • Atmospheric weather forecast (visibility, wind speed, precipitation, storm activity, temperature, humidity, etc.), etc.
  • Charts - give safe navigation corridors, define ‘no-go zones’, define shoal areas, define boundaries (regulatory, ownership, etc.), locate infrastructure (rigs, platforms, buoys, etc.), etc.
  • Third parties and/or governmental hydrographic/charting offices provide navigational and bathymetry data in point cloud, raster, or TIN (triangular irregular network) format, etc.
  • GIS (Geographic Information System) data
  • a priori data is often spatial or quantitative
  • some a priori data might have a temporal component.
  • Met Ocean forecasts have a temporal component in addition to a spatial component, indicating a time at which a given weather condition is expected in a given spatial location.
  • some spatial a priori data may gain a temporal component as it evolves or changes.
  • while bathymetry data alone is spatial, viewed in a larger context such bathymetry data has a temporal component as it changes over time (e.g., year to year).
  • a priori data source 102 can collect, retrieve, generate or otherwise acquire any a priori data pertaining to a vessel and its operation in or around various bodies of water as is desired, as well as geographic/environmental data from a plurality of external sources covering areas where the vessel does or may operate. Furthermore, the fact that a priori data has a weak temporal dependence does not imply that the a priori data is inherently scalar data. Indeed, many a priori data factors can be characterized as functions of numerous other variables besides time. For example, the power output of a vessel can be a function of the number of engines operable, each engine’s operating power, fuel type, ambient temperature, etc.
  • while training data database 190 categorizes and indexes the received training data from a priori data source 102 as either temporally based or non-temporally based, it is contemplated that categorizations on other variables or factors can be employed without departing from the scope of the instant disclosure.
  • a priori data source 102 can perform normalization operations prior to transmitting training data to training data database 190.
  • training data database 190 may itself perform this normalization operation prior to or as a part of the write operation (e.g., database 190 is accessible through and receives data through one or more servers (e.g., cloud-based, etc.) over a communication network).
  • the second category of input training data is real-time data, which is provided in architecture 100 by sensor layer/acquisition system 104.
  • sensor layer/ acquisition system 104 acquires the real-time data from one or more sensors (or the like) attached to the vessel in a geographic area.
  • Real-time data is generally dynamic and is generally collected or refreshed at some relatively small periodic interval, i.e. in substantially real-time. Accordingly, real-time data commonly exhibits a strong temporal dependency (and can thus be stored in training data database 190 as temporally indexed data 192) and is oftentimes data which can only be obtained from a vessel or sensor deployed into the field or area of interest.
  • a partial listing of real-time data pertaining to the operation of an ocean floor survey vessel can include:
  • Location and depth (relative to surface vessel(s) or other deployed payload(s), absolute, changes over time, surface estimated position vs. payload estimated position, etc.)
  • AIS (Automatic Identification System) data, including IMO (International Maritime Organization) ship identification numbers
  • Radar readout(s) can be correlated with AIS data to tag radar blips with ship’s info, etc.
  • Can be integrated with computer vision for correlation with other in situ data and/or dynamic target identification, etc.
  • VHF (Very High Frequency) radio transmissions and other communication links can be digitized and converted to text, etc.
  • sensor layer/acquisition system 104 can collect, retrieve, generate or otherwise acquire any real-time data pertaining to the ongoing operation of a vessel in or around various bodies of water as is desired. It is noted that although the real-time data is typically highly temporally dependent, this does not imply that real-time data is solely a function of time. For example, many real-time operational characteristics of both the vessel and the payload(s) are further dependent upon loading factors and other state variables and will dynamically change in light of such.
  • while training data database 190 categorizes and indexes the received training data from sensor layer/acquisition system 104 as either temporally based or non-temporally based, it is contemplated that categorizations on other variables or factors can be employed without departing from the scope of the instant disclosure.
  • sensor layer/acquisition system 104 can perform normalization operations prior to transmitting training data to training data database 190.
  • the real-time training data could be indexed on the desired variable(s) and normalized to one or more of a table structure, an XML structure, a conventional database file type structure, or various other formats known in the art.
  • training data database 190 may itself perform this normalization operation prior to or as a part of the write operation.
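  • A minimal sketch of that normalization step, assuming pandas and invented field names: heterogeneous raw readings are flattened into a single table structure indexed on the desired variable (here, the timestamp), ready to be written to training data database 190.

```python
# Hedged sketch: normalize raw real-time readings to a temporally indexed table.
import pandas as pd

raw = [
    {"t": 1545264000.0, "sensor": "gps",  "lat": 28.1, "lon": -89.4},
    {"t": 1545264000.5, "sensor": "gyro", "heading_deg": 182.4},
    {"t": 1545264001.0, "sensor": "echo", "depth_m": 812.0},
]
table = pd.json_normalize(raw).set_index("t").sort_index()
print(table)   # one row per reading, indexed on time; absent fields become NaN
```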
  • the third category of input training data is labeled training examples, which are provided or obtained in architecture 100 by metadata generation and classification system 120. These labeled training examples can be used to drive or guide a machine learning algorithm or network as it learns from the raw or input training data contained in training data database 190.
  • This labeled training example data generally comprises one or more input conditions (e.g., possible input conditions are enumerated in the a priori data, the real-time data, or a combination of the two) and a desired or recorded output.
  • the input conditions might be given by, based upon, or extracted from one or more of: the detected deviations 131; a priori or real-time data determined to be associated with the detected deviations 131; an identified cause of the detected deviations 131 (wherein the identified cause can be drawn from any combination of a priori data and real-time data); or a causal chain constructed by metadata system 120 comprising various combinations of a priori data and real-time data, with the chain terminating in the detected deviations 131.
  • the output conditions of the labeled training example pair might be given by, based upon, or extracted from an analysis of real-time data indicative of control inputs made to the vessel in a time window immediately surrounding the deviation, a log of direct control inputs made into the vessel, or an external or separate input specifying, after the fact, the specific control inputs and series of control inputs that were made into the vessel in response to the deviation.
  • the metadata generation and classification system 120 can be installed on a vessel under human control or supervision such that metadata (i.e., labels for the training example pairs, or simply labeled training examples) can be generated in a substantially automated fashion with minimal or occasional human input (e.g., to the supervisor UI 140 in response to one or more prompts 121 from metadata system 120), for example, only to explain why a specific action or set of actions was undertaken.
  • metadata system 120 receives or is able to access a stored mission plan 130 corresponding to the planned operations of a given vessel at a given time. By comparing the stored mission plan 130 with the actual real-time vessel data obtained by sensor layer/acquisition system 104, metadata system 120 can automatically identify any deviations 131 from the mission plan 130.
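  • A sketch of that comparison, under stated assumptions: the mission plan 130 is reduced to planned positions over time, and a deviation 131 is flagged whenever the vessel's actual position strays beyond a threshold. The data structures and the threshold are invented for illustration.

```python
# Illustrative deviation detection: planned track vs. actual real-time track.
import numpy as np

planned = {0: (0.0, 0.0), 60: (0.0, 1.0), 120: (0.0, 2.0)}  # t (s) -> (x, y) km
actual  = {0: (0.0, 0.0), 60: (0.1, 1.0), 120: (0.9, 1.6)}
THRESHOLD_KM = 0.5

deviations = []
for t, plan_xy in planned.items():
    off = float(np.hypot(*np.subtract(actual[t], plan_xy)))
    if off > THRESHOLD_KM:
        deviations.append({"t": t, "offset_km": round(off, 2)})

print(deviations)   # [{'t': 120, 'offset_km': 0.98}] -> candidate deviation 131
```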
  • metadata system 120 may first attempt to determine the underlying cause or reason for the deviation 131 automatically. In order to do so, metadata system 120 receives from sensor layer 104 a selection of control data 105, which here includes VHF radio and other communications data, payload changes, and signaling events, that is most likely to indicate that deviation 131 was undertaken due to, for example, duress or an emergency and did not simply arise accidentally or erroneously.
  • metadata system 120 can analyze the selection of control data 105 for factors indicating an emergency (e.g., words and phrases associated with emergency extracted from the communications data, drastic or catastrophic state changes in the payload(s), signaling events associated with duress or emergency, etc.) and can oftentimes determine, with a reasonable level of confidence, whether or not deviation 131 occurred due to an emergency, and therefore, whether or not deviation 131 can be utilized as a labeled data pair for training purposes.
  • the metadata system can transmit a prompt 121 to a supervisor user interface (UI) 140 that is provided on a computing device of the human supervisor (e.g., captain) of the vessel or on a computing device of the vessel under human supervision.
  • the prompt 121 can present the identified one or more deviations 131 from stored mission plan 130 and request an input or response 141 from the captain of the vessel specifying why the deviation occurred.
  • the input or response 141 can be entered free-form and subsequently analyzed to extract its meaning.
  • the input or response 141 can be selected from a pre-determined set of options or a series of linked questions such that metadata system 120 can leverage these pre-defined categorizations to more efficiently determine why a deviation occurred and either dismiss the deviation (in case of a false alarm or accidental deviation) or generate a labeled data pair consisting of all measured input conditions in some time period prior to the deviation and the reason why the deviation occurred.
  • the prompt 121 can include a set of uniquely generated selectable options, the answers to which can be utilized by metadata system 120 to generate a refined categorization of the previously unknown or difficult to categorize deviation.
  • the deviation categorization can be performed on the basis of the severity or expected severity of the deviation, wherein this severity based categorization can be performed in the initial algorithmic categorization attempt, in the refined categorization made in light of the input/response 141 received from the supervisor UI 140, or both.
  • metadata system 120 should be able to either identify a root cause (e.g., because the refining process eliminated all other options) or a most likely root cause (e.g., the refining process eliminated most but not all other options), both of which can be used for labeling the input, output training example pair to indicate a correlation between the identified most likely cause (the input) and the determined control inputs taken in response to the deviation (the output).
  • a labeled (input, output) pair can be combined with its corresponding a priori and real-time training data in order to form a complete training unit (e.g., supervised, etc.) for the presently contemplated machine learning processes, as sketched below.
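  • The assembly of such a training unit might look like the following sketch. The window length, keys, and example values are assumptions for illustration; only the overall shape (labeled input/output plus its a priori and real-time context) follows the text above.

```python
# Hedged sketch: compose a complete supervised training unit from a labeled
# (input, output) pair plus corresponding a priori and real-time data.
WINDOW_S = 300   # assumed real-time context window preceding the deviation

def build_training_unit(deviation, temporal_records, a_priori, label):
    t = deviation["t"]
    context = [r for r in temporal_records if t - WINDOW_S <= r["t"] <= t]
    return {
        "input": {
            "deviation": deviation,
            "real_time_window": context,   # measured conditions before deviation
            "a_priori": a_priori,
        },
        "output": label,   # e.g., control inputs taken in response (response 141)
    }

unit = build_training_unit(
    {"t": 120, "offset_km": 0.98},
    [{"t": 100, "heading_deg": 182.4}, {"t": 119, "rudder_deg": -15.0}],
    {"charts": "no-go zones", "vessel": {"engines": 2}},
    {"reason": "traffic avoidance", "actions": {"rudder_deg": -15.0}},
)
```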
  • the input or response 141 can be solicited or otherwise received some period of time after the deviation occurred, as a serious deviation is expected to require the full attention of the captain in order to achieve resolution - once the deviation has been handled, the captain may then be prompted to provide input or response 141 to the prompt 121.
  • a captain may be on board a vessel in a purely supervisory role for a semi-autonomous vessel guidance system.
  • the captain may monitor all operational conditions, traffic, etc. as he would normally, but with the sole purpose of identifying errors or undesirable actions output by the semi-autonomous vessel guidance system. If such an action is identified, then the captain can trigger a manual override and take control of the vessel until the issue has been satisfactorily resolved.
  • These supervisor overrides 143 can optionally be collected by metadata generation and classification system 120, as such overrides can be enormously valuable in performing additional training on the machine learning network that made the error. If supervisor overrides 143 are received, metadata system 120 may transmit them to training data database 190, where the overrides 143 can be stored along with the existing metadata 196.
  • labeled training data is provided under the assumption that the machine learning algorithm or network is being trained in a supervised or semi- supervised manner, both of which have labeled pairs (e.g., input conditions, desired/correct output) in order to apply either a positive reinforcement (e.g., machine learning obtained desired/correct output) or a negative reinforcement (e.g., machine learning failed to obtain desired/correct output) during the back propagation step of the training process.
  • if the machine learning algorithm or network is unsupervised, then labeled training data is generally not required, as the machine learning builds its own pattern recognitions and correlations across the input training data set.
  • metadata generation and classification system 120 may be omitted, or the input/responses 141 received from the captain or human supervisor can be split and stored as temporally indexed training data 192 or non- temporally indexed training data 194 for use as additional training data for the unsupervised machine learning algorithm or network.
  • any data involved in the operation, navigation, or control of a vessel and its payload(s) can be collected across any number of disparate sources, classified in training data database 190 on the basis of one or more desired classification variables (temporal or non-temporal in the example above), and made available for use in training one or more machine learning algorithms or networks - while training data database 190 does not itself build correlations or causal links between different training data inputs or variables, database 190 is operable to provide a comprehensive collection of all data that might be needed for a machine learning network to build these correlations and causal links.
  • Training data database 290 is generally similar to training data database 190 of FIG. 1, although training data database 290 is contemplated to contain training data assembled across a much larger and more diverse selection of vessels, conditions, missions, etc. than the single vessel, single training data capture session that was discussed as an example with respect to FIG. 1. In other words, training data database 290 has collected sufficient training data and labeled data that it provides a massive, but balanced (i.e., not biased towards certain conditions) training data set.
  • Training data database 290 can be utilized to train anywhere from one machine learning network to thousands of machine learning networks. For the sake of simplicity, only a single machine learning network 201 is depicted, although the additional branching arrows and ellipses are indicative of the fact that additional machine learning networks may also train from training data database 290, subsequent to or simultaneous with the training of machine learning network 201.
  • FIG. 2 makes use of a convolutional neural network (CNN) 230, which is notable for its ability to learn and construct spatial correlations between various portions of received input data.
  • the training data stored within training data database 290 (which is stored as either temporal data 292 or non-temporal data 294) must be converted into a spatial format that is readable by CNN 230.
  • the spatial/non-spatial categorization could be performed when the training data is saved into the training data database (i.e., categorize the training data on spatial dependence in lieu of categorizing on temporality).
  • the raw training data is received in a format that is already either naturally temporal (e.g., real-time data, timestamped by a sensor or system that received the data) or naturally non-temporal (e.g., a priori data, a static quantity or property).
  • the altitude of a deployed sensor payload off of the seafloor could easily be represented as a spatial variable (plot altitude as a third dimension at x-y coordinates where an altitude was recorded) or represented as a non-spatial variable (store a scalar value of the current altitude, and optionally, store the altitude at t-1, the altitude at t-2, etc.).
  • a complication arises because the determination of whether a variable is best represented in a spatial form or a non-spatial form for purposes of input to a machine learning network is largely opaque to the human mind.
  • if a spatial vs. non-spatial classification were performed at the moment of entry of the training data into the training data database 290, the classification would become fixed and the entire training data set would adopt this (e.g., likely arbitrary) spatial classification scheme.
  • the disclosed approach of storing training data in training data database 290 based on temporal and non-temporal indices avoids this problem, as different assignations of spatial and non-spatial representations of the training data variables can be made as desired.
  • a plurality of machine learning networks trained off of training data database 290 can each receive a different assignation of variables to be represented spatially and variables to be represented non-spatially.
  • the spatial variable assignations can be random.
  • the spatial variable assignations can be constrained by various variables that are not suitable for representation in one of either a spatial form or a non-spatial form. For example, variables such as the number of engines running do not lend themselves to a spatial representation, while variables such as bathymetry maps do not lend themselves to a non-spatial representation. These constraints can be determined manually and input into the system as an existing condition, or could be previously learned constraints from earlier iterations of the machine learning network training cycle described below.
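  • The sketch below shows one way such constrained, otherwise random assignations could be drawn, one per machine learning network under training. Variable names and the constraint sets are invented for illustration.

```python
# Illustrative draw of spatial/non-spatial variable assignations (202, 204, 206).
import random

VARIABLES = ["bathymetry", "wind_speed", "payload_altitude",
             "engine_count", "vessel_heading", "heave"]
FORCE_SPATIAL = {"bathymetry"}        # e.g., maps resist non-spatial form
FORCE_NON_SPATIAL = {"engine_count"}  # e.g., counts resist spatial form

def draw_assignation(seed):
    rng = random.Random(seed)
    spatial, non_spatial = set(FORCE_SPATIAL), set(FORCE_NON_SPATIAL)
    for v in VARIABLES:
        if v not in spatial and v not in non_spatial:
            (spatial if rng.random() < 0.5 else non_spatial).add(v)
    return spatial, non_spatial

assignations = [draw_assignation(seed) for seed in range(3)]  # e.g., A, B, C
```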
  • machine learning network 201 receives a first spatial variable assignation 202 (e.g., spatial vars. A, non-spatial vars. A), which will be applied to each training data set that is input into machine learning network 201.
  • Additional machine learning networks undergoing training receive spatial variable assignations 204 (e.g., spatial vars. B, non-spatial vars. B) and 206 (e.g., spatial vars. C, non-spatial vars. C), thereby providing a greater probability that one or more of the machine learning networks being trained will generate outputs that are better than any of those that would be generated if only a single, fixed spatial variable assignation was used across all of the machine learning networks being trained.
  • machine learning network 201 first receives its spatial variable assignation 202 and may additionally receive at least a first training data set selected from the temporally indexed training data 292 and the non- temporally indexed training data 294.
  • each training data set will be normalized at normalization system 210 in view of the spatial variable assignation 202.
  • Those variables that are assigned to a spatial representation are converted into an appropriate spatial format. In some examples, this will be a raster (also known as a ‘grid’ or ‘bin’ representation), represented herein by spatial tensors 212.
  • a triangular irregular network might be employed in order to provide a spatial representation of the desired variables.
  • one or more coordinate gridding systems with a desired number of spatial dimensions can be utilized for the spatially represented variables of a given training data set (e.g., spatial tensors 212 are represented by a two-dimensional x-y coordinate grid, while other spatial tensors or matrices might have a three-dimensional x-y-z coordinate grid), although of course each given training data set is not localized to the same portion of the coordinate gridding system, as it is the coordinate system convention or selection (e.g. choice of x-y, choice of x-y-z, Cartesian, polar, etc.) that may be shared or common.
  • a first training data set might correspond to a first vessel’s activities that were recorded in the Atlantic Ocean while a second training data set might correspond to a second vessel’s activities that were recorded in the Gulf of Mexico.
  • Both training data sets can utilize an x-y-z coordinate grid, but the two grids are not necessarily normalized in any overlapping manner and are not necessarily correlated relative to one another or positioned in a master frame.
  • different training data sets can employ different coordinate systems, which can be converted or normalized if necessary or as desired.
  • a new dimension is added to the existing coordinate grid for each spatially represented variable.
  • while the coordinate system itself may be common to the spatially represented variables, it is possible or even likely that different spatially represented variables will have a different resolution or minimum grid size.
  • a spatial variable for the altitude of a deployed sub-surface instrumentation payload might have a minimum grid size of 10m while a spatial variable for surface wind speeds might have a minimum grid size of 300m.
  • normalizer 210 For each given training data set, normalizer 210 generates normalized spatial tensors 212, which are subsequently fed into CNN 230 for training.
  • CNN 230 is configured with a weight matrix 232 (e.g., initialized with random values) that is suitably sized for matrix multiplication with the normalized spatial tensors 212.
  • normalized spatial tensors 212 are multiplied with the weight matrix 232 to yield a spatial CNN tensor output.
  • the ultimate goal is to improve the accuracy of the weight matrix 232 by evaluating this spatial CNN tensor output against an actual or expected control action that is contained in labeled data corresponding to the given training data set.
  • several more steps are often required before the output of CNN 230 is ready to be evaluated against the corresponding labeled data.
  • normalizer 210 may normalize temporal data (or other non-spatial data) to a common time frame or temporal reference frame, as many machine learning techniques do not support the use of asynchronous data.
  • a convolutional neural network such as CNN 230 requires different grids/layers of temporal data to be correlated in time before they may be analyzed or manipulated, such that the convolutional neural network receives a series of rasters that are correlated to a common epoch.
  • temporal data is generated in an asynchronous fashion, e.g. temporal data can be generated at differing frequencies, timestamped based on an inaccurate clock, not timestamped at all, etc.
  • gyroscope data might be measured at 60 Hz, while vehicle position data might be generated or received at 40 Hz.
  • the temporal data should be in a common epoch or time frame.
  • an interpolation can be performed on training data retrieved from training data database 290 in order to align each individual retrieved sensor data to a common epoch or time frame suitable for use in a training update step of a machine learning process. For example, each individually retrieved sensor data is analyzed against a desired time. If a given sensor data was collected at the desired time, then this collected value can be used for the training data set. However, if a given sensor data was not collected at the desired time, then an interpolated value corresponding to the desired time must be calculated. This interpolation can be performed by normalizer 210 or by a dedicated interpolation system (not shown).
  • two data points might be retrieved - the data point collected most immediately prior to the desired time and the data point collected most immediately subsequent to the desired time. These two data points can be thought of as bracketing the desired time.
  • Various interpolation algorithms are then applied to generate an interpolated value or an interpolated function between the two bracketing data points.
  • the interpolation algorithms might analyze a greater portion of the sensor data in order to produce the interpolation between the two bracketing data points.
  • an interpolation function or process might be applied to training data before or as it is being stored in training data database 290, such that training data database 290 stores a plurality of interpolation functions that receive as input a desired time and output an interpolated sensor data value corresponding to the input time.
  • interpolation and/or extrapolation can be performed in order to generate one or more intervening values for a data set (e.g., interpolate 30 Hz data into 60 Hz data).
  • asynchronous data can be converted into a synchronous form by downsampling higher frequency data into a common, lower frequency (e.g., downsample 60 Hz to 30 Hz).
  • some combination of interpolation and downsampling can be employed to convert asynchronous data into synchronous data with a common time step or a common collection frequency.
  • the common collection frequency can be pre-defined, for example, to be a minimum collection frequency of the data set (to avoid interpolation and instead only downsample data), to be a maximum collection frequency of the data set (to avoid downsampling data and instead only interpolate lower frequency data), or to be the collection frequency that will minimize the number of downsampling and/or interpolation operations that must be performed.
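  • The following numpy sketch illustrates one such alignment under assumptions: a 60 Hz and a 40 Hz stream are brought to a common 40 Hz epoch by linear interpolation between the bracketing samples, which here doubles as the downsampling of the faster stream. Rates, signals, and names are invented.

```python
# Hedged sketch: align asynchronous sensor streams to a common time frame.
import numpy as np

t_gyro = np.arange(0, 1, 1 / 60)             # 60 Hz gyroscope stream
gyro = np.sin(2 * np.pi * t_gyro)
t_pos = np.arange(0, 1, 1 / 40)              # 40 Hz position stream
pos = np.cos(2 * np.pi * t_pos)

t_common = np.arange(0, 1, 1 / 40)           # minimum rate: downsample only
gyro_40 = np.interp(t_common, t_gyro, gyro)  # interpolate between brackets
pos_40 = np.interp(t_common, t_pos, pos)     # already on the common epoch

aligned = np.stack([t_common, gyro_40, pos_40])  # synchronous training columns
```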
  • the disclosure turns now to the flow of training data assigned to a non-spatial representation.
  • the variables that are assigned to take a non-spatial representation are normalized into a plurality of non-spatial tensors 214 that are compatible for input into a sensor fusion network 220, which in some embodiments can be provided by a neural network or an embedding layer.
  • Sensor fusion network 220 takes as input the plurality of non-spatial tensors 214 and multiplies them with its stored or learned weight matrix 222 to yield as output a fused tensor that is in a machine learning readable/compatible format.
  • the fused tensor output combines and weights the different non-spatial variables contained within the tensors 214, which may otherwise be unrelated or not immediately obvious as combinable, into a single vector.
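  • Reduced to its essentials, the fusion step might look like the sketch below, with sensor fusion network 220 modeled as a single embedding layer. The variable values, sizes, and activation are assumptions; in practice weight matrix 222 is learned during training.

```python
# Minimal sketch of sensor fusion network 220 as one embedding layer.
import numpy as np

rng = np.random.default_rng(1)
non_spatial = np.array([
    182.4,   # e.g., vessel heading (deg)
    0.6,     # e.g., heave (m)
    2.0,     # e.g., engines running
    0.85,    # e.g., combined load factor
])           # flattened non-spatial tensors 214

W = rng.normal(size=(8, 4))        # stands in for learned weight matrix 222
fused = np.tanh(W @ non_spatial)   # fused tensor: one ML-readable vector
print(fused.shape)                 # (8,) -> input to the 4D state estimator
```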
  • the output of CNN 230 and the output of sensor fusion network 220 are coupled into a 4D state estimator 240, which recombines and analyzes the spatially derived tensor output of CNN 230 and the non-spatially derived fused tensor output of sensor fusion network 220 to generate a 4D Risk Map (t) 242.
  • 4D state estimator 240 can additionally receive as input a 4D Risk Map (t-1) 243, which is generated in the previous time step (t-1) by 4D state estimator 240.
  • 4D Risk Map 242 can be human readable, whereas the two tensor inputs received at 4D state estimator 240 are typically not human readable.
  • 4D state estimator 240 is provided as another neural network, although it is appreciated that various other machine learning algorithms and networks may be utilized without departing from the scope of the instant disclosure.
  • the 4D Risk Map 242 provides a representation of a 3D environment (e.g., the environment derived from or contained within the current training data set) that has been overlaid with future risk potential as a fourth dimension, wherein risk can be defined and/or quantified according to various different standards.
  • the risk calculation can be driven by the following goal hierarchy: 1) Safety of Life, 2) Safety of Environment, 3) Safety of Equipment, 4) Continuation of Vessel Operations/Mission. Based on these or other criteria, risk or future risk potential can be quantified by 4D state estimator 240 and overlaid on the 3D environment in which the various future risks may present themselves.
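  • The recurrent recombination performed by 4D state estimator 240 can be caricatured as below. The blending rule and weights are invented stand-ins for the estimator's learned network; only the inputs and the recurrent use of the previous map follow the text.

```python
# Hedged sketch of 4D state estimator 240: spatial output + fused vector +
# previous 4D Risk Map(t-1) -> risk overlaid on the environment grid.
import numpy as np

def estimate_risk(cnn_out, fused, risk_prev, decay=0.8):
    scalar = float(np.tanh(fused.mean()))        # crude non-spatial influence
    risk_now = np.clip(cnn_out + scalar, 0, 1)   # risk as the 4th dimension
    return decay * risk_prev + (1 - decay) * risk_now  # recurrent smoothing

rng = np.random.default_rng(2)
risk_t = estimate_risk(rng.random((4, 4)),          # CNN 230 spatial output
                       np.array([0.1, -0.3, 0.2]),  # fused tensor from 220
                       risk_prev=np.zeros((4, 4)))  # 4D Risk Map(t-1) 243
```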
  • a back propagation process 282 can be provided by a first training module 280, which in some embodiments can utilize one or more portions of the training data and metadata 296 stored in training data database 290 in order to perform back propagation 282.
  • while back propagation is depicted, it is appreciated that other processes and cost minimization functions can be employed in lieu of back propagation 282, such as, but not limited to, the genetic or evolutionary algorithms described previously.
  • back propagation process 282 and first training module 280 can be thought of as governing or controlling the training of the machine learning system with respect to the 4D Risk Map 242 and its generation.
  • first training module 280 can receive as input the 4D Risk Map 242 and training data obtained from the metadata store 296 of training data database 290. From these inputs, first training module 280 generates back propagation feedback for injection into one or more of sensor fusion network 220 (e.g., to update weight matrix 222), CNN 230 (e.g., to update weight matrix 232), and 4D state estimator 240 (e.g., to update weight matrix 241). In some embodiments, this back propagation process 282 can be performed prior to the action prediction of action prediction system 250, which also receives as input the 4D Risk Map 242.
  • back propagation process 282 can be performed simultaneously or asynchronously with the action prediction of action prediction system 250, as long as the injection and weight matrix updating process performed in view of the additional back propagation is completed before the ingestion of a next training data set, e.g. the t+1 training data set.
  • back propagation 282 might utilize a separate training set in order to converge to the weight matrix 222 for the sensor fusion network 220, or weight matrix 222 might otherwise be stored in advance. In instances where the weight matrix 222 is not stored in advance, it can be jointly trained with weight matrix 232 of CNN 230 and weight matrix 241 of 4D state estimator 240.
  • the weight matrices 222, 232, 241 comprise learned values that impart certain weights onto the various constituent factors or variables of the corresponding inputs received to sensor fusion network 220, CNN 230, and 4D state estimator 240.
  • weight matrix 222 is a learned weighting matrix that imparts a certain weight onto the various non-spatial tensors 214, e.g., the weight matrix 222 may have been trained such that vessel heading is weighted to be twice as important or influential as vessel heave.
  • the weighting can take into consideration or otherwise reflect correlations between the different variables and how they affect one another.
  • the 4D Risk Map(t) 242 is input directly into an action prediction system 250 (which itself may also be provided by a machine learning algorithm, function, or network).
  • the action prediction system 250 is operative to assess 4D Risk Map(t) 242 and determine a best course of action in response. This determined best course of action can be generated as a human-readable vector 252 of actionable changes for the current time step, labeled here as ‘Actions(t)’.
  • actionable changes or control variables include the vessel’s rudder angle, the vessel’s throttle position (e.g. from 0% to 100%), signal to blow the whistle, etc.
  • the action prediction system 250 may be governed, at least in part, by the same four goals described above with respect to the risk calculation.
  • action prediction system 250 receives an actionable changes vector Actions(t-1) 253, representing the actionable changes generated by action prediction system 250 in the previous time step (t-1).
  • Action prediction system 250 further receives as input ColRegs (International Regulations for Preventing Collisions at Sea) 254 in either a standard computer-readable form or in a more specifically tailored machine learning network format.
  • action prediction system 250 may additionally receive or access a stored copy of the current vessel mission plan, which can be utilized with respect to the final goal of the goal hierarchy, 4) Continuation of Vessel Operations/Mission. Given these inputs, action prediction system 250 is operable to generate the human-readable vector of actionable changes for the current time step, Actions(t). Ultimately, once machine learning network 201 is trained and deployed on a vessel in either an autonomous or semi-autonomous control configuration, actionable changes vector Actions(t) 252 will be utilized to either control the vessel (in autonomous or semi-autonomous mode) or advise a human captain (semi-autonomous mode).
  • the actionable changes vector Actions(t) 252 is instead received at a second training module 260, which evaluates the action or actions contained in Actions(t) 252 against the actual or expected action specified by the corresponding labeled training data for the training data set currently being processed.
  • This corresponding labeled training data is retrieved from the metadata store 296 of training data database 290.
  • the evaluation performed by second training module 260 can vary depending upon the type of training being utilized. Supervised training will provide the correct action for every training data set, semi-supervised training will provide the correct action for some but not all training data sets, and reinforcement learning will provide a positive or negative reinforcement but no explicit indication of whether or not Actions(t) was correct.
  • back propagation 262 provides a positive reinforcement allowing one or more of the weight matrices 222, 232, 241, 251 to be adjusted to give incrementally more weight to the matrix values that produced the correct actionable changes vector Actions(t) 252.
  • back propagation 262 provides a negative reinforcement enabling one or more of the weight matrices 222, 232, 241, 251 to give incrementally less weight to the matrix values that produced the incorrect actionable changes vector Actions(t) 252.
  • the negative reinforcement can specify a degree to which Actions(t) 252 was incorrect, and the size of the incremental weighting reduction can vary in accordance with the degree to which the actionable change vector was incorrect, as sketched below. If unsupervised training is employed, then second training module 260 will not provide any feedback regarding Actions(t), and in some embodiments may itself not be present.
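  • A toy sketch of that scaled feedback follows; the update rule and magnitudes are assumptions, not the patent's algorithm, and `contribution` is a hypothetical per-weight credit for having produced Actions(t).

```python
# Illustrative reinforcement-scaled weight update for module 260's feedback.
import numpy as np

def reinforce(weights, contribution, correct, error_degree=0.0, lr=0.01):
    if correct:                          # positive reinforcement
        return weights + lr * contribution             # strengthen what worked
    # negative reinforcement: step grows with the degree of error
    return weights - lr * (1.0 + error_degree) * contribution

W = np.ones((2, 2))                      # stands in for any of 222/232/241/251
credit = np.full((2, 2), 0.5)
W = reinforce(W, credit, correct=False, error_degree=0.9)  # larger correction
```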
  • second training module 260 pertains to the overall machine learning network 201, as second training module 260 provides a back propagation 262 to all four of the weight matrices 222, 232, 241, 251 present in the machine learning network. Accordingly, weight matrix 222 of sensor fusion network 220, weight matrix 232 of CNN 230, and weight matrix 241 of 4D state estimator 240 can each receive two back propagation inputs, and are thus driven to converge not only to a final weighting that produces suitable 4D risk maps, but a final weighting that also produces suitable actionable change vectors.
  • although machine learning network 201 analyzes multiple training data sets, no time dependence exists between different training data sets. While each training data set on its own is associated with some time period wherein the data was collected and time stamped, the training data sets compared to one another are stateless, and it can in fact be beneficial for machine learning network 201 to analyze drastically different training data sets rather than training data sets that are temporally linked or collected by only a single vessel.
  • variety among training data sets is perhaps the most highly desired quality, which emphasizes the need for the robust and automated training data generation outlined with respect to FIG. 1.
  • the training process can conclude when the training data is exhausted, the machine learning network 201 fails or cannot recover from predicting erratic actionable change vectors 252, or if it is determined that machine learning network 201 has been sufficiently trained and its training matrix 232 has suitably converged for deployment or live testing.
  • FIG. 3 depicts a first diagram 300a of a selection of a vessel’s primary command center system and a second diagram 300b of a machine learning autonomous operations system deployed on the vessel.
  • the vessel’s primary command center system is split between an in situ bridge or command center 310 that is located on the vessel and a remote command center 320 that is communicatively linked (e.g., over a wireless network, cellular network, short-range wireless, satellite, etc.) with vessel command 310 but is not physically located on the vessel.
  • remote command center 320 might be located on land, such as near a port or headquarters of the company operating the vessel.
  • remote command center 320 might be located on another vessel, which may or may not be in close proximity to vessel command 310.
  • vessel command 310 can be characterized by two primary operations. The first is captain/control operations 312, which involves the control, steering, and general running of the vessel. The second is survey/payload operations 314, which more broadly can be thought of as mission operations, which involve performing various operations, with or without the assistance of deployed payload devices and instrumentation, as specified by the Mission Plan for the vessel.
  • diagram 300b presents the deployed machine learning autonomous operation system in a modular perspective similar to the manner in which discrete or separate computer systems might be interconnected on the vessel in order to implement the presently disclosed techniques.
  • the autonomous operations are split into two primary systems, a USV (unmanned surface vessel) command core 330 and a payload mechanics system 360. Note that these two primary autonomous systems parallel the two primary operations of the vessel command center 310.
  • an in-situ data collection system 332 transmits in-situ data to both USV command core 330 and to a collision avoidance system 336, wherein the collision avoidance system 336 may be an existing system of the vessel command center 310.
  • Collision avoidance system 336 receives AIS (Automatic Identification System) data from an AIS system 338, which may be provided by an AIS transceiver, a satellite link, or other communications link, any of which may either be retrofit to the vessel or pre-existing on the vessel, whether in command center 310 or elsewhere.
  • Collision avoidance system 336 further receives radar input from one or more radars 339, which in some embodiments may be split into at least a high-frequency radar (for tracking targets on the horizon) and a low-frequency radar (for tracking targets in the immediate vicinity).
  • the one or more radars 339 may be supplemented with one or more Lidar units, which utilize laser scanning to further increase resolution and sensing capabilities in the close-range otherwise handled by the low-frequency radar.
  • collision avoidance system 336 is illustrated as receiving input from an image classifier and computer vision system 337, which can be utilized to further assist in the dynamic target recognition that is performed by collision avoidance system 336.
  • the computer vision system 337 may, in some embodiments, capture or receive raw photo or video data captured from on the vessel or in the vicinity immediately surrounding the vessel and subsequently process this raw photo or video data in order to identify and classify objects (e.g., other ships, buoys, rigs, etc.) or generate alerts corresponding to any unidentifiable objects.
  • the raw photo or video data can be transformed or normalized from the camera space into a real-world or geographical coordinate space/system. This coordinate space can be similar to or shared with the coordinate space of one or more of the spatial data inputs that are normalized to a grid or raster format, such that the image classification data can itself be utilized as both a training data input and a machine learning input.
  • computer vision system 337 may be driven by a machine learning function.
  • Collision avoidance system 336 receives and processes the aforementioned data and generates a collision warning profile or a listing of potential upcoming threats to USV command core 330.
  • USV command core 330 further receives as input a listing of a priori data, which here is illustrated as including a Mission Plan for the vessel, Meteorological Data, Bathymetry data, GIS data, and chart/no-go zone data, all of which are described previously.
  • USV command core is configured to generate a 4D Risk Map 344, for example by utilizing the training data generated in accordance with FIG. 1 and the machine learning training process described with respect to FIG. 2.
  • the 4D Risk Map 344 is then transmitted to a safe ops logic system 348, which in some instances is the same as the action prediction system 242 of FIG. 2.
• safe ops logic system 348 can be supplemented with a variety of pre-defined rules specific to the operation of a specific vessel, or specific to the current environment in which the vessel is operating (e.g., temporary restrictions in place, a foreign port with different and unfamiliar rules and regulations, etc.). In some embodiments, it is contemplated that safe ops logic system 348 can be programmed to account for any deficiencies in the USV command core 330 and underlying machine learning that are not discovered until the vessel is already underway. In this manner, a captain or supervisor of the USV command core 330 can apply temporary updates to maintain a high level of vessel performance until the USV command core 330 can be adequately retrained or refreshed.
• a ColRegs system 349, which is utilized to define the international maritime collision avoidance rules, regulations, and procedures, couples to safe ops logic 348 in order to provide a rule-based implementation of the ColRegs, such that autonomous or semi-autonomous vessel actions will not violate ColRegs unless doing so is necessary in order to avoid impinging upon one of the four criteria of the goal hierarchy: 1) Safety of Life, 2) Safety of Environment, 3) Safety of Equipment, 4) Continuation of Vessel Operations/Mission.
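As a purely illustrative sketch (not the disclosed implementation), such a rule gate might be expressed as follows, where the goal names and both callables are assumptions:

```python
# Goal hierarchy from the disclosure, highest priority first.
GOAL_HIERARCHY = (
    "safety_of_life",
    "safety_of_environment",
    "safety_of_equipment",
    "continuation_of_mission",
)

def permit_action(action, complies_with_colregs, goals_threatened_if_refused):
    """Allow a candidate action only if it is ColRegs-compliant, or if
    refusing it would impinge upon one of the four hierarchy goals."""
    if complies_with_colregs(action):
        return True
    # Non-compliant actions pass only when refusing them endangers a goal.
    return bool(set(goals_threatened_if_refused) & set(GOAL_HIERARCHY))
```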
  • Safe ops logic 348 outputs an actionable or guidance control vector to payload control system 362, which is a sub-component of the payload mechanics system 360, which is the second primary system of the depicted autonomous operation modular architecture.
• This transmission between safe ops logic 348 and payload control system 362 is provided because the two control systems must work in conjunction. While vessel control provided by safe ops logic 348 takes precedence over payload control system 362 (which ties in only to goal 4, Continuation of Vessel Operations/Mission), it is nevertheless highly important that vessel operations and payload operations are not operating independently.
  • the default response to a possible collision is to stop or take quick evasive action (e.g., perform a tight turn).
  • neither of these maneuvers would be acceptable in the vast majority of instances wherein a vessel has one or more payloads or instruments deployed into the water at the time when the possible collision is detected.
• where the vessel is towing sub-surface sea floor measurement instrumentation, such instrumentation is almost always constrained to maintain some minimal altitude above the sea bed.
• if the vessel were to stop or turn sharply, the towed instrumentation would almost certainly collide with the sea bed, causing undesired damage or even environmental harm.
  • the command core 330 will additionally not have any predictive knowledge or model of when the towed instrument is predicted to pass through the minimum clearance altitude above the sea bed required for the vessel to execute a full stop. As such, a great deal of uncertainty and inefficient and even dangerous operations can result due to a lack of communication and predictive modeling undertaken between safe ops logic 348 and payload control 362. As a further example, vessel operations and payload operations can interfere with one another even when there is no towed payload (which can present the most challenges with respect to combined vessel and payload control).
  • a vessel may have one or more sensors hanging off the side of the vessel, with a tether length typically at least a full order of magnitude less than the length of the tow cable for a sub-surface instrument or sensor array.
  • these deployed sensors interfere with the vessel’s minimum turning radius or maximum turning rate, as the cables will snap and/or the sensor will break if the vessel turns too quickly or sharply, such as might be commanded when performing an evasive collision avoidance maneuver.
  • all manner of deployed payloads are operable to impose dynamic constraints upon a vessel in terms of its operating capabilities and its safe operating regime. Consequently, the presently disclosed machine learning technique is operable to resolve this deficiency by enabling predictive, synchronous control of both the vessel and any of its payloads, whether they are deployed, stowed, or somewhere in between.
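One minimal way to picture this coupling, assuming each payload can report the maneuver limits it imposes while deployed (all field names here are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class ManeuverLimits:
    max_turn_rate_deg_s: float  # above this, tethers may snap or sensors break
    min_turn_radius_m: float
    max_decel_m_s2: float       # a hard stop can drive a towed body into the sea bed

def combined_envelope(vessel, deployed_payloads):
    """The safe operating envelope is the most restrictive value per limit,
    taken over the vessel itself and every currently deployed payload."""
    limits = [vessel, *deployed_payloads]
    return ManeuverLimits(
        max_turn_rate_deg_s=min(l.max_turn_rate_deg_s for l in limits),
        min_turn_radius_m=max(l.min_turn_radius_m for l in limits),
        max_decel_m_s2=min(l.max_decel_m_s2 for l in limits),
    )
```

Under this picture, safe ops logic would plan evasive maneuvers only within the combined envelope, rather than the bare vessel's capabilities.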
  • payload control 362 is supplemented with a payload status system 364, which can be a passive or active component that either receives or requests/determines various operations parameters and characteristics of the payload, such as those described with respect to the data parameters of FIG. 1.
• Payload status system 364 communicates with safe ops logic 348, thereby completing the bi-directional communication channel between USV command core 330 (vessel control) and payload mechanics 360 (payload control).
• FIG. 4 depicts a diagram 400 corresponding to the operation of a deployed autonomous or semi-autonomous machine learning vessel control system, for example, machine learning network 201 of FIG. 2 (once it has completed training). Because training has been completed, note that the weight matrix 422 of sensor fusion network 420, weight matrix 432 of CNN 430, weight matrix 441 of 4D state estimator 440, and weight matrix 451 of action predictor 450 are all fully trained, and thus, no training modules or back propagation processes are depicted or required. In other words, where the weighting matrices and their constituent values were in flux and otherwise converging or being adjusted in the training process of FIG. 2, these weight matrix values are now held constant in FIG. 4.
  • the varying data is received from an operational database or data source 492, which provides a priori and real-time or in situ data, shown here as being indexed into temporal indices 493 and non-temporal indices 494.
• the a priori data can be received at some refresh interval (e.g., every 15 or more minutes for a weather forecast refresh, which does not qualify as real-time data, while some data, such as vessel length, may never be refreshed) and the real-time or in situ data can be received at some known periodic rate (e.g., the measurement frequency of various real-time sensors, before or after any interpolation or downsampling is applied, etc.)
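A hedged sketch of how these two cadences might be handled, with each source carrying its own refresh interval; all names and intervals below are assumptions:

```python
import time

class RefreshedSource:
    """Reuse the last fetched value until this source's refresh interval lapses."""
    def __init__(self, fetch, interval_s):
        self.fetch = fetch            # callable returning the latest value
        self.interval_s = interval_s
        self._value = None
        self._stamp = float("-inf")   # forces a fetch on first read

    def read(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self._stamp >= self.interval_s:
            self._value, self._stamp = self.fetch(), now
        return self._value

weather = RefreshedSource(lambda: {"wind_kts": 14.0}, interval_s=900)    # ~15 min
vessel_length = RefreshedSource(lambda: 24.0, interval_s=float("inf"))   # never refreshed
```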
  • FIG. 4 can in some embodiments contain an additional in situ validation system, which receives as input real-time or in-situ data from the data source 492 and checks to see if any portion of the real-time data can be correlated with or compared against a portion of the a priori data received.
• while a vessel performing survey operations may have loaded a priori chart, GIS, and/or bathymetry data before setting out to perform a seafloor measurement operation using a towed instrument, it can be desirable to obtain in situ measurements of the sea floor using a forward-looking sensor mounted on the vessel.
  • the in situ data can be received and analyzed against the seafloor topography/profile contained in the a priori data such that the analysis is complete well before the towed instrument would need to be controlled in response to a discrepancy detected in the a priori data.
  • the in situ validation system can be governed by a weighting or disparity policy, which stipulates how the conflicting data should be handled.
• data might be associated with a reliability score, e.g., a priori data might be considered extremely reliable when obtained from a National Oceanic and Atmospheric Administration (NOAA) survey performed in the last 3 years, whereas the in situ data might be considered moderately unreliable because the optimal operating parameters of the in situ sensor were exceeded.
• a conservative approach might be taken, wherein whichever data set poses the greater threat, or represents the comparative worst-case scenario, is taken as the reference data set for the autonomous or semi-autonomous vehicle operations.
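For example, a disparity policy along these lines; the reliability scores, tolerances, and the shallower-is-worse convention are all assumptions for illustration:

```python
def resolve_depth(a_priori_m, a_priori_reliability,
                  in_situ_m, in_situ_reliability,
                  tolerance_m=1.0, reliability_margin=0.5):
    """Pick the reference depth for a grid cell when sources disagree.
    Reliability scores are assumed to lie in [0, 1]."""
    if abs(a_priori_m - in_situ_m) <= tolerance_m:
        return min(a_priori_m, in_situ_m)  # near-agreement: keep the worst case anyway
    if a_priori_reliability - in_situ_reliability > reliability_margin:
        return a_priori_m                  # e.g., a recent NOAA survey outranks a strained sensor
    if in_situ_reliability - a_priori_reliability > reliability_margin:
        return in_situ_m
    return min(a_priori_m, in_situ_m)      # comparable reliability: take the shallower (worse) depth
```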
• normalization system 410 will either receive a priori data and real-time/in situ data directly from data source 492, or normalization system 410 will receive a priori data and real-time/in situ data as they have been adjusted based on the conflict policy implemented at an optional in situ validation system.
• the normalization system 410 divides the input data received from data source 492 into a spatial portion and a non-spatial portion, as stipulated by the (spatial vars., non-spatial vars.) distribution associated with the deployed CNN 430. From the spatial portion of data generated by normalization system 410, CNN 430 uses its trained weight matrix 432 to generate a CNN tensor output which is passed to 4D state estimator 440. Similarly, from the non-spatial portion of data generated by normalization system 410, sensor fusion network 420 uses its trained weight matrix 422 to generate a fused tensor output which is passed to 4D state estimator 440.
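A minimal sketch of that split, assuming the (spatial vars., non-spatial vars.) distribution is recorded as two variable sets at training time; the variable names are illustrative:

```python
SPATIAL_VARS = {"bathymetry", "chart_no_go", "radar_targets", "image_classes"}
NON_SPATIAL_VARS = {"wind_speed", "heading", "vessel_draft", "payload_tension"}

def split_inputs(sample):
    """Divide one time step's inputs between CNN 430 and sensor fusion network 420."""
    spatial = {k: v for k, v in sample.items() if k in SPATIAL_VARS}
    non_spatial = {k: v for k, v in sample.items() if k in NON_SPATIAL_VARS}
    unknown = set(sample) - SPATIAL_VARS - NON_SPATIAL_VARS
    if unknown:
        # Inputs outside the trained distribution cannot be consumed safely.
        raise ValueError(f"inputs outside the trained distribution: {unknown}")
    return spatial, non_spatial
```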
• 4D state estimator 440 also works in much the same manner as was described with respect to the 4D state estimator 240 of FIG. 2. However, 4D state estimator 440 is no longer coupled to a first training module to receive back propagation or otherwise update its machine learning function and weight matrix. Instead, no training is needed, as 4D state estimator 440 utilizes its trained weight matrix 441 to generate an output 4D Risk Map(t) 442, which displays a high degree of predictive accuracy for the time step t (e.g., assuming that the training process has been properly performed).
• 4D state estimator 440 additionally receives as input 4D Risk Map(t-1) 443, which is the 4D Risk Map that estimator 440 generated in the previous time step t-1.
  • a far greater degree of predictive accuracy is obtained, as any unexpected or large fluctuations in risk calculations that appear for only a single time step are diminished and damped at least in part due to the feed forward nature of the 4D state estimator 440.
• this feed forward state-driven dependency is also present in action prediction system 450, which receives as input at time step t the vector of actionable changes Actions(t-1) 453 from time step t-1 and the 4D Risk Map(t) 442 from time step t.
• these inputs are further combined with ColRegs logic 456 in order to calculate the vector of actionable changes Actions(t) 452 for the current time step t.
• One or more of the 4D Risk Map(t) 442 and the vector of actionable changes Actions(t) 452 can be transmitted to an optional supervision system (not shown) which might be provided in semi-autonomous rather than fully autonomous deployments, or as a backup or failsafe system in fully autonomous deployments. It is noted that 4D Risk Map(t) 442 and the vector of actionable changes Actions(t) 452 are also stored in memory or in a buffer such that they can be fed into 4D state estimator 440 and action prediction system 450, respectively, at the next time step t+1.
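Putting the pieces of FIG. 4 together, the deployed loop might look like the following sketch, where every module interface (normalize, the estimator and predictor callables, supervisor.review) is an assumption for illustration:

```python
def run_deployed(normalize, state_estimator, action_predictor,
                 data_source, supervisor=None):
    """Frozen-weight inference loop with the one-step buffers that feed
    4D Risk Map(t-1) and Actions(t-1) back in at time step t."""
    prev_risk_map, prev_actions = None, None
    for sample in data_source:                    # one iteration per time step t
        spatial, non_spatial = normalize(sample)  # normalization system 410
        risk_map = state_estimator(spatial, non_spatial, prev_risk_map)  # 440
        actions = action_predictor(risk_map, prev_actions)               # 450
        if supervisor is not None:                # optional semi-autonomous path
            actions = supervisor.review(risk_map, actions)  # may override
        yield actions                             # to vessel/payload control
        prev_risk_map, prev_actions = risk_map, actions     # buffer for t+1
```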
• one or more of 4D Risk Map(t) 442 and the vector of actionable changes Actions(t) 452 are analyzed to determine whether Actions(t) 452 should be passed to a vessel and payload control system for execution or if an override command should be pushed to the vessel and payload control system instead of Actions(t).
  • supervision system can be implemented in software, such as by a separate machine learning function or network, or by a non-learning algorithm or method that applies more conventional analyses.
• supervision system can be manually implemented, or can be semi-autonomous with the expectation of manual intervention when needed, for example, a human operator (e.g., captain, machine learning supervisor) monitors the 4D Risk Map(s) and other vessel data in order to make a judgment call, based on his or her experience, as to whether or not the computed actionable changes vector Actions(t) 452 is reasonable or should be overridden with a substitute command input.
• deployed autonomous and semi-autonomous machine learning systems might include an onboard operations database which can be utilized to store a variety of operational records and data, including but not limited to, all override instances (which can be used to refresh machine learning training), new raw data for generating new training data for future training runs, etc.
  • the training data and trained machine learning systems can be used to train a sub-surface vessel, for example, an ROV. That is, while the above disclosure generally discusses autonomous or semi-autonomous surface vessels, sub-surface vessels are also contemplated.
  • a sub-surface vessel can be autonomous (or semi-autonomous) and can also control the operation of a surface vessel, for example, as if the surface vessel was autonomous.
  • An autonomous (or semi-autonomous) surface vessel can also control the operation of a sub- surface vessel.
  • FIG. 5 presents an alternate state diagram 500 for a machine learning operation according to aspects of the instant disclosure.
• while previous figures presented a priori data and real time data as being processed at each time step, in some embodiments it is possible to process the a priori data only initially and reuse the results of this calculation until an update to one or more variables of the a priori data is received.
  • the existing a priori data can be updated based on real-time observations and ensuing predictions.
  • the a priori data processing begins in a pre-processing step seen at the far left, wherein the a priori data (either spatial or non-spatial, although labeled in the figure as only spatial data 502) are input into a convolutional network 504.
• This convolutional network 504 provides an initial state characterization of the a priori parameters that should remain valid until one or more of these parameters change. Additionally, as illustrated, the a priori data 502 is processed using a convolutional network 504 that is different from the convolutional-LSTM 522 that is utilized at each time step to process real-time or current spatial data such as 512a and 512b. Convolutional network 504 might be trained and converged in a separate training process taking as input only the a priori training data contained in the system, and does not include the real-time data in the training set.
• because a priori training data may be limited in comparison to real-time data, it is possible that a single convolutional network 504 is created from the available a priori training data and that this same network is then utilized to perform the pre-processing step for each discrete machine learning function or network that is deployed.
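One way to realize this reuse, sketched under the assumption that a change in the a priori variables can be detected by hashing them; the class and hashing scheme are illustrative, not from the disclosure:

```python
import hashlib
import json

class APrioriCache:
    """Run the a priori convolutional network once and reuse its state
    characterization until one of the a priori variables changes."""
    def __init__(self, conv_net):
        self.conv_net = conv_net
        self._key, self._features = None, None

    def features(self, a_priori_data):
        key = hashlib.sha256(
            json.dumps(a_priori_data, sort_keys=True, default=str).encode()
        ).hexdigest()
        if key != self._key:                 # an a priori variable changed
            self._features = self.conv_net(a_priori_data)
            self._key = key
        return self._features                # otherwise reused every time step
```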
• real-time spatial data 512a and real-time non-spatial data 514a are freshly received, and each is input into a respective machine learning function.
  • the real-time spatial data 512 is input into a convolutional-LSTM 522 (comparable to the CNN of the previous discussion) and the real-time non-spatial data is input into a sensor fusion function (comparable to the sensor fusion network of the previous discussion).
  • FIG. 6 illustrates an example system and method 600 for training and autonomous operation of a sub-surface vessel.
  • machine learning concepts as discussed above and further illustrated below, can be applied to sub-surface vessels, in particular for structure inspections or routing around structural components.
• an ROV can recognize components of the structure from its video sensor(s) and sonar sensor(s), locate itself, and proceed to perform inspections on appropriate structural elements.
  • the system and method can construct one or more 3D virtual models.
  • a sub-surface vessel can have one or more 3D virtual models constructed, which render a variety of perspective views, orientations and observation points of the sub-surface vessel.
  • the 3D virtual models can be constructed to resemble images that would be viewed by sensors (e.g., video, acoustic, etc.) on the sub-surface vessel.
  • a plurality of views from one or more directions can be constructed.
  • the 3D virtual models can be used to train one or more machine learning systems (e.g., neural networks, etc.), for example, as discussed above.
  • the trained systems can be trained to recognize specific components of the structure from portions of the 3D virtual models (e.g., video, images, acoustics, etc.).
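A hedged sketch of how labelled training pairs might be produced from the 3D virtual models; render_view and the component list are hypothetical stand-ins for whatever rendering pipeline and structure inventory are actually used:

```python
import random

COMPONENTS = ["jacket_leg", "brace", "riser", "anode", "rope"]  # assumed inventory

def make_training_set(render_view, n_views=1000):
    """Render the 3D model from many viewpoints and tag each render with
    the components visible in it, yielding multi-label training pairs."""
    dataset = []
    for _ in range(n_views):
        yaw = random.uniform(0.0, 360.0)     # viewpoint around the structure
        pitch = random.uniform(-45.0, 45.0)
        range_m = random.uniform(2.0, 30.0)
        # render_view is assumed to return (image, set of visible components)
        image, visible = render_view(yaw, pitch, range_m)
        labels = [c in visible for c in COMPONENTS]  # multi-label target
        dataset.append((image, labels))
    return dataset
```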
  • Block 608 represents one or more video sensors (e.g., imaging devices, imaging cameras, thermal cameras, etc.) coupled to the sub-surface vessel.
  • the orientation of the video sensors can be used to construct the one or more images/3D virtual models.
  • Block 610 represents one or more sonar sensors (e.g., acoustic capture device, acoustic camera, sonar scanner, etc.).
• the sound waves can be used to measure the distance and location of objects; further, the images/3D virtual models can be represented in grey scale based on the acoustic reflectivity of each component.
  • steel structural member(s) can have a much brighter return than, for instance, a rope.
• Block 612 represents one or more trained machine learning systems (614, 616).
• Trained machine learning systems 614, 616 can be one or more neural networks or learning functions, as discussed above.
  • Learning system 614 can, based on the observed images of block 608 and the training performed in block 604, classify and/or identify one or more members of the structure (or structural components) that are visible in the image(s).
• Learning system 616 can, based on the observed sonar sensors 610 and the training performed in block 606, classify/locate one or more members of the structure (or structural components) that are visible in the image(s).
  • the trained machine learning systems can be configured to perform pattern recognition between the data received from sensors 608, 610 and known 3D virtual models (e.g., known 3D models previously presented to the machine learning systems).
• the orientation of the sub-surface vessel (e.g., ROV, etc.) relative to the observed structure can be determined.
  • Sub-surface vehicles are normally positioned via inertial systems and Ultra-Short Baseline (USBL) systems attached to a surface vessel. The accuracy of this configuration is limited and usually not sufficient for close up work.
  • the sub-surface vessel can refine its position relative to the observed structure (e.g., from the video sensors 608).
• the range/distance and orientation of the sub-surface vessel (e.g., ROV, etc.) relative to an observed feature can be determined.
• the range/distance can be determined by triangulation with, for example, known dimensions of the observed feature.
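For instance, under a simple pinhole-camera model (the focal length in pixels is an assumed calibration value, not a parameter from the disclosure):

```python
import math

def range_from_known_dimension(known_size_m, apparent_px, focal_px=1400.0):
    """Range at which a feature of known physical size subtends apparent_px
    pixels, e.g., a structural member whose dimensions appear in the 3D model."""
    return known_size_m * focal_px / apparent_px

def bearing_from_pixel(px_offset_from_centre, focal_px=1400.0):
    """Horizontal bearing of the feature relative to the camera axis, degrees."""
    return math.degrees(math.atan2(px_offset_from_centre, focal_px))
```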
  • the sub-surface vessel can refine its position relative to the observed structure (e.g., from the sonar sensors 610).
• in some instances, blocks 618 and 620 can be combined; in other instances they can be independent of one another.
  • range/distance and bearing information can be utilized to guide the sub-surface vessel to a desired structure/structural component.
• a safe path (e.g., free of objects, obstructions, etc.) can be determined for the sub-surface vessel.
• the surface vessel with which the sub-surface vessel is in communication can receive the safe path as input and follow the sub-surface vessel.
• in some instances, a surface vessel does not exist (e.g., in the case of a UUV) and this block is ignored.
  • the surface vessel may be a manned vessel or it may be unmanned (USV).
• the position of the sub-surface vessel (from block 622) and knowledge of the 3D virtual model can be utilized, by the surface vessel, to follow the sub-surface vessel (keeping the separation as short as possible) and avoid collisions or avoidance areas around the structure/structural components.
  • the position can be determined based on triangulation with known dimensions of the 3D virtual model.
  • FIG. 7 shows an example of computing system 700 in which the components of the system are in communication with each other using connection 705.
  • Connection 705 can be a physical connection via a bus, or a direct connection into processor 710, such as in a chipset architecture.
  • Connection 705 can also be a virtual connection, networked connection, or logical connection.
  • computing system 700 is a distributed system in which the functions described in this disclosure can be distributed within a datacenter, multiple datacenters, a peer network, etc.
  • one or more of the described system components represents many such components each performing some or all of the function for which the component is described.
  • the components can be physical or virtual devices.
  • Example system 700 includes at least one processing unit (CPU or processor) 710 and connection 705 that couples various system components including system memory 715, such as read only memory (ROM) and random access memory (RAM) to processor 710.
  • Computing system 700 can include a cache of high-speed memory connected directly with, in close proximity to, or integrated as part of processor 710.
  • Processor 710 can include any general purpose processor and a hardware service or software service, such as services 732, 734, and 736 stored in storage device 730, configured to control processor 710 as well as a special-purpose processor where software instructions are incorporated into the actual processor design.
  • Processor 710 may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc.
  • a multi-core processor may be symmetric or asymmetric.
• computing system 700 includes an input device 745, which can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech, etc.
  • Computing system 700 can also include output device 735, which can be one or more of a number of output mechanisms known to those of skill in the art.
  • multimodal systems can enable a user to provide multiple types of input/output to communicate with computing system 700.
  • Computing system 700 can include communications interface 740, which can generally govern and manage the user input and system output.
  • Storage device 730 can be a non-volatile memory device and can be a hard disk or other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memories (RAMs), read only memory (ROM), and/or some combination of these devices.
• the storage device 730 can include software services, servers, services, etc.; when the code that defines such software is executed by the processor 710, it causes the system to perform a function.
  • a hardware service that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as processor 710, connection 705, output device 735, etc., to carry out the function.
  • Methods according to the aforementioned description can be implemented using computer-executable instructions that are stored or otherwise available from computer readable media. Such instructions can comprise instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network.
  • the computer executable instructions may be binaries, intermediate format instructions such as assembly language, firmware, or source code.
  • Computer-readable media that may be used to store instructions, information used, and/or information created during methods according to the aforementioned description include magnetic or optical disks, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on.
  • the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like.
  • non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.
  • Devices implementing methods according to these disclosures can comprise hardware, firmware and/or software, and can take any of a variety of form factors. Such form factors can include laptops, smart phones, small form factor personal computers, personal digital assistants, rackmount devices, standalone devices, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device.
  • the instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are means for providing the functions described in these disclosures.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Automation & Control Theory (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Physiology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Business, Economics & Management (AREA)
  • Game Theory and Decision Science (AREA)
  • Medical Informatics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to systems, methods, and computer-readable media for creating and applying training data for training machine learning computer systems and for autonomous operation of surface and sub-surface vessels. These systems, methods, and computer-readable media can include a data source, the data source providing at least one of vehicle data, operational data, and environmental data, and can further include a processor, the processor analyzing at least data from the data source and generating at least one control output based, at least in part, on the analysis.
PCT/US2018/067298 2017-12-21 2018-12-21 Création et classification de données de formation destinées à des fonctions d'apprentissage machine WO2019126755A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762609119P 2017-12-21 2017-12-21
US62/609,119 2017-12-21

Publications (1)

Publication Number Publication Date
WO2019126755A1 true WO2019126755A1 (fr) 2019-06-27

Family

ID=66992812

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/067298 WO2019126755A1 (fr) 2017-12-21 2018-12-21 Création et classification de données de formation destinées à des fonctions d'apprentissage machine

Country Status (1)

Country Link
WO (1) WO2019126755A1 (fr)



Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170336790A1 (en) * 2016-05-17 2017-11-23 Telenav, Inc. Navigation system with trajectory calculation mechanism and method of operation thereof

Cited By (68)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11260949B2 (en) 2016-03-01 2022-03-01 Brunswick Corporation Marine vessel station keeping systems and methods
US12020476B2 (en) 2017-03-23 2024-06-25 Tesla, Inc. Data synthesis for autonomous control systems
US11487288B2 (en) 2017-03-23 2022-11-01 Tesla, Inc. Data synthesis for autonomous control systems
US11893393B2 (en) 2017-07-24 2024-02-06 Tesla, Inc. Computational array microprocessor system with hardware arbiter managing memory requests
US11403069B2 (en) 2017-07-24 2022-08-02 Tesla, Inc. Accelerated mathematical engine
US11409692B2 (en) 2017-07-24 2022-08-09 Tesla, Inc. Vector computational unit
US11681649B2 (en) 2017-07-24 2023-06-20 Tesla, Inc. Computational array microprocessor system using non-consecutive data formatting
US12086097B2 (en) 2017-07-24 2024-09-10 Tesla, Inc. Vector computational unit
US11797304B2 (en) 2018-02-01 2023-10-24 Tesla, Inc. Instruction set architecture for a vector computational unit
US11561791B2 (en) 2018-02-01 2023-01-24 Tesla, Inc. Vector computational unit receiving data elements in parallel from a last row of a computational array
US10845812B2 (en) 2018-05-22 2020-11-24 Brunswick Corporation Methods for controlling movement of a marine vessel near an object
US11734562B2 (en) 2018-06-20 2023-08-22 Tesla, Inc. Data pipeline and deep learning system for autonomous driving
US11841434B2 (en) 2018-07-20 2023-12-12 Tesla, Inc. Annotation cross-labeling for autonomous control systems
US12079723B2 (en) 2018-07-26 2024-09-03 Tesla, Inc. Optimizing neural network structures for embedded systems
US11636333B2 (en) 2018-07-26 2023-04-25 Tesla, Inc. Optimizing neural network structures for embedded systems
US11983630B2 (en) 2018-09-03 2024-05-14 Tesla, Inc. Neural networks for embedded devices
US11562231B2 (en) 2018-09-03 2023-01-24 Tesla, Inc. Neural networks for embedded devices
US11893774B2 (en) 2018-10-11 2024-02-06 Tesla, Inc. Systems and methods for training machine models with augmented data
US11665108B2 (en) 2018-10-25 2023-05-30 Tesla, Inc. QoS manager for system on a chip communications
US11904996B2 (en) 2018-11-01 2024-02-20 Brunswick Corporation Methods and systems for controlling propulsion of a marine vessel to enhance proximity sensing in a marine environment
US10926855B2 (en) 2018-11-01 2021-02-23 Brunswick Corporation Methods and systems for controlling low-speed propulsion of a marine vessel
US12084160B2 (en) 2018-11-01 2024-09-10 Brunswick Corporation Methods and systems for controlling low-speed propulsion of a marine vessel
US11198494B2 (en) 2018-11-01 2021-12-14 Brunswick Corporation Methods and systems for controlling propulsion of a marine vessel to enhance proximity sensing in a marine environment
US11436927B2 (en) 2018-11-21 2022-09-06 Brunswick Corporation Proximity sensing system and method for a marine vessel with automated proximity sensor location estimation
US11794865B1 (en) 2018-11-21 2023-10-24 Brunswick Corporation Proximity sensing system and method for a marine vessel
US11443637B2 (en) 2018-11-21 2022-09-13 Brunswick Corporation Proximity sensing system and method for a marine vessel
US12046144B2 (en) 2018-11-21 2024-07-23 Brunswick Corporation Proximity sensing system and method for a marine vessel
US11816585B2 (en) 2018-12-03 2023-11-14 Tesla, Inc. Machine learning models operating at different frequencies for autonomous vehicles
US11537811B2 (en) 2018-12-04 2022-12-27 Tesla, Inc. Enhanced object detection for autonomous vehicles based on field view
US11908171B2 (en) 2018-12-04 2024-02-20 Tesla, Inc. Enhanced object detection for autonomous vehicles based on field view
US11862026B2 (en) 2018-12-14 2024-01-02 Brunswick Corporation Marine propulsion control system and method with proximity-based velocity limiting
US11403955B2 (en) 2018-12-14 2022-08-02 Brunswick Corporation Marine propulsion control system and method with proximity-based velocity limiting
US11373537B2 (en) 2018-12-21 2022-06-28 Brunswick Corporation Marine propulsion control system and method with collision avoidance override
US11804137B1 (en) 2018-12-21 2023-10-31 Brunswick Corporation Marine propulsion control system and method with collision avoidance override
US11610117B2 (en) 2018-12-27 2023-03-21 Tesla, Inc. System and method for adapting a neural network model on a hardware platform
US11600184B2 (en) 2019-01-31 2023-03-07 Brunswick Corporation Marine propulsion control system and method
US12024273B1 (en) 2019-01-31 2024-07-02 Brunswick Corporation Marine propulsion control system, method, and user interface for marine vessel docking and launch
US11257378B2 (en) 2019-01-31 2022-02-22 Brunswick Corporation Marine propulsion control system and method
US11702178B2 (en) 2019-01-31 2023-07-18 Brunswick Corporation Marine propulsion control system, method, and user interface for marine vessel docking and launch
US12014553B2 (en) 2019-02-01 2024-06-18 Tesla, Inc. Predicting three-dimensional features for autonomous driving
US11748620B2 (en) 2019-02-01 2023-09-05 Tesla, Inc. Generating ground truth for machine learning from time series elements
US11567514B2 (en) 2019-02-11 2023-01-31 Tesla, Inc. Autonomous and user controlled vehicle summon to a target
US11790664B2 (en) 2019-02-19 2023-10-17 Tesla, Inc. Estimating object properties using visual image data
US12007771B1 (en) 2019-08-08 2024-06-11 Brunswick Corporation Marine steering system and method
CN110716012B (zh) * 2019-09-10 2020-09-25 淮阴工学院 一种基于现场总线网络的油气浓度智能监测系统
CN110716012A (zh) * 2019-09-10 2020-01-21 淮阴工学院 一种基于现场总线网络的油气浓度智能监测系统
CN110806692A (zh) * 2019-10-21 2020-02-18 上海海事大学 一种基于cnn-latm组合模型的波浪补偿预测方法
US20220371705A1 (en) * 2019-11-11 2022-11-24 Awake.Ai Oy Method for providing a location-specific machine learning model
WO2021094650A1 (fr) * 2019-11-11 2021-05-20 Awake.Ai Oy Procédé de fourniture d'un modèle d'apprentissage machine spécifique à un emplacement
CN111639513A (zh) * 2019-12-10 2020-09-08 珠海大横琴科技发展有限公司 一种船只遮挡识别方法、装置及电子设备
CN110949407A (zh) * 2019-12-25 2020-04-03 清华大学 基于驾驶员实时风险响应的动态人机共驾驾驶权分配方法
CN110949407B (zh) * 2019-12-25 2020-12-25 清华大学 基于驾驶员实时风险响应的动态人机共驾驾驶权分配方法
US11480966B2 (en) 2020-03-10 2022-10-25 Brunswick Corporation Marine propulsion control system and method
CN112362042B (zh) * 2020-10-30 2023-03-10 智慧航海(青岛)科技有限公司 一种基于智能船舶多传感设备的航迹关联判断方法
CN112362042A (zh) * 2020-10-30 2021-02-12 智慧航海(青岛)科技有限公司 一种基于智能船舶多传感设备的航迹关联判断方法
CN112948969A (zh) * 2021-03-01 2021-06-11 哈尔滨工程大学 一种基于lstmc混合网络的船舶横摇预测方法
US20220332328A1 (en) * 2021-04-14 2022-10-20 Zf Friedrichshafen Ag Device for determining a length of a vehicle combination
CN113361614B (zh) * 2021-06-15 2024-02-02 广西民族大学 一种船只捕鱼行为预测方法
CN113361614A (zh) * 2021-06-15 2021-09-07 广西民族大学 一种船只捕鱼行为预测方法
US20230016199A1 (en) * 2021-07-16 2023-01-19 State Farm Mutual Automobile Insurance Company Root cause detection of anomalous behavior using network relationships and event correlation
US12040935B2 (en) * 2021-07-16 2024-07-16 State Farm Mutual Automobile Insurance Company Root cause detection of anomalous behavior using network relationships and event correlation
EP4174591A1 (fr) * 2021-10-27 2023-05-03 Yokogawa Electric Corporation Système d'exploitation, procédé d'exploitation et programme d'exploitation
US12065230B1 (en) 2022-02-15 2024-08-20 Brunswick Corporation Marine propulsion control system and method with rear and lateral marine drives
CN115081592B (zh) * 2022-06-13 2024-05-03 华设设计集团股份有限公司 基于遗传算法和前馈神经网络的公路低能见度预估方法
CN115081592A (zh) * 2022-06-13 2022-09-20 华设设计集团股份有限公司 基于遗传算法和前馈神经网络的公路低能见度预估方法
US12110088B1 (en) 2022-07-20 2024-10-08 Brunswick Corporation Marine propulsion system and method with rear and lateral marine drives
US12124277B1 (en) 2023-04-28 2024-10-22 Brunswick Corporation Method and system for controlling attitude of a marine vessel
US12125389B1 (en) 2023-11-20 2024-10-22 Brunswick Corporation Marine propulsion control system and method with proximity-based velocity limiting

Similar Documents

Publication Publication Date Title
WO2019126755A1 (fr) Création et classification de données de formation destinées à des fonctions d'apprentissage machine
US10782691B2 (en) Deep learning and intelligent sensing system integration
US10936907B2 (en) Training a deep learning system for maritime applications
Zhang et al. Collision-avoidance navigation systems for Maritime Autonomous Surface Ships: A state of the art survey
US12013243B2 (en) Passage planning and navigation systems and methods
Liu et al. Unmanned surface vehicles: An overview of developments and challenges
US11988513B2 (en) Imaging for navigation systems and methods
US20200012283A1 (en) System and method for autonomous maritime vessel security and safety
EP3729407A1 (fr) Procédé et système d'évitement de collision pour navires maritimes
US10895802B1 (en) Deep learning and intelligent sensing systems for port operations
US20130282210A1 (en) Unmanned maritime vehicle with inference engine and knowledge base and related methods
Johansen et al. Unmanned aerial surveillance system for hazard collision avoidance in autonomous shipping
US20220145756A1 (en) Seafloor Harvesting With Autonomous Drone Swarms
Rivkin Unmanned ships: Navigation and more
Zhuang et al. Navigating high‐speed unmanned surface vehicles: System approach and validations
Tsai et al. Design and application of an autonomous surface vehicle with an AI-based sensing capability
Han et al. Field demonstration of advanced autonomous navigation technique for a fully unmanned surface vehicle in complex coastal traffic areas
Taubert et al. Model identification and controller parameter optimization for an autopilot design for autonomous underwater vehicles
WO2023164705A1 (fr) Systèmes et procédés de cartographie sémantique en vue plongeante (bev) à l'aide d'une caméra monoculaire
JP7086475B2 (ja) 操船支援装置
CN113885533A (zh) 一种无人艇的无人驾驶方法及系统
KR102650152B1 (ko) 4차원 레이더를 활용한 해적 선박 감지 시스템 및 방법, 동 방법을 컴퓨터에서 실행하기 위한 컴퓨터 프로그램이 기록된, 컴퓨터 판독 가능한 기록 매체
Noguchi et al. Guidance method of underwater vehicle for rugged seafloor observation in close proximity
KR102553331B1 (ko) 4차원 레이더를 활용한 선외 감시 시스템 및 방법
CN117910674B (zh) 一种基于机器学习的海上船舶指挥方法及系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18892185

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18892185

Country of ref document: EP

Kind code of ref document: A1