US11354473B1 - Method and system for designing a robotic system architecture with optimized system latency - Google Patents


Info

Publication number
US11354473B1
US11354473B1 (Application No. US17/160,758)
Authority
US
United States
Prior art keywords
graph
latency
hardware
software
nodes
Prior art date
Legal status: Active (an assumption, not a legal conclusion; no legal analysis has been performed)
Application number
US17/160,758
Inventor
Jason Ziglar
Current Assignee: Argo AI LLC (listed assignees may be inaccurate; no legal analysis has been performed)
Original Assignee
Argo AI LLC
Application filed by Argo AI LLC filed Critical Argo AI LLC
Priority to US17/160,758 priority Critical patent/US11354473B1/en
Assigned to Argo AI, LLC (assignment of assignors interest; Assignor: Jason Ziglar)
Priority to PCT/US2022/070378 priority patent/WO2022165497A1/en
Priority to EP22746907.9A priority patent/EP4285198A1/en
Priority to CN202280008813.8A priority patent/CN116670649B/en
Priority to US17/664,979 priority patent/US11972184B2/en
Application granted granted Critical
Publication of US11354473B1 publication Critical patent/US11354473B1/en


Classifications

    • G06F30/327: Logic synthesis; behaviour synthesis, e.g. mapping logic, HDL to netlist, high-level language to RTL or netlist
    • G06F9/5027: Allocation of resources, e.g. of the central processing unit [CPU], to service a request where the resource is a machine, e.g. CPUs, servers, terminals
    • G06F30/17: Mechanical parametric or variational design
    • G06F30/337: Design optimisation (circuit design at the digital level)
    • G06F8/20: Software design
    • G06F2117/08: HW-SW co-design, e.g. HW-SW partitioning

Definitions

  • Real-time computing systems such as robotic systems (e.g., autonomous vehicles) need to be responsive. Such systems have a set of deadlines, and missing a deadline constitutes a system failure and/or degrades the quality of service. In such systems, a delay of less than a millisecond can, in some situations, cause a failure.
  • Many applications of real-time computing are also low-latency applications (e.g., autonomous vehicles must be reactive to sudden changes in the environment).
  • a robotic system such as an autonomous vehicle performs several successive operations on sensor data with the output of one operation used as the input of another operation (e.g., pipeline stages). As such, real-time computer systems and applications often require low latency. Therefore, for a system design to meet the needs of such robotic systems, the core software and hardware components must not interfere with the requirements of real-time computing.
  • the systems may include a processor and non-transitory computer readable medium including programming instructions that can be executed by the processor to perform the methods of this disclosure.
  • the system may define a software graph and a hardware graph.
  • the software graph may include a first plurality of nodes representing discrete computational tasks, and a first plurality of edges representative of data flow between the first plurality of tasks.
  • the hardware graph may include a second plurality of nodes representing hardware components of a robotic device, and a second plurality of edges representative of communication links between the second plurality of nodes.
  • the system may map the software graph to the hardware graph by assigning each of the first plurality of nodes to one of the second plurality of nodes and each of the first plurality of edges to one or more of the second plurality of edges. For the mapping, the system may then model a latency associated with a computational path included in the software graph, and allocate (based on the latency) a plurality of computational tasks in the computational path to a plurality of the hardware components to yield a robotic system architecture. The system may use the robotic system architecture to configure the robotic device to be capable of performing functions corresponding to the software graph.
  • the software graph may be a multigraph and the hardware graph may be a pseudograph.
  • the system may also update the mapping of the software graph to the hardware graph by defining an objective function comprising a first cost of assigning a discrete computational task to a hardware component and a second cost of assigning a data flow to a communication link.
  • the system may minimize the objective function using a plurality of constraints, the plurality of constraints comprising at least one of the following: an atomicity constraint, a bandwidth constraint, a resource constraint, or a flow constraint.
  • the system may minimize the objective function using the latency associated with the computational path included in the software multigraph.
  • the system may model a system latency across all computational paths included in the robotic system, and then generate the robotic system architecture by allocating a plurality of computational tasks across all computational paths in the robotic system to a plurality of hardware components, using the system latency to improve the performance of the robotic system architecture.
  • the latency may be modeled as a function of a first latency introduced by the hardware components and a second latency introduced by the communication links.
  • the second latency for a communication link may be modeled as a function of, for example, a number of messages currently assigned to that communication link, an average size of the currently assigned messages, a standard deviation of sizes of the currently assigned messages, a number of messages an additional communications link will add, and/or a size of the currently assigned messages.
  • the first latency for a hardware component may be modeled as a function of input and output message timings from that hardware component.
  • one or more parameters for modeling the first latency may be determined based on checkpoint data corresponding to the robotic device.
  • the system may, in certain implementations, in the software graph: associate, with each of the first plurality of nodes, a type and an amount of resources required to execute a task associated with that node, and associate, with each of the first plurality of edges, an amount of required bandwidth. Additionally and/or alternatively, the system may, in the hardware graph, associate, with each of the second plurality of nodes, a type and an amount of resources provided by that node, and associate, with each of the second plurality of edges, an amount of available bandwidth.
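The annotated graphs described above can be made concrete with a minimal sketch. The structures and values below are hypothetical illustrations, not the patent's implementation: a software multigraph whose task nodes carry required resources and whose links carry required bandwidth, and a hardware pseudograph whose device nodes carry provided resources and whose routes carry available bandwidth.

```python
# Software graph: tasks annotated with required resources; directed links annotated
# with required bandwidth. A multigraph may contain several parallel links between
# the same pair of tasks.
software = {
    "tasks": {
        "lidar_driver":    {"cpu": 1, "ram_mb": 256},
        "obstacle_mapper": {"cpu": 2, "ram_mb": 1024},
    },
    "links": [
        # (source task, destination task, required bandwidth in Mb/s)
        ("lidar_driver", "obstacle_mapper", 100),
        ("lidar_driver", "obstacle_mapper", 10),  # e.g., a low-rate status stream
    ],
}

# Hardware graph: devices annotated with provided resources; undirected routes
# annotated with available bandwidth. A pseudograph permits self-loops, which here
# model on-device (shared-memory) communication.
hardware = {
    "devices": {
        "cpu0": {"cpu": 4, "ram_mb": 4096},
        "gpu0": {"cpu": 2, "ram_mb": 8192},
    },
    "routes": [
        ("cpu0", "gpu0", 1000),   # e.g., an Ethernet connection
        ("cpu0", "cpu0", 10000),  # self-loop: shared-memory transport
    ],
}

# A necessary (not sufficient) sanity check: total resources demanded by the
# software must not exceed what the hardware provides.
demand = sum(t["cpu"] for t in software["tasks"].values())
supply = sum(d["cpu"] for d in hardware["devices"].values())
print(demand <= supply)  # True for this example
```

Whether a specific mapping of tasks to devices is feasible depends on per-device and per-route limits, which the constraints described later capture.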
  • FIG. 1 is a schematic diagram illustrating an example system for designing a robotic system architecture.
  • FIG. 2 is a flowchart illustrating an example method for designing a robotic system architecture.
  • FIG. 3 illustrates an example hardware pseudograph.
  • FIG. 4 illustrates an example software multigraph.
  • FIG. 5 illustrates example elements of an autonomous vehicle and/or external electronic device.
  • real-time tasks are tasks for which the task has a predefined maximum threshold for delay between requesting a service from the robotic system and having the request fulfilled. Failure to service the real-time task within the threshold can cause serious failure of the task and/or systems managed by the real-time task.
  • latency may refer to a time lag between receipt of an input and production of an output based on the received input. For example, latency may be the time from when the first sensor inputs occur which would produce a particular response, to when the system actually produces such a response (e.g., time period between time of photons striking a camera sensor to the time when navigation of an autonomous vehicle is controlled based on the camera detection).
  • Production of such an output may include assignment of a hardware resource (e.g., processor) for task execution, exclusive access to a resource, and the like.
  • Latency typically includes other more specific latencies well known to those of skill in the art such as scheduling latency, task switching latency, and the like.
  • Embedded software architectures or platforms used in robots may be statically configured at design time with a fixed set of operating system tasks.
  • some or all tasks that will execute on a given computing hardware may be allocated at runtime using, for example, a configuration file that specifies which tasks execute in which locations.
  • the assignment may be performed during compilation and/or linking.
  • the current disclosure describes methods and systems for automatically constructing optimal robotic hardware and software systems from a set of available components, using queueing networks that optimize against key latency requirements.
  • the disclosure provides a unified framework for enabling a wide array of capabilities in the design, development, and operation of robot systems.
  • the systems and methods of this disclosure automate the process of selecting components to build a complete system with some user-defined functionality, as well as generate the structure that relates all elements (e.g., connecting hardware, assigning tasks, and routing communications).
  • designs can be made more quickly and with more confidence in the validity of a given solution, since the entire problem is solved simultaneously.
  • This also enables more robust consideration of system resiliency in design, since the impact of small changes in component parameterization (e.g., how much computing resources a particular task needs, the size of a particular message, the cost of using a particular sensor, etc.) can immediately be propagated to the global system design.
  • the proposed systems and methods also help to ensure that the designed robotic system meets specified latency budgets, while helping to optimize resource utilization across the system (including latency). Furthermore, by automating the process, system resiliency can be extended by solving the design problem in an online setting, through the same process of generating solutions in response to local changes. For example, changes in the environmental context (e.g., transitioning from indoor to outdoor operation) can require different capabilities (e.g., using GPS for localization), while changes in software performance (e.g., a task consuming more resources than anticipated) and changes in hardware components (e.g., computer failure) can require a reallocation of software tasks through the system. The ability to automatically synthesize a novel system capturing these requirements allows for complex robotic systems that are more efficient and resilient by design.
  • System design or “system architecture design” refers to a process of defining the components (e.g., hardware and software architectures, interfaces, and data) and the interrelationships of the components of a system that is capable of executing a set of computational tasks with a user-defined set of functionality; and/or guidelines for implementing and building such a system.
  • System architecture refers to a set of rules that defines the interconnectivity of a robotic system's hardware components, as well as the mode of data transfer and processing exhibited by the robotic system to execute software.
  • the system 100 may design the architecture for a robotic system based on, for example, various robotic system components, task constraints, constraints provided by a user, or the like, in accordance with the methods described below.
  • System 100 may include a plurality of clients 102 a - n , a robotic system 110 , and a design server 103 coupled to each other via one or more networks 104 .
  • Clients 102 a - n represent any suitable local or remote end-user device that facilitates communication between users and the design server 103 .
  • Clients 102 a - n may be configured to receive information regarding the design of requested robotic system 110 .
  • the received information may include target values for design criteria of the robotic system, where design criteria may refer to robotic system context, constraints, standards, objectives, goals, and/or requirements for the implementation, performance, operation, integration, cost, and/or maintenance of the robotic system 110 .
  • a robotic system 110 includes a plurality of hardware components 116 .
  • the hardware components 116 may include, for example, processing units and memory 132 accessible to the processing units as well as logical access to peripherals of a robotic system 110 .
  • the processing units may include, for example, one or more central processing units (CPUs) 120 , one or more graphical processing units (GPUs) 124 , one or more digital signal processors (DSPs) 128 , other processors 130 (e.g., field programmable gate arrays (FPGAs), FPGA-like devices, and/or reconfigurable instruction cell array (RICA) devices), or a combination thereof.
  • One or more of the processing units may include more than one processing core.
  • the robotic system 110 may also include a plurality of software resources 118 .
  • the software resources 118 may include, for example, sensor data processing applications, motion planning applications, graphical programming applications, machine learning applications, or the like.
  • the hardware components 116 and the software resources 118 may be accessible to a task planner 114 ( a ) (where tasks are described in and assigned process dependent parameters in accordance with a system design) and a scheduler 114 ( b ) (where tasks are scheduled in accordance with a system design) of the robotic system 110 .
  • Information about the robotic system 110 is provided to the design server 103 .
  • Referring now to FIG. 2, a flowchart illustrating an example method for designing a robotic system architecture is shown.
  • the method can be performed at design time, in real time, or at any suitable time.
  • the method can be performed a predetermined number of times for a robotic system, iteratively performed at a predetermined frequency for a robotic system, or performed at any suitable time. Multiple instances of the method can be concurrently performed for multiple concurrent robotic systems. However, any suitable number of method instances can be performed at any suitable time.
  • the method described in FIG. 2 can be practiced by a system such as, but not limited to, the one described with respect to FIG. 1.
  • method 200 describes a series of operations that are performed in a sequence, it is to be understood that method 200 is not limited by the order of the sequence depicted. For instance, some operations may occur in a different order than that described. In addition, one operation may occur concurrently with another operation. In some instances, not all operations described are performed.
  • the system may receive information relating to software that will be executed by a robotic system.
  • software refers to a set of instructions, data, or programs used to execute specific tasks or processes, where a task or process is any discrete computational process that has an input and an output. Tasks may consume computational resources, process and publish data, and interface with sensors and actuators; therefore, a hardware component is required to provide these resources for task execution. Examples of such software in a robotic system may include, without limitation, path planner software, path tracker software, localization software, obstacle mapper software, operator GUI software, sensor interface software, or the like.
  • the system may receive information relating to hardware components of a robotic system.
  • the hardware components of the robotic system can provide computational resources (such as available processing power, RAM, disk storage, or logical access to particular peripherals) and connections to other devices/systems for executing the tasks of the software.
  • the hardware components may be heterogeneous such that not every device may provide every resource.
  • the system may generate a hardware graph corresponding to the hardware components of the robotic system and a software graph corresponding to the software.
  • a graph is a combinatorial structure consisting of nodes and edges, where each edge links a pair of nodes. The two nodes linked by an edge are called its endpoints.
  • Formally, an edge may be written as the pair of its endpoint nodes, e.g., e_i = {n_{i,1}, n_{i,2}}.
  • Restrictions on the set of edges E define several important classes of graphs: a simple graph possesses a set of edges E in which no edge is a loop; a multigraph possesses a multiset E with no loops; and a pseudograph possesses a multiset E possibly with loops. Additionally, if the order of vertices in edges is fixed, this defines a directed graph, otherwise a graph may be referred to as an undirected graph.
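The distinctions among these graph classes can be made concrete with a small illustrative helper (not part of the patent) that classifies an undirected edge multiset as a simple graph (no loops, no repeated edges), a multigraph (repeated edges allowed, no loops), or a pseudograph (repeated edges and loops allowed):

```python
from collections import Counter

def classify(edges):
    """Classify a graph by its edge multiset.

    edges: list of (u, v) pairs, treated as an undirected edge multiset.
    """
    # Canonicalize each undirected edge so (u, v) and (v, u) compare equal.
    canonical = [tuple(sorted(e)) for e in edges]
    has_loop = any(u == v for u, v in canonical)
    has_repeat = any(n > 1 for n in Counter(canonical).values())
    if has_loop:
        return "pseudograph"
    return "multigraph" if has_repeat else "simple graph"

print(classify([("a", "b"), ("b", "c")]))  # simple graph
print(classify([("a", "b"), ("a", "b")]))  # multigraph
print(classify([("a", "b"), ("a", "a")]))  # pseudograph
```

Under these definitions, the software graph's parallel links make it a multigraph, while the hardware graph's self-loops (on-device communication) make it a pseudograph.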
  • the graph generated for the software may be a multigraph, where nodes and edges are referred to as tasks and links (as shown in FIG. 4 ).
  • the multigraph may be directed with the first element in each edge representing the source, and the second representing a destination.
  • the system defines a graph that includes a plurality of nodes representing discrete computational elements (i.e., tasks or processes) of a software routine that will perform the computational work and a plurality of edges connecting the nodes.
  • the edge multiset A does not contain self-loops since tasks have internal access to generated data.
  • the graph generated for the hardware components may be a pseudograph (shown in FIG. 3 ) including a set of nodes representing the hardware components that can provide computational resources and a set of undirected edges representing routes associated with connections between the hardware components that are capable of transmission of data associated with the software.
  • the system may then associate information and/or properties with the nodes and edges of the software and hardware graphs.
  • the resources required for a particular task may vary between devices (e.g., due to hardware specialization). Therefore, a given task τ may require a specific type and amount of resources to execute, varying with the hardware component executing it in both type (e.g., memory, CPU cycles, access to specific hardware) and magnitude (e.g., number of bytes, number of CPU cycles, etc.).
  • each route (i.e., hardware graph edge) provides finite bandwidth for transmitting data, resulting in a set of bandwidth limits that is a property of the hardware graph edges.
  • each link consumes a certain amount of bandwidth, d_{κ_k}^{λ_l}, when traversing the specified route.
  • the system may map the software graph to the hardware graph in order to define a system design or configuration. This may be considered an assignment problem that requires assignment of tasks to devices and assignment of links to routes. Specifically, in order to define a complete computational system, there must also exist a mapping from the software graph S to the hardware graph H, which defines how software executes on the specified hardware.
  • An assignment variable α_{δ_d}^{τ_t} ∈ {0, 1} defines whether task τ_t executes on device δ_d,
  • an assignment variable α_{κ_k}^{λ_l} ∈ {0, 1} defines whether a link λ_l transmits over a route κ_k.
  • the system may optimize the defined system design ( 210 ).
  • the system may then identify a set of constraints, and use the constraints to optimize the system design.
  • constraints may include, without limitation, a constraint that a task can be assigned to only a single device (binary and atomicity constraint), a constraint that a link must be assigned to a set of routes which link the two devices on which two tasks are executing (flow constraint), a constraint that a set of tasks assigned to any device must not consume more resources than the device can provide (resource constraint—i.e., consumption of computational resources by tasks cannot over allocate device budgets), a constraint that links may not consume more bandwidth than the assigned hardware connection or route provides (bandwidth constraint), or the like.
  • Resource Constraint: Σ_{τ_p ∈ T} α_{δ_d}^{τ_p} · c⃗_{δ_d}^{τ_p} ≤ r⃗_{δ_d}  ∀ δ_d
  • Bandwidth Constraint: Σ_{λ_l} α_{κ_k}^{λ_l} · d_{κ_k}^{λ_l} ≤ b_{κ_k}  ∀ κ_k
  • the bandwidth constraint equation is formulated taking into account that the transfer of data between two connected tasks consumes bandwidth traversing a connection, with link λ_l consuming d_{κ_k}^{λ_l} worth of bandwidth over connection κ_k.
  • the amount of bandwidth consumed can vary due to the differences in connections (e.g., packetized network overhead, requirements on data representations, etc.), requiring the bandwidth utilization to take the connection κ_k into account as well.
  • a link λ_l may be assigned to transmit over a connection κ_k, denoted by the variable α_{κ_k}^{λ_l} ∈ {0, 1}, which consumes the specified amount of bandwidth d_{κ_k}^{λ_l}.
  • the bandwidth constraint equation ensures that the assignment of links to connections respects the bandwidth limits specified previously.
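The atomicity, resource, and bandwidth constraints above can be sketched as a feasibility check over a candidate assignment. The structures below are a simplified, hypothetical illustration (scalar resources, one route per link) rather than the patent's formulation:

```python
def feasible(task_req, dev_cap, assign_task, link_bw, route_cap, assign_link):
    """Check a candidate assignment against the three constraints.

    task_req:    task -> scalar resource requirement
    dev_cap:     device -> scalar resource capacity
    assign_task: task -> device (atomicity: one device per task)
    link_bw:     link -> required bandwidth
    route_cap:   route -> available bandwidth
    assign_link: link -> route
    """
    # Atomicity constraint: every task is assigned to exactly one device.
    if set(assign_task) != set(task_req):
        return False
    # Resource constraint: tasks on each device must fit within its capacity.
    used = {}
    for task, dev in assign_task.items():
        used[dev] = used.get(dev, 0) + task_req[task]
    if any(u > dev_cap[d] for d, u in used.items()):
        return False
    # Bandwidth constraint: links on each route must fit within its bandwidth.
    load = {}
    for link, route in assign_link.items():
        load[route] = load.get(route, 0) + link_bw[link]
    return all(load[r] <= route_cap[r] for r in load)

ok = feasible(
    task_req={"planner": 2, "mapper": 1}, dev_cap={"cpu0": 2, "cpu1": 2},
    assign_task={"planner": "cpu0", "mapper": "cpu1"},
    link_bw={"scan": 100}, route_cap={"eth0": 1000},
    assign_link={"scan": "eth0"},
)
print(ok)  # True
```

Placing both tasks on cpu0 would violate the resource constraint (demand 3 against capacity 2), so that assignment would be rejected.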
  • Tasks may be assigned to devices not directly connected to one another, requiring data routing along multiple connections.
  • Multi-hop paths require routing data along a connected path between the devices assigned to each task.
  • the flow constraint ensures these properties for all routes by requiring, for any device interacting with a link, that the device either has an odd number of active connections and is assigned a relevant task (i.e., it is a source or sink) or has an even number of active connections transmitting the data (i.e., it is uninvolved or the data flows through it). This logic is formulated on a per-link basis to ensure a linear constraint.
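The parity logic of the flow constraint can be illustrated with a small check (illustrative only; device and route names are hypothetical): endpoints of a link must touch an odd number of active connections, while every other device must touch an even number.

```python
from collections import Counter

def flow_ok(active_routes, src_dev, dst_dev):
    """Parity check for one link's routing.

    active_routes: list of (device_a, device_b) connections carrying the link.
    src_dev, dst_dev: devices hosting the link's source and destination tasks.
    """
    degree = Counter()
    for a, b in active_routes:
        degree[a] += 1
        degree[b] += 1
    devices = set(degree) | {src_dev, dst_dev}
    # Endpoints need odd degree; pass-through or uninvolved devices need even degree.
    return all(
        degree[d] % 2 == (1 if d in (src_dev, dst_dev) else 0)
        for d in devices
    )

# A valid two-hop path through a switch, and an incomplete path that ends mid-route.
print(flow_ok([("cpu0", "switch"), ("switch", "gpu0")], "cpu0", "gpu0"))  # True
print(flow_ok([("cpu0", "switch")], "cpu0", "gpu0"))                      # False
```

In the patent's formulation this parity condition is expressed as a linear constraint per link; the function above just mirrors the underlying counting argument.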
  • Consistency requires that the assignment variables represent a physically realizable system—devices cannot connect to non-existent devices, tasks cannot send or receive data from inactive tasks, and so forth.
  • Viability ensures that the synthesized graphs support the requirements of all constituent elements, while still respecting the assignment and routing constraints described above. For instance, any generated hardware pseudograph must provide the necessary resources to support execution of the software multigraph; devices must have sufficient resources to support task execution, and the connections between devices must provide enough bandwidth for transferring data between tasks. Viability addresses only local concerns in generating graphs (e.g. ensuring devices can connect to one another, tasks have required resources and data inputs, etc.), deferring the treatment of systemic functionality to later constraints.
  • the system may also optimize the system design by formulating and using an objective function that defines the quality of a system design, such that system configurations that include desirable qualities or properties are favored over those that do not.
  • Any now or hereafter known objective function may be used.
  • an objective function may be formulated to favor properties such as reducing or minimizing resource consumption, balancing load across system, reducing or minimizing energy consumption, or the like.
  • an objective function may be formulated to reduce or minimize the cost of assigning a task to a device, f_task(τ_t, δ_d), and the cost of assigning a link to a route, f_link(λ_l, κ_k).
  • the system may define both functions as the fractional utilization of resources available, and define a weighting parameter expressing the relative importance of device utilization versus link utilization, as follows:
  • the system may optimize the system design (by providing an optimal mapping between the software graph and the hardware graph) using the following objective function:
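The objective function itself is not reproduced in this text. As a sketch of the idea only (assumed form and hypothetical costs, not the patent's equation), the following brute-force minimizer searches all task placements for the one minimizing a weighted sum of task-assignment and link-assignment costs:

```python
from itertools import product

tasks, devices = ["a", "b"], ["d0", "d1"]
# f_task(task, device): fractional utilization cost of running a task on a device.
f_task = {("a", "d0"): 0.5, ("a", "d1"): 0.2,
          ("b", "d0"): 0.3, ("b", "d1"): 0.8}
beta = 1.0  # hypothetical relative weighting of link vs. device utilization

def link_cost(assign):
    # Single link a -> b: free if co-located (on-device transport), else 0.4.
    return 0.0 if assign["a"] == assign["b"] else 0.4

# Enumerate every assignment of tasks to devices and keep the cheapest.
best = min(
    (dict(zip(tasks, placement))
     for placement in product(devices, repeat=len(tasks))),
    key=lambda assign: sum(f_task[(t, assign[t])] for t in tasks)
        + beta * link_cost(assign),
)
print(best)
```

Exhaustive search is only tractable for toy instances; the disclosure instead poses the problem as a linear integer program over the binary assignment variables, subject to the constraints above.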
  • the system may further help to optimize the system design for reducing or minimizing latency.
  • the system may introduce latency into the objective function model by computing latency across a computational path in the software graph as a function of latency introduced by a task node and latency introduced by a route.
  • the system may, for each task, first define rules for when a message will be emitted on a particular link in the software graph. For example, for an image processing task, the system may define rules specifying when processed images will be sent to, for example, a motion planning system. The system may do so by receiving, from the task nodes, input conditions that will lead to production of an output message. Thus, for each task node, the system may define a function property M_τ(λ, S) for determining whether a given input message received over an incoming link of a task node results in generating a message on a specified outgoing link (for example, an image processing task may generate a processed image output every time a LIDAR image is received).
  • the system may identify an average output period, with a variation σ_o, indicative of a periodic rate of message production at an outgoing link from the task node (for example, a sensor data publishing task may output a message every 5 seconds).
  • the system may model when a task receives a message because M_τ(λ, S) links an incoming message to the production of an output message.
  • the system may provide a mapping of input to output messages because the rate at which a task receives messages is the sum of the rates of preceding node publications that match the rule.
  • the latency induced by a task node when processing a message can then be modeled using the rules regarding input and output message timings from the task node, as determined above.
  • the system may model the latency using queueing theory.
  • the processing time may be modeled as the response from a server in a G/G/1 queue. This modeling may assume that the system is stationary at design time, since implementing this in the design time optimization problem enforces that the network cannot be overloaded (i.e., assuming steady state of some upper bound of the operating case at design time).
  • the expected time for an output message can be estimated using Kingman's formula, which requires knowledge of the average time to process a message, p; the average input rate, i; and the standard deviations of these two quantities, σ_p and σ_i. Therefore, the latency induced by a message passing through a task node can be estimated as:
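The referenced equation is not reproduced in this text. As a sketch, the standard Kingman approximation for mean waiting time in a G/G/1 queue can be written in terms of the parameters named above (note one assumption: the function below takes the mean inter-arrival time rather than the input rate i, and the total node latency is taken as waiting time plus service time):

```python
def kingman_wait(p, sigma_p, inter, sigma_inter):
    """Approximate mean queueing delay before service in a G/G/1 queue.

    p, sigma_p:         mean and standard deviation of service (processing) time
    inter, sigma_inter: mean and standard deviation of inter-arrival time
    """
    rho = p / inter  # server utilization; must be < 1 at steady state
    assert rho < 1, "queue would be overloaded at steady state"
    ca2 = (sigma_inter / inter) ** 2  # squared coefficient of variation, arrivals
    cs2 = (sigma_p / p) ** 2          # squared coefficient of variation, service
    return (rho / (1 - rho)) * ((ca2 + cs2) / 2) * p

# Latency induced by the task node ~ queueing delay + service time.
wait = kingman_wait(p=1.0, sigma_p=1.0, inter=2.0, sigma_inter=2.0)
print(wait + 1.0)  # 2.0 for this exponential-like (M/M/1) example
```

This matches the disclosure's stationarity assumption: the approximation is only valid when the node is not overloaded (utilization below one).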
  • These parameters may be estimated based on checkpoint data and/or internal instrumentation corresponding to a robotic system.
  • the robotic system may validate and/or tune the parameters by comparing actual measured latency against modeled latency.
  • Latency across a route can be estimated through a variety of means. For example, for design-time estimation, a model that can estimate the expected latency for transmitting data over a particular link may be used. As a first-order approximation, the system may model this latency as a steady state of some upper bound of the operating case, at the expense of being able to model traffic bursts. For example, the latency across a single route may be modeled as a function of five parameters: the number of messages currently assigned to that route, the average size of those messages, the standard deviation of those message sizes, the number of messages an additional link would add, and the size of the assigned messages. This may be represented by L_κ(λ_new, λ⃗_assigned).
  • the system may compute the latency of any computational path through the software graph, given the hardware graph and a mapping between them, using the task node latencies and route latencies in the computation path. Specifically, starting with some initial output message λ_{1,l} and a destination task τ_f, the system can define the end-to-end latency.
  • the system may, for P(λ, τ, S), which is the function producing the connected path of links in the software graph connecting λ to τ using the task-defined transfer functions M_τ, define latency as follows:
  • given λ representative of the computational path and a vector of latency requirements l_r = ⟨t_1, . . . , t_n⟩, the system may then introduce an end-to-end latency constraint of the form: L(λ_i, τ, S) ≤ t, for t ∈ l_r.
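Assuming the additive model implied above (end-to-end latency as the sum of the latency induced at each task node and across each route hop), a sketch with purely illustrative names and values:

```python
def end_to_end_latency(path, node_latency, route_latency):
    """Sum latencies along a computational path.

    path: alternating list [task, route, task, route, ..., task].
    """
    total = 0.0
    for i, hop in enumerate(path):
        # Even positions are task nodes, odd positions are route hops.
        total += node_latency[hop] if i % 2 == 0 else route_latency[hop]
    return total

# Hypothetical per-node and per-route latencies, in seconds.
node_latency = {"camera_driver": 0.002, "detector": 0.030, "planner": 0.010}
route_latency = {"eth0": 0.001, "shm": 0.0001}

lat = end_to_end_latency(
    ["camera_driver", "eth0", "detector", "shm", "planner"],
    node_latency, route_latency,
)
print(lat <= 0.05)  # True: within a hypothetical 50 ms budget
```

An end-to-end latency constraint of the form above would then simply require this sum to stay below the corresponding entry of the requirement vector l_r.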
  • the system may, optionally, update the optimization function of step 210 by adding an additional term to the objective function based on a set of end-to-end latencies, in order to optimize these latencies as well.
  • the system can define a second set of latencies in the same way as the constraint form, which can include the constrained latencies, or any other set of end-to-end latencies possible in the system.
  • Such an updated optimization function can be represented as:
  • the system may optimize all link latencies directly because the structure of the software graph is fixed, and the latencies induced by computation through tasks may not vary as a function of the assignment.
  • the system may, therefore, optimize latencies across the entire robotic system instead of certain specific and/or critical end-to-end latencies in an efficient manner. This would result in an objective function of the following form, while the constraints would remain unchanged:
  • the system may output a system design or architecture for configuring a robotic system that may be capable of performing one or more software functions such as sensor data processing, prediction, motion planning, or the like.
  • the above latency optimization may also be used as a validation step for an already designed system. Specifically, given a designed system, the above disclosure may be used to validate that the system meets certain latency requirements and/or estimate actual end-to-end latency. For performing such validation, the system may evaluate the above latency equations without the rest of the system optimization framework because the assignment variables (e.g., variables in the above equations denoted by α) are binary (can only take a value of 0 or 1), and the optimization function is a linear integer program.
  • plugging in the pre-determined values may result in direct equations that can be evaluated for latency evaluation and estimation.
  • the current disclosure solves the equation below for all λ values (second term on the right-hand side) during system synthesis, and plugging in the values for α obtained during system synthesis into this equation will provide the end-to-end latency for a specified path.
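The evaluation described above (fixing the binary assignment variables and summing task latencies and route latencies along a path) can be sketched in Python. This is an illustrative reading, not the patent's implementation; the function names, the linear placeholder for the route-latency model, and the container shapes are our own assumptions:

```python
# Hypothetical sketch of end-to-end latency evaluation for a fixed assignment.
# All names here are illustrative, not from the patent.

def route_latency(n_assigned, mean_size, std_size, n_new, new_size):
    """Steady-state upper-bound model of latency across a single route as a
    function of the five parameters described above. The linear form below is
    only a placeholder; any monotone model could be substituted."""
    return (n_assigned + n_new) * (mean_size + 2.0 * std_size + new_size) * 1e-6

def end_to_end_latency(path, task_latency, link_route, route_lat):
    """Sum task latencies and route latencies along a computational path.

    path:         alternating task ids and link ids, e.g. ["t1","l1","t2"]
    task_latency: task id -> seconds of compute latency
    link_route:   link id -> list of route ids the link is assigned to
                  (the solved binary assignment, listed as route ids)
    route_lat:    route id -> seconds of transmission latency
    """
    total = 0.0
    for element in path:
        if element in task_latency:
            total += task_latency[element]
        else:  # a link: add the latency of every route it traverses
            total += sum(route_lat[r] for r in link_route[element])
    return total
```

Because the assignment variables are binary and already solved, no optimization machinery is needed here; the path latency reduces to a direct sum, which is what makes the validation use case cheap.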
  • FIG. 5 is a block diagram that depicts example hardware elements that may be included in any of the electronic components of the system, such as internal processing systems of a robotic device or computing device, or remote servers (such as that associated with a design server).
  • An electrical bus 500 serves as an information highway interconnecting the other illustrated components of the hardware.
  • Processor 505 is a central processing device of the system, configured to perform calculations and logic operations required to execute programming instructions.
  • the terms “processor” and “processing device” may refer to a single processor or any number of processors in a set of processors that collectively perform a set of operations, such as a central processing unit (CPU), a graphics processing unit (GPU), a remote server, or a combination of these.
  • Read-only memory (ROM), random access memory (RAM), flash memory, hard drives, and other devices capable of storing electronic data constitute examples of memory devices 525.
  • a memory device may include a single device or a collection of devices across which data and/or instructions are stored.
  • Various embodiments of the invention may include a computer-readable medium containing programming instructions that are configured to cause one or more processors and/or devices to perform the functions described in the context of the previous figures.
  • An optional display interface 530 may permit information from the bus 500 to be displayed on a display device 535 in visual, graphic or alphanumeric format, such as on an in-dashboard display system of a vehicle.
  • An audio interface and audio output (such as a speaker) also may be provided.
  • Communication with external devices may occur using various communication devices 540 such as a wireless antenna, a radio frequency identification (RFID) tag and/or a short-range or near-field communication transceiver, each of which may optionally communicatively connect with other components of the device via one or more communication systems.
  • the communication device(s) 540 may be configured to be communicatively connected to a communications network, such as the Internet, a local area network or a cellular telephone data network.
  • the hardware may also include a user interface sensor 545 that allows for receipt of data (such as node and connection definition data) from input devices 550 such as a keyboard or keypad, a joystick, a touchscreen, a touch pad, a remote control, a pointing device and/or microphone.
  • the system also may include sensors that the system uses to detect actors in the environment.
  • the sensed data may include digital image frames received from a camera 520 that can capture video and/or still images.
  • the system also may receive data from a LiDAR system 560 such as that described earlier in this document.
  • the system also may receive data from a motion and/or position sensor 570 such as an accelerometer, gyroscope or inertial measurement unit.
  • Terminology that is relevant to the disclosure provided above includes:
  • A "task" refers to the smallest portion of a software program that an operating system (OS) can switch between a waiting or blocked state and a running state, in which the task executes on a particular hardware resource.
  • Tasks can come from a single software program or a plurality of software programs and typically include portions of the OS.
  • An “automated device” or “robotic device” refers to an electronic device that includes a processor, programming instructions, and one or more physical hardware components that, in response to commands from the processor, can move with minimal or no human intervention. Through such movement, a robotic device may perform one or more automatic functions or function sets. Examples of such operations, functions or tasks may include, without limitation, operating wheels or propellers to effectuate driving, flying or other transportation actions, operating robotic lifts for loading, unloading, medical-related processes, construction-related processes, and/or the like.
  • Example automated devices may include, without limitation, autonomous vehicles, drones and other autonomous robotic devices.
  • A "vehicle" refers to any moving form of conveyance that is capable of carrying one or more human occupants and/or cargo and is powered by any form of energy.
  • vehicle includes, but is not limited to, cars, trucks, vans, trains, autonomous vehicles, aircraft, aerial drones and the like.
  • An “autonomous vehicle” is a vehicle having a processor, programming instructions and drivetrain components that are controllable by the processor without requiring a human operator.
  • An autonomous vehicle may be fully autonomous in that it does not require a human operator for most or all driving conditions and functions, or it may be semi-autonomous in that a human operator may be required in certain conditions or for certain operations, or that a human operator may override the vehicle's autonomous system and may take control of the vehicle.
  • Autonomous vehicles also include vehicles in which autonomous systems augment human operation of the vehicle, such as vehicles with driver-assisted steering, speed control, braking, parking and other systems.
  • An “electronic device” or a “computing device” refers to a device that includes a processor and memory. Each device may have its own processor and/or memory, or the processor and/or memory may be shared with other devices as in a virtual machine or container arrangement.
  • the memory will contain or receive programming instructions that, when executed by the processor, cause the electronic device to perform one or more operations according to the programming instructions.
  • The terms "memory," "memory device," "data store," and "data storage facility" each refer to a non-transitory device on which computer-readable data, programming instructions or both are stored. Except where specifically stated otherwise, the terms “memory,” “memory device,” “data store,” “data storage facility” and the like are intended to include single device embodiments, embodiments in which multiple memory devices together or collectively store a set of data or instructions, as well as individual sectors within such devices.
  • The terms "processor" and “processing device” refer to a hardware component of an electronic device that is configured to execute programming instructions. Except where specifically stated otherwise, the singular term “processor” or “processing device” is intended to include both single-processing-device embodiments and embodiments in which multiple processing devices together or collectively perform a process.
  • The terms "communication link" and “communication path” mean a wired or wireless path via which a first device sends communication signals to and/or receives communication signals from one or more other devices.
  • Devices are “communicatively connected” or “communicatively coupled” if the devices are able to send and/or receive data via a communication link.
  • Electrical communication refers to the transmission of data via one or more signals between two or more electronic devices, whether through a wired or wireless network, and whether directly or indirectly via one or more intermediary devices.


Abstract

Systems and methods for designing a robotic system architecture are disclosed. The methods include defining a software graph including a first plurality of nodes representing computational tasks and a first plurality of edges representative of data flow between the tasks, and defining a hardware graph including a second plurality of nodes representing hardware components and a second plurality of edges. The methods may include mapping the software graph to the hardware graph, modeling a latency associated with a computational path included in the software graph for the mapping between the software graph and the hardware graph, allocating a plurality of computational tasks in the computational path to a plurality of the hardware components to yield a robotic system architecture using the latency, and using the robotic system architecture to configure the robotic device to be capable of performing functions corresponding to the software graph.

Description

BACKGROUND
Real-time computing systems such as robotic systems (e.g., autonomous vehicles) need to be responsive. Such systems have a set of deadlines, and missing a deadline is considered a system failure and/or degrades the quality of service. In such systems, a delay of less than a millisecond can, in some situations, cause a failure. Many applications of real-time computing are also low-latency applications (e.g., autonomous vehicles must be reactive to sudden changes in the environment). Moreover, a robotic system such as an autonomous vehicle performs several successive operations on sensor data with the output of one operation used as the input of another operation (e.g., pipeline stages). As such, real-time computer systems and applications often require low latency. Therefore, for a system design to meet the needs of such robotic systems, the core software and hardware components must not interfere with the requirements of real-time computing.
The design and organization of such complex robotic systems traditionally requires laborious trial-and-error processes to ensure both hardware and software components are correctly connected with the resources necessary for computation. Optimizing system properties such as latency often becomes intractable due to the number of components involved (compute hardware, communication networks, communication protocols, software allocation, etc.). Current approaches for optimizing system performance primarily address resource allocation issues (e.g., memory usage, bandwidth utilization), but not latency. However, as discussed above, latency is a critical factor for real-time systems, both for improving system performance and for meeting important safety goals such as ensuring system reaction times are sufficient.
This document describes methods and systems that are directed to addressing the problems described above, and/or other issues.
SUMMARY
In various scenarios, systems and methods for designing a robotic system architecture are disclosed. The systems may include a processor and non-transitory computer readable medium including programming instructions that can be executed by the processor to perform the methods of this disclosure. The system may define a software graph and a hardware graph. The software graph may include a first plurality of nodes representing discrete computational tasks, and a first plurality of edges representative of data flow between the first plurality of tasks. The hardware graph may include a second plurality of nodes representing hardware components of a robotic device, and a second plurality of edges representative of communication links between the second plurality of nodes. The system may map the software graph to the hardware graph by assigning each of the first plurality of nodes to one of the second plurality of nodes and each of the first plurality of edges to one or more of the second plurality of edges. For the mapping, the system may then model a latency associated with a computational path included in the software graph, and allocate (based on the latency) a plurality of computational tasks in the computational path to a plurality of the hardware components to yield a robotic system architecture. The system may use the robotic system architecture to configure the robotic device to be capable of performing functions corresponding to the software graph. Optionally, the software graph may be a multigraph and the hardware graph may be a pseudograph.
In various implementations, the system may also update the mapping of the software graph to the hardware graph by defining an objective function comprising a first cost of assigning a discrete computational task to a hardware component and a second cost of assigning a data flow to a communication link. Optionally, the system may minimize the objective function using a plurality of constraints, the plurality of constraints comprising at least one of the following: an atomicity constraint, a bandwidth constraint, a resource constraint, or a flow constraint. Additionally and/or alternatively, the system may minimize the objective function using the latency associated with the computational path included in the software multigraph.
In certain implementations, the system may model a system latency across all computational paths included in the robotic system, and then generate the robotic system architecture by allocating a plurality of computational tasks across all computational paths included in the robotic system to a plurality of hardware components for improving the performance of the robotic system architecture using the system latency.
In some implementations, the latency may be modeled as a function of a first latency introduced by the hardware components and a second latency introduced by the communication links. The second latency for a communication link may be modeled as a function of, for example, a number of messages currently assigned to that communication link, an average size of the currently assigned messages, a standard deviation of sizes of the currently assigned messages, a number of messages an additional communications link will add, and/or a size of the currently assigned messages. The first latency for a hardware component may be modeled as a function of input and output message timings from that hardware component. Optionally, one or more parameters for modeling the first latency may be determined based on checkpoint data corresponding to the robotic device.
The system may, in certain implementations, in the software graph: associate with each of the first plurality of nodes a type and an amount of resources required to execute the task associated with that node, and associate with each of the first plurality of edges an amount of required bandwidth. Additionally and/or alternatively, the system may, in the hardware graph, associate with each of the second plurality of nodes a type and an amount of resources provided by that node, and associate with each of the second plurality of edges an amount of available bandwidth.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic diagram illustrating an example system for designing a robotic system architecture.
FIG. 2 is a flowchart illustrating an example method for designing a robotic system architecture.
FIG. 3 illustrates an example hardware pseudograph.
FIG. 4 illustrates an example software multigraph.
FIG. 5 illustrates example elements of an autonomous vehicle and/or external electronic device.
DETAILED DESCRIPTION
As used in this document, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. Unless defined otherwise, all technical and scientific terms used herein have the same meanings as commonly understood by one of ordinary skill in the art. As used in this document, the term “comprising” means “including, but not limited to.” Definitions for additional terms that are relevant to this document are included at the end of this Detailed Description.
As noted in the Background section above, real-time tasks (for example, in robotic systems) are tasks that have a predefined maximum threshold for the delay between requesting a service from the robotic system and having the request fulfilled. Failure to service the real-time task within the threshold can cause serious failure of the task and/or of systems managed by the real-time task. For robotic systems, latency may refer to a time lag between receipt of an input and production of an output based on the received input. For example, latency may be the time from when the first sensor inputs occur that would produce a particular response to when the system actually produces that response (e.g., the time period between photons striking a camera sensor and the time when navigation of an autonomous vehicle is controlled based on the camera detection). Production of such an output may include assignment of a hardware resource (e.g., a processor) for task execution, exclusive access to a resource, and the like. Latency typically includes other more specific latencies well known to those of skill in the art, such as scheduling latency, task switching latency, and the like.
Embedded software architectures or platforms (middleware and real-time operating systems) used in robots may be statically configured at design time with a fixed set of operating system tasks. In such robotic systems, some or all tasks that will execute on a given computing hardware may be allocated at runtime using, for example, a configuration file that specifies which tasks execute in which locations. Optionally, the assignment may be performed during compilation and/or linking. The current disclosure describes methods and systems for automatically constructing optimal robotic hardware and software systems from a set of available components, using queueing networks that optimize against key latency requirements. The disclosure provides a unified framework for enabling a wide array of capabilities in the design, development, and operation of robot systems. For design-time operation, the systems and methods of this disclosure automate the process of selecting components to build a complete system with some user-defined functionality, as well as generate the structure that relates all elements (e.g., connecting hardware, assigning tasks, and routing communications). By automating this stage, designs can be made more quickly and with more confidence in the validity of a given solution, since the entire problem is solved simultaneously. This also enables more robust consideration of system resiliency in design, since the impact of small changes in component parameterization (e.g., how much computing resources a particular task needs, the size of a particular message, the cost of using a particular sensor, etc.) can immediately be propagated to the global system design. The proposed systems and methods also help to ensure that the designed robotic system meets specified latency budgets, while helping to optimize resource utilization across the system (including latency).
Furthermore, by automating the process, system resiliency can be extended by solving the design problem in an online setting, through the same process of generating solutions in response to local changes. For example, changes in the environmental context (e.g., transitioning from indoor to outdoor operation) can require different capabilities (e.g., using GPS for localization), changes in software performance (e.g., a task consuming more resources than anticipated) and changes in hardware components (e.g., computer failure) can require a reallocation of software tasks through the system. The ability to automatically synthesize a novel system capturing these requirements will allow for complex robotic systems that are more efficient and resilient by design.
Referring now to FIG. 1, a schematic diagram of an example system for designing a robotic system architecture is illustrated. “System design” or “system architecture design” refers to a process of defining the components (e.g., hardware and software architectures, interfaces, and data) and the interrelationships of the components of a system that is capable of executing a set of computational tasks with a user-defined set of functionality; and/or guidelines for implementing and building such a system. System architecture, as used herein, refers to a set of rules that defines the interconnectivity for a robotic system's hardware components as well as the mode of data transfer and processing exhibited by the robotic system to execute a software. The system 100 may design the architecture for a robotic system based on, for example, various robotic system components, task constraints, constraints provided by a user, or the like, in accordance with the methods described below. System 100 may include a plurality of clients 102 a-n, a robotic system 110, and a design server 103 coupled to each other via one or more networks 104.
Clients 102 a-n represent any suitable local or remote end-user device that facilitates communication between users and the design server 103. Clients 102 a-n may be configured to receive information regarding the design of requested robotic system 110. The received information may include target values for design criteria of the robotic system, where design criteria may refer to robotic system context, constraints, standards, objectives, goals, and/or requirements for the implementation, performance, operation, integration, cost, and/or maintenance of the robotic system 110.
As shown in FIG. 1, a robotic system 110 includes a plurality of hardware components 116. The hardware components 116 may include, for example, processing units and memory 132 accessible to the processing units as well as logical access to peripherals of a robotic system 110. The processing units may include, for example, one or more central processing units (CPUs) 120, one or more graphical processing units (GPUs) 124, one or more digital signal processors (DSPs) 128, other processors 130 (e.g., field programmable gate arrays (FPGAs), FPGA-like devices, and/or reconfigurable instruction cell array (RICA) devices), or a combination thereof. One or more of the processing units may include more than one processing core. For example, each of the one or more CPUs 120 may include multiple processing cores 122. As another example, each of the one or more GPUs 124 may include multiple processing cores 126.
The robotic system 110 may also include a plurality of software resources 118. The software resources 118 may include, for example, sensor data processing applications, motion planning applications, graphical programming applications, machine learning applications, or the like. The hardware components 116 and the software resources 118 may be accessible to a task planner 114(a) (where tasks are described in and assigned process dependent parameters in accordance with a system design) and a scheduler 114(b) (where tasks are scheduled in accordance with a system design) of the robotic system 110. Information about the robotic system 110 is provided to the design server 103.
Referring now to FIG. 2, a flowchart illustrating an example method for designing a robotic system architecture is shown. The method can be performed at design time, in real time, or at any suitable time. The method can be performed a predetermined number of times for a robotic system, iteratively performed at a predetermined frequency for a robotic system, or performed at any suitable time. Multiple instances of the method can be concurrently performed for multiple concurrent robotic systems. However, any suitable number of method instances can be performed at any suitable time. The method described in FIG. 2 a can be practiced by a system such as but not limited to the one described with respect to FIG. 1. While method 200 describes a series of operations that are performed in a sequence, it is to be understood that method 200 is not limited by the order of the sequence depicted. For instance, some operations may occur in a different order than that described. In addition, one operation may occur concurrently with another operation. In some instances, not all operations described are performed.
At 202, the system may receive information relating to software that will be executed by a robotic system. As used herein, "software" refers to a set of instructions, data, or programs used to execute specific tasks or processes, where a task or process is any discrete computational process that has an input and an output. Tasks may consume computational resources, process and publish data, and interface with sensors and actuators; therefore a hardware component is required to provide these resources for task execution. Examples of such software in a robotic system may include, without limitation, path planner software, path tracker software, localization software, obstacle mapper software, operator GUI software, sensor interface software, or the like.
At 204, the system may receive information relating to hardware components of a robotic system. The hardware components of the robotic system can provide computational resources (such as available processing power, RAM, disk storage, or logical access to particular peripherals) and connections to other devices/systems for executing the tasks of a software. The hardware components may be heterogeneous such that not every device may provide every resource.
At 206, the system may generate a hardware graph corresponding to the hardware components of the robotic system and a software graph corresponding to the software. A graph is a combinatorial structure consisting of nodes and edges, where each edge links a pair of nodes; the two nodes linked by an edge are called its endpoints. Specifically, a graph G = (V, E) is defined by a set of nodes V = {v_i | i = 1, . . . , I} and a multiset of edges E = {e_i = {v_i, v_j} | v_i, v_j ∈ V} which connect pairs of nodes. The vertices participating in edge e_i are indexed in the form e_{i,n} | n = 1, 2; an edge is known as a loop if e_{i,1} = e_{i,2}. Restrictions on the set of edges E define several important classes of graphs: a simple graph possesses a set of edges E in which no edge is a loop; a multigraph possesses a multiset E with no loops; and a pseudograph possesses a multiset E, possibly with loops. Additionally, if the order of vertices in edges is fixed, this defines a directed graph; otherwise the graph is referred to as an undirected graph.
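The distinction between the three graph classes above can be illustrated with a short helper that inspects an edge multiset for loops and repeated edges. This is a hypothetical sketch; the function name and edge representation are ours, not the patent's:

```python
# Illustrative helper mirroring the graph classes defined above: a simple
# graph has no loops and no repeated edges, a multigraph allows repeats but
# no loops, and a pseudograph allows both.

def classify_graph(edges):
    """edges: list of 2-tuples (v_i, v_j); order ignored (undirected)."""
    normalized = [tuple(sorted(e)) for e in edges]
    has_loop = any(a == b for a, b in normalized)
    has_repeat = len(set(normalized)) < len(normalized)
    if has_loop:
        return "pseudograph"
    if has_repeat:
        return "multigraph"
    return "simple graph"
```

Under these definitions, the hardware graph of FIG. 3 (self-loops allowed for internal communication) would classify as a pseudograph, while the software graph of FIG. 4 (parallel links allowed, no self-loops) would classify as a multigraph.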
The graph generated for the software may be a multigraph, where nodes and edges are referred to as tasks and links (as shown in FIG. 4). The multigraph may be directed, with the first element in each edge representing the source and the second representing the destination. In particular, the system defines a graph that includes a plurality of nodes representing discrete computational elements (i.e., tasks or processes) of a software routine that will perform the computational work, and a plurality of edges connecting the nodes. Such a multigraph may be represented by the equation S = {T, Λ}, where:
Nodes are represented as: T = {τ_1, . . . , τ_|T|};
Edges are represented as: Λ = {λ_1, . . . , λ_|Λ|}, and model the links that transmit data necessary for operation and the output of computation used by other tasks; and
Data flow between the nodes is represented as: λ_l = {τ_i, τ_o} | τ_i ≠ τ_o.
Unlike the hardware pseudograph, the edge multiset Λ does not contain self-loops, since tasks have internal access to generated data.
The graph generated for the hardware components may be a pseudograph (shown in FIG. 3) including a set of nodes representing the hardware components that can provide computational resources, and a set of undirected edges representing routes associated with connections between the hardware components that are capable of transmitting data associated with the software. Such a pseudograph may be represented by the equation H = {Π, Γ}, where:
Nodes are represented as: Π = {π_1, . . . , π_|Π|};
Edges are represented as: Γ={γ1, . . . , γ|Γ|}; and
Data transmission connections between the hardware components are represented as:
γk={πi, π0}.
This allows multiple connections between devices, since devices can be connected to each other via differing physical transports (e.g. both USB and Ethernet) or various forms of internal communication (e.g. shared memory or a loopback interface).
The system may then associate information and/or properties with the nodes and edges of the software and hardware graphs. For example, the resources required for a particular task may vary between devices (e.g., due to hardware specialization affecting the resources consumed). Therefore, a given task τ may require a specific type and amount of resources to execute, which may vary depending on the hardware component executing the task, both in type (e.g., memory, CPU cycles, access to specific hardware) and magnitude (e.g., number of bytes, number of CPU cycles, etc.). The system, therefore, also associates with each task node in the software graph the amount of a particular resource required to execute the task on a particular hardware component. This may be represented as an R-dimensional vector c⃗_π^τ = ⟨c_1, . . . , c_R⟩. Similarly, each hardware component in the hardware graph provides a certain type of resource in a certain amount, which may be associated with the hardware component node as an R-dimensional vector r⃗_π = ⟨r_1, . . . , r_R⟩ that is available to support task execution.
For edges in the hardware graph and the software graph: each route (i.e., hardware graph edge) provides a certain amount of bandwidth b_{γ_k} to support communication of information. Specifically, connections provide finite bandwidth for transmitting data, resulting in a set of bandwidth limits, which is a property of the hardware graph edges. Similarly, each link consumes a certain amount of bandwidth d_{γ_k}^{λ_l} when traversing the specified route. These parameters are associated with the edges in the hardware graph and the software graph, respectively.
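The annotated graphs described above (resource vectors on nodes, bandwidth demands and capacities on edges) might be represented as follows. This is a hypothetical sketch under our own naming; the patent does not prescribe any particular data structures:

```python
# Minimal data-structure sketch of the annotated graphs: each task node
# carries per-device resource-requirement vectors c, each device node a
# resource-supply vector r, each link a bandwidth demand d, and each route
# a bandwidth capacity b. Names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    # device name -> R-dimensional requirement vector <c_1, ..., c_R>
    requires: dict = field(default_factory=dict)

@dataclass
class Device:
    name: str
    provides: tuple = ()   # R-dimensional supply vector <r_1, ..., r_R>

@dataclass
class Link:                # software-graph edge (no self-loops allowed)
    src: str
    dst: str
    demand: float          # bandwidth consumed, d

@dataclass
class Route:               # hardware-graph edge (self-loops allowed)
    a: str
    b: str
    capacity: float        # bandwidth provided, b

def make_link(src, dst, demand):
    """Construct a software link, enforcing the no-self-loop rule above."""
    if src == dst:
        raise ValueError("software links may not be self-loops")
    return Link(src, dst, demand)
```

Note the asymmetry carried over from the graph definitions: a Route may legitimately connect a device to itself (shared memory, loopback), while make_link rejects self-loops because tasks have internal access to their own data.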
At 208, the system may map the software graph to the hardware graph in order to define a system design or configuration. This may be considered an assignment problem that requires assignment of tasks to devices and assignment of links to routes. Specifically, in order to define a complete computational system, there must also exist a mapping α_H^S: S → H, which defines how software executes on the specified hardware. A computational system is, therefore, defined as R = {H, S, α_H^S}, a set containing the hardware graph, the software graph, and the assignment mapping. An assignment variable α_{π_d}^{τ_t} ∈ {0, 1} defines whether task τ_t executes on device π_d, and an assignment variable α_{γ_k}^{λ_l} ∈ {0, 1} defines whether a link λ_l transmits over a route γ_k.
The system may optimize the defined system design (210). In some implementations, the system may then identify a set of constraints, and use the constraints to optimize the system design. Examples of such constraints may include, without limitation, a constraint that a task can be assigned to only a single device (binary and atomicity constraint), a constraint that a link must be assigned to a set of routes which link the two devices on which two tasks are executing (flow constraint), a constraint that a set of tasks assigned to any device must not consume more resources than the device can provide (resource constraint—i.e., consumption of computational resources by tasks cannot over allocate device budgets), a constraint that links may not consume more bandwidth than the assigned hardware connection or route provides (bandwidth constraint), or the like. These constraints may be represented by the following equations:
Atomicity Constraint: $$\sum_{\pi_d \in \Pi} \alpha_{\pi_d}^{\tau_t} = 1 \quad \forall \tau_t \in T$$

Flow Constraint: $$\alpha_{\gamma_{k,1}}^{\lambda_{l,1}} + \sum_{\gamma_j \in E(\gamma_{k,1})} \alpha_{\gamma_j}^{\lambda_l} = \alpha_{\gamma_{k,2}}^{\lambda_{l,2}} + \sum_{\gamma_j \in E(\gamma_{k,2})} \alpha_{\gamma_j}^{\lambda_l} \quad \forall \lambda_l \in \Lambda,\ \gamma_k \in \Gamma$$

Resource Constraint: $$\sum_{\tau_p \in T} \alpha_{\pi_d}^{\tau_p} \vec{c}_{\pi_d}^{\tau_p} \le \vec{r}_{\pi_d} \quad \forall \pi_d \in \Pi$$

Bandwidth Constraint: $$\sum_{\lambda_l \in \Lambda} \alpha_{\gamma_k}^{\lambda_l} d_{\gamma_k}^{\lambda_l} \le b_{\gamma_k} \quad \forall \gamma_k \in \Gamma$$
The bandwidth constraint equation is formulated taking into account that the transfer of data between two connected tasks consumes bandwidth traversing a connection, with link λl consuming dγ k λ l worth of bandwidth over connection γk. The amount of bandwidth consumed can vary due to the differences in connections (e.g., packetized network overhead, requirements on data representations, etc.), requiring the bandwidth utilization to take the connection γk into account as well. A link λl may be assigned to transmit over a connection γk, denoted by the variable αγ k λ l ∈ {0, 1}, which consumes the specified amount of bandwidth dγ k λ l . The bandwidth constraint equation ensures that the assignment of links to connections respects the bandwidth limits specified previously.
Tasks may be assigned to devices that are not directly connected to one another, requiring data routing along multiple connections. Multi-hop paths require routing data along a connected path between the devices assigned to each task. The flow constraint ensures these properties for all routes by stating that any device interacting with a link must either have an odd number of assigned connections and host the relevant task (e.g., act as a source or sink) or have an even number of assigned connections transmitting the data (e.g., be uninvolved or have the data flow through). This logic is formulated on a per-link basis to ensure a linear constraint.
Additional constraints must be introduced to ensure the graph structure produces a system with two key properties: consistency and viability. Consistency requires that the assignment variables represent a physically realizable system—devices cannot connect to non-existent devices, tasks cannot send or receive data from inactive tasks, and so forth. Viability ensures that the synthesized graphs support the requirements of all constituent elements, while still respecting the assignment and routing constraints described above. For instance, any generated hardware pseudograph must provide the necessary resources to support execution of the software multigraph; devices must have sufficient resources to support task execution, and the connections between devices must provide enough bandwidth for transferring data between tasks. Viability addresses only local concerns in generating graphs (e.g. ensuring devices can connect to one another, tasks have required resources and data inputs, etc.), deferring the treatment of systemic functionality to later constraints.
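The atomicity, resource, and bandwidth checks above can be sketched as a feasibility test for a candidate assignment. This is an illustrative simplification (a single scalar resource per device, invented names and numbers), not the disclosed formulation:

```python
# c: resource demand of each (task, device) pairing; r: device capacities.
task_demand = {("detect", "gpu0"): 2048, ("detect", "cpu0"): 4096,
               ("plan", "gpu0"): 1024, ("plan", "cpu0"): 1024}
device_capacity = {"gpu0": 8192, "cpu0": 2048}
link_demand = {"detect->plan": 10.0}   # d: bandwidth a link consumes
route_capacity = {"eth0": 100.0}       # b: bandwidth a route provides

def constraints_ok(task_assign, link_assign):
    # Atomicity: every task appears exactly once in the assignment map.
    if {t for t, _ in task_demand} != set(task_assign):
        return False
    # Resource: total demand placed on each device stays within its capacity.
    for dev, cap in device_capacity.items():
        used = sum(task_demand[(t, dev)] for t, d in task_assign.items() if d == dev)
        if used > cap:
            return False
    # Bandwidth: links routed over a connection stay within its bandwidth limit.
    for route, cap in route_capacity.items():
        used = sum(link_demand[l] for l, r in link_assign.items() if r == route)
        if used > cap:
            return False
    return True

print(constraints_ok({"detect": "gpu0", "plan": "cpu0"}, {"detect->plan": "eth0"}))  # True
print(constraints_ok({"detect": "cpu0", "plan": "cpu0"}, {"detect->plan": "eth0"}))  # False
```

The second call fails because placing both tasks on cpu0 over-allocates its resource budget (4096 + 1024 > 2048), exactly the situation the resource constraint rules out.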
The system may also optimize the system design by formulating and using an objective function that defines the quality of a system design, such that system configurations that include desirable qualities or properties are favored over those that do not. Any now or hereafter known objective function may be used. For example, an objective function may be formulated to favor properties such as reducing or minimizing resource consumption, balancing load across the system, reducing or minimizing energy consumption, or the like. Optionally, an objective function may be formulated to reduce or minimize the cost of assigning a task to a device ($f_{task}(\tau_t, \pi_d)$) and the cost of assigning a link to a route ($f_{link}(\lambda_l, \gamma_k)$). The system may define both functions as the fractional utilization of available resources, and define ω as the relative weighting of device utilization versus link utilization, as follows:
$$\sum_{\tau_t \in T} \sum_{\pi_d \in \Pi} \alpha_{\pi_d}^{\tau_t} f_{task}(\tau_t, \pi_d) + \omega \sum_{\gamma_k \in \Gamma} \sum_{\lambda_l \in \Lambda} \alpha_{\gamma_k}^{\lambda_l} f_{link}(\lambda_l, \gamma_k)$$
This allows for defining a core multi-resource quadratic assignment and routing problem by minimizing the objective function while maintaining the above identified constraints. Specifically, the system may optimize the system design (by providing an optimal mapping between the software graph and the hardware graph) using the following objective function:
$$Z = \min \sum_{\tau_t \in T} \sum_{\pi_d \in \Pi} \alpha_{\pi_d}^{\tau_t} f_{task}(\tau_t, \pi_d) + \omega \sum_{\gamma_k \in \Gamma} \sum_{\lambda_l \in \Lambda} \alpha_{\gamma_k}^{\lambda_l} f_{link}(\lambda_l, \gamma_k)$$
such that the following are satisfied:
$$\sum_{\pi_d \in \Pi} \alpha_{\pi_d}^{\tau_t} = 1 \quad \forall \tau_t \in T$$
$$\alpha_{\gamma_{k,1}}^{\lambda_{l,1}} + \sum_{\gamma_j \in E(\gamma_{k,1})} \alpha_{\gamma_j}^{\lambda_l} = \alpha_{\gamma_{k,2}}^{\lambda_{l,2}} + \sum_{\gamma_j \in E(\gamma_{k,2})} \alpha_{\gamma_j}^{\lambda_l} \quad \forall \lambda_l \in \Lambda,\ \gamma_k \in \Gamma$$
$$\sum_{\tau_p \in T} \alpha_{\pi_d}^{\tau_p} \vec{c}_{\pi_d}^{\tau_p} \le \vec{r}_{\pi_d} \quad \forall \pi_d \in \Pi$$
$$\sum_{\lambda_l \in \Lambda} \alpha_{\gamma_k}^{\lambda_l} d_{\gamma_k}^{\lambda_l} \le b_{\gamma_k} \quad \forall \gamma_k \in \Gamma$$
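On a toy instance, the assignment problem just formulated can be solved by brute force: enumerate every task-to-device map, keep the feasible ones, and pick the map minimizing the fractional-utilization objective. All names and numbers below are invented, and a real instance of this quadratic assignment and routing problem would use an integer-programming solver rather than enumeration:

```python
from itertools import product

cost = {"detect": {"gpu0": 2048, "cpu0": 4096},   # c: demand per (task, device)
        "plan":   {"gpu0": 1024, "cpu0": 1024}}
capacity = {"gpu0": 8192, "cpu0": 2048}           # r: capacity per device

def objective(assign):
    # f_task modeled as the fractional utilization of the chosen device.
    return sum(cost[t][d] / capacity[d] for t, d in assign.items())

def feasible(assign):
    # Resource constraint: per-device demand must not exceed capacity.
    return all(sum(cost[t][dev] for t, d in assign.items() if d == dev) <= cap
               for dev, cap in capacity.items())

candidates = (dict(zip(cost, combo)) for combo in product(capacity, repeat=len(cost)))
best = min((a for a in candidates if feasible(a)), key=objective)
print(best)  # {'detect': 'gpu0', 'plan': 'gpu0'}
```

Here both tasks land on the larger device because the objective rewards low fractional utilization; adding load-balancing or latency terms, as the disclosure goes on to do, changes which assignment wins.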
At 212, the system may further help to optimize the system design for reducing or minimizing latency. Specifically, the system may introduce latency into the objective function model by computing latency across a computational path in the software graph as a function of latency introduced by a task node and latency introduced by a route.
The system may, for each task, first define rules for when a message will be emitted on a particular link in the software graph. For example, for an image processing task, the system may define rules specifying when processed images will be sent to, for example, a motion planning system. The system may do so by receiving, from the task nodes, the input conditions that will lead to production of an output message. Thus, for each task node, the system may define a function property (Mτ(λ, S)) for determining whether a given input message received over an incoming link of the task node results in generating a message on a specified outgoing link (for example, an image processing task may generate a processed image output every time a LIDAR image is received). Additionally and/or alternatively, the system may identify an average output period (ō) with a variation (σo) indicative of a periodic rate of message production at an outgoing link from the task node (for example, a sensor data publishing task may output a message every 5 seconds). Given these rules for when a task emits a message, the system may also model when a task receives a message, because (Mτ(λ, S)) links an incoming message to the production of an output message. For example, for event-based tasks, the system may provide a mapping of input to output messages, because the rate at which a task receives messages is the sum of the rates of the preceding nodes' publications that match the rule.
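These two kinds of emission rules — event-driven (output triggered by a matching input) and periodic (output at a fixed average period) — can be sketched as follows. All task and link names, and the dictionary encoding of Mτ, are invented for illustration:

```python
# Event-driven rules: (task, incoming link) pairs that trigger an output.
emission_rules = {("image_proc", "lidar->image_proc"): "image_proc->planner"}
# Periodic rules: task -> (outgoing link, average output period in seconds).
periodic_rules = {"sensor_pub": ("sensor_pub->image_proc", 5.0)}

def output_rate(task, incoming_rates):
    """Messages/sec a task emits: matched event-driven rates plus periodic rate."""
    event = sum(rate for link, rate in incoming_rates.items()
                if (task, link) in emission_rules)
    periodic = 1.0 / periodic_rules[task][1] if task in periodic_rules else 0.0
    return event + periodic

print(output_rate("image_proc", {"lidar->image_proc": 10.0}))  # 10.0
print(output_rate("sensor_pub", {}))                           # 0.2
```

Chaining `output_rate` along the software graph propagates publication rates downstream, which is exactly the input the queueing model below needs.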
The latency induced by a task node when processing a message can then be modeled using the rules regarding input and output message timings from the task node, as determined above. For example, the system may model the latency using queueing theory. Specifically, the processing time may be modeled as the response from a server in a G/G/1 queue. This modeling may assume that the system is stationary at design time, since implementing this in the design-time optimization problem enforces that the network cannot be overloaded (i.e., assuming steady state of some upper bound of the operating case at design time). The expected time for an output message can be estimated using Kingman's formula, which requires knowledge of the average time to process a message p̄, the average inter-arrival time of input messages ī, and the standard deviations of these two quantities σp, σi. Therefore, the latency induced by a message passing through a task node can be estimated as:
$$L(\tau, \lambda_i) = \frac{\bar{p}^2/\bar{\imath}}{1 - \bar{p}/\bar{\imath}} \left( \frac{c_p^2 + c_i^2}{2} \right), \quad \text{where: } c_p = \frac{\sigma_p}{\bar{p}},\ c_i = \frac{\sigma_i}{\bar{\imath}}$$
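Kingman's approximation is straightforward to evaluate; a sketch, returning the waiting-time term L(τ, λᵢ) above (service time not included), with illustrative timing values:

```python
def kingman_latency(p, i, sigma_p, sigma_i):
    """G/G/1 queueing-delay estimate at a task node (Kingman's formula).

    p: mean processing time; i: mean inter-arrival time of inputs;
    sigma_p, sigma_i: their standard deviations.
    """
    assert p < i, "stationarity requires utilization p/i < 1"
    rho = p / i                       # server utilization
    c_p2 = (sigma_p / p) ** 2         # squared coefficient of variation (service)
    c_i2 = (sigma_i / i) ** 2         # squared coefficient of variation (arrival)
    return (rho / (1.0 - rho)) * ((c_p2 + c_i2) / 2.0) * p

# 10 ms mean processing, messages arriving every 50 ms on average.
print(round(kingman_latency(p=0.010, i=0.050, sigma_p=0.002, sigma_i=0.010), 6))  # 0.0001
```

Note that as ρ = p/i approaches 1 the estimate blows up, which is why the design-time formulation must keep every node below saturation.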
These parameters may be estimated based on checkpoint data and/or internal instrumentation corresponding to a robotic system. For example, the robotic system may validate and/or tune the parameters by comparing actual measured latency against modeled latency.
Latency across a route can be estimated through a variety of means. For example, for design-time estimation, a model that can estimate the expected latency for transmitting data over a particular link may be used. As a first-order approximation, the system may model this latency as a steady state of some upper bound of the operating case, at the expense of being able to model traffic bursts. For example, the latency across a single route may be modeled as a function of five parameters: the number of messages currently assigned to that route, the average size of those messages, the standard deviation of those message sizes, the number of messages an additional link would add, and the size of the assigned messages. This may be represented by: $L_\gamma(\lambda_{new}, \vec{\lambda}_{assigned})$.
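A toy stand-in for such a route-latency model might combine those five parameters linearly. The linear form, the bandwidth figure, and the per-message overhead are assumptions for illustration only, not the disclosed model:

```python
def route_latency(n_assigned, mean_size, std_size, n_new, new_size,
                  bandwidth=1e6, per_message_overhead=1e-4):
    """Rough steady-state transfer latency (seconds) over one route."""
    existing = n_assigned * mean_size       # bytes already on the route
    burst_margin = n_assigned * std_size    # crude allowance for size variance
    new = n_new * new_size                  # bytes the candidate link would add
    total_bytes = existing + burst_margin + new
    # Serialization time plus a fixed per-message cost (e.g., packet overhead).
    return total_bytes / bandwidth + (n_assigned + n_new) * per_message_overhead

print(round(route_latency(n_assigned=3, mean_size=1500, std_size=300,
                          n_new=1, new_size=2000), 6))  # 0.0078
```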
The system may compute the latency of any computational path through the software graph, given the hardware graph and a mapping between them, using the task node latencies and route latencies in the computational path. Specifically, starting with some initial output message λl,1 and a destination task τf, the system can define the end-to-end latency. Optionally, letting P(λ, τ, S) be the function producing the connected path of links in the software graph connecting λ to τ using the task-defined transfer functions Mτ, the system may define latency as follows:
$$L(\lambda_i, \tau, S) = \sum_{\lambda_k \in P(\lambda_{l,1}, \tau_f, S)} \left[ L(\lambda_{k,2}, \lambda_k) + \sum_{\gamma_i \in \Gamma} \alpha_{\gamma_i}^{\lambda_k} L_{\gamma_i}(\lambda_k, \vec{\lambda}_{assigned}) \right]$$
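With a fixed assignment, this end-to-end latency reduces to a sum along the path: for each link, the task-node latency plus the latency of whichever route the link is assigned to. The latency figures, link names, and assignment below are invented for illustration:

```python
node_latency = {"detect->track": 0.004, "track->plan": 0.007}   # L(τ, λ) per link
route_delay = {"eth0": 0.002, "pcie0": 0.0005}                  # L_γ per route
link_route = {"detect->track": "pcie0", "track->plan": "eth0"}  # α: link -> route

def path_latency(path):
    """End-to-end latency of a computational path (list of links, in order)."""
    return sum(node_latency[link] + route_delay[link_route[link]] for link in path)

print(round(path_latency(["detect->track", "track->plan"]), 4))  # 0.0135
```

Because the assignment variables are fixed here, no optimization machinery is needed; this is the same simplification the disclosure later exploits to use the latency model as a validation step.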
The system may integrate this function into the multi-resource quadratic assignment and routing problem described above by augmenting the problem with a set of end-to-end latency requirements, defined by a set of paths $\Theta_r = \{\phi_1 = \langle \lambda, \tau \rangle, \ldots, \phi_{|\Theta_r|}\}$ representative of the computational paths and a vector of latency requirements $\vec{l}_r = \langle t_1, \ldots, t_{|\Theta_r|} \rangle$. The system may then introduce an end-to-end latency constraint of the form:
Li ,τ,S)≤ϕt∀ϕ∈θr
The system may, optionally, update the optimization function of step 210 by adding an additional term to the objective function based on a set of end-to-end latencies, in order to optimize these latencies as well. For example, the system can define a second set of latencies θo in the same way as the constraint form, which can include the constrained latencies, or any other set of end-to-end latencies possible in the system. Such an updated optimization function can be represented as:
$$Z = \min \sum_{\tau_t \in T} \sum_{\pi_d \in \Pi} \alpha_{\pi_d}^{\tau_t} f_{task}(\tau_t, \pi_d) + \omega_\lambda \sum_{\gamma_k \in \Gamma} \sum_{\lambda_l \in \Lambda} \alpha_{\gamma_k}^{\lambda_l} f_{link}(\lambda_l, \gamma_k) + \omega_\phi \sum_{\phi \in \Theta_o} f_{latency}(\phi_\lambda, \phi_\tau, S)\, L(\phi_\lambda, \phi_\tau, S)$$
such that the following are satisfied:
$$\sum_{\pi_d \in \Pi} \alpha_{\pi_d}^{\tau_t} = 1 \quad \forall \tau_t \in T$$
$$\alpha_{\gamma_{k,1}}^{\lambda_{l,1}} + \sum_{\gamma_j \in E(\gamma_{k,1})} \alpha_{\gamma_j}^{\lambda_l} = \alpha_{\gamma_{k,2}}^{\lambda_{l,2}} + \sum_{\gamma_j \in E(\gamma_{k,2})} \alpha_{\gamma_j}^{\lambda_l} \quad \forall \lambda_l \in \Lambda,\ \gamma_k \in \Gamma$$
$$\sum_{\tau_p \in T} \alpha_{\pi_d}^{\tau_p} \vec{c}_{\pi_d}^{\tau_p} \le \vec{r}_{\pi_d} \quad \forall \pi_d \in \Pi$$
$$\sum_{\lambda_l \in \Lambda} \alpha_{\gamma_k}^{\lambda_l} d_{\gamma_k}^{\lambda_l} \le b_{\gamma_k} \quad \forall \gamma_k \in \Gamma$$
$$L(\phi_\lambda, \phi_\tau, S) \le \phi_t \quad \forall \phi \in \Theta_r$$
In some example implementations, the system may optimize all link latencies directly because the structure of the software graph is fixed, and the latencies induced by computation through tasks may not vary as a function of the assignment. The system may, therefore, optimize latencies across the entire robotic system instead of certain specific and/or critical end-to-end latencies in an efficient manner. This would result in an objective function of the following form, while the constraints would remain unchanged:
$$Z = \min \sum_{\tau_t \in T} \sum_{\pi_d \in \Pi} \alpha_{\pi_d}^{\tau_t} f_{task}(\tau_t, \pi_d) + \omega_\lambda \sum_{\gamma_k \in \Gamma} \sum_{\lambda_l \in \Lambda} \alpha_{\gamma_k}^{\lambda_l} \left( f_{link}(\lambda_l, \gamma_k) + \omega_\phi f_{latency}(\lambda_l, \gamma_k) \right)$$
While the description above uses the example application of a robotic device such as an autonomous vehicle, the processes described above are generally applicable to designing a system at compile time for use by any computing device during runtime. Other applications of these methods may include other computing device applications such as programs for scheduling jobs or tasks, encryption techniques and other cryptographic programs, and other applications that include a distributed computation system (e.g., cloud compute systems, computers using CPUs and GPUs, multi-robot systems, etc.).
At 214, the system may output a system design or architecture for configuring a robotic system that may be capable of performing one or more software functions such as sensor data processing, prediction, motion planning, or the like.
It should be noted that while the above disclosure discusses the optimization of latency during a system design process, the disclosure is not so limiting. The above latency optimization may also be used as a validation step for an already designed system. Specifically, given a designed system, the above disclosure may be used to validate that the system meets certain latency requirements and/or estimate actual end-to-end latency. For performing such validation, the system may evaluate the above latency equations without the rest of the system optimization framework because the assignment variables (e.g., the variables in the above equations denoted by α) are binary (can only take a value of 0 or 1), and the optimization function is a linear integer program. As such, for a given system design, plugging in the pre-determined values results in direct equations that can be evaluated for latency evaluation and estimation. For example, the current disclosure solves the equation below for all α values (second term on the right-hand side) during system synthesis, and plugging in the values for α obtained during system synthesis will provide the end-to-end latency for a specified path.
$$L(\lambda_i, \tau, S) = \sum_{\lambda_k \in P(\lambda_{l,1}, \tau_f, S)} \left[ L(\lambda_{k,2}, \lambda_k) + \sum_{\gamma_i \in \Gamma} \alpha_{\gamma_i}^{\lambda_k} L_{\gamma_i}(\lambda_k, \vec{\lambda}_{assigned}) \right]$$
FIG. 5 is a block diagram that depicts example hardware elements that may be included in any of the electronic components of the system, such as internal processing systems of a robotic device or computing device, or remote servers (such as that associated with a design server). An electrical bus 500 serves as an information highway interconnecting the other illustrated components of the hardware. Processor 505 is a central processing device of the system, configured to perform calculations and logic operations required to execute programming instructions. As used in this document and in the claims, the terms “processor” and “processing device” may refer to a single processor or any number of processors in a set of processors that collectively perform a set of operations, such as a central processing unit (CPU), a graphics processing unit (GPU), a remote server, or a combination of these. Read only memory (ROM), random access memory (RAM), flash memory, hard drives and other devices capable of storing electronic data constitute examples of memory devices 525. A memory device may include a single device or a collection of devices across which data and/or instructions are stored. Various embodiments of the invention may include a computer-readable medium containing programming instructions that are configured to cause one or more processors and/or devices to perform the functions described in the context of the previous figures.
An optional display interface 530 may permit information from the bus 500 to be displayed on a display device 535 in visual, graphic or alphanumeric format, such as on an in-dashboard display system of the vehicle. An audio interface and audio output (such as a speaker) also may be provided. Communication with external devices may occur using various communication devices 540 such as a wireless antenna, a radio frequency identification (RFID) tag and/or short-range or near-field communication transceiver, each of which may optionally communicatively connect with other components of the device via one or more communication systems. The communication device(s) 540 may be configured to be communicatively connected to a communications network, such as the Internet, a local area network or a cellular telephone data network.
The hardware may also include a user interface sensor 545 that allows for receipt of data (such as node and connection definition data) from input devices 550 such as a keyboard or keypad, a joystick, a touchscreen, a touch pad, a remote control, a pointing device and/or microphone.
The system also may include sensors that the system uses to detect actors in the environment. The sensed data may include digital image frames received from a camera 520 that can capture video and/or still images. The system also may receive data from a LiDAR system 560 such as that described earlier in this document. The system also may receive data from a motion and/or position sensor 570 such as an accelerometer, gyroscope or inertial measurement unit.
The above-disclosed features and functions, as well as alternatives, may be combined into many other different systems or applications. Various components may be implemented in hardware or software or embedded software. Various presently unforeseen or unanticipated alternatives, modifications, variations or improvements may be made by those skilled in the art, each of which is also intended to be encompassed by the disclosed embodiments.
Terminology that is relevant to the disclosure provided above includes:
As used herein, the term “task” refers to the smallest portion of a software program that an operating system (OS) can switch between a waiting or blocked state and a running state in which the task executes on a particular hardware resource. Tasks can come from a single software program or a plurality of software programs and typically include portions of the OS.
An “automated device” or “robotic device” refers to an electronic device that includes a processor, programming instructions, and one or more physical hardware components that, in response to commands from the processor, can move with minimal or no human intervention. Through such movement, a robotic device may perform one or more automatic functions or function sets. Examples of such operations, functions or tasks may include, without limitation, operating wheels or propellers to effectuate driving, flying or other transportation actions, operating robotic lifts for loading, unloading, medical-related processes, construction-related processes, and/or the like. Example automated devices may include, without limitation, autonomous vehicles, drones and other autonomous robotic devices.
The term “vehicle” refers to any moving form of conveyance that is capable of carrying either one or more human occupants and/or cargo and is powered by any form of energy. The term “vehicle” includes, but is not limited to, cars, trucks, vans, trains, autonomous vehicles, aircraft, aerial drones and the like. An “autonomous vehicle” is a vehicle having a processor, programming instructions and drivetrain components that are controllable by the processor without requiring a human operator. An autonomous vehicle may be fully autonomous in that it does not require a human operator for most or all driving conditions and functions, or it may be semi-autonomous in that a human operator may be required in certain conditions or for certain operations, or that a human operator may override the vehicle's autonomous system and may take control of the vehicle. Autonomous vehicles also include vehicles in which autonomous systems augment human operation of the vehicle, such as vehicles with driver-assisted steering, speed control, braking, parking and other systems.
An “electronic device” or a “computing device” refers to a device that includes a processor and memory. Each device may have its own processor and/or memory, or the processor and/or memory may be shared with other devices as in a virtual machine or container arrangement. The memory will contain or receive programming instructions that, when executed by the processor, cause the electronic device to perform one or more operations according to the programming instructions.
The terms “memory,” “memory device,” “data store,” “data storage facility” and the like each refer to a non-transitory device on which computer-readable data, programming instructions or both are stored. Except where specifically stated otherwise, the terms “memory,” “memory device,” “data store,” “data storage facility” and the like are intended to include single device embodiments, embodiments in which multiple memory devices together or collectively store a set of data or instructions, as well as individual sectors within such devices.
The terms “processor” and “processing device” refer to a hardware component of an electronic device that is configured to execute programming instructions. Except where specifically stated otherwise, the singular term “processor” or “processing device” is intended to include both single-processing device embodiments and embodiments in which multiple processing devices together or collectively perform a process.
In this document, the terms “communication link” and “communication path” mean a wired or wireless path via which a first device sends communication signals to and/or receives communication signals from one or more other devices. Devices are “communicatively connected” or “communicatively coupled” if the devices are able to send and/or receive data via a communication link. “Electronic communication” refers to the transmission of data via one or more signals between two or more electronic devices, whether through a wired or wireless network, and whether directly or indirectly via one or more intermediary devices.
In this document, when relative terms of order such as “first” and “second” are used to modify a noun, such use is simply intended to distinguish one item from another, and is not intended to require a sequential order unless specifically stated.

Claims (24)

The invention claimed is:
1. A method for designing a robotic system architecture, the method comprising, by a processor:
defining a software graph comprising:
a first plurality of nodes representing discrete computational tasks, and
a first plurality of edges representative of data flow between the first plurality of nodes;
defining a hardware graph comprising:
a second plurality of nodes representing hardware components of a robotic device, and
a second plurality of edges representative of communication links between the second plurality of nodes;
mapping the software graph to the hardware graph by assigning each of the first plurality of nodes to one of the second plurality of nodes and each of the first plurality of edges to one or more of the second plurality of edges;
modeling, for the mapping between the software graph and the hardware graph, a latency associated with a computational path included in the software graph;
allocating, using the latency, a plurality of computational tasks in the computational path to a plurality of the hardware components to yield a robotic system architecture; and
using the robotic system architecture to configure the robotic device to be capable of performing functions corresponding to the software graph.
2. The method of claim 1, further comprising updating the mapping of the software graph to the hardware graph by defining an objective function comprising a first cost of assigning a discrete computational task to a hardware component and a second cost of assigning a data flow to a communication link.
3. The method of claim 2, further comprising minimizing the objective function using a plurality of constraints, the plurality of constraints comprising at least one of the following: an atomicity constraint, a bandwidth constraint, a resource constraint, or a flow constraint.
4. The method of claim 2, further comprising minimizing the objective function using the latency associated with the computational path included in the software graph.
5. The method of claim 1, further comprising:
modeling a system latency across all computational paths included in the robotic system; and
generating the robotic system architecture by allocating, using the system latency, a plurality of computational tasks across all computational paths included in the robotic system to a plurality of hardware components for improving the performance of the robotic system architecture.
6. The method of claim 1, wherein the latency is modeled as a function of a first latency introduced by the hardware components and a second latency introduced by the communication links.
7. The method of claim 6, wherein the second latency for a communication link is modeled as a function of the following parameters: a number of messages currently assigned to that communication link, an average size of the currently assigned messages, a standard deviation of sizes of the currently assigned messages, a number of messages an additional communications link will add, and a size of the currently assigned messages.
8. The method of claim 6, wherein the first latency for a hardware component is modeled as a function of input and output message timings from that hardware component.
9. The method of claim 8, wherein one or more parameters for modeling the first latency are determined based on checkpoint data corresponding to the robotic device.
10. The method of claim 1, wherein the software graph is a multigraph and the hardware graph is a pseudograph.
11. The method of claim 1, further comprising, in the software graph:
associating, with each of the first plurality of nodes, a type and an amount of resources required to execute a task associated with that node; and
associating, with each of the first plurality of edges, an amount of required bandwidth.
12. The method of claim 11, further comprising, in the hardware graph:
associating, with each of the second plurality of nodes, a type and an amount of resources provided by that node; and
associating, with each of the second plurality of edges, an amount of available bandwidth.
13. A system for designing a robotic system architecture, the system comprising:
a processor; and
a non-transitory computer readable medium comprising programming instructions, that when executed by the processor, cause the processor to:
define a software graph comprising:
a first plurality of nodes representing discrete computational tasks, and
a first plurality of edges representative of data flow between the first plurality of nodes;
define a hardware graph comprising:
a second plurality of nodes representing hardware components of a robotic device, and
a second plurality of edges representative of communication links between the second plurality of nodes;
map the software graph to the hardware graph by assigning each of the first plurality of nodes to one of the second plurality of nodes and each of the first plurality of edges to one or more of the second plurality of edges;
model, for the mapping between the software graph and the hardware graph, a latency associated with a computational path included in the software graph;
allocate, using the latency, a plurality of computational tasks in the computational path to a plurality of the hardware components to yield a robotic system architecture; and
use the robotic system architecture to configure the robotic device to be capable of performing functions corresponding to the software graph.
14. The system of claim 13, further comprising programming instructions, that when executed by the processor, cause the processor to update the mapping of the software graph to the hardware graph by defining an objective function comprising a first cost of assigning a discrete computational task to a hardware component and a second cost of assigning a data flow to a communication link.
15. The system of claim 14, further comprising programming instructions, that when executed by the processor, cause the processor to: minimize the objective function using a plurality of constraints, the plurality of constraints comprising at least one of the following: an atomicity constraint, a bandwidth constraint, a resource constraint, or a flow constraint.
16. The system of claim 14, further comprising programming instructions, that when executed by the processor, cause the processor to minimize the objective function using the latency associated with the computational path included in the software graph.
17. The system of claim 13, further comprising programming instructions, that when executed by the processor, cause the processor to:
model a system latency across all computational paths included in the robotic system; and
generate the robotic system architecture by allocating, using the system latency, a plurality of computational tasks across all computational paths included in the robotic system to a plurality of hardware components for improving the performance of the robotic system architecture.
18. The system of claim 13, wherein the latency is modeled as a function of a first latency introduced by the hardware components and a second latency introduced by the communication links.
19. The system of claim 18, wherein the second latency for a communication link is modeled as a function of the following parameters: a number of messages currently assigned to that communication link, an average size of the currently assigned messages, a standard deviation of sizes of the currently assigned messages, a number of messages an additional communications link will add, and a size of the currently assigned messages.
20. The system of claim 18, wherein the first latency for a hardware component is modeled as a function of input and output message timings from that hardware component.
21. The system of claim 20, wherein one or more parameters for modeling the first latency are determined based on checkpoint data corresponding to the robotic device.
22. The system of claim 13, wherein the software graph is a multigraph and the hardware graph is a pseudograph.
23. The system of claim 13, further comprising programming instructions, that when executed by the processor, cause the processor to, in the software graph:
associate, with each of the first plurality of nodes, a type and an amount of resources required to execute a task associated with that node; and
associate, with each of the first plurality of edges, an amount of required bandwidth.
24. The system of claim 13, further comprising programming instructions, that when executed by the processor, cause the processor to, in the hardware graph:
associate, with each of the second plurality of nodes, a type and an amount of resources provided by that node; and
associate, with each of the second plurality of edges, an amount of available bandwidth.
US17/160,758 2021-01-28 2021-01-28 Method and system for designing a robotic system architecture with optimized system latency Active US11354473B1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US17/160,758 US11354473B1 (en) 2021-01-28 2021-01-28 Method and system for designing a robotic system architecture with optimized system latency
PCT/US2022/070378 WO2022165497A1 (en) 2021-01-28 2022-01-27 Method and system for designing a robotic system architecture with optimized system latency
EP22746907.9A EP4285198A1 (en) 2021-01-28 2022-01-27 Method and system for designing a robotic system architecture with optimized system latency
CN202280008813.8A CN116670649B (en) 2021-01-28 2022-01-27 Method and system for designing a robotic system architecture with optimized system latency
US17/664,979 US11972184B2 (en) 2021-01-28 2022-05-25 Method and system for designing a robotic system architecture with optimized system latency

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US17/160,758 US11354473B1 (en) 2021-01-28 2021-01-28 Method and system for designing a robotic system architecture with optimized system latency

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/664,979 Continuation US11972184B2 (en) 2021-01-28 2022-05-25 Method and system for designing a robotic system architecture with optimized system latency

Publications (1)

Publication Number Publication Date
US11354473B1 true US11354473B1 (en) 2022-06-07

Family

ID=81852266

Family Applications (2)

Application Number Title Priority Date Filing Date
US17/160,758 Active US11354473B1 (en) 2021-01-28 2021-01-28 Method and system for designing a robotic system architecture with optimized system latency
US17/664,979 Active US11972184B2 (en) 2021-01-28 2022-05-25 Method and system for designing a robotic system architecture with optimized system latency

Family Applications After (1)

Application Number Title Priority Date Filing Date
US17/664,979 Active US11972184B2 (en) 2021-01-28 2022-05-25 Method and system for designing a robotic system architecture with optimized system latency

Country Status (4)

Country Link
US (2) US11354473B1 (en)
EP (1) EP4285198A1 (en)
CN (1) CN116670649B (en)
WO (1) WO2022165497A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10042614B1 (en) * 2017-03-29 2018-08-07 International Business Machines Corporation Hardware device based software generation
US20210192314A1 (en) * 2019-12-18 2021-06-24 Nvidia Corporation Api for recurrent neural networks

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106156413B (en) * 2016-06-29 2019-08-20 南京航空航天大学 Multiscale modeling design method for a large-scale distributed integrated modular avionics (DIMA) system
US10332320B2 (en) * 2017-04-17 2019-06-25 Intel Corporation Autonomous vehicle advanced sensing and response
WO2019099111A1 (en) * 2017-11-16 2019-05-23 Intel Corporation Distributed software-defined industrial systems
US20200356951A1 (en) * 2019-01-03 2020-11-12 Lucomm Technologies, Inc. Robotic Devices
US10972768B2 (en) * 2019-06-27 2021-04-06 Intel Corporation Dynamic rebalancing of edge resources for multi-camera video streaming
CN111045803A (en) * 2019-11-25 2020-04-21 中国人民解放军国防科技大学 Optimization method for software and hardware partitioning and scheduling of dynamic partially reconfigurable system on chip
US11845464B2 (en) * 2020-11-12 2023-12-19 Honda Motor Co., Ltd. Driver behavior risk assessment and pedestrian awareness

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Ter Braak, T. et al., "Dynamic Resource Allocation," Centre for Telematics and Information Technology, Feb. 29, 2016, pp. 1-45.
Ziglar, J. et al., "Context-Aware System Synthesis, Task Assignment and Routing," IEEE, arXiv:1706.04580v2 [cs.RO], Aug. 2017, pp. 1-17.

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220043445A1 (en) * 2019-05-06 2022-02-10 Argo AI, LLC Method and System for Real-Time Diagnostics and Fault Monitoring in a Robotic System
US20220222405A1 (en) * 2019-11-19 2022-07-14 Mitsubishi Electric Corporation Design support system and computer readable medium
US11657197B2 (en) * 2019-11-19 2023-05-23 Mitsubishi Electric Corporation Support system and computer readable medium

Also Published As

Publication number Publication date
CN116670649B (en) 2024-05-31
US11972184B2 (en) 2024-04-30
WO2022165497A1 (en) 2022-08-04
US20220292241A1 (en) 2022-09-15
EP4285198A1 (en) 2023-12-06
CN116670649A (en) 2023-08-29

Similar Documents

Publication Publication Date Title
US11824784B2 (en) Automated platform resource management in edge computing environments
US11184236B2 (en) Methods and apparatus to control processing of telemetry data at an edge platform
US11972184B2 (en) Method and system for designing a robotic system architecture with optimized system latency
Cui et al. Offloading autonomous driving services via edge computing
US20220261661A1 (en) Methods, systems, articles of manufacture and apparatus to improve job scheduling efficiency
US20210328944A1 (en) Methods, apparatus, and articles of manufacture to dynamically allocate cache
Shekhar et al. URMILA: Dynamically trading-off fog and edge resources for performance and mobility-aware IoT services
US20220109742A1 (en) Apparatus, articles of manufacture, and methods to partition neural networks for execution at distributed edge nodes
Li et al. Extensible discrete-event simulation framework in SimEvents
JP7379668B2 (en) Task scheduling for machine learning workloads
US20200192336A1 (en) Network Adapted Control System
Tang et al. Pi-edge: A low-power edge computing system for real-time autonomous driving services
Wu et al. Oops! it's too late. your autonomous driving system needs a faster middleware
McLean et al. Configuring ADAS platforms for automotive applications using metaheuristics
US20230245452A1 (en) Adaptive multimodal event detection
Barzegaran et al. Towards extensibility-aware scheduling of industrial applications on fog nodes
Moghaddasi et al. An enhanced asynchronous advantage actor-critic-based algorithm for performance optimization in mobile edge computing-enabled internet of vehicles networks
CN116996941A (en) Calculation force unloading method, device and system based on cooperation of cloud edge ends of distribution network
Long et al. A power optimised and reprogrammable system for smart wireless vibration monitoring
US20220114033A1 (en) Latency and dependency-aware task scheduling workloads on multicore platforms using for energy efficiency
Wang et al. Task offloading for edge computing in industrial Internet with joint data compression and security protection
Partap et al. On-Device CPU Scheduling for Robot Systems
Sarker et al. FOLD: Fog-dew infrastructure-aided optimal workload distribution for cloud robotic operations
Kunkel et al. DECICE: Device-Edge-Cloud Intelligent Collaboration Framework
Nigam et al. Edge Computing: A Paradigm Shift for Delay-Sensitive AI Application

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE