WO2014149109A1 - Reference nodes in a computational graph - Google Patents

Reference nodes in a computational graph

Info

Publication number
WO2014149109A1
Authority
WO
WIPO (PCT)
Prior art keywords
node
state value
primary
primary node
executing
Prior art date
Application number
PCT/US2013/076998
Other languages
English (en)
Inventor
Donald P. OROFINO
Original Assignee
The Mathworks, Inc.
Application filed by The Mathworks, Inc. filed Critical The Mathworks, Inc.
Priority to EP13822035.5A priority Critical patent/EP2972799A1/fr
Publication of WO2014149109A1 publication Critical patent/WO2014149109A1/fr


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00 Arrangements for software engineering
    • G06F8/30 Creation or generation of source code
    • G06F8/31 Programming languages or programming paradigms
    • G06F8/314 Parallel programming languages
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements

Definitions

  • Fig. 1 is a diagram of an overview of an example implementation described herein;
  • Fig. 2 is a diagram of an example environment in which systems and/or methods described herein may be implemented;
  • Fig. 3 is a diagram of example components of one or more devices of Fig. 2;
  • Fig. 4 is a flow chart of an example process for creating a computational graph that includes primary computational nodes and reference computational nodes;
  • Figs. 5A-5E are diagrams of an example implementation relating to the example process shown in Fig. 4;
  • Fig. 6 is a flow chart of an example process for compiling and executing a computational graph that includes primary and reference nodes;
  • Fig. 7 is a diagram of an example implementation relating to the example process shown in Fig. 6;
  • Fig. 8 is a diagram of another example implementation relating to the example process shown in Fig. 6.
  • a computational graph may be configured to implement a dynamic system (e.g., a mechanical device, an electrical device, a human organ, a physical phenomenon, etc.).
  • the computational graph may generally correspond to a representation of interconnected computational nodes.
  • a computational node may represent a computation implemented in the system (e.g., an element of the system that performs an operation associated with the computation), such as a computation performed on an input to generate an output.
  • Implementations described herein introduce reference nodes to computational graphs, where a reference node allows a designer to construct a computational graph capable of executing multiple operations, associated with a single computation, in a single iteration of the graph.
  • Such computational graphs may be divided into segments that are executed in parallel (e.g., multi-threaded execution), are executed at different rates (e.g., multi- rate execution), and/or are executed using different algorithms, thereby increasing efficiency of computational graph execution.
  • Fig. 1 is a diagram of an overview of an example implementation 100 described herein.
  • a designer may interact with a technical computing environment (TCE) to design a computational graph that includes a primary computational node (sometimes referred to herein as a "primary node”) and a reference computational node (sometimes referred to herein as a "reference node”), and/or multiple primary nodes and reference nodes.
  • the computational graph may represent a system (e.g., an electronic circuit, a digital circuit, etc.), and the primary node and the reference node(s) may represent the same element of the system (e.g., a filter, a transfer function, a random number generator, etc.).
  • the primary node and the reference node(s) may represent a computation performed by the system (e.g., filtering of an input to generate an output, applying a function to an input to generate an output, generating a random number output based on receiving an input signal, etc.).
  • the primary node and the reference node(s) may be associated with node characteristics.
  • the node characteristics may include, for example, a computation represented by both the primary node and the reference node(s), a state of the computation (e.g., a value of the state (a state value), potential values of the state, etc.), and an operation, associated with the computation, performed by the nodes. Executing a node may cause the operation, associated with the node, to be performed, which may modify the state value of the computation.
  • the node characteristics may also include, for example, shared parameters of the primary node and the reference node(s). The shared parameters may control a manner in which the computation and/or operation is performed, such as a data type associated with inputs to or outputs from the computation, or an algorithm (e.g., a series of steps) performed by the computation and/or operation.
  • executing Node 6 modifies the state associated with the computation represented by Node 6 (e.g., dsp.Minimum); because the nodes reference the same computation, the state is also modified for Nodes 2, 7, and 8.
  • the state associated with the computation may be a single shared state that is referenced by each of the primary node and the reference node(s).
  • each node may be associated with a separate, distributed copy of the state.
  • the computational graph may execute multiple operations, associated with the same computation, in a single graph iteration.
  • implementations described herein enable a graph to conditionally execute optional operations on a node by including additional references to the node in the graph.
  • implementations described herein remove the need for a single node to offer multiple input ports representing all possible functions, each of which would be triggered by input signals connecting to the single node. Using multiple input ports on a node increases the likelihood of forming cycles or feedback loops in the graph, due to connections later in the graph looping back to the node earlier in the graph, which creates execution problems for the graph.
  • Implementations described herein allow a user to segment the computational graph into portions that are executed in parallel (e.g., different nodes being processed in parallel by different processors), and/or are executed at different rates (e.g., a first node being executed once every second, and a second node being executed once every minute), thereby increasing efficiency of computational graph execution.
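  • The central idea above, a primary node and one or more reference nodes that all refer to the same computation, can be sketched in code. The sketch below uses hypothetical names and a made-up API (it is not the patent's or TCE 220's implementation); it only illustrates how two nodes can execute different operations against one shared state in a single graph iteration without a feedback connection.

```python
# Minimal sketch (hypothetical names, not the patent's implementation): a primary
# node and a reference node both refer to one shared computation, so two different
# operations can act on the same state in a single graph iteration.

class SharedComputation:
    """A stateful computation shared by a primary node and its reference node(s)."""
    def __init__(self):
        self.state = 0.0

    def output(self, x):
        # "output" step operation: reads the state to produce a value.
        return x + self.state

    def update(self, x):
        # "update" step operation: modifies the shared state.
        self.state = x

class Node:
    def __init__(self, computation, operation):
        self.computation = computation
        self.operation = operation            # e.g., "output" or "update"

    def execute(self, x):
        return getattr(self.computation, self.operation)(x)

fft_like = SharedComputation()
primary_node = Node(fft_like, "output")       # plays the "output(dsp.FFT)" role
reference_node = Node(fft_like, "update")     # plays the "update(dsp.FFT)" role

# One iteration of the graph executes both nodes against the same computation.
y = primary_node.execute(2.0)                 # uses the current state value
reference_node.execute(y)                     # modifies the state seen by the primary node
print(y, fft_like.state)
```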
  • Fig. 2 is a diagram of an example environment 200 in which systems and/or methods described herein may be implemented.
  • environment 200 may include a client device 210, which may include a technical computing environment (TCE) 220.
  • environment 200 may include a server device 230, which may include TCE 220, and a network 240. Devices of environment 200 may interconnect via wired connections, wireless connections, or a combination of wired and wireless connections.
  • Client device 210 may include a device capable of receiving, generating, storing, processing, executing, and/or providing a model, such as a computational graph.
  • client device 210 may include a computing device (e.g., a desktop computer, a laptop computer, a tablet computer, a handheld computer, a server, etc.), a mobile phone (e.g., a smart phone, a radiotelephone, etc.), or a similar device.
  • client device 210 may receive information from and/or transmit information to server device 230 (e.g., a computational graph and/or information associated with a computational graph).
  • Client device 210 may host TCE 220.
  • TCE 220 may include any hardware-based logic or a combination of hardware and software-based logic that provides a computing environment that allows tasks to be performed (e.g., by users) related to disciplines, such as, but not limited to, mathematics, science, engineering, medicine, and business.
  • TCE 220 may include a text-based environment (e.g., MATLAB® software), a graphically-based environment (e.g., Simulink® software, Stateflow® software, SimEvents® software, etc., by The MathWorks, Inc.; VisSim by Visual Solutions; LabVIEW® by National Instruments; Agilent VEE by Agilent Technologies; Advanced Design System (ADS) by Agilent Technologies; Agilent Ptolemy by Agilent Technologies; etc.), or another type of environment, such as a hybrid environment that includes one or more of the above-mentioned text-based environments and one or more of the above-mentioned graphically-based environments.
  • a hybrid environment may include, for example, a text-based environment and a graphically-based environment.
  • While systems and/or methods described herein may be described in the context of a graphically-based environment, they are equally applicable to text-based environments and hybrid environments.
  • TCE 220 may be integrated with or operate in conjunction with a graphical modeling environment, which may provide graphical tools for constructing computational graphs, systems, or processes.
  • TCE 220 may include additional tools, such as tools designed to convert a computational graph into an alternate representation, such as program code, source computer code, compiled computer code, and/or a hardware description (e.g., a description of a circuit layout).
  • TCE 220 may provide this ability using graphical toolboxes (e.g., toolboxes for signal processing, image processing, color manipulation, data plotting, parallel processing, etc.).
  • TCE 220 may provide these functions as block sets.
  • TCE 220 may provide these functions in another way.
  • a computational graph generated using TCE 220 may include, for example, any equations, assignments, constraints, computations, algorithms, and/or process flows.
  • the computational graph may be implemented as, for example, time-based block diagrams (e.g., via the Simulink software), discrete-event based diagrams (e.g., via the SimEvents software), dataflow diagrams, state transition diagrams (e.g., via the Stateflow software), software diagrams, a textual array-based and/or dynamically typed language (e.g., via the MATLAB software), a list or tree, and/or another form.
  • a computational graph generated using TCE 220 may include, for example, a model of a physical system, a computing system, an engineered system, an embedded system, a biological system, a chemical system, etc.
  • a computational node in the computational graph may include, for example, a function in a TCE environment (e.g., a MATLAB function), an object in a TCE environment (e.g., a MATLAB system object), a block in a graphically-based environment (e.g., a Simulink block, a LabVIEW block, an Agilent VEE block, an Agilent ADS block, an Agilent Ptolemy block, etc.), or the like.
  • TCE 220 may schedule and/or execute a computational graph using one or more computational resources, such as one or more central processing units (CPUs) or cores, one or more field programmable gate arrays (FPGAs), one or more graphics processing units (GPUs), and/or other elements that can be used for computation.
  • TCE 220 may include a compiler that may be used to schedule the computational nodes of the computational graph, allocate hardware resources, such as memory and CPUs, to the computational nodes and to the connections that interconnect the computational nodes, or the like.
  • Server device 230 may include one or more devices capable of receiving, generating, storing, processing, executing, and/or providing a computational graph and/or information associated with a computational graph.
  • server device 230 may include a computing device, such as a server, a desktop computer, a laptop computer, a tablet computer, a handheld computer, or a similar device.
  • server device 230 may host TCE 220.
  • Network 240 may include one or more wired and/or wireless networks.
  • network 240 may include a cellular network, a public land mobile network ("PLMN”), a local area network (“LAN”), a wide area network (“WAN”), a metropolitan area network (“MAN”), a telephone network (e.g., the Public Switched Telephone Network (“PSTN”)), an ad hoc network, an intranet, the Internet, a fiber optic-based network, and/or a combination of these or other types of networks.
  • Fig. 3 is a diagram of example components of a device 300, which may correspond to client device 210 and/or server device 230.
  • each of client device 210 and/or server device 230 may include one or more devices 300 and/or one or more components of device 300.
  • device 300 may include a bus 310, a processor 320, a memory 330, a storage component 340, an input component 350, an output component 360, and a communication interface 370.
  • Bus 310 may include a path that permits communication among the components of device 300.
  • Processor 320 may include a processor (e.g., a central processing unit, a graphics processing unit, an accelerated processing unit, etc.), a microprocessor, and/or any processing logic (e.g., a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), etc.) that interprets and/or executes instructions.
  • Memory 330 may include a random access memory (RAM), a read only memory (ROM), and/or another type of dynamic or static storage component (e.g., a flash, magnetic, or optical memory) that stores information and/or instructions for use by processor 320.
  • Storage component 340 may store information and/or software related to the operation and use of device 300.
  • storage component 340 may include a hard disk (e.g., a magnetic disk, an optical disk, a magneto-optic disk, a solid state disk, etc.), a compact disc (CD), a digital versatile disc (DVD), a floppy disk, a cartridge, a magnetic tape, and/or another type of computer-readable medium, along with a corresponding drive.
  • storage component 340 may store TCE 220.
  • Input component 350 may include a component that permits a user to input information to device 300 (e.g., a touch screen display, a keyboard, a keypad, a mouse, a button, a switch, etc.).
  • Output component 360 may include a component that outputs information from device 300 (e.g., a display, a speaker, one or more light-emitting diodes (LEDs), etc.).
  • Communication interface 370 may include a transceiver-like component, such as a transceiver and/or a separate receiver and transmitter, that enables device 300 to communicate with other devices, such as via a wired connection, a wireless connection, or a combination of wired and wireless connections.
  • communication interface 370 may include an Ethernet interface, an optical interface, a coaxial interface, an infrared interface, a radio frequency (RF) interface, a universal serial bus (USB) interface, or the like.
  • Device 300 may perform various operations described herein. Device 300 may perform these operations in response to processor 320 executing software instructions included in a computer-readable medium, such as memory 330 and/or storage component 340.
  • a computer-readable medium may be defined as a non-transitory memory device.
  • a memory device may include memory space within a single physical storage device or memory space spread across multiple physical storage devices.
  • Software instructions may be read into memory 330 and/or storage component 340 from another computer-readable medium or from another device via communication interface 370. When executed, software instructions stored in memory 330 and/or storage component 340 may cause processor 320 to perform one or more processes described herein. Additionally, or alternatively, hardwired circuitry may be used in place of or in combination with software instructions to perform one or more processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software.
  • device 300 may include additional, fewer, different, or differently arranged components than those shown in Fig. 3. Additionally, or alternatively, one or more components of device 300 may perform one or more functions described as being performed by another one or more components of device 300.
  • Fig. 4 is a flow chart of an example process 400 for creating a computational graph that includes primary computational nodes and reference computational nodes.
  • one or more process blocks of Fig. 4 may be performed by client device 210. In some implementations, one or more process blocks of Fig. 4 may be performed by another device or a group of devices separate from or including client device 210, such as server device 230.
  • process 400 may include receiving information that identifies a primary node to be included in a computational graph (block 410). For example, a user may cause client device 210 (e.g., TCE 220) to create or open a user interface. The user interface may not include any primary nodes. The user may then add primary nodes to the user interface.
  • client device 210 may receive a command, from the user, that indicates that a primary node is to be added to the user interface.
  • Client device 210 may receive the command based, for example, on detecting a selection of a particular menu item, entry of a particular textual or audible input from the user, and/or entry of some other predetermined input that indicates a desire to add the primary node to the user interface.
  • client device 210 may add the primary node to the user interface. For example, client device 210 may add the primary node to a computational graph represented via the user interface.
  • process 400 may include receiving characteristics of the primary node (block 420).
  • the characteristics may include, for example, a computation associated with the primary node (e.g., a computation performed by an underlying system element), a state associated with the primary node (e.g., one or more state values and/or potential states that the computation is capable of storing), an operation associated with the primary node (e.g., an operation that, when executed by client device 210, generates an output from an input, and/or modifies a state value of the computation), or the like.
  • the characteristics may include a shared parameter of the primary and reference node(s) (e.g., a parameter common to the primary and reference node(s)).
  • the shared parameters may influence a manner in which the computation and/or operation is performed, when the primary and/or reference node(s) are executed by client device 210.
  • the shared parameter may include a data type (e.g., a data type of an input to the node, an output from the node, etc.), an execution speed of the node (e.g., a high fidelity execution that causes a slower execution speed than a low fidelity execution), a value used to perform the computation and/or operation (e.g., a coefficient value, a gain value, etc.), a dimensionality of the value (e.g., an array dimension), a complexity of the value (e.g., a complex number or a real number), an algorithm (e.g., a series of steps) performed by the computation and/or operation, or the like.
  • an operation associated with a node may include a setup operation (e.g., to initialize a state of the computation, such as by setting the state to an initial value), a step operation (e.g., to execute an operation associated with the node, which may modify the state value), a terminate operation (e.g., to perform operations associated with finishing a final iterative execution of the computational graph, such as setting a state to a terminal value), or the like.
  • a step operation may include, for example, an output operation, an update operation, a reset operation, or the like.
  • An output operation may output a value generated by performing the computation.
  • An update operation may modify a state value associated with the computation.
  • a reset operation may set a state to a default value.
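  • As a rough illustration of the operation types listed above, the sketch below models a dsp.Minimum-style computation that exposes setup, output, update, reset, and terminate operations. The class and method names are assumptions for illustration only, not TCE 220's actual interface.

```python
# Illustrative sketch (hypothetical API) of the operation types described above:
# setup initializes the state, the step operations (output/update/reset) act on it,
# and terminate finalizes the state in the last iteration.

class MinimumComputation:
    """Loosely modeled on a dsp.Minimum-style computation; names are assumptions."""
    def setup(self):
        self.values = []                 # setup operation: initialize the state

    def output(self):
        return min(self.values) if self.values else 0   # output operation: read-only

    def update(self, value):
        self.values.append(value)        # update operation: modify the state value

    def reset(self):
        self.values = []                 # reset operation: restore a default state

    def terminate(self):
        self.values = None               # terminate operation: set a terminal value

comp = MinimumComputation()
comp.setup()
comp.update(3)
print(comp.output())                     # -> 3
comp.reset()
print(comp.output())                     # -> 0
comp.terminate()
```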
  • client device 210 may allow the user to specify characteristics in another manner. For example, client device 210 may allow the user to specify another node of the computational graph from which the characteristics (e.g., shared parameters) are to be obtained. As another example, client device 210 may allow the user to specify a particular file that includes the characteristics that are to be used for the primary node.
  • process 400 may include receiving information that identifies a reference node, associated with the primary node, to be included in the computational graph (block 430).
  • a user may cause client device 210 (e.g., TCE 220) to create or open a user interface that includes a representation of a primary node. The user may then add a reference node, associated with the primary node, to the user interface.
  • client device 210 may receive a command, from the user, that indicates that a reference node, associated with a primary node, is to be added to the user interface. Client device 210 may receive the command based, for example, on detecting a selection of a particular primary node, and detecting a further selection of a particular menu item, a particular textual or audible input from the user, and/or some other predetermined input that indicates a desire to add a reference node, associated with the particular primary node, to the user interface. Based on the command, client device 210 may add the reference node to the user interface. For example, client device 210 may add the reference node to a computational graph that includes the primary node.
  • the user may provide, to client device 210, an indication of a set of nodes that represent a single, shared computation (e.g., an underlying dynamic system).
  • the user may further provide an indication of one node, of the set of nodes, to be designated as the primary node.
  • Client device 210 may designate a primary node and reference node(s), of the set of nodes, based on the user indications.
  • the designation of a primary node may aid in the interpretation of the computational graph by a user. For example, client device 210 may distinguish between the primary node and the reference node(s) on a user interface.
  • client device 210 may store the shared computation in a memory location associated with the primary node and/or the reference node(s).
  • process 400 may include receiving characteristics of the reference node (block 440).
  • the characteristics may include, for example, any of the characteristics described above with respect to block 420 (e.g., associated with the primary node).
  • client device 210 may determine a characteristic of a reference node based on a corresponding characteristic (e.g., a shared parameter) of a corresponding primary node that is associated with the reference node.
  • the primary node and the reference node may share a computation, a state (e.g., a state value and/or potential states), an operation, and/or a parameter that influences a manner in which the computation and/or operation is performed.
  • a node may only be considered a reference node of a primary node when the reference node and the primary node share the same computation and state (e.g., a state value).
  • client device 210 may allow the user to specify characteristics in another manner. For example, client device 210 may allow the user to specify characteristics, associated with a primary node, that are to also be characteristics of a reference node (e.g., shared parameters). Additionally, or alternatively, client device 210 may allow the user to specify characteristics, associated with a first reference node, that are to also be characteristics of a second reference node.
  • process 400 may include receiving information relating to a connection between the primary node and the reference node, to form the computational graph (block 450).
  • client device 210 may allow the user to specify how a primary node, a reference node, and other nodes are to be connected.
  • client device 210 may allow the user to connect a particular output port of one node to an input port of another node.
  • client device 210 may allow the user to connect an output port to an input port using a drawing tool (e.g., where the user may draw a line from the output port of the one node to the input port of the other node).
  • client device 210 may provide a user interface that allows the user to specify, for a connection, the output port of the one node and the input port of the other node.
  • Client device 210 may allow the user to specify a connection between nodes in other ways.
  • the connections may indicate shared values between connected nodes (e.g., a value output from one node may be an input value to another node).
  • the computational graph may include a first primary node associated with one or more first reference nodes, a second primary node associated with one or more second reference nodes, etc. Additionally, or alternatively, the computational graph may include other nodes that are not primary or reference nodes (e.g., that are not associated with a same computation and/or state as another node in the computational graph). Additionally, or alternatively, the computational graph may include graph inputs and graph outputs. Some of the graph inputs may be independent, and some of the graph inputs may depend on a graph output. A user may specify connections between the graph input(s), the primary node(s), the reference node(s), the other node(s), and the graph output(s).
  • client device 210 may receive a first computational graph that includes a feedback loop (e.g., a cyclic graph), and does not include primary and/or reference nodes.
  • Client device 210 may convert the first computational graph into a second computational graph that does not include the feedback loop (e.g., an acyclic graph), and that includes a primary node and a reference node.
  • client device 210 may replace a node (e.g., a step operation in the feedback loop) in the first computational graph with a primary node (e.g., an output operation) and a reference node (e.g., an update operation) to form the second computational graph.
  • client device 210 may perform the conversion based on user input that identifies the first computational graph, the feedback loop, and/or a node to be converted to a primary node and a reference node.
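  • The conversion described above can be sketched as follows, with hypothetical names: a node whose step operation both reads and writes its state (which forces a feedback connection in the graph) is split into an output-operation primary node and an update-operation reference node that share the state, so the rewritten graph can be scheduled without a cycle. This is a sketch of the idea, not code generated by TCE 220.

```python
# Sketch of the cycle-breaking conversion (hypothetical names, not generated code).
# Before: one node performs state = state + x and also feeds its output back through
# the graph, creating a feedback loop. After: the read ("output") and the write
# ("update") are separate nodes that share the state, so one iteration can run
# output first, let downstream nodes use its value, and apply update afterwards.

class Accumulator:
    def __init__(self):
        self.state = 0

    def step(self, x):
        # Original combined step operation (read and write in a single node).
        self.state = self.state + x
        return self.state

    def output(self):
        # Primary node's operation: read-only.
        return self.state

    def update(self, x):
        # Reference node's operation: writes the shared state.
        self.state = self.state + x

acc = Accumulator()
for x in (1, 2, 3):
    y = acc.output()          # primary node executes early in the iteration
    downstream = y + x        # some other node consumes the output
    acc.update(downstream)    # reference node executes later in the same iteration
print(acc.state)
```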
  • process 400 may include storing the computational graph (block 460).
  • client device 210 may store the computational graph in a memory location.
  • client device 210 may receive information, identifying a memory location, from the user, and may store the computational graph at the identified memory location.
  • Figs. 5A-5E are diagrams of an example implementation 500 relating to example process 400 shown in Fig. 4.
  • client device 210 may provide a user interface that includes a first area 510 and a second area 520.
  • First area 510 may provide a list of primary nodes from which the user may select.
  • the primary nodes may be pre-configured with particular characteristics.
  • the primary node, identified as "dsp.FFT” may correspond to a primary node that performs operations associated with a fast Fourier transform (FFT) computation, when executed.
  • the primary nodes identified as "dsp.DigitalFilter," "dsp.Minimum," and "dsp.Convolver" may correspond to primary nodes that perform operations associated with a filtering computation, a minimum value determination computation, and a convolution computation, respectively, when executed.
  • Second area 520 may act as a blank canvas for receiving primary nodes and/or constructing a computational graph.
  • the user may drag the desired primary node from first area 510 to second area 520. Assume, for example implementation 500, that the user drags the "dsp.FFT" primary node to second area 520, as shown in Fig. 5B.
  • client device 210 may cause a user interface 530 to be provided to the user.
  • User interface 530 may allow the user to specify characteristics of the added primary node. As shown, user interface 530 may allow the user to specify a name for the primary node, a computation associated with the primary node, a node type of the primary node (e.g., whether the node is a primary node, a reference node, or another type of node), an operation performed when the primary node is executed, a quantity of input ports associated with the primary node, a quantity of output ports associated with the primary node, and/or other node characteristics.
  • client device 210 may display the newly added primary node, having the desired characteristics, in second area 520, as shown in Fig. 5C as "output(dsp.FFT)." As further shown in Fig. 5C, the user may interact with the primary node to cause a user interface 540 to be provided to the user.
  • User interface 540 may allow the user to add a reference node, associated with the primary node, to second area 520, as shown in Fig. 5D as "update(dsp.FFT)." Based on adding the reference node to second area 520, client device 210 may cause a user interface 550 to be provided to the user.
  • User interface 550 may allow the user to specify characteristics of the added reference node. As shown, user interface 550 may allow the user to specify a name for the reference node, a computation associated with the reference node, a node type of the reference node, an operation performed when the reference node is executed, a quantity of input ports associated with the reference node, a quantity of output ports associated with the reference node, and/or other node characteristics. In some implementations, one or more characteristics of the reference node (e.g., node type, computation, state(s), shared parameters) may be automatically determined based on the primary node from which the reference node was generated. Additionally, or alternatively, client device 210 may not permit the user to specify characteristics that are automatically determined based on the primary node.
  • the user may continue to add primary nodes, reference nodes, and other nodes into second area 520 (e.g., by dragging the nodes from another area, by generating the nodes from existing nodes, by providing other input to add the nodes, etc.), and may connect the nodes in a desired manner, until the desired quantity and arrangement of nodes has been achieved, to form a computational graph, as shown in Fig. 5E.
  • the user may create a computational graph with a first primary node, identified as "Primary Node A,” that corresponds to a first reference node, identified as “Reference Node A.”
  • Client device 210 may provide, via the user interface, an indication that a particular node is a reference node.
  • Reference Node A is represented with an asterisk (*) in front of an operation identifier of Reference Node A, to indicate that Reference Node A is a reference node.
  • the operation identifier may represent an operation performed when Reference Node A is executed.
  • "update" represents that Reference Node A performs an update operation when executed.
  • client device 210 may provide, via the user interface, an indication of a primary node associated with a reference node.
  • Primary Node A and Reference Node A may be outlined and/or highlighted in a similar manner, as shown.
  • Reference Node A is represented with the text "*update(dsp.FFT[71])", which represents that Reference Node A performs an update operation on the dsp.FFT computation associated with Node 71 (e.g., Primary Node A).
  • Client device 210 may provide a second primary node, identified as "Primary Node B,” that corresponds to a second reference node, identified as “Reference Node B,” in a similar manner, as shown.
  • Primary Node B and Reference Node B may be outlined in a similar manner to indicate that they are associated with one another.
  • Reference Node B may be represented with the text "*update(dsp.Minimum[73])", which represents that Reference Node B performs an update operation on the dsp.Minimum computation associated with Node 73 (e.g., Primary Node B), when executed.
  • the computational graph may include additional primary and/or reference nodes, and/or may include multiple reference nodes associated with a single primary node. Additionally, or alternatively, client device 210 may not provide a distinction between the primary and reference nodes on the user interface.
  • Fig. 6 is a flow chart of an example process 600 for compiling and executing a computational graph that includes primary and reference nodes.
  • one or more process blocks of Fig. 6 may be performed by client device 210.
  • one or more process blocks of Fig. 6 may be performed by another device or a group of devices separate from or including client device 210, such as server device 230.
  • process 600 may include obtaining a computational graph that includes a primary node and a reference node (block 610).
  • client device 210 (e.g., TCE 220) may receive a request, from a user, to access the computational graph that includes the primary node and the reference node.
  • the request may include information identifying the computational graph, such as a name of the computational graph, and information identifying a memory location at which the computational graph is stored.
  • the memory location may be located within client device 210 or external to, and possibly remote from, client device 210.
  • Client device 210 may, based on receiving the request, retrieve the computational graph from the memory location.
  • client device 210 may provide, for display, a user interface that depicts all or a portion of the computational graph.
  • process 600 may include receiving a command to execute the computational graph (block 620).
  • client device 210 may receive the command based on detecting a selection of a particular menu item, entry of a particular textual or audible input from the user, and/or entry of some other predetermined input that indicates a desire to execute the computational graph.
  • client device 210 may request that the user identify a particular execution platform on which all or a portion of the computational graph is to be executed.
  • the particular execution platform may include, for example, a particular type of CPU, GPU, FPGA, ASIC, multi-core processor, a particular core or set of cores of multi-core processors, and/or another type of processing device.
  • client device 210 may provide, based on receiving the command, a user interface to the user, which allows the user to identify a particular execution platform on which particular nodes in the computational graph (e.g., primary nodes and/or reference nodes) are to execute.
  • Client device 210 may alternatively allow the user to specify a particular execution platform in other ways.
  • client device 210 may request that the user identify a particular rate at which all or a portion of the computational graph is to be executed. For example, client device 210 may execute a first portion of the graph at a first rate (e.g., once every second), and may execute a second portion of the graph at a second rate (e.g., once every minute). In some implementations, client device 210 may provide, based on receiving the command, a user interface to the user, which allows the user to identify a particular rate at which particular nodes in the computational graph (e.g., primary nodes and/or reference nodes) are to execute. Client device 210 alternatively may allow the user to specify a particular execution rate in other ways.
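  • As an illustration of the different execution rates described above, the sketch below tags each node with an execution period and runs a node only in iterations that are multiples of that period. The node names, periods, and scheduling scheme are assumptions for illustration, not the scheduler actually used by TCE 220.

```python
# Illustrative multi-rate schedule (hypothetical node names and periods): each node
# has an execution period, and a node runs only in iterations that are multiples of
# its period, so different portions of the graph execute at different rates.

node_periods = {
    "output(dsp.FFT)": 1,    # runs every iteration (e.g., once per second)
    "update(dsp.FFT)": 1,
    "report_results": 60,    # runs once every 60 iterations (e.g., once per minute)
}

def nodes_to_run(iteration):
    """Return the nodes scheduled to execute in the given iteration."""
    return [name for name, period in node_periods.items() if iteration % period == 0]

for k in range(180):
    for node in nodes_to_run(k):
        pass  # execute the node's operation here
```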
  • process 600 may include compiling the computational graph, including the primary node and the reference node (block 630).
  • compiling the computational graph may include determining a manner in which the nodes are connected (e.g., which outputs are connected to which inputs), determining characteristics associated with connections and/or nodes of the graph (e.g., a data type, a dimensionality, a complexity, etc.), assigning memory locations to particular nodes and/or connections, determining computations that are actually going to be executed, designating an order in which the nodes are going to be executed (e.g., scheduling the graph based on semantic rules, such as a synchronous data flow rule, a dynamic data flow rule, a Boolean data flow rule, a Kahn process network rule, a Petri net rule, a discrete event system rule, etc.), determining a buffer allocation and/or allocating buffer space associated with graph execution (e.g., determining and/or allocating a number and/or size of data buffers for graph nodes and/or connections), determining time delays associated with the nodes, determining memory consumption and/or memory requirements associated with executing the graph, or the like.
  • compiling the computational graph may include identifying primary node(s) and/or reference node(s) included in the computational graph. Additionally, or alternatively, compiling the computational graph may include assigning nodes to computational resources for execution and/or setting a rate at which nodes in the computational graph are to execute.
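  • One conventional way to designate an execution order during compilation is a topological sort over the node connections. The sketch below uses Kahn's algorithm with hypothetical node names; it is only an illustration of a scheduling step, not the TCE compiler's actual semantic rules.

```python
# Sketch of a compile-time scheduling step: a topological sort (Kahn's algorithm)
# over the node connections. Node names are hypothetical; a real compiler may apply
# additional semantic rules (synchronous data flow, discrete event, etc.).

from collections import deque

edges = {                       # directed connections: node -> downstream nodes
    "output(dsp.FFT)": ["downstream_node"],
    "downstream_node": ["update(dsp.FFT)"],
    "update(dsp.FFT)": [],
}

indegree = {node: 0 for node in edges}
for targets in edges.values():
    for node in targets:
        indegree[node] += 1

ready = deque(node for node, degree in indegree.items() if degree == 0)
schedule = []
while ready:
    node = ready.popleft()
    schedule.append(node)
    for target in edges[node]:
        indegree[target] -= 1
        if indegree[target] == 0:
            ready.append(target)

print(schedule)   # a valid execution order; a cycle would leave nodes unscheduled
```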
  • compiling the computational graph may include generating program code for executing the computational graph.
  • the program code may include program code describing the primary node(s) and/or the reference node(s). Additionally, or alternatively, the program code may include instructions for multi-rate execution (e.g., with different nodes being executed at different rates) and/or multi-thread execution (e.g., with different nodes being executed using different computational resources) of the computational graph.
  • client device 210 may store the program code for later execution.
  • process 600 may include executing the computational graph, including the primary node and the reference node (block 640).
  • Executing the computational graph may include executing multiple nodes (e.g., the primary node and the reference node), associated with a single computation, in a single iteration of the computational graph execution.
  • executing the computational graph may include executing the primary node and the reference node, based on the computation, operation(s), shared parameter(s), state value(s), etc. associated with the primary node and/or the reference node.
  • Executing the computational graph may include executing multiple operations, such as a setup operation, one or more step operations, and/or a terminate operation, in a single iterative execution of the computational graph. Execution of the multiple operations may be permitted because the primary node and the reference node(s) refer to the same computation.
  • For example, the setup operation may be required to be performed during a first iteration of the graph, the step operations may be required to be performed in subsequent iterations of the graph, and the terminate operation may be required to be performed in a final iteration of the graph.
  • Execution of an operation may modify a state value associated with the primary node and the reference node.
  • the state value may be a single value (or a single set of values), stored in a particular memory location that is associated with (e.g., read/write accessible during execution of) the primary node and the reference node.
  • Client device 210 may modify the state value, stored in the particular memory location, when executing the primary node or the reference node.
  • the state value may include multiple values (or multiple sets of values) that are identical copies of one another (e.g., multiple values that are the same).
  • the multiple values may each be stored in a different memory location, and each memory location may be associated with a different node (e.g., a first memory location may be associated with the primary node, a second memory location may be associated with a reference node, a third memory location may be associated with another reference node, etc.).
  • Client device 210 may modify each state value, stored in the different memory locations, when executing the primary node or the reference node(s).
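  • The two storage strategies described above can be sketched as follows, using hypothetical classes: either a single shared state value that every associated node reads and writes, or one copy of the state per node, with every copy rewritten whenever any node modifies the state.

```python
# Sketch of the two state-storage strategies (hypothetical classes, not TCE 220's
# internal representation).

class SharedState:
    """Strategy 1: one memory location; primary and reference nodes all access it."""
    def __init__(self, value):
        self.value = value

class MirroredState:
    """Strategy 2: one copy per node; a write through any node updates every copy."""
    def __init__(self, value, node_names):
        self.copies = {name: value for name in node_names}

    def write(self, value):
        for name in self.copies:
            self.copies[name] = value     # keep all copies identical

    def read(self, node_name):
        return self.copies[node_name]

shared = SharedState(0)
mirrored = MirroredState(0, ["primary", "reference_1", "reference_2"])
mirrored.write(42)
assert mirrored.read("primary") == mirrored.read("reference_2") == 42
```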
  • two different nodes may cause a particular operation (e.g., a step operation), associated with a computation, to be performed twice, at different points in time during execution of a single iteration of the computational graph (e.g., based on a location of the nodes in the graph).
  • the two different nodes may cause different operations (e.g., a step operation and a reset operation) to be performed at different points in time during execution of the single iteration.
  • an operation may be performed based on a condition being met during execution of the computational graph. In some implementations, more than two operations may be executed in this manner.
  • client device 210 may execute the computational graph based on compiling the computational graph. For example, client device 210 may execute multiple operations, associated with a single computation, on different computational resources (e.g., multiple CPUs, GPUs, FPGAs, etc.), based on compilation of the computational graph (e.g., by executing different portions of generated program code using different computational resources). Additionally, or alternatively, client device 210 may execute the multiple operations at different execution rates, based on the compilation. For example, a first node may update a computation at a first rate (e.g., every 0.01 seconds), and a second node may output a result of performing the operation at a second rate (e.g., every 1.00 second).
  • process 600 may include generating and/or providing a result of executing the computational graph (block 650).
  • client device 210 may execute the computational graph to generate a result, and may provide the result.
  • the result may include, for example, information determined during compilation of the computational graph.
  • the result may include information associated with execution of the nodes of the computational graph, such as information relating to states of a computation, iterations of the graph, etc.
  • client device 210 may stream changes in some or all of the information obtained during compilation and/or execution of the computational graph. For example, if a state, associated with a particular node, changes during execution of the computational graph, client device 210 may provide the new information while the computational graph is executing.
  • Fig. 7 is a diagram of an example implementation 700 relating to example process 600 shown in Fig. 6.
  • Fig. 7 depicts the computational graph generated as described herein in connection with example implementation 500 (Figs. 5A-5E).
  • a user may provide input, via a user interface, to execute the computational graph.
  • based on the input, client device 210 (e.g., TCE 220) may compile the computational graph, which may include determining an order in which the nodes are to be executed.
  • Client device 210 may number the nodes based on the execution order, and the numbers may be provided via the user interface (e.g., as shown by the bracketed numbers in Fig. 7, such as [71], [73], and [74]).
  • Client device 210 may execute the computational graph based on the compilation. Client device 210 may execute a primary node and a reference node in a single iteration of the computational graph. For example, a first iteration of the computational graph may include executing a primary node, such as Node 71 (e.g., "dsp.FFT[71]"), identified as "Primary Node A,” and associated with the computational operation "output(dsp.FFT)." Execution of Primary Node A may cause client device 210 to perform an output operation on the dsp.FFT computation associated with Primary Node A.
  • Performing the output operation may change a state value associated with Primary Node A (e.g., associated with the dsp.FFT computation), and may also change the state value associated with Reference Node A (e.g., Node 74, or "dsp.FFT[74]"), because Reference Node A and Primary Node A are associated with the same computation.
  • the state value associated with Reference Node A and Primary Node A may be the same state value (e.g., may be stored in the same memory location), or may be two identical copies of the state value, each stored in a separate memory location (e.g., a first memory location accessible by Reference Node A and a second memory location accessible by Primary Node A).
  • later in the first iteration, client device 210 may execute Reference Node A.
  • Execution of Reference Node A may cause client device 210 to perform an update operation on the dsp.FFT computation associated with Reference Node A. For example, execution of Reference Node A may update a state value associated with the dsp.FFT computation. Updating the state value of the dsp.FFT computation associated with Reference Node A may also update the state value associated with Primary Node A. In this way, client device 210 may permit multiple operations to be performed on the same computation in a single iterative execution of the computational graph.
  • Fig. 8 is a diagram of an example implementation 800 relating to example process 600 shown in Fig. 6.
  • Fig. 8 depicts a computational graph that includes a single primary node, identified as "Primary Node,” and multiple reference nodes associated with the Primary Node, including an "Update Node” (Node 6), a “Reset Node” (Node 7), and an “Output Node” (Node 8).
  • the Primary Node (e.g., Node 2) outputs a minimum value from a stored array of values, or outputs a value of zero if the stored array is empty (e.g., null).
  • Node 4 (e.g., dsp.Generate) generates a random integer value, and Node 5 (e.g., dsp.Average) calculates an average of the values output from Node 4 and the Primary Node. If the average value is greater than 1, then the Update Node (e.g., Node 6) is executed, and the stored array of values is updated to include the average value. If the average value is less than or equal to 1, then the Reset Node (e.g., Node 7) is executed, and the stored array of values is reset to an empty array.
  • the Output Node (e.g., Node 8) then outputs the minimum value included in the stored array of values (or a value of zero if the stored array is empty).
  • the single iteration may finish executing when the computational graph reaches Node 9, and an additional iteration may execute if the average value is greater than 1. If the average value is less than or equal to 1, then execution of the computational graph may terminate.
  • compilation of the computational graph initializes the state of dsp.Minimum with an empty array (e.g., [], or a null value).
  • the Primary Node When executed in a first iteration of the computational graph, the Primary Node outputs a value of 0, based on the array being empty.
  • Node 4 generates a random integer value of 6, and Node 5 averages the values (e.g., 0 and 6) to generate an average value of 3. Because the average value of 3 is greater than 1, the Update Node executes, and adds the average value of 3 to the stored array of values. Because the stored array of values is now [3], the Output Node outputs the minimum value included in the array, which is 3.
  • the Primary Node outputs the minimum value of 3. Assume that Node 4 generates a random integer value of 9, and Node 5 averages the values (e.g., 3 and 9) to generate an average value of 6. Because the average value of 6 is greater than 1, the Update Node executes, and adds the average value of 6 to the stored array of values. Because the stored array of values is now [3, 6], the Output Node outputs the minimum value included in the array, which is still 3.
  • the Primary Node outputs the minimum value of 3. Assume that Node 4 generates a random integer value of 1, and Node 5 averages the values (e.g., 3 and 1) to generate an average value of 2. Because the average value of 2 is greater than 1, the Update Node executes, and adds the average value of 2 to the stored array of values. Because the stored array of values is now [3, 6, 2], the Output Node outputs the minimum value included in the array, which is now 2.
  • the Primary Node outputs the minimum value of 2. Assume that Node 4 generates a random integer value of 0, and Node 5 averages the values (e.g., 2 and 0) to generate an average value of 1. Because the average value of 1 is less than or equal to 1, the Reset Node executes, and resets the stored array of values to an empty array (e.g., []). Because the stored array of values is now empty, the Output Node outputs a value of 0. This may cause execution of the computational graph to finish.
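  • The four iterations described above can be reproduced with a short simulation. In the sketch below, the names are hypothetical and the "random" integers are fixed to the values assumed in the example (6, 9, 1, 0), so the printed outputs match the trace: 3, 3, 2, and then 0.

```python
# Illustrative re-creation of the four iterations described above. Names are
# hypothetical, and the "random" integers are fixed to the example's assumed
# values so the trace matches.

stored_values = []                      # state of the dsp.Minimum-style computation
random_inputs = [6, 9, 1, 0]            # values assumed for Node 4 in the example

for value in random_inputs:
    primary_out = min(stored_values) if stored_values else 0   # Primary Node (Node 2)
    average = (primary_out + value) / 2                        # Node 5 (dsp.Average)
    if average > 1:
        stored_values.append(average)   # Update Node (Node 6), a reference node
    else:
        stored_values = []              # Reset Node (Node 7), a reference node
    output = min(stored_values) if stored_values else 0        # Output Node (Node 8)
    print(f"average={average}, output={output}")
    if average <= 1:
        break                           # execution ends when the average is <= 1
```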
  • Client device 210 may output a report based on execution of the computational graph.
  • the report may indicate that after the fourth iteration, the value output from Node 5 was less than or equal to 1.
  • the report may also indicate an execution time of the four iterations (e.g., 80 milliseconds), and an average execution time of each iteration (e.g., 20 milliseconds).
  • the user may specify nodes to execute using a first processor, such as the Primary Node, and may specify nodes to execute using a second processor, such as the Reference Nodes.
  • Such parallel processing may be permitted due to client device 210 allowing a user to specify multiple nodes that operate on a state of a single computation in a single iterative execution of a computational graph.
  • in some implementations, the primary node and the reference node(s) may be represented using a common object (e.g., a textual object).
  • a property setting associated with the node may indicate that the node shares a computation with another node (e.g., may indicate that the node is a primary node or a reference node).
  • different object types may be used, with a first object type representing primary nodes, a second object type representing reference nodes, and/or a third object type representing other nodes.
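  • Those two textual representations might look like the following sketch (a hypothetical API, not TCE 220's object model): either a common node object with a property marking its role, or distinct object types for primary nodes and reference nodes.

```python
# Sketch of the two representations mentioned above (hypothetical API).

from dataclasses import dataclass

# Option 1: a common object type, with a property setting that marks the role.
@dataclass
class GraphNode:
    computation: str          # e.g., "dsp.FFT"
    operation: str            # e.g., "output", "update"
    role: str = "other"       # "primary", "reference", or "other"

# Option 2: distinct object types for primary and reference nodes.
class PrimaryNode(GraphNode):
    def __init__(self, computation, operation):
        super().__init__(computation, operation, role="primary")

class ReferenceNode(GraphNode):
    def __init__(self, computation, operation, primary=None):
        super().__init__(computation, operation, role="reference")
        self.primary = primary            # the primary node whose state is shared

p = PrimaryNode("dsp.FFT", "output")
r = ReferenceNode("dsp.FFT", "update", primary=p)
```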
  • As used herein, the term "component" is intended to be broadly construed as hardware, firmware, or a combination of hardware and software.
  • the term "program code" is to be broadly interpreted to include text-based code that may not require further processing to execute (e.g., C++ code, Hardware Description Language (HDL) code, very-high-speed integrated circuits (VHSIC) HDL (VHDL) code, Verilog, Java, and/or other types of hardware or software based code that may be compiled and/or synthesized); binary code that may be executed (e.g., executable files that may directly be executed by an operating system, bitstream files that can be used to configure a field programmable gate array (FPGA), Java byte code, object files combined together with linker directives, source code, makefiles, etc.); text files that may be executed in conjunction with other executables (e.g., Python text files, a collection of dynamic-link library (DLL) files with text-based combining, configuration information that connects pre-compiled modules, an extensible markup language (XML) file describing module linkage, etc.); etc.
  • program code may include different combinations of the above-identified classes (e.g., text-based code, binary code, text files, etc.). Additionally, or alternatively, program code may include code generated using a dynamically-typed programming language (e.g., the M language, a MATLAB® language, a MATLAB-compatible language, a MATLAB-like language, etc.) that can be used to express problems and/or solutions in mathematical notations. Additionally, or alternatively, program code may be of any type, such as a function, a script, an object, etc., and a portion of program code may include one or more characters, lines, etc. of the program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Human Computer Interaction (AREA)
  • Debugging And Monitoring (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

According to the invention, a device receives information that identifies a primary node included in a computational graph. The primary node represents a first operation that modifies a state value, associated with the primary node and a reference node, when the primary node is executed. The device receives information that identifies the reference node included in the computational graph. The reference node represents a second operation that modifies the state value, associated with the primary node and the reference node, when the reference node is executed. The device obtains the computational graph that includes the primary node and the reference node, and executes the primary node and the reference node in a single iteration of the computational graph. The device modifies the state value, associated with the primary node and the reference node, based on executing the primary node and the reference node.
PCT/US2013/076998 2013-03-15 2013-12-20 Nœuds de référence dans un graphe de calcul WO2014149109A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP13822035.5A EP2972799A1 (fr) 2013-03-15 2013-12-20 Noeuds de référence dans un graphe de calcul

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/834,066 2013-03-15
US13/834,066 US11061539B2 (en) 2013-03-15 2013-03-15 Reference nodes in a computational graph

Publications (1)

Publication Number Publication Date
WO2014149109A1 true WO2014149109A1 (fr) 2014-09-25

Family

ID=49998681

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2013/076998 WO2014149109A1 (fr) 2013-03-15 2013-12-20 Nœuds de référence dans un graphe de calcul

Country Status (3)

Country Link
US (1) US11061539B2 (fr)
EP (1) EP2972799A1 (fr)
WO (1) WO2014149109A1 (fr)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10331740B2 (en) 2014-02-10 2019-06-25 Apple Inc. Systems and methods for operating a server-side data abstraction layer
JP5666758B1 (ja) * 2014-06-25 2015-02-12 楽天株式会社 情報処理装置、情報処理方法、プログラム、記憶媒体
US10928970B2 (en) 2014-07-18 2021-02-23 Apple Inc. User-interface for developing applications that apply machine learning
US11151446B2 (en) * 2015-10-28 2021-10-19 Google Llc Stream-based accelerator processing of computational graphs
US10783435B2 (en) 2015-10-28 2020-09-22 Google Llc Modifying computational graphs
US10325340B2 (en) * 2017-01-06 2019-06-18 Google Llc Executing computational graphs on graphics processing units
CN108304177A (zh) * 2017-01-13 2018-07-20 辉达公司 计算图的执行
US11343352B1 (en) * 2017-06-21 2022-05-24 Amazon Technologies, Inc. Customer-facing service for service coordination
WO2019018564A1 (fr) * 2017-07-18 2019-01-24 Syntiant Synthétiseur neuromorphique
US11210823B1 (en) 2018-06-04 2021-12-28 Swoop Inc. Systems and methods for attributing value to data analytics-driven system components
KR20210107531A (ko) * 2018-12-24 2021-09-01 인텔 코포레이션 멀티-프로세스 웹 브라우저 환경에서 머신 러닝 모델을 프로세싱하기 위한 방법들 및 장치
CN109669772B (zh) * 2018-12-28 2020-03-31 第四范式(北京)技术有限公司 计算图的并行执行方法和设备
GB2582785A (en) * 2019-04-02 2020-10-07 Graphcore Ltd Compiling a program from a graph
US11580444B2 (en) 2019-04-16 2023-02-14 Apple Inc. Data visualization machine learning model performance
WO2021051958A1 (fr) * 2019-09-18 2021-03-25 华为技术有限公司 Procédé et système de fonctionnement de modèle

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070271381A1 (en) * 2006-05-16 2007-11-22 Joseph Skeffington Wholey Managing computing resources in graph-based computations
WO2008021953A2 (fr) * 2006-08-10 2008-02-21 Ab Initio Software Llc Distribution des services dans les calculs basés sur des graphes
US20080271041A1 (en) * 2007-04-27 2008-10-30 Kabushiki Kaisha Toshiba Program processing method and information processing apparatus

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6784903B2 (en) * 1997-08-18 2004-08-31 National Instruments Corporation System and method for configuring an instrument to perform measurement functions utilizing conversion of graphical programs into hardware implementations
EP1402379A4 (fr) * 2001-05-25 2009-08-12 Annapolis Micro Systems Inc Procede et appareil permettant de modeliser des systemes a flux de donnees et realisation de ces systemes dans le materiel informatique
US6496058B1 (en) * 2001-07-24 2002-12-17 Virtual Ip Group Method for designing an integrated circuit containing multiple integrated circuit designs and an integrated circuit so designed
US7113505B2 (en) * 2001-12-17 2006-09-26 Agere Systems Inc. Mesh architecture for synchronous cross-connects
US8510329B2 (en) * 2005-05-25 2013-08-13 Experian Marketing Solutions, Inc. Distributed and interactive database architecture for parallel and asynchronous data processing of complex data and for real-time query processing
US7761846B2 (en) * 2005-08-16 2010-07-20 National Instruments Corporation Graphical programming methods for generation, control and routing of digital pulses
KR100781358B1 (ko) * 2005-10-21 2007-11-30 삼성전자주식회사 데이터 처리 시스템 및 그의 데이터 처리방법
US20070239993A1 (en) * 2006-03-17 2007-10-11 The Trustees Of The University Of Pennsylvania System and method for comparing similarity of computer programs
US8028241B2 (en) * 2006-08-04 2011-09-27 National Instruments Corporation Graphical diagram wires whose appearance represents configured semantics
US20080126956A1 (en) * 2006-08-04 2008-05-29 Kodosky Jeffrey L Asynchronous Wires for Graphical Programming
US20080059944A1 (en) * 2006-08-15 2008-03-06 Zeligsoft Inc. Deployment-aware software code generation
US7987458B2 (en) * 2006-09-20 2011-07-26 Intel Corporation Method and system for firmware image size reduction
CN101821721B (zh) * 2007-07-26 2017-04-12 起元技术有限责任公司 具有误差处理的事务型基于图的计算
US8392878B2 (en) * 2007-10-31 2013-03-05 National Instruments Corporation In-place structure in a graphical program
US20090119640A1 (en) * 2007-11-07 2009-05-07 Microsoft Corporation Graphical application for building distributed applications
US7962650B2 (en) * 2008-04-10 2011-06-14 International Business Machines Corporation Dynamic component placement in an event-driven component-oriented network data processing system
US8838944B2 (en) * 2009-09-22 2014-09-16 International Business Machines Corporation Fast concurrent array-based stacks, queues and deques using fetch-and-increment-bounded, fetch-and-decrement-bounded and store-on-twin synchronization primitives
CA2782414C (fr) * 2009-12-14 2021-08-03 Ab Initio Technology Llc Specification d'elements d'interface utilisateur
US8451739B2 (en) * 2010-04-15 2013-05-28 Silver Spring Networks, Inc. Method and system for detecting failures of network nodes
US8572229B2 (en) * 2010-05-28 2013-10-29 Microsoft Corporation Distributed computing
US8595359B2 (en) * 2011-03-08 2013-11-26 Cisco Technology, Inc. Efficient message distribution for directed acyclic graphs
US8856060B2 (en) * 2011-03-09 2014-10-07 International Business Machines Corporation Creating stream processing flows from sets of rules
US9223488B1 (en) * 2011-05-26 2015-12-29 Lucasfilm Entertainment Company Ltd. Navigable interfaces for graphical representations
US8793599B1 (en) * 2011-08-15 2014-07-29 Lucasfilm Entertainment Company Ltd. Hybrid processing interface
US20130232433A1 (en) * 2013-02-01 2013-09-05 Concurix Corporation Controlling Application Tracing using Dynamic Visualization
US9785419B2 (en) * 2014-09-02 2017-10-10 Ab Initio Technology Llc Executing graph-based program specifications

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070271381A1 (en) * 2006-05-16 2007-11-22 Joseph Skeffington Wholey Managing computing resources in graph-based computations
WO2008021953A2 (fr) * 2006-08-10 2008-02-21 Ab Initio Software Llc Distribution des services dans les calculs basés sur des graphes
US20080271041A1 (en) * 2007-04-27 2008-10-30 Kabushiki Kaisha Toshiba Program processing method and information processing apparatus

Also Published As

Publication number Publication date
US11061539B2 (en) 2021-07-13
US20140282180A1 (en) 2014-09-18
EP2972799A1 (fr) 2016-01-20

Similar Documents

Publication Publication Date Title
US11061539B2 (en) Reference nodes in a computational graph
US9513880B2 (en) Graphical function specialization
US9846575B1 (en) Installation of a technical computing environment customized for a target hardware platform
US9536023B2 (en) Code generation for using an element in a first model to call a portion of a second model
US9569179B1 (en) Modifying models based on profiling information
US10114917B1 (en) Systems and methods for mapping executable models to programmable logic device resources
US9304838B1 (en) Scheduling and executing model components in response to un-modeled events detected during an execution of the model
US11327725B2 (en) Systems and methods for aggregating implicit and explicit event code of executable models
US10346138B1 (en) Graph class application programming interfaces (APIs)
US10409567B2 (en) Trimming unused dependencies using package graph and module graph
US9244652B1 (en) State management for task queues
US10387584B1 (en) Streaming on hardware-software platforms in model based designs
US20170046132A1 (en) Data type visualization
US9256405B1 (en) Code generation based on regional upsampling-based delay insertion
US9135027B1 (en) Code generation and execution for dynamic programming languages
US9740529B1 (en) High throughput synchronous resource-constrained scheduling for model-based design
US10853532B2 (en) Graphical modeling for accessing dynamic system states across different components
US11144684B2 (en) Method and system for improving efficacy of model verification by model partitioning
US10095487B1 (en) Data type visualization
US10956212B1 (en) Scheduler for tall-gathering algorithms that include control flow statements
US10684781B1 (en) Big data read-write reduction
Farooqi et al. Nonintrusive AMR asynchrony for communication optimization
US9891894B1 (en) Code continuity preservation during automatic code generation
US9753615B1 (en) Interactive heat map for graphical model performance view
US11853690B1 (en) Systems and methods for highlighting graphical models

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13822035

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2013822035

Country of ref document: EP