US20200348915A1 - Mapping a computer code to wires and gates - Google Patents

Mapping a computer code to wires and gates Download PDF

Info

Publication number
US20200348915A1
US20200348915A1 US16/688,127 US201916688127A US2020348915A1
Authority
US
United States
Prior art keywords
wires
gates
code
input
dfsm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/688,127
Inventor
Daniel Joseph Bentley Kluss
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Archeo Futurus Inc
Original Assignee
Archeo Futurus Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US15/630,691 external-priority patent/US9996328B1/en
Application filed by Archeo Futurus Inc filed Critical Archeo Futurus Inc
Priority to US16/688,127 priority Critical patent/US20200348915A1/en
Publication of US20200348915A1 publication Critical patent/US20200348915A1/en
Abandoned legal-status Critical Current

Links

Images

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03KPULSE TECHNIQUE
    • H03K19/00Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits
    • H03K19/02Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components
    • H03K19/173Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components using elementary logic circuits as components
    • H03K19/177Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components using elementary logic circuits as components arranged in matrix form
    • H03K19/17704Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components using elementary logic circuits as components arranged in matrix form the logic functions being realised by the interconnection of rows and columns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/40Transformation of program code
    • G06F8/41Compilation
    • G06F8/44Encoding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/30Circuit design
    • G06F30/34Circuit design for reconfigurable circuits, e.g. field programmable gate arrays [FPGA] or programmable logic devices [PLD]
    • G06F30/343Logical level
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/30Creation or generation of source code
    • G06F8/35Creation or generation of source code model driven
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/40Transformation of program code
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/40Transformation of program code
    • G06F8/41Compilation
    • G06F8/44Encoding
    • G06F8/443Optimisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/30Circuit design
    • G06F30/32Circuit design at the digital level
    • G06F30/327Logic synthesis; Behaviour synthesis, e.g. mapping logic, HDL to netlist, high-level language to RTL or netlist
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/30Circuit design
    • G06F30/32Circuit design at the digital level
    • G06F30/33Design verification, e.g. functional simulation or model checking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/30Circuit design
    • G06F30/34Circuit design for reconfigurable circuits, e.g. field programmable gate arrays [FPGA] or programmable logic devices [PLD]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/30Circuit design
    • G06F30/39Circuit design at the physical level
    • G06F30/394Routing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/40Transformation of program code
    • G06F8/41Compilation
    • G06F8/44Encoding
    • G06F8/443Optimisation
    • G06F8/4434Reducing the memory space required by the program code

Definitions

  • This disclosure relates generally to data processing and, more specifically, to methods and systems for mapping a computer code to wires and gates.
  • Integrated circuits such as field-programmable gate array (FPGA) or application-specific integrated circuits (ASIC), can be used in many computing applications.
  • FPGA field-programmable gate array
  • ASIC application-specific integrated circuits
  • integrated circuits can be used in servers and computing clouds to process Hypertext Transfer Protocol (HTTP) and other requests from client devices, which may provide a faster response than standard software-based applications.
  • HTTP Hypertext Transfer Protocol
  • a method includes acquiring a code that is written in a programming language. The method may further include generating, based on the code, a finite state machine (FSM). The method may further include generating, based on the FSM, a wires and gates representation.
  • the wires and gates representation may include a plurality of wires and plurality of combinatorial logics.
  • the method may further include configuring, based on the wires and gates representation, a field-programmable gate array.
  • the method may also include determining that one or more combinatorial logics of the plurality of combinatorial logics do not depend on input from wires of the plurality of wires.
  • the method may further include storing the one or more combinatorial logics in a shift register in response to the determination that the one or more combinatorial logics do not depend on input from wires of the plurality of wires.
  • the method may further include determining that one or more combinatorial logics of the plurality of combinatorial logics depend on input from wires of the plurality of wires.
  • the method may further include storing the one or more combinatorial logics in flip-flops in response to the determination that the one or more combinatorial logics depend on input from wires of the plurality of wires.
  • input of each of the plurality of wires may represent a symbol selected from a set of symbols of a structured data packet.
  • the size of the symbol can be selected to be equal to a number of bits of the structured data packet transferred per clock cycle according to a data transmission protocol.
  • a number of gates and a number of wires in the wires and gates representation can be optimized based on a rate of bits transferred per clock cycle of the data transmission protocol or the structure of the structured data packet.
  • the structured data packet can include an ethernet packet, optical transport network packet, or peripheral component interconnect express packet.
  • the programming language can include a high-level programming language such as JavaScript, C, C++, or a domain specific language.
  • the method may further include optimizing the FSM prior to generating the wires and gates representation. Optimizing the FSM includes minimizing a number of states in the FSM.
  • a system for mapping a computer code to wires and gates may include at least one processor and a memory storing processor-executable codes, wherein the at least one processor can be configured to implement the operations of the above-mentioned method for mapping a computer code to wires and gates.
  • the steps of the method for mapping a computer code to wires and gates are stored on a machine-readable medium comprising instructions, which, when implemented by one or more processors, perform the recited steps.
  • FIG. 1 is a block diagram showing a system for compiling source code, according to some example embodiments.
  • FIG. 2 is a block diagram showing an example system for processing of a Hypertext Transfer Protocol (HTTP) request, according to an example embodiment.
  • HTTP Hypertext Transfer Protocol
  • FIG. 3 is a process flow diagram showing a method for compiling source code, according to an example embodiment.
  • FIG. 4 shows a diagrammatic representation of a computing device for a machine in the example electronic form of a computer system, within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein can be executed.
  • FIG. 5 is a block diagram showing a system for mapping a computer code to wires and gates, according to some example embodiments.
  • FIG. 6 is a flow chart showing a method for mapping a computer code to wires and gates, according to an example embodiment.
  • the technology described herein allows translating a computer code from a high-level programming language to wires and gates representation. Some embodiments of the present disclosure may facilitate optimizing the source code according to requirements of a hardware description. Embodiments of the present disclosure may further allow configuring, based on the wires and gates representation, programmable integrated circuits.
  • the method for mapping a computer code to wires and gates may include acquiring a code written in a programming language and generating a FSM based on the acquired code. The method may further include generating, based on the FSM, a wires and gates representation.
  • the wires and gates representation may include a plurality of wires and plurality of combinatorial logics.
  • the method may further include configuring, based on the wires and gates representation, a field-programmable gate array.
  • FIG. 1 is a block diagram showing an example system 100 for compiling source code, according to some example embodiments.
  • the example system 100 may include a parsing expression grammar (PEG) module 110, a converter 120 between abstract syntax tree (AST) and a non-deterministic finite state machine (NFSM), a converter 130 between NFSM and deterministic finite state machine (DFSM), and an optimizer 140.
  • PEG parsing expression grammar
  • AST abstract syntax tree
  • NFSM non-deterministic finite state machine
  • DFSM deterministic finite state machine
  • the system 100 can be implemented with a computer system. An example computer system is described below with reference to FIG. 4 .
  • the PEG module 110 may be configured to receive an input code 105 .
  • the input code 105 may be written in an input programming language.
  • the input programming language may be associated with a grammar 170 .
  • the grammar 170 may be determined by an augmented Backus-Naur Form (ABNF).
  • the PEG module may be configured to convert the input code 105 into an AST 115 based on the grammar 170 .
  • the AST 115 may be further provided to converter 120 .
  • the converter 120 may be configured to transform the AST 115 into NFSM 125. Thereafter, NFSM 125 may be provided to the converter 130. The converter 130 may be configured to translate the NFSM 125 into DFSM 135. The DFSM 135 can be provided to optimizer 140.
  • optimizer 140 may be configured to optimize the DFSM 135 to obtain a DFSM 145 .
  • the optimization may include minimizing a number of states in the DFSM 135 .
  • optimization can be performed by an implication chart method, Hopcroft's algorithm, the Moore reduction procedure, Brzozowski's algorithm, and other techniques.
  • Brzozowski's algorithm includes reversing the edges of a DFSM to produce an NFSM and converting this NFSM to a DFSM using a standard powerset construction, constructing only the reachable states of the converted DFSM. Repeating the reversal a second time produces a DFSM with a provably minimum number of states.
  • the DFSM 145, which is an optimized DFSM 135, can be further provided to converter 130.
  • the converter 130 may be configured to translate the DFSM 145 into a NFSM 150 .
  • the NFSM 150 may be further provided to converter 120 .
  • the converter 120 may be configured to translate the NFSM 150 into an AST 155 .
  • the AST 155 may be further provided to PEG module 110 .
  • the PEG module 110 may be configured to convert the AST 155 into output code 160 based on a grammar 180 .
  • the grammar 180 may specify an output programming language.
  • the input languages or output languages may include one of high level programming languages, such as but not limited to C, C++, C#, JavaScript, PHP, Python, Perl, and the like.
  • the input code or output source code can be optimized to run on various hardware platforms like Advanced RISC Machine (ARM), x86-64, graphics processing unit (GPU), a field-programmable gate array (FPGA), or a custom application-specific integrated circuit (ASIC).
  • the input code or source code can be optimized to run on various operational systems and platforms, such as Linux, Windows, Mac OS, Android, iOS, OpenCL/CUDA, bare metal, FPGA, and a custom ASIC.
  • the output programming language can be the same as the input programming languages.
  • the system 100 can be used to optimize the input code 105 by converting the input code 105 to the DFSM 135 , optimizing the DFSM 135 in terms of number of states, and converting the optimized DFSM 135 to output code 160 in the original programming language.
  • the input programming language may include a domain specific language (DSL) which is determined by a strict grammar (i.e., ABNF).
  • DSL domain specific language
  • ABNF augmented Backus-Naur Form
  • the system 100 may be used to convert documents written in a DSL to an output code 160 written in a high-level programming language or a code written in a low-level programming language.
  • input code 105 or output code 160 can be written in a presentation language, including, but not limited to, HTML, XML, and XHTML.
  • input code 105 or output code 160 may include CSS.
  • the system 100 may further include a database.
  • the database may be configured to store frequently occurring patterns in the input code written in specific programming languages and parts of optimized DFSM corresponding to the frequently occurring patterns.
  • the system 100 may include an additional module for looking up a specific pattern of the input code 105 in the database. If the database includes an entry containing a specific pattern and corresponding parts of DFSM, then system 100 may be configured to substitute the specific pattern with the corresponding part of the DFSM directly, skipping the steps of converting the specific pattern to the AST and generating the NFSM and the DFSM.
  • the input code or output code may include a binary assembly executable by a processor.
  • the input code 105 or output code 160 may be written in a hardware description language (HDL), such as SystemC, Verilog, and Very High Speed Integrated Circuits Hardware Description Language (VHDL).
  • the input code 105 or output code 160 may include bits native to the FPGA as programmed using Joint Test Action Group (JTAG) standards.
  • JTAG Joint Test Action Group
  • DFSM 135 can be optimized using a constraint solver.
  • the constraint solver may include some requirements on a hardware platform described by the HDL.
  • the requirements may include requirements for a runtime, power usage, and cost of the hardware platform.
  • the optimization of the DFSM 135 can be carried out to satisfy one of the restrictions of the requirements.
  • the optimization of the DFSM may be performed to satisfy several requirement restrictions with weights assigned to each of the restrictions.
  • the DFSM 135 may be formally verified in accordance with a formal specification to detect software-related security vulnerabilities, including but not limited to, memory leak, division-by-zero, out-of-bounds array access, and others.
  • the input source can be written in terms of a technical specification.
  • An example technical specification can include a Request for Comments (RFC).
  • the technical specification may be associated with a specific grammar. Using the specific grammar, the input code, written in terms of the technical specification, can be translated into the AST 115 and further into the DFSM 135 .
  • the DFSM 135 can be optimized using a constraint solver. The constraint solver may include restrictions described in the technical specification.
  • FIG. 2 is a block diagram showing an example system 200 for processing of HTTP requests, according to an example embodiment.
  • the system 200 may include a client 210, the system 100 for compiling source codes, and a FPGA 240.
  • the system 100 may be configured to receive an RFC 105 for Internet Protocol (IP), Transmission Control Protocol (TCP), and HTTP.
  • the system 100 may be configured to translate the RFC into VHDL code and, in turn, compile the VHDL code into bits 235 native to FPGA 240.
  • the FPGA 240 may be programmed with bits 235 .
  • the FPGA 240 includes a finite state machine, FSM 225, corresponding to bits 235.
  • the bits 235 may be stored in a flash memory and the FPGA 240 may be configured to request bits 235 from the flash memory upon startup.
  • the client 210 may be configured to send a HTTP request 215 to the FPGA 240 .
  • the HTTP request 215 can be read by the FPGA 240 .
  • the FSM 225 may be configured to recognize the HTTP request 215 and return an HTTP response 245 corresponding to the HTTP request 215 back to the client 210 .
  • the FPGA 240 may include a fabric of FSMs 250-260 to hold customers' application logic for recognizing different HTTP requests and providing different HTTP responses.
  • the system 200 may be an improvement over conventional HTTP servers because the system 200 does not require large computing resources and maintenance of software for treatment of HTTP requests.
  • the system does not need to be physically large and requires a smaller amount of power than conventional HTTP servers.
  • FIG. 3 is a process flow diagram showing a method 300 for compiling source codes, according to an example embodiment.
  • the method 300 can be implemented with a computer system.
  • An example computer system is described below with reference to FIG. 4 .
  • the method 300 may commence, in block 302 , with acquiring a first code, the first code being written in a first language.
  • method 300 may include parsing, based on a first grammar associated with the first language, the first code to obtain a first AST.
  • the method 300 may include converting the first AST to a NFSM.
  • the method 300 may include converting the first NFSM to a first DFSM.
  • the method 300 may include optimizing the first DFSM to obtain the second DFSM.
  • the method may include converting the second DFSM to a second NFSM.
  • the method 300 may include converting the second NFSM to a second AST.
  • the method 300 may include recompiling, based on a second grammar associated with a second language, the AST into the second code, the second code being written in the second language.
  • FIG. 4 shows a diagrammatic representation of a computing device for a machine in the exemplary electronic form of a computer system 400 , within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein can be executed.
  • the machine operates as a standalone device or can be connected (e.g., networked) to other machines.
  • the machine can operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
  • the machine can be a server, a personal computer (PC), a tablet PC, a set-top box (STB), a PDA, a cellular telephone, a digital camera, a portable music player (e.g., a portable hard drive audio device, such as a Moving Picture Experts Group Audio Layer 3 (MP3) player), a web appliance, a network router, a switch, a bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
  • MP3 Moving Picture Experts Group Audio Layer 3
  • the example computer system 400 includes a processor or multiple processors 402, a hard disk drive 404, a main memory 406, and a static memory 408, which communicate with each other via a bus 410.
  • the computer system 400 may also include a network interface device 412 .
  • the hard disk drive 404 may include a computer-readable medium 420 , which stores one or more sets of instructions 422 embodying or utilized by any one or more of the methodologies or functions described herein.
  • the instructions 422 can also reside, completely or at least partially, within the main memory 406 and/or within the processors 402 during execution thereof by the computer system 400 .
  • the main memory 406 and the processors 402 also constitute machine-readable media.
  • While the computer-readable medium 420 is shown in an exemplary embodiment to be a single medium, the term “computer-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
  • the term “computer-readable medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present application, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such a set of instructions.
  • computer-readable medium shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media. Such media can also include, without limitation, hard disks, floppy disks, NAND or NOR flash memory, digital video disks, RAM, ROM, and the like.
  • the exemplary embodiments described herein can be implemented in an operating environment comprising computer-executable instructions (e.g., software) installed on a computer, in hardware, or in a combination of software and hardware.
  • the computer-executable instructions can be written in a computer programming language or can be embodied in firmware logic. If written in a programming language conforming to a recognized standard, such instructions can be executed on a variety of hardware platforms and for interfaces to a variety of operating systems.
  • computer software programs for implementing the present method can be written in any number of suitable programming languages such as, for example, C, Python, Javascript, Go, or other compilers, assemblers, interpreters or other computer languages or platforms.
  • FIG. 5 is a block diagram showing an example system 500 for mapping a computer code to wires and gates, according to some example embodiments.
  • the example system 500 may include a parsing expression grammar (PEG) module 110, a converter 120 to convert between AST and NFSM, a converter 130 to convert between NFSM and DFSM, an optimizer 140, and a translator 510 to translate from DFSM to wires and gates.
  • PEG parsing expression grammar
  • the system 500 can be implemented with a computer system. An example computer system is described below with reference to FIG. 4 .
  • the PEG module 110 may receive an input code 105 written in an input programming language.
  • the input programming language can be associated with a grammar 170 .
  • the PEG module can be configured to convert the input code 105 into an AST 115 .
  • the converter 120 may further transform the AST 115 into a NFSM 125 .
  • the converter 130 may be configured to translate the NFSM 125 into a DFSM 135 .
  • Optimizer 140 may further optimize the DFSM 135 to obtain a DFSM 145, which is the optimized DFSM 135.
  • the DFSM 145 can be further provided to translator 510 .
  • the translator 510 may be configured to translate the optimized DFSM 145 into a set 520 of wires and gates.
  • the edges of DFSM 145 can be represented as wires.
  • the states can be represented as a combinatorial logic of the wires or a simple gate.
  • the set 520 of wires and gates can be used to match inputs, internal states, and outputs.
  • the set 520 of wires and gates can also be used to design, program, or configure integrated circuits, such as but not limited to FPGAs and ASICs.
  • the set 520 of wires and gates can be used to configure programmable logic blocks and reconfigurable interconnects of FPGA 240 (shown in FIG. 2) to process HTTP requests.
  • the integrated circuits may receive packets via a network.
  • the packets can include ethernet packets, Optical Transport Network (OTN) packets, Peripheral Component Interconnect Express (PCIE) packets or the like.
  • the packets include an ordered set of inputs in time with a defined beginning, a number of input symbols, and an end.
  • the packets can include a preamble, start frame delimiter, header, protocol specific data, and cyclic redundancy check.
  • the FPGA can be configured to perform operations included in the initial computer code based on wires and gates.
  • the FPGA can be configured to send a reply to a received data packet.
  • the FPGA can be configured to match or filter data packets, forward data packets, or store data packets in the FPGA.
  • the FPGA can be also reconfigured based on the information included in the received data packets.
  • the data in packets are clocked at a specific rate.
  • per each clock cycle, only a certain input block of a data packet can be received by the FPGA, such that only a certain number of wires can be used in the FPGA.
  • each input block is a single 8-bit/8-wire input at each clock cycle.
  • GMII gigabit media-independent interface
  • one separate wire for each state may represent a symbol from 0 to 83.
  • a transition from one state to another state may occur when 0 or 1 possible inputs are matched for each state. The state does not advance when the inputs fail to match the whole pattern.
  • a maximum of 84 wires out of the 256 separate wires could possibly be used.
  • the same input value can be used multiple times. For example, 0x55 can appear 7 times at the beginning of a packet. Because one input wire can be used multiple times and because there are states with 0 possible inputs such as in the packet ID field, the number of unique input wires that are used tends to be small. For common cases, the number of unique input wires can be 20 wires or less.
  • with the states arranged in parallel, each state matches a single 8-bit symbol (or nothing) by combining the wires from the previous state (or the start-of-packet signal) with the wires corresponding to the input symbol (or nothing).
  • Each state can be represented as one of the following:
  • packetStart can be the zeroth state, causing the start of the first state.
  • for states in which any input is acceptable, no input wire needs to be examined.
  • the multiple states that do not look at any input wires may be implemented as a shift register. Any states that are not stored in flip-flops can be stored in a shift register because these states are not accessed individually.
  • each symbol can be represented as 32 bits at each transition of a clock.
  • the maximum number of wires to represent all possible 32-bit symbols is over 4 billion wires.
  • the length of the data packet is the same as in the case of a GMII interface. Assuming that there are only 1/4 as many states and that one input symbol is 4 times larger than in the GMII interface, the number of wires is limited by the symbol-count length of the packet: a minimum-size packet being 84 bytes, or 1/4 of that as 32-bit symbols, 1/8 as 64-bit symbols, and so forth. There can be fewer due to redundancies.
  • the decisions may form a tree. Earlier states are shared in the tree. Each unique type of a packet to be matched requires a minimum of 1 additional gate to uniquely match the packet to the gate and have a maximum number of states not shared with other similar types of packets to match. Generally, when the number of packet matching rules is more than a hundred, as few as 1 or 2 additional gates are required to match a packet. In most cases, only 1 additional gate is needed for each additional matching rule.
  • FIG. 6 is a flow chart showing a method 600 for mapping a computer code to wires and gates, according to some example embodiments.
  • the method 600 can be implemented with a computer system.
  • An example computer system is described below with reference to FIG. 4 .
  • the method 600 may commence, in block 602 , with acquiring a code.
  • the code can be written in a programming language.
  • the programming language can be a high-level programming language, such as, for example, JavaScript, C, C++, a domain specific language, and the like.
  • the code can be written in terms of a technical specification.
  • An example technical specification can include an RFC.
  • the method 600 may generate, based on the code, an FSM.
  • the method 600 may proceed with generating, based on the FSM, a wires and gates representation.
  • the wires and gates representation may include a plurality of wires and a plurality of combinatorial logics.
  • An input of each of the plurality of wires may represent a symbol from a set of symbols of a structured data packet.
  • the size of the symbol can be equal to a number of bits of the structured data packet transferred per clock cycle according to a data transmission protocol.
  • the packet may include an Ethernet packet, OTN packet, or PCIE packet.
  • the data transmission protocol may include GMII, XGMII, and so forth. States arising from combinational logic may be stored in flip-flops or, alternatively, in shift registers if the individual states from the flip-flops are not directly needed.
  • the method 600 may include configuring, based on the wires and gates representation, a field-programmable gate array.
  • Combinatorial logics that do not depend on input from wires of the plurality of wires can be implemented in a shift register.
  • Other combinatorial logics can be stored in flip-flops.
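  • The following is a minimal Python sketch, not taken from the patent text, of the storage decision described in the two items above: states whose next-state logic ignores the input wires are chained into a shift register, while the rest are kept in flip-flops. The names (State, plan_storage) and the example pattern are hypothetical.

```python
# Hypothetical sketch: split one-hot states between a shift register and flip-flops.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class State:
    name: str
    expected_symbol: Optional[int]  # None means "any input is acceptable"

def plan_storage(states: List[State]):
    """Return (shift_register_states, flip_flop_states)."""
    shift_register, flip_flops = [], []
    for s in states:
        if s.expected_symbol is None:
            # The transition does not look at any input wire, so the state
            # bit can simply be shifted forward on every clock.
            shift_register.append(s)
        else:
            # The transition depends on an input-symbol match, so the state
            # bit must be individually visible in a flip-flop.
            flip_flops.append(s)
    return shift_register, flip_flops

preamble = [State(f"pre{i}", 0x55) for i in range(7)]   # must match 0x55
dont_care = [State(f"pad{i}", None) for i in range(4)]  # any input accepted
sr, ff = plan_storage(preamble + dont_care)
print(len(sr), "states in a shift register,", len(ff), "in flip-flops")
```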

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Logic Circuits (AREA)
  • Architecture (AREA)

Abstract

Methods and systems for mapping computer code to wires and gates are disclosed. An example method may include acquiring a code written in a programming language and generating, based on the code, a finite state machine (FSM). The method may further include generating, based on the FSM, a wires and gates representation, the wires and gates representation including a plurality of wires and a plurality of combinatorial logics. The method may further include configuring, based on the wires and gates representation, a field-programmable gate array. Input of each of the plurality of wires may represent a symbol selected from a set of symbols of a structured data packet. The size of the symbol can be equal to a number of bits of the structured data packet transferred per clock cycle according to a data transmission protocol.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • This application is a continuation of U.S. patent application Ser. No. 15/970,884 filed May 4, 2018, now U.S. Pat. No. 10,481,881, which is a continuation-in-part of U.S. patent application Ser. No. 15/630,691 filed Jun. 22, 2017, now U.S. Pat. No. 9,996,328, the subject matter of which is incorporated herein for all purposes.
  • TECHNICAL FIELD
  • This disclosure relates generally to data processing and, more specifically, to methods and systems for mapping a computer code to wires and gates.
  • BACKGROUND
  • The approaches described in this section could be pursued but are not necessarily approaches that have been previously conceived or pursued. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section.
  • Integrated circuits, such as field-programmable gate array (FPGA) or application-specific integrated circuits (ASIC), can be used in many computing applications. For example, integrated circuits can be used in servers and computing clouds to process Hypertext Transfer Protocol (HTTP) and other requests from client devices, which may provide a faster response than standard software-based applications. Despite the advantages of using integrated circuits in computing applications, designing, programming, and configuring integrated circuits remain a difficult task.
  • SUMMARY
  • This summary is provided to introduce a selection of concepts in a simplified form that is further described below in the Detailed Description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
  • Embodiments disclosed herein are directed to methods and systems for mapping a computer code to wires and gates. According to an example embodiment, a method includes acquiring a code that is written in a programming language. The method may further include generating, based on the code, a finite state machine (FSM). The method may further include generating, based on the FSM, a wires and gates representation. The wires and gates representation may include a plurality of wires and plurality of combinatorial logics.
  • The method may further include configuring, based on the wires and gates representation, a field-programmable gate array. The method may also include determining that one or more combinatorial logics of the plurality of combinatorial logics do not depend on input from wires of the plurality of wires. The method may further include storing the one or more combinatorial logics in a shift register in response to the determination that the one or more combinatorial logics do not depend on input from wires of the plurality of wires.
  • The method may further include determining that one or more combinatorial logics of the plurality of combinatorial logics depend on input from wires of the plurality of wires. The method may further include storing the one or more combinatorial logics in flip-flops in response to the determination that the one or more combinatorial logics depend on input from wires of the plurality of wires.
  • In certain embodiments, input of each of the plurality of wires may represent a symbol selected from a set of symbols of a structured data packet. The size of the symbol can be selected to be equal to a number of bits of the structured data packet transferred per clock cycle according to a data transmission protocol. A number of gates and a number of wires in the wires and gates representation can be optimized based on a rate of bits transferred per clock cycle of the data transmission protocol or the structure of the structured data packet. The structured data packet can include an Ethernet packet, optical transport network packet, or peripheral component interconnect express packet.
  • The programming language can include a high-level programming language such as JavaScript, C, C++, or a domain specific language. The method may further include optimizing the FSM prior to generating the wires and gates representation. Optimizing the FSM includes minimizing a number of states in the FSM.
  • According to one example embodiment of the present disclosure, a system for mapping a computer code to wires and gates is provided. The system may include at least one processor and a memory storing processor-executable codes, wherein the at least one processor can be configured to implement the operations of the above-mentioned method for mapping a computer code to wires and gates.
  • According to another example embodiment of the present disclosure, the steps of the method for mapping a computer code to wires and gates are stored on a machine-readable medium comprising instructions, which, when implemented by one or more processors, perform the recited steps.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements.
  • FIG. 1 is a block diagram showing a system for compiling source code, according to some example embodiments.
  • FIG. 2 is a block diagram showing an example system for processing of a Hypertext Transfer Protocol (HTTP) request, according to an example embodiment.
  • FIG. 3 is a process flow diagram showing a method for compiling source code, according to an example embodiment.
  • FIG. 4 shows a diagrammatic representation of a computing device for a machine in the example electronic form of a computer system, within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein can be executed.
  • FIG. 5 is a block diagram showing a system for mapping a computer code to wires and gates, according to some example embodiments.
  • FIG. 6 is a flow chart showing a method for mapping a computer code to wires and gates, according to an example embodiment.
  • DETAILED DESCRIPTION
  • The following detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show illustrations in accordance with exemplary embodiments. These exemplary embodiments, which are also referred to herein as “examples,” are described in enough detail to enable those skilled in the art to practice the present subject matter. The embodiments can be combined, other embodiments can be utilized, or structural, logical and electrical changes can be made without departing from the scope of what is claimed. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope is defined by the appended claims and their equivalents.
  • The technology described herein allows translating a computer code from a high-level programming language to wires and gates representation. Some embodiments of the present disclosure may facilitate optimizing the source code according to requirements of a hardware description. Embodiments of the present disclosure may further allow configuring, based on the wires and gates representation, programmable integrated circuits.
  • According to an example embodiment, the method for mapping a computer code to wires and gates may include acquiring a code written in a programming language and generating a FSM based on the acquired code. The method may further include generating, based on the FSM, a wires and gates representation. The wires and gates representation may include a plurality of wires and plurality of combinatorial logics. The method may further include configuring, based on the wires and gates representation, a field-programmable gate array.
  • FIG. 1 is a block diagram showing an example system 100 for compiling source code, according to some example embodiments. The example system 100 may include a parsing expression grammar (PEG) module 110, a converter 120 between abstract syntax tree (AST) and a non-deterministic finite state machine (NFSM), a converter 130 between NFSM and deterministic finite state machine (DFSM), and an optimizer 140. The system 100 can be implemented with a computer system. An example computer system is described below with reference to FIG. 4.
  • In some embodiments of the present disclosure, the PEG module 110 may be configured to receive an input code 105. In some embodiments, the input code 105 may be written in an input programming language. The input programming language may be associated with a grammar 170. In some embodiments, the grammar 170 may be determined by an augmented Backus-Naur Form (ABNF). The PEG module may be configured to convert the input code 105 into an AST 115 based on the grammar 170. The AST 115 may be further provided to converter 120.
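  • As an illustration of the PEG step (not part of the patent text), the following Python sketch shows the two core parsing expression grammar operations, sequence and ordered choice, building a small AST from an assumed toy rule; grammar 170 itself is not specified in the disclosure, so the rule and node names here are hypothetical.

```python
# Hypothetical PEG sketch: sequence and ordered-choice combinators producing
# a nested-tuple AST. The toy rule below stands in for grammar 170.
def lit(s):
    def parse(text, i):
        return (("lit", s), i + len(s)) if text.startswith(s, i) else None
    return parse

def seq(*parsers):
    def parse(text, i):
        children = []
        for p in parsers:
            result = p(text, i)
            if result is None:
                return None
            node, i = result
            children.append(node)
        return ("seq", children), i
    return parse

def choice(*parsers):  # PEG ordered choice: the first alternative that matches wins
    def parse(text, i):
        for p in parsers:
            result = p(text, i)
            if result is not None:
                return result
        return None
    return parse

# Toy ABNF-like rule: request-line = ("GET" / "HEAD") SP "/"
method = choice(lit("GET"), lit("HEAD"))
request_line = seq(method, lit(" "), lit("/"))
ast, consumed = request_line("GET /", 0)
print(ast)  # ('seq', [('lit', 'GET'), ('lit', ' '), ('lit', '/')])
```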
  • In some embodiments of the disclosure, the converter 120 may be configured to transform the AST 115 into NFSM 125. Thereafter, NFSM 125 may be provided to the converter 130. The converter 130 may be configured to translate the NFSM 125 into DFSM 135. The DFSM 135 can be provided to optimizer 140.
  • In some embodiments, optimizer 140 may be configured to optimize the DFSM 135 to obtain a DFSM 145. In some embodiments, the optimization may include minimizing a number of states in the DFSM 135. In various embodiments, optimization can be performed by an implication chart method, Hopcroft's algorithm, the Moore reduction procedure, Brzozowski's algorithm, and other techniques. Brzozowski's algorithm includes reversing the edges of a DFSM to produce an NFSM and converting this NFSM to a DFSM using a standard powerset construction, constructing only the reachable states of the converted DFSM. Repeating the reversal a second time produces a DFSM with a provably minimum number of states.
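  • A compact Python sketch of Brzozowski's algorithm as just described follows. It is an illustration under assumed data structures (transition dictionaries keyed by (state, symbol)), not the implementation of optimizer 140.

```python
# Brzozowski minimization sketch: reverse the automaton, determinize by the
# powerset construction (reachable states only), then repeat.
from collections import defaultdict

def reverse(dfsm):
    """Reverse every edge; old accepting states become start states of an NFSM."""
    rev = defaultdict(set)
    for (state, symbol), target in dfsm["delta"].items():
        rev[(target, symbol)].add(state)
    return {"delta": rev, "starts": set(dfsm["accepting"]), "accepting": {dfsm["start"]}}

def determinize(nfsm):
    """Powerset construction, building only the reachable subset-states."""
    start = frozenset(nfsm["starts"])
    symbols = {symbol for (_, symbol) in nfsm["delta"]}
    delta, accepting, seen, work = {}, set(), {start}, [start]
    while work:
        subset = work.pop()
        if subset & nfsm["accepting"]:
            accepting.add(subset)
        for symbol in symbols:
            target = frozenset(q for s in subset for q in nfsm["delta"].get((s, symbol), ()))
            if not target:
                continue
            delta[(subset, symbol)] = target
            if target not in seen:
                seen.add(target)
                work.append(target)
    return {"delta": delta, "start": start, "accepting": accepting}

def brzozowski_minimize(dfsm):
    # Reverse + determinize, applied twice, yields a minimal DFSM.
    return determinize(reverse(determinize(reverse(dfsm))))

# Tiny example: a 3-state DFSM over {'a'} accepting an even number of 'a's,
# with a redundant state; minimization merges it away.
dfsm = {"start": 0, "accepting": {0, 2},
        "delta": {(0, "a"): 1, (1, "a"): 2, (2, "a"): 1}}
print(len({s for (s, _) in brzozowski_minimize(dfsm)["delta"]}))  # 2 states
```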
  • In some embodiments, the DFSM 145, which is an optimized DFSM 135, can be further provided to converter 130. The converter 130 may be configured to translate the DFSM 145 into a NFSM 150. The NFSM 150 may be further provided to converter 120. The converter 120 may be configured to translate the NFSM 150 into an AST 155. The AST 155 may be further provided to PEG module 110.
  • In some embodiments, the PEG module 110 may be configured to convert the AST 155 into output code 160 based on a grammar 180. The grammar 180 may specify an output programming language.
  • In some embodiments, the input languages or output languages may include one of high level programming languages, such as but not limited to C, C++, C#, JavaScript, PHP, Python, Perl, and the like. In various embodiments, the input code or output source code can be optimized to run on various hardware platforms like Advanced RISC Machine (ARM), x86-64, graphics processing unit (GPU), a field-programmable gate array (FPGA), or a custom application-specific integrated circuit (ASIC). In various embodiments, the input code or source code can be optimized to run on various operational systems and platforms, such as Linux, Windows, Mac OS, Android, iOS, OpenCL/CUDA, bare metal, FPGA, and a custom ASIC.
  • In certain embodiments, the output programming language can be the same as the input programming languages. In these embodiments, the system 100 can be used to optimize the input code 105 by converting the input code 105 to the DFSM 135, optimizing the DFSM 135 in terms of number of states, and converting the optimized DFSM 135 to output code 160 in the original programming language.
  • In some other embodiments, the input programming language may include a domain specific language (DSL) which is determined by a strict grammar (i.e., ABNF). In these embodiments, the system 100 may be used to convert documents written in a DSL to an output code 160 written in a high-level programming language or a code written in a low-level programming language. In certain embodiments, input code 105 or output code 160 can be written in a presentation language, including, but not limited to, HTML, XML, and XHTML. In some embodiments, input code 105 or output code 160 may include CSS.
  • In some embodiments, the system 100 may further include a database. The database may be configured to store frequently occurring patterns in the input code written in specific programming languages and parts of the optimized DFSM corresponding to the frequently occurring patterns. In these embodiments, the system 100 may include an additional module for looking up a specific pattern of the input code 105 in the database. If the database includes an entry containing a specific pattern and corresponding parts of the DFSM, then system 100 may be configured to substitute the specific pattern with the corresponding part of the DFSM directly, skipping the steps of converting the specific pattern to the AST and generating the NFSM and the DFSM.
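  • A toy Python sketch of this lookup follows; the names and placeholder fragments are hypothetical, not the disclosed database schema. A cached pattern is substituted with its stored DFSM part directly, and anything else falls back to the full AST/NFSM/DFSM pipeline.

```python
# Hypothetical pattern cache: known source patterns map directly to pre-built
# DFSM fragments, skipping the AST -> NFSM -> DFSM steps for those spans.
PATTERN_CACHE = {
    "if (": {"fragment": "dfsm-for-if-header"},    # placeholder fragments
    "for (": {"fragment": "dfsm-for-for-header"},
}

def compile_span(span: str, full_pipeline):
    cached = PATTERN_CACHE.get(span)
    if cached is not None:
        return cached["fragment"]        # substitute the cached DFSM part directly
    return full_pipeline(span)           # otherwise run the full conversion

print(compile_span("if (", full_pipeline=lambda s: f"compiled({s})"))
print(compile_span("x += 1", full_pipeline=lambda s: f"compiled({s})"))
```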
  • In some embodiments, the input code or output code may include a binary assembly executable by a processor.
  • In some embodiments, the input code 105 or output code 160 may be written in a hardware description language (HDL), such as SystemC, Verilog, and Very High Speed Integrated Circuits Hardware Description Language (VHDL). The input code 105 or output code 160 may include bits native to the FPGA as programmed using Joint Test Action Group (JTAG) standards. In certain embodiments, DFSM 135 can be optimized using a constraint solver. The constraint solver may include some requirements on a hardware platform described by the HDL. For example, the requirements may include requirements for a runtime, power usage, and cost of the hardware platform. The optimization of the DFSM 135 can be carried out to satisfy one of the restrictions of the requirements. In certain embodiments, the optimization of the DFSM may be performed to satisfy several requirement restrictions with weights assigned to each of the restrictions. In some embodiments, the DFSM 135 may be formally verified in accordance with a formal specification to detect software-related security vulnerabilities, including, but not limited to, memory leaks, division-by-zero, out-of-bounds array access, and others.
  • In certain embodiments, the input source can be written in terms of a technical specification. An example technical specification can include a Request for Comments (RFC). In some embodiments, the technical specification may be associated with a specific grammar. Using the specific grammar, the input code, written in terms of the technical specification, can be translated into the AST 115 and further into the DFSM 135. In some embodiments, the DFSM 135 can be optimized using a constraint solver. The constraint solver may include restrictions described in the technical specification.
  • FIG. 2 is a block diagram showing an example system 200 for processing of HTTP requests, according to an example embodiment. The system 200 may include a client 210, the system 100 for compiling source codes, and a FPGA 240.
  • In certain embodiments, the system 100 may be configured to receive an RFC 105 for Internet Protocol (IP), Transmission Control Protocol (TCP), and HTTP. The system 100 may be configured to translate the RFC into VHDL code and, in turn, compile the VHDL code into bits 235 native to FPGA 240. The FPGA 240 may be programmed with bits 235. In an example illustrated by FIG. 2, the FPGA 240 includes a finite state machine, FSM 225, corresponding to bits 235. In other embodiments, the bits 235 may be stored in a flash memory and the FPGA 240 may be configured to request bits 235 from the flash memory upon startup.
  • In some embodiments, the client 210 may be configured to send an HTTP request 215 to the FPGA 240. In some embodiments, the HTTP request 215 can be read by the FPGA 240. The FSM 225 may be configured to recognize the HTTP request 215 and return an HTTP response 245 corresponding to the HTTP request 215 back to the client 210. In certain embodiments, the FPGA 240 may include a fabric of FSMs 250-260 to hold customers' application logic for recognizing different HTTP requests and providing different HTTP responses.
  • The system 200 may be an improvement over conventional HTTP servers because the system 200 does not require large computing resources or maintenance of software for handling HTTP requests. The system does not need to be physically large and requires less power than conventional HTTP servers.
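  • The following Python sketch is a software model, assumed for illustration only, of the behavior attributed to FSM 225 above: request bytes are consumed one per clock, and a full match of a known request line selects a canned response. The rule table is hypothetical.

```python
# Software model of a request-matching FSM (illustrative; the real FSM 225
# is hardware configured from the wires and gates representation).
RULES = {
    b"GET / HTTP/1.1\r\n": b"HTTP/1.1 200 OK\r\n\r\nhello",
    b"GET /404 HTTP/1.1\r\n": b"HTTP/1.1 404 Not Found\r\n\r\n",
}

def respond(request: bytes) -> bytes:
    # One state per matched prefix byte, advancing exactly as a DFSM would.
    for pattern, response in RULES.items():
        state = 0
        for byte in request:
            if state < len(pattern) and byte == pattern[state]:
                state += 1
            else:
                break
        if state == len(pattern):
            return response
    return b"HTTP/1.1 400 Bad Request\r\n\r\n"

print(respond(b"GET / HTTP/1.1\r\nHost: example\r\n\r\n"))
```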
  • FIG. 3 is a process flow diagram showing a method 300 for compiling source codes, according to an example embodiment. The method 300 can be implemented with a computer system. An example computer system is described below with reference to FIG. 4.
  • The method 300 may commence, in block 302, with acquiring a first code, the first code being written in a first language. In block 304, method 300 may include parsing, based on a first grammar associated with the first language, the first code to obtain a first AST. In block 306, the method 300 may include converting the first AST to a NFSM. In block 308, the method 300 may include converting the first NFSM to a first DFSM. In block 310, the method 300 may include optimizing the first DFSM to obtain the second DFSM. In block 312, the method may include converting the second DFSM to a second NFSM. In block 314, the method 300 may include converting the second NFSM to a second AST. In block 316, the method 300 may include recompiling, based on a second grammar associated with a second language, the AST into the second code, the second code being written in the second language.
  • FIG. 4 shows a diagrammatic representation of a computing device for a machine in the exemplary electronic form of a computer system 400, within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein can be executed. In various exemplary embodiments, the machine operates as a standalone device or can be connected (e.g., networked) to other machines. In a networked deployment, the machine can operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine can be a server, a personal computer (PC), a tablet PC, a set-top box (STB), a PDA, a cellular telephone, a digital camera, a portable music player (e.g., a portable hard drive audio device, such as a Moving Picture Experts Group Audio Layer 3 (MP3) player), a web appliance, a network router, a switch, a bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
  • The example computer system 400 includes a processor or multiple processors 402, a hard disk drive 404, a main memory 406, and a static memory 408, which communicate with each other via a bus 410. The computer system 400 may also include a network interface device 412. The hard disk drive 404 may include a computer-readable medium 420, which stores one or more sets of instructions 422 embodying or utilized by any one or more of the methodologies or functions described herein. The instructions 422 can also reside, completely or at least partially, within the main memory 406 and/or within the processors 402 during execution thereof by the computer system 400. The main memory 406 and the processors 402 also constitute machine-readable media.
  • While the computer-readable medium 420 is shown in an exemplary embodiment to be a single medium, the term “computer-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present application, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such a set of instructions. The term “computer-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media. Such media can also include, without limitation, hard disks, floppy disks, NAND or NOR flash memory, digital video disks, RAM, ROM, and the like.
  • The exemplary embodiments described herein can be implemented in an operating environment comprising computer-executable instructions (e.g., software) installed on a computer, in hardware, or in a combination of software and hardware. The computer-executable instructions can be written in a computer programming language or can be embodied in firmware logic. If written in a programming language conforming to a recognized standard, such instructions can be executed on a variety of hardware platforms and for interfaces to a variety of operating systems. Although not limited thereto, computer software programs for implementing the present method can be written in any number of suitable programming languages such as, for example, C, Python, Javascript, Go, or other compilers, assemblers, interpreters or other computer languages or platforms.
  • FIG. 5 is a block diagram showing an example system 500 for mapping a computer code to wires and gates, according to some example embodiments. The example system 500 may include a parsing expression grammar (PEG) module 110, a converter 120 to convert between AST and NFSM, a converter 130 to convert between NFSM and DFSM, an optimizer 140, and a translator 510 to translate from DFSM to wires and gates. The system 500 can be implemented with a computer system. An example computer system is described below with reference to FIG. 4.
  • The PEG module 110, the converter 120, the converter 130, and the optimizer 140 are described above with reference to the system 100 of FIG. 1. The PEG module 110 may receive an input code 105 written in an input programming language. The input programming language can be associated with a grammar 170. The PEG module can be configured to convert the input code 105 into an AST 115. The converter 120 may further transform the AST 115 into a NFSM 125. The converter 130 may be configured to translate the NFSM 125 into a DFSM 135. Optimizer 140 may further optimize the DFSM 135 to obtain a DFSM 145, which is the optimized DFSM 135.
  • In some embodiments, the DFSM 145 can be further provided to the translator 510. The translator 510 may be configured to translate the optimized DFSM 145 into a set 520 of wires and gates. The edges of the DFSM 145 can be represented as wires. The states can be represented as combinatorial logic of the wires or as a simple gate. The set 520 of wires and gates can be used to match inputs, internal states, and outputs. The set 520 of wires and gates can also be used to design, program, or configure integrated circuits, such as, but not limited to, FPGAs and ASICs. For example, the set 520 of wires and gates can be used to configure programmable logic blocks and reconfigurable interconnects of FPGA 240 (shown in FIG. 2) to process HTTP requests.
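  • By way of a non-limiting illustration, the following sketch (written in Python, with hypothetical names that are not part of the present disclosure) shows how the translator 510 may lower an optimized DFSM into wires and gates, each edge becoming a wire and each state becoming a simple gate that combines the previous-state wire with an input-symbol wire:

    from dataclasses import dataclass, field

    @dataclass
    class DFSM:
        # transitions maps (previous_state, input_symbol) -> next_state;
        # a symbol of None means "no input needed / any input acceptable"
        transitions: dict = field(default_factory=dict)
        start: str = "packetStart"

    def translate_to_wires_and_gates(dfsm: DFSM) -> list:
        """Translator 510 (sketch): edges become wires, states become gates."""
        netlist = []
        for (prev_state, symbol), next_state in dfsm.transitions.items():
            if symbol is None:
                netlist.append(f"{next_state} <= {prev_state}")
            else:
                netlist.append(f"{next_state} <= {prev_state} AND inputWire_{symbol:02X}")
        return netlist

    # Example: a DFSM fragment matching the bytes 0x55, 0x55, 0xD5.
    dfsm = DFSM(transitions={("packetStart", 0x55): "state1",
                             ("state1", 0x55): "state2",
                             ("state2", 0xD5): "state3"})
    for gate in translate_to_wires_and_gates(dfsm):
        print(gate)
    # state1 <= packetStart AND inputWire_55
    # state2 <= state1 AND inputWire_55
    # state3 <= state2 AND inputWire_D5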
  • The integrated circuits (e.g., FPGA) may receive packets via a network. The packets can include Ethernet packets, Optical Transport Network (OTN) packets, Peripheral Component Interconnect Express (PCIE) packets, or the like. The packets include an ordered set of inputs in time with a defined beginning, a number of input symbols, and an end. For example, the packets can include a preamble, a start frame delimiter, a header, protocol-specific data, and a cyclic redundancy check. The FPGA can be configured to perform operations included in the initial computer code based on the wires and gates. For example, the FPGA can be configured to send a reply to a received data packet. In another example, the FPGA can be configured to match or filter data packets, forward data packets, or store data packets in the FPGA. In yet another example, the FPGA can also be reconfigured based on information included in the received data packets.
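  • By way of a further non-limiting illustration, the following sketch (with hypothetical field offsets; not the disclosed FPGA configuration) shows the kind of match/forward/store decision the configured logic may take on a received frame:

    PREAMBLE = bytes([0x55] * 7)   # seven preamble octets
    SFD = 0xD5                     # start frame delimiter

    def classify(raw: bytes) -> str:
        """Decide what to do with a received frame (illustrative only)."""
        if not raw.startswith(PREAMBLE) or raw[7] != SFD:
            return "drop"                                  # framing did not match
        ethertype = int.from_bytes(raw[8 + 12:8 + 14], "big")
        if ethertype == 0x0800:                            # e.g., IPv4 traffic
            return "forward"
        return "store"                                     # keep for inspection

    frame = PREAMBLE + bytes([SFD]) + bytes(12) + bytes([0x08, 0x00]) + bytes(46)
    print(classify(frame))                                 # forward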
  • Depending on the data transfer protocol, the data in packets are clocked at a specific rate. Per each clock cycle, only a certain input block of a data packet can be received by an FPGA, such that only a certain number of wires can be used in the FPGA. There is a strong correlation between the number of bits in the input and the corresponding number of wires and gates. For the same computer code, fewer gates and wires are needed when the number of bits in the input is larger. There is a linear dependency between the length of a data packet and the number of gates and wires if the length of the packet is measured as the number of symbols in the packet. The number of symbols in the packet is inversely related to the number of bits in the input. For example, with one-hot encoding of an 8-bit input, 256 separate wires may represent one of the possible input values 0-255. In the case of a transfer of data in packets via a gigabit media-independent interface (GMII), each input block is a single 8-bit/8-wire input at each clock cycle.
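  • The one-hot relationship between an 8-bit input block and 256 separate wires can be sketched as follows (illustrative only; the list-based wire representation is hypothetical):

    def one_hot_input_wires(symbol: int, width_bits: int = 8) -> list:
        """Return the one-hot wire vector driven by one input symbol."""
        wires = [0] * (1 << width_bits)   # 256 wires for an 8-bit input
        wires[symbol] = 1                 # exactly one wire is asserted per clock
        return wires

    wires = one_hot_input_wires(0x55)
    print(sum(wires), wires.index(1))     # 1 85 -> only inputWire_55 is high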
  • When using one-hot encoding and 84 states, one separate wire for each state may represent a symbol from 0 to 83. A transition from one state to another state may occur when 0 or 1 possible inputs are matched for each state. The state does not advance when the inputs fail to match the whole pattern. Given that there are only 84 possible states and 0 or 1 possible inputs per state, a maximum of 84 wires out of the 256 separate wires could possibly be used. In practice, the same input value can be used multiple times. For example, 0x55 can appear 7 times at the beginning of a packet. Because one input wire can be used multiple times, and because there are states with 0 possible inputs, such as in the packet ID field, the number of unique input wires that are used tends to be small. For common cases, the number of unique input wires can be 20 wires or less.
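  • The following sketch (with a hypothetical match pattern) illustrates why the number of unique input wires tends to be small: repeated byte values reuse one wire, and states with 0 possible inputs, such as a packet ID field, use no input wire at all:

    # None marks a state that accepts any input (0 possible inputs).
    pattern = [0x55] * 7 + [0xD5] + [None, None] + [0x08, 0x00]

    states = len(pattern)                                   # one state per symbol
    unique_wires = {sym for sym in pattern if sym is not None}
    print(states, len(unique_wires))                        # 12 states, 4 unique wires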
  • At each state, arranged in parallel, a single 8-bit symbol (or nothing) is matched by combining the wires from the previous state (or the packet-start signal at the beginning of the packet) with the wires corresponding to the input symbol (or nothing). Each state can be represented as one of the following:
  • 1. firstStateInput<=packetStart AND inputWireN
  • 2. firstStateInput<=packetStart. A case when no input is needed, or any input is acceptable.
  • 3. currentStateInput<=previousStateN AND inputWireN
  • 4. currentStateInput<=previousStateN. A case when no input is needed, or any input is acceptable.
  • In a general case, packetStart can be the zeroth state, which triggers the start of the first state. For states in which any input is acceptable, no input wire needs to be examined. Multiple states that examine no input wires may be implemented as a shift register. Any states that are not stored in a flip-flop can be stored in a shift register because these states are not accessed individually.
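  • A sketch of how states map onto the four forms above, and of how a run of states that examine no input wires can collapse into a shift register, is shown below (the state names and the pattern are hypothetical):

    pattern = [0x55, 0x55, None, None, None, 0xD5]   # None = any input acceptable

    equations, shift_register_states = [], 0
    prev = "packetStart"                              # zeroth state
    for i, sym in enumerate(pattern, start=1):
        state = f"state{i}"
        if sym is None:
            equations.append(f"{state} <= {prev}")    # forms 2 and 4: no input wire
            shift_register_states += 1
        else:
            equations.append(f"{state} <= {prev} AND inputWire_{sym:02X}")  # forms 1 and 3
        prev = state

    print("\n".join(equations))
    print("states implementable as a shift register:", shift_register_states)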
  • In the case of transferring data in packets via a 10-gigabit media-independent interface (XGMII), each symbol can be represented as 32 bits at each transition of a clock. When represented with one-hot encoding, the maximum number of wires to represent all possible 32-bit symbols is over 4 billion. However, the length of the data packet is the same as in the case of a GMII interface. Because one input symbol is 4 times larger than in the GMII interface, there are only ¼ as many possible states, and the number of wires is limited to the symbol-count length of the packet: for a minimum packet size of 84 bytes, that is ¼ as many symbols at 32 bits per symbol, ⅛ as many at 64 bits per symbol, and so forth. The number can be even smaller due to redundancies.
  • Similar considerations apply when using higher speed/symbol size inputs, such as in transfer protocols with clock rates of 25 MHz, 125 MHz, 156.25 MHz, 644.53125 MHz, 1.5625 GHz, and so forth. Generally, as the width of an input increases, the number of gates decreases.
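  • The symbol-count arithmetic above can be sketched as follows for a minimum-size packet of 84 bytes; widening the input reduces the number of symbols, and hence the upper bound on the number of states, proportionally:

    PACKET_BYTES = 84
    for width_bits in (8, 32, 64):        # e.g., GMII, XGMII, wider interfaces
        symbols = PACKET_BYTES * 8 // width_bits
        print(f"{width_bits}-bit symbols -> at most {symbols} states")
    # 8-bit symbols -> at most 84 states
    # 32-bit symbols -> at most 21 states (1/4 of 84)
    # 64-bit symbols -> at most 10 states (about 1/8 of 84)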
  • When multiple similar packets are matched, the decisions may form a tree. Earlier states are shared in the tree. Each unique type of packet to be matched requires a minimum of 1 additional gate, to uniquely match the packet to that gate, and at most a number of additional gates equal to the number of states not shared with other similar types of packets. Generally, when the number of packet matching rules is more than a hundred, as few as 1 or 2 additional gates are required to match a packet. In most cases, only 1 additional gate is needed for each additional matching rule.
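  • The sharing of earlier states among similar matching rules can be sketched with a prefix tree (the rules below are hypothetical byte patterns): the first rule creates one state per symbol, while each additional rule adds only the states, and hence roughly the gates, that it does not share with earlier rules:

    def count_new_states(trie: dict, pattern: bytes) -> int:
        """Insert a pattern into a prefix tree and count newly created nodes."""
        new_states, node = 0, trie
        for sym in pattern:
            if sym not in node:
                node[sym] = {}
                new_states += 1
            node = node[sym]
        return new_states

    trie = {}
    rules = [bytes([0x55] * 7 + [0xD5, 0x08, 0x00]),   # first rule
             bytes([0x55] * 7 + [0xD5, 0x08, 0x06]),   # differs in the last byte
             bytes([0x55] * 7 + [0xD5, 0x86, 0xDD])]   # differs in the last two bytes
    for i, rule in enumerate(rules, start=1):
        print(f"rule {i}: {count_new_states(trie, rule)} new state(s)")
    # rule 1: 10 new state(s); rule 2: 1 new state(s); rule 3: 2 new state(s)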
  • FIG. 6 is a flow chart showing a method 600 for mapping a computer code to wires and gates, according to some example embodiments. The method 600 can be implemented with a computer system. An example computer system is described above with reference to FIG. 4.
  • The method 600 may commence, in block 602, with acquiring a code. The code can be written in a programming language. The programming language can be a high-level programming language, such as, for example, JavaScript, C, C++, a domain-specific language, and the like. The code can be written in terms of a technical specification. An example technical specification can include a Request for Comments (RFC).
  • In block 604, the method 600 may generate, based on the code, an FSM. In block 606, the method 600 may proceed with generating, based on the FSM, a wires and gates representation. The wires and gates representation may include a plurality of wires and a plurality of combinatorial logics. An input of each of the plurality of wires may represent a symbol from a set of symbols of a structured data packet. The size of the symbol can be equal to the number of bits of the structured data packet transferred per clock cycle according to a data transmission protocol. The packet may include an Ethernet packet, an OTN packet, or a PCIE packet. The data transmission protocol may include GMII, XGMII, and so forth. States arising from the combinational logic may be stored in flip-flops or, alternatively, in shift registers if the individual states from the flip-flops are not directly needed.
  • In block 608, the method 600 may include configuring, based on the wires and gates representation, a field-programmable gate array. Combinatorial logics that do not depend on input from wires of the plurality of wires can be implemented in a shift register. Other combinatorial logics can be stored in flip-flops.
  • Thus, systems and methods for mapping a computer code to wires and gates are disclosed. Although embodiments have been described with reference to specific example embodiments, it may be evident that various modifications and changes can be made to these example embodiments without departing from the broader spirit and scope of the present application. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.

Claims (1)

1. A computer-implemented method for mapping a computer code to wires and gates, the method comprising: acquiring a code written in a programming language; generating, based on the code, a finite state machine (FSM); and generating, based on the FSM, a wires and gates representation, the wires and gates representation including a plurality of wires and a plurality of combinatorial logics.

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/688,127 US20200348915A1 (en) 2017-06-22 2019-11-19 Mapping a computer code to wires and gates

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US15/630,691 US9996328B1 (en) 2017-06-22 2017-06-22 Compiling and optimizing a computer code by minimizing a number of states in a finite machine corresponding to the computer code
US15/970,884 US10481881B2 (en) 2017-06-22 2018-05-04 Mapping a computer code to wires and gates
US16/688,127 US20200348915A1 (en) 2017-06-22 2019-11-19 Mapping a computer code to wires and gates

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/970,884 Continuation US10481881B2 (en) 2017-06-22 2018-05-04 Mapping a computer code to wires and gates

Publications (1)

Publication Number Publication Date
US20200348915A1 true US20200348915A1 (en) 2020-11-05

Family

ID=64693175

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/970,884 Active US10481881B2 (en) 2017-06-22 2018-05-04 Mapping a computer code to wires and gates
US16/688,127 Abandoned US20200348915A1 (en) 2017-06-22 2019-11-19 Mapping a computer code to wires and gates

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US15/970,884 Active US10481881B2 (en) 2017-06-22 2018-05-04 Mapping a computer code to wires and gates

Country Status (1)

Country Link
US (2) US10481881B2 (en)


Also Published As

Publication number Publication date
US20180373508A1 (en) 2018-12-27
US10481881B2 (en) 2019-11-19


Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION