EP1377904A2 - Moteur de chemin de donnees (dpe) - Google Patents

Moteur de chemin de donnees (dpe)

Info

Publication number
EP1377904A2
Authority
EP
European Patent Office
Prior art keywords
queue
data structure
memory
thread
java
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP01944224A
Other languages
German (de)
English (en)
Inventor
Guillaume Comeau
Andreas Paramonoff
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zucotto wireless Inc
Original Assignee
Zucotto wireless Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US09/849,648 external-priority patent/US20020012329A1/en
Priority claimed from US09/871,481 external-priority patent/US20010049726A1/en
Application filed by Zucotto wireless Inc filed Critical Zucotto wireless Inc
Publication of EP1377904A2 publication Critical patent/EP1377904A2/fr
Withdrawn legal-status Critical Current

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/34Network arrangements or protocols for supporting network services or applications involving the movement of software or configuration parameters 
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W8/00Network data management
    • H04W8/22Processing or transfer of terminal data, e.g. status or physical capabilities
    • H04W8/24Transfer of terminal data
    • H04W8/245Transfer of terminal data from a network towards a terminal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W80/00Wireless network protocols or protocol adaptations to wireless operation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W88/00Devices specially adapted for wireless communication networks, e.g. terminals, base stations or access point devices
    • H04W88/02Terminal devices

Definitions

  • DPE Data Path Engine
  • the Java byte-compiled, object-oriented programming language, available from Sun Microsystems, Inc., 901 San Antonio Road, Palo Alto, CA 94303, as well as other such languages, is well known in the art. Although these implementations may resolve portability and security issues in portable devices, they can impose limitations on overall system performance.
  • a semi-compiled/interpreted language like Java, together with an associated virtual machine or interpreter running on a conventional portable power-constrained device, can consume roughly ten times more power than a native application.
  • the Java language and runtime environment feature redundancy: Java ported onto an existing operating system requires a large memory footprint.
  • Third, the development of a wireless protocol stack for such a system is very difficult given the real-time constraints, which are inherent in the operation of existing processors.
  • referring to FIG. 1, there is seen one prior art system architecture on which a Java virtual machine (VM) is implemented.
  • VM Java virtual machine
  • One factor that plays a critical role in overall system performance and power consumption of previous Java implementations in traditional systems is the boundary between a processor core 190, peripherals 197, and software representations 11 of the peripherals 197.
  • the most common system architecture follows horizontal layers, which provide abstractions to peripherals. In terms of processing resources, the natural split in these layers results in mediocre efficiency.
  • Known Java hardware accelerator solutions that utilize a VM 10 fail to optimize the path between peripherals 197 and their software representation 11.
  • System 199 communicates across a wireless network in which a frame of data from an external network is received by peripherals 197. Until the frame is wrapped into a Java object 191, the system operates generally in the following steps:
  • a packet of data from an off-chip peripheral 197 (for example a baseband circuit), is received and the packet is stored in a receive FIFO 198 of a processor 190 operating under control of a processor core 196.
  • the receive FIFO 198 triggers an interrupt service routine, which copies the packet to a serial receive buffer 192 of a device driver associated with the peripheral.
  • the packet is now in the realm of an operating system, which may signal a Java application to service receive buffer 192. Since the system 199 follows the usual hardware, operating system, virtual machine paradigm, it is necessary to buffer the packet under the control of an operating system device driver to guarantee latency and prevent FIFO 198 overflow.
  • a Java scheduler is activated to change execution to the Java listener thread associated with the peripheral device.
  • a listener thread that is active issues native function calls (JNI) to get data out of the receive buffer 192, to allocate a block of memory of corresponding size, and to copy the packet into a Java object 191.
  • JNI native function calls
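The copy-heavy prior art receive path described in the steps above can be sketched as follows. This is an illustrative reconstruction, not code from the patent: the class and method names are assumptions, and the JNI call into the driver's receive buffer is stubbed with plain Java.

```java
// Sketch of the prior-art receive path criticized above: the packet is
// copied from the driver's native receive buffer into a freshly allocated
// Java object. Names are illustrative; JNI plumbing is stubbed out.
public class PriorArtListener {
    // Stand-in for the native device-driver receive buffer (in the prior
    // art this would be read through a JNI call).
    static byte[] nativeReceiveBuffer(int size) {
        byte[] packet = new byte[size];
        for (int i = 0; i < size; i++) packet[i] = (byte) i;
        return packet;
    }

    // The listener thread allocates a block of matching size and copies the
    // packet into it -- one full allocation and one full copy per frame.
    public static byte[] wrapPacket(int size) {
        byte[] source = nativeReceiveBuffer(size);
        byte[] javaObject = new byte[source.length]; // allocation per packet
        System.arraycopy(source, 0, javaObject, 0, source.length);
        return javaObject;
    }
}
```

The per-packet allocation and copy in `wrapPacket` is exactly the overhead the Data Path Engine described later is designed to avoid.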
  • the memory-buffers may be extended by either appending data to the construct (which may reallocate the last chunk of data to fit the new characters) and/or by adding more pre-allocated chunks of data to the construct (which can be either appended or prepended to the list of buffer chunks).
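The chunked memory-buffer construct described above might be sketched as follows. The class and method names (`ChunkedBuffer`, `appendChunk`, `prependChunk`) are illustrative assumptions, not identifiers from the patent.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the chunked memory-buffer construct: data grows either by
// reallocating the last chunk to fit new bytes, or by linking further
// pre-allocated chunks at either end of the chunk list.
public class ChunkedBuffer {
    private final Deque<byte[]> chunks = new ArrayDeque<>();

    // Link a new pre-allocated chunk at the tail of the construct.
    public void appendChunk(byte[] chunk) { chunks.addLast(chunk); }

    // Link a pre-allocated chunk (e.g. a protocol header) at the head.
    public void prependChunk(byte[] chunk) { chunks.addFirst(chunk); }

    // Append raw bytes, reallocating (growing) the last chunk to fit them.
    public void appendBytes(byte[] data) {
        if (chunks.isEmpty()) { chunks.addLast(data.clone()); return; }
        byte[] last = chunks.removeLast();
        byte[] grown = new byte[last.length + data.length];
        System.arraycopy(last, 0, grown, 0, last.length);
        System.arraycopy(data, 0, grown, last.length, data.length);
        chunks.addLast(grown);
    }

    // Total payload length across all chunks.
    public int length() {
        int n = 0;
        for (byte[] c : chunks) n += c.length;
        return n;
    }

    public int chunkCount() { return chunks.size(); }
}
```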
  • an apparatus for utilizing information comprises: a memory, the memory comprising at least one data structure; and a plurality of layers, each layer comprising at least one thread, each thread utilizing each data structure from the same portion of the memory.
  • the apparatus may comprise an application layer and a hardware layer, wherein the application layer comprises one of the plurality of layers, wherein the hardware layer comprises one of the plurality of layers, wherein the application layer and hardware layer utilize each data structure from the same portion of memory.
  • At least one of the plurality of layers may comprise a realtime thread.
  • Each data structure may comprise a block object, wherein at least a portion of each block object is comprised of a contiguous portion of the memory.
  • the contiguous portion of the memory may be defined as a byte array.
  • the at least one data structure may comprise a block object.
  • the apparatus may comprise a Java or Java-like virtual machine, wherein each thread comprises a Java or Java-like thread, wherein the Java or Java-like thread utilizes the same portion of memory independent of Java or Java-like monitors.
  • the apparatus may comprise interrupt means for disabling interrupts; and a Java or Java-like virtual machine capable of executing each thread, wherein each thread utilizes the same portion of memory after the interrupts are disabled by the interrupt means. All interrupts are disabled before each thread utilizes the same portion of memory.
  • the threads may disable the interrupts via the interrupt means.
  • the information may be received by the apparatus as streamed information, wherein each data structure is preallocated to the memory prior to reception of the information.
  • the apparatus may comprise a freelist data structure, wherein each block object is preallocated to the freelist data structure by the apparatus prior to utilization of the information.
  • the apparatus may comprise a protocol stack, the protocol stack residing in the memory, wherein the protocol stack preallocates each block to the freelist data structure.
  • the apparatus further may comprise a virtual machine, the virtual machine utilizing a garbage collection mechanism, the virtual machine running each thread, each thread utilizing the same portion of the memory independent of the garbage collection mechanism.
  • the garbage collection mechanism may comprise a thread, wherein the threads comprise Java-like threads, wherein the threads each comprise a priority, wherein the priority of the Java-like threads is higher than the priority of the garbage collection thread.
  • Each data structure may comprise a block object, and further comprising a freelist data structure and at least one queue data structure, each block object comprising a respective handle, wherein at any given time the respective handle belongs to the freelist data structure or a queue data structure.
  • the apparatus may comprise at least one queue data structure; and at least one frame data structure, each frame data structure comprising an instance of one or more block objects, each block object comprising a respective handle, each queue data structure capable of holding an instance of at least one frame data structure, and each thread using the queue data structure to pass a block handle to another thread.
  • the apparatus may comprise a virtual machine, the virtual machine running each thread; at least one queueendpoint, each queueendpoint comprising at least one of the threads; and at least one queue, each queue comprising ends, each end bounded by a queueendpoint, each queue for holding each of the data structures in a data path for use by each queueendpoint, wherein each queue notifies a respective queueendpoint when the queue needs to be serviced by the queueendpoint, wherein a queueendpoint passes instances of each data structure from one queue to another queue by a respective handle belonging to the data structure.
  • a queue may notify a respective queueendpoint upon the occurrence of a queue empty event, a queue not empty event, a queue congested event, or a queue not congested event.
  • the apparatus may comprise a queue status data structure shared by a queue and a respective queueendpoint, wherein the queue sets a flag in the queue status data structure to notify the respective queueendpoint when the queue needs to be serviced.
  • an apparatus for utilizing a stream of information in a data path may comprise: a memory, the memory comprising at least one data structure, each data structure comprising a pointer; a plurality of layers, the data path comprising the plurality of layers, the stream of information comprising the at least one data structure, each layer utilizing each data structure via its pointer.
  • Each layer may comprise at least one thread, each thread utilizing each data structure from the same portion of the memory.
  • the apparatus may comprise an interrupt disabling mechanism; and at least one queue, each queue disposed in the data path between a first layer and a second layer, the first layer comprising a producer thread, the second layer comprising a consumer thread, the producer thread for enqueuing each data structure onto a queue, the consumer thread for dequeuing each data structure from the queue, wherein interrupts are disabled prior to dequeuing and enqueuing each data structure.
  • the apparatus may comprise a virtual machine, the virtual machine comprising a garbage collection mechanism, the virtual machine running each thread independent of the garbage collection mechanism.
  • a system for utilizing data structures with a plurality of threads may comprise: an interrupt mechanism for enabling and disabling interrupts; a memory, the memory comprising at least one data structure; and a plurality of threads, the plurality of threads utilizing the data structures after disabling interrupts with the interrupt mechanism.
  • the plurality of threads may utilize each of the data structures from the same portion of memory.
  • a system for accessing streaming information with a plurality of threads may comprise: a memory; and interrupt means for enabling and disabling interrupts; wherein the plurality of threads access the streaming information from the memory by disabling the interrupts via the interrupt means.
  • the system may comprise a memory, wherein the plurality of threads access the streaming information from the same portion of the memory.
  • a method for accessing information in a memory with a plurality of threads may comprise the steps of: transferring information from one thread to another thread via handles to the information; and disabling interrupts via the threads before performing the step of transferring the information.
  • the method may comprise a step of accessing the information with the plurality of threads from the same portion of the memory.
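The claimed method of passing information between threads by handles, with interrupts disabled around each transfer, might be sketched as follows. Standard Java cannot disable processor interrupts, so the `Kernel` class here is a hypothetical stand-in for the patent's interrupt means (on the patented core these calls map to single opcodes, as described later in the document).

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of handle-based transfer between threads: only an integer handle
// to the data crosses the thread boundary, and each enqueue/dequeue is
// bracketed by interrupt disable/enable calls. Kernel is a hypothetical
// stand-in for the hardware interrupt mechanism.
public class HandleQueue {
    static final class Kernel {
        static int depth; // tracks nesting of disable/enable for the sketch
        static void disableInterrupts() { depth++; }
        static void enableInterrupts()  { depth--; }
    }

    private final Deque<Integer> handles = new ArrayDeque<>();

    // Producer thread: enqueue a block handle with interrupts disabled.
    public void put(int handle) {
        Kernel.disableInterrupts();
        try { handles.addLast(handle); }
        finally { Kernel.enableInterrupts(); }
    }

    // Consumer thread: dequeue a block handle with interrupts disabled.
    public int take() {
        Kernel.disableInterrupts();
        try { return handles.removeFirst(); }
        finally { Kernel.enableInterrupts(); }
    }

    public int size() { return handles.size(); }
}
```

Because only handles move between threads, the underlying memory is never copied, matching the claim that threads utilize each data structure from the same portion of memory.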
  • FIG. 1 illustrates one prior art system architecture on which a virtual machine (VM) is implemented
  • Figure 2 illustrates control and data paths of a prior art system
  • Figure 3a illustrates a top-level block diagram architecture of an embodiment described herein
  • Figure 3b illustrates an embodiment in which byte-codes are fetched from memory by an MMU, with control and address information passed from a Prefetch Unit;
  • Figure 3c illustrates an embodiment wherein trapped instructions may be transferred to software control;
  • Figure 4 illustrates a representation of a software protocol stack
  • Figure 5 illustrates an embodiment of a Data Path Engine
  • FIGS. 6a-e illustrate embodiments of various data structures utilized by the Data Path Engine
  • FIGS. 7a-b illustrate embodiments of two subsystems of the Data Path Engine
  • Figure 8 illustrates multiple queues interacting with queueendpoints.
  • Figure 9 illustrates an interaction between FreeList, Frame, Queue, and Block data structures;
  • Figure 10 illustrates an embodiment of a hardware interface to the Data Path Engine
  • Figure 11 illustrates an embodiment as described herein;
  • a circuit 300 may comprise a processor core 302 that may be used to perform operations on data that is directly and dynamically transferred between the circuit 300 and peripherals or devices on or off the circuit 300.
  • the circuit 300 may comprise an instruction execution means for executing instructions, for example, application program instructions, application program threads, hardware threads of execution, and processor-read or -write instructions.
  • the data may comprise instructions of a semi-compiled or interpreted programming language utilizing byte-codes, binary executable data, data transfer protocol packets such as TCP/IP, Bluetooth packets, or streaming data received by a peripheral or device and transferred from the peripheral or device directly to a memory location.
  • operations may be performed on the data without the need for further transfers of the data to, or from, the memory.
  • the circuit 300 may comprise a Memory Management Unit (MMU) 350, a Direct Memory Access (DMA) controller 305, an Interrupt Controller 306, a Timing Generation Block (TGB) 353, a memory 362, and a Debug Controller 354.
  • the Debug Controller 354 may include functionality that allows the processor core 302 to upload micro-program instructions to memory at boot-up.
  • the Debug Controller 354 may also allow low level access to the processor core 302 for program debug purposes.
  • the MMU 350 may act as an arbiter to control accesses to an Instruction and Data Cache of memory 373, to external memories, and to DMA controller 305.
  • the MMU 350 may implement the Instruction and Data Cache memory 362 access policy.
  • the MMU 350 may also arbitrate DMA 305 accesses between the processor core 302 and peripherals or devices on or off the circuit 300.
  • the DMA 305 may connect to a system bus (SBUS) 355 and may include channels for communicating with various peripherals or devices, including: to a wireless baseband circuit 307, to UART1 356, to UART2 357, to Codec 358, to Host Processor Interface (HPI) 359, and to MMU 350.
  • SBUS system bus
  • HPI Host Processor Interface
  • the SBUS 355 allows one master to poll several slaves for read and write accesses, i.e., one slave per bus access cycle.
  • the processor core 302 may be the SBUS master. In one embodiment, only the SBUS master may request a read or write access to the SBUS 355 at any time.
  • peripherals or devices may be slaves and are memory mapped, i.e. a read/write access to a peripheral or device is similar to a memory access. If a slave has new data for the master to read, or needs new data to consume, it may send an interrupt to the master, which reacts by polling all slaves to discover the interrupting slave and the reason for the interruption.
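The interrupt-then-poll pattern described above, where the single bus master discovers which memory-mapped slave raised an interrupt and why, can be sketched as follows; the `Slave` interface and helper are illustrative assumptions.

```java
// Sketch of the SBUS master/slave access pattern: on an interrupt the
// single master polls every slave (one per bus access cycle) to find the
// interrupting slave and the reason for the interruption.
public class SbusMaster {
    public interface Slave {
        boolean interruptPending();
        int reason(); // e.g. new data to read, or buffer needs refilling
    }

    // Illustrative helper to build a fixed-state slave for the sketch.
    public static Slave fixed(final boolean pending, final int reason) {
        return new Slave() {
            public boolean interruptPending() { return pending; }
            public int reason() { return reason; }
        };
    }

    // Poll all slaves in order, returning the index of the first
    // interrupting slave, or -1 if none raised an interrupt.
    public static int findInterruptingSlave(Slave[] slaves) {
        for (int i = 0; i < slaves.length; i++) {
            if (slaves[i].interruptPending()) return i;
        }
        return -1;
    }
}
```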
  • the UARTs 356/357 may open a bi-directional serial communication channel between the processor core 302 and external peripherals.
  • the Codec 358 may provide standard voice coding/decoding for the baseband circuit 307 or other units requiring voice coding/decoding.
  • the circuit 300 may comprise other functionalities, including a Test Access Block (TAB) 360 comprising a JTAG interface and general-purpose input/output interface (GPIO) 361.
  • TAB Test Access Block
  • GPIO general-purpose input/output interface
  • circuit 300 may also comprise a Debug Bus (DBUS) (not shown).
  • DBUS Debug Bus
  • the DBUS may connect peripherals through the GPIO 361 to external debugging devices.
  • the DBUS bus may allow monitoring of the state of internal registers and on-chip memories at run-time. It may also allow direct writing to internal registers and on-chip memories at run time.
  • the processor core 302 may be implemented on a circuit 300 comprising an ASIC.
  • the processor core 302 may comprise a complex instruction set (CISC) machine, with a variable instruction cycle and optimizations for executing software byte-codes of a semi-compiled/interpreted language directly without high level translation or interpretation.
  • the software byte-code instructions may comprise byte-codes supported by the VM functionality of a software support layer (not shown). An embodiment of a software support layer is described in commonly assigned U.S. Patent Application S.N. 09/767,038, filed 22 January 2001.
  • the byte-codes comprise Java or Java-like byte-codes.
  • the processor core 302 may execute the byte-codes.
  • the circuit 300 may employ two levels of programmability/executability; as macroinstructions and as microinstructions.
  • the processor core 302 may execute macroinstructions under control of the software support layer, or each macroinstruction may be translated into a sequence of microinstructions that may be executed directly by the processor core 302.
  • each microinstruction may be executed in one-clock cycle.
  • the software layer may operate within an operating system/environment, for example, a commercial operating system such as the Windows® OS or Windows® CE, both available from Microsoft Corp., Redmond, Washington.
  • the software layer may operate within a real time operating system (RTOS) environment such as pSOS and VxWorks available from Wind River Systems, Inc., Alameda, CA.
  • RTOS real time operating system
  • the software layer may provide its own operating system functionality.
  • the software support layer may implement or operate within or alongside a Java or Java-like virtual machine (VM), portions of which may be implemented in hardware.
  • VM Java or Java-like virtual machine
  • the VM may comprise a Java or Java-like VM embodied to utilize Java 2 Platform, Enterprise Edition (J2EETM), Java 2 Platform, Standard Edition (J2SETM), and/or Java 2 Platform, Micro Edition (J2METM) programming platforms available from Sun Microsystems.
  • J2EETM Java 2 Platform, Enterprise Edition
  • J2SETM Java 2 Platform, Standard Edition
  • J2METM Java 2 Platform, Micro Edition
  • J2SE and J2ME provide a standard set of Java programming features, with J2ME providing a subset of the features of J2SE for programming platforms that have limited memory and power resources (i.e., including but not limited to cell phones, PDAs, etc.), while J2EE is targeted at enterprise class server platforms.
  • byte-codes are fetched from memory 362 by a MMU 350, with control and address information passed from a Prefetch Unit 370.
  • byte-codes may be used as addresses into a look-up memory 374 of a Pre Fetch Unit (PFU) 370, which may be used to store an address of a corresponding sequence of microinstructions that are required to implement the byte-codes.
  • PFU Pre Fetch Unit
  • the address of the start of a microinstruction sequence may be read from look-up memory 374 as indicated by the Micro Program Address.
  • the number of microinstructions (Macro instruction length) required may also be output from the look-up memory 374.
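The look-up memory behavior described above amounts to table-driven dispatch: each byte-code indexes an entry holding the start address of its microinstruction sequence and the sequence length. A minimal sketch, with illustrative names and values:

```java
// Sketch of the Pre Fetch Unit look-up memory: lookup[bytecode] yields the
// Micro Program Address and the macro instruction length for that byte-code.
// Table contents here are illustrative, not taken from the patent.
public class MicrocodeLookup {
    // lookup[bytecode] = { microProgramAddress, macroInstructionLength }
    private final int[][] lookup = new int[256][2];

    public void define(int bytecode, int address, int length) {
        lookup[bytecode & 0xFF][0] = address;
        lookup[bytecode & 0xFF][1] = length;
    }

    // Start address of the microinstruction sequence for this byte-code.
    public int microProgramAddress(int bytecode) { return lookup[bytecode & 0xFF][0]; }

    // Number of microinstructions required to implement the byte-code.
    public int macroLength(int bytecode) { return lookup[bytecode & 0xFF][1]; }
}
```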
  • Control logic in a Micro Sequencer Unit (MSU) 371 may be used to determine whether the current byte-code should continue to be executed, and whether the current Micro Program address may be used or incremented, or whether a new byte-code should be executed.
  • An Address Selector block 375 in the MSU 371 may handle the increment or selection of the Micro Program Address from the PFU 370.
  • the address output from the Address Selector Block 375 may be used to read a microinstruction word from the Micro Program Memory 376.
  • the microinstruction word may be passed to the Instruction Execution Unit (IEU) 372.
  • the IEU 372 may check trap bits of the microinstruction word to determine if it can be executed directly by hardware, or if it needs to be handled by software. If the microinstruction can be executed by hardware directly, it may be passed to the IEU, register, ALU, and stack for execution. If the instruction triggers a software trap exception, a Software Inst Trap signal may be set to true.
  • the Software Inst Trap signal may be fed back to the Pre Fetch Unit 370, where it may be processed and used to multiplex in a trap op-code.
  • the trap op-code may be used to address a Micro Program address, which in turn may be used to address the Micro Program Memory 376 to read a set of microinstructions that are used to handle the trapped instruction and to transfer control to the associated software support layer.
  • Figure 3c illustrates how a trapped instruction may be transferred to software control.
  • byte-codes may comprise a conditionally trapped instruction.
  • a conditionally trapped instruction may be executed directly in hardware or may be trapped and handled in software.
  • the present invention identifies that benefits derive when information is passed between wireless devices by a software protocol stack written partly or entirely in a Java or Java-like language.
  • while an approach could be used to provide a solution implemented partly in native code and partly in a Java or Java-like language, with such an approach it would be very hard to assess the overall system effects of design decisions, since only half of the system (native or Java) would be visible.
  • VM software virtual machine
  • use of previous Unix Mbuf constructs would require semaphores and native threads, which would incur extra overhead and complexity.
  • the present invention interfaces with an upper software protocol stack written entirely in Java or Java-like semi-interpreted languages so as to avoid having to cross over native code boundaries multiple times.
  • by using an all-Java or Java-like protocol stack, however, various system issues need to be addressed, including synchronization, garbage collection, and interrupts, as well as the aforementioned instruction trapping.
  • the protocol stack 422 may comprise software data structures compatible with the functionality provided by Java or Java-like programming languages.
  • the protocol stack 422 may utilize an API 419 that provides a communication path to application programs (not shown) at the top of the stack, and a lower 488 interface to a baseband circuit 307.
  • the protocol stack also interfaces to a software support layer, the functionality of which is described in previously referenced U.S. Patent Application S.N. 09/767,038, filed on 22 January 2001, wherein is provided a Virtual machine (VM) with no operating system (OS) overhead and wherein Java classes can directly access hardware resources.
  • VM Virtual machine
  • OS operating system
  • the protocol stack 422 may comprise various layers/modules/profiles (hereafter layers) with which received or transmitted information may be processed.
  • the protocol stack 422 may operate on information communicated over a wireless medium, but it is understood that information could also be communicated to the protocol stack over a wired medium.
  • the invention disclosed herein may find applicability to layers embodied in other than a wireless protocol stack, for example, other types of applications that pass information between layers of software, such as a TCP/IP stack.
  • the DPE 501 passes information between one or more layers 523a-c of a protocol stack 422.
  • the DPE 501 provides its functionality in a protocol independent manner because it is possible to decouple the management of memory blocks used for datagrams from the handling of those datagrams. Hence, the function of interpreting protocol specific datagrams is delegated to the layers.
  • the present invention identifies that enqueuing and dequeuing information from an information stream for use by different software layer threads of a protocol stack preferably should occur in a bounded and synchronized manner.
  • the DPE 501 comprises certain data structures that are discussed herein first generally, then below, more specifically.
  • the DPE 501 instantiates the whole DPE instance (for example, QueueEndpoints, Queues, Blocks, and the FreeList, which will be described below in further detail) at startup.
  • the DPE 501 comprises one or more receive and transmit queues 524a-b, 525a-b as may be specified at startup by the protocol stack 422.
  • the queues may be used to transfer information contained in output 530 and input 531 information streams between layers 523a-c.
  • Each layer 523a-c may comprise at least one thread that takes information from one or more queues 524a-b, 525a-b, that processes the information, and that makes the processed information available to another layer through another queue.
  • threads may comprise realtime threads. More than one protocol layer or queue may be serviced by the same thread.
  • Flow control between layers may be implemented by blocking or unblocking threads based on flow control indications on the queues 524a-b, 525a-b.
  • Flow control is an event that may occur when a queue becomes close to full and that may be cleared when the queue falls back to a lower level.
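The watermark behavior described above can be sketched as follows; the class name and the high/low thresholds are illustrative assumptions, not values from the patent.

```java
// Sketch of watermark-based flow control on a DPE queue: a congestion event
// is raised when the queue nears full and cleared only once the depth falls
// back to a lower level (hysteresis avoids rapid block/unblock cycles).
public class FlowControlledQueue {
    private final int highWater, lowWater;
    private int depth;
    private boolean congested;

    public FlowControlledQueue(int highWater, int lowWater) {
        this.highWater = highWater;
        this.lowWater = lowWater;
    }

    public void enqueue() {
        depth++;
        if (depth >= highWater) congested = true; // would block the producer thread
    }

    public void dequeue() {
        if (depth > 0) depth--;
        if (congested && depth <= lowWater) congested = false; // unblock producer
    }

    public boolean isCongested() { return congested; }
    public int depth() { return depth; }
}
```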
  • the DPE 501 manages information embodied as blocks B of memory and links the blocks B together to form frames 526a-b, 527a-b, 528 as shown in Fig. 5.
  • Frames may also be held by queues.
  • a frame may comprise groups of one block, two blocks, or four blocks, but may also comprise other numbers of blocks B.
  • the threads comprising a layer may put frames to and take frames from the queues 524a-b, 525a-b.
  • the DPE 501 allows that frames 526a-b, 527a-b, 528 may be passed between software layers, wherein adding, removing, and modifying information in the queues, frames, and blocks B occurs without corruption of the information. Blocks B may be recycled as frames are produced and consumed by the layers.
  • queueendpoints 540a-c may comprise the layers 523a-c and may perform inspect-modify-forward operations on frames 526a-b, 527a-b, 528. For example, queueendpoints may take frames 526a-b, 527a-b, 528 from a queue or queues 524a-b, 525a-b to look at what is inside a frame to make a decision, to modify a frame, to forward a frame to another queue, and/or to consume a frame.
  • the DPE 501 has one thread per layer 523a-c and, thus, one thread per queueendpoint 540a-c. A thread may inspect the queues and may go into a waiting state. A queueendpoint 540a-c may wait on an object.
  • a queueendpoint may optionally wait on itself. Prior to waiting on itself, a queueendpoint 540a-c may register itself to all queues 524a-b, 525a-b that the queueendpoint terminates. When something is put into a queue 524a-b, 525a-b, or when congestion on a queue sourced by a queueendpoint 540a-c is cleared, the queue may notify the queueendpoint to wake it up; the queueendpoint may then take remedial action if there is congestion, or service the queue that now requires service.
  • a software data structure may be shared between a queue 524a-b, 525a-b and a queueendpoint 540a-c that indicates a status as to whether or not a particular queue needs to be serviced by a queueendpoint.
  • the structure may be local to the queueendpoint and may be exposed from the queueendpoint to the queues.
  • the software structure may contain a flag to indicate, for example, if a queue is congested, if a queue is not congested, if a queue is empty, or if a queue is full.
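The shared status structure described in the preceding bullets might be sketched as follows; the class, flag names, and bit values are illustrative assumptions.

```java
// Sketch of the queue status structure: local to a queueendpoint, exposed
// to each queue it terminates. A queue sets flags to request service; the
// queueendpoint reads and clears them when it wakes up.
public class QueueStatus {
    public static final int EMPTY = 1, NOT_EMPTY = 2, CONGESTED = 4, NOT_CONGESTED = 8;

    private int flags;

    // Called by a queue to notify its queueendpoint of an event.
    public void set(int flag) { flags |= flag; }

    // Called by the queueendpoint when it services the queue.
    public int readAndClear() {
        int f = flags;
        flags = 0;
        return f;
    }

    public boolean needsService() { return flags != 0; }
}
```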
  • in Java or Java-like languages, objects may be synchronized by using synchronized methods.
  • Java or Java-like languages provide monitors that block threads to prevent more than one thread from entering an object and, thus, potentially corrupting data.
  • the DPE 501 provides an interrupt disabling and enabling mechanism by which a thread may be granted exclusive access to an object.
  • the DPE 501 ensures that information may be transferred between layers in a deterministic manner without needing to trap on instructions (i.e., by not using monitors). In one embodiment, all interrupts are disabled.
  • the DPE 501 relies on a set of classes that enable the mechanism to pass blocks B of data across the thread boundary of a layer.
  • the present invention does so because putting or taking a frame 526a-b, 527a-b, 528 from a queue 524a-b, 525a-b may occur quickly.
  • the contentions that could otherwise occur could consume a relatively large amount of time, and latency would not be guaranteed (i.e., entering a monitor means locking an object).
  • before a frame is put into a queue, interrupts are disabled, and once the frame has been put into the queue, interrupts are restored.
  • a queue notifies a respective queueendpoint that something is happening.
  • a queueendpoint may enable and disable interrupts by calling the kernel.disable.interrupts and kernel.enable.interrupts methods.
  • at load time, a class loader may detect calls to the kernel.disable.interrupts and kernel.enable.interrupts methods.
  • invoke instructions that call those methods are replaced by the loader with a disableInterrupt or enableInterrupt opcode (and 2 nop opcodes) to fully replace a 3-byte invoke instruction.
  • an invoke sequence that typically would take 30 to 100 clock cycles may be replaced by a process that is performed in about 4 clock cycles.
  • kernel.disable.interrupts and kernel.enable.interrupts may be 10 to 50 times faster in guaranteeing exclusive access to an object.
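As a rough model of the pattern described above: standard JVMs expose no interrupt-disable opcodes, so in this sketch a depth counter merely stands in for the kernel.disable.interrupts / kernel.enable.interrupts mechanism; all class and method names are assumed for illustration, not taken from the specification.

```java
import java.util.ArrayDeque;

// Illustrative model only: a counter stands in for the real interrupt
// disable/enable opcodes to show the shape of the critical section.
final class Kernel {
    private static int disableDepth = 0;
    static void disableInterrupts() { disableDepth++; }
    static void enableInterrupts() { disableDepth--; }
    static boolean interruptsDisabled() { return disableDepth > 0; }
}

final class InterruptSafeQueue {
    private final ArrayDeque<Object> frames = new ArrayDeque<>();

    void put(Object frame) {
        Kernel.disableInterrupts();    // exclusive access without a monitor
        try {
            frames.addLast(frame);
        } finally {
            Kernel.enableInterrupts(); // interrupts restored once the frame is queued
        }
    }

    int size() { return frames.size(); }
}
```

Because the put completes in a handful of operations, the interrupt-disabled window stays short, which is the property the DPE relies on in place of monitor locking.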
  • Some protocols using the DPE 501 may operate under real-time constraints, so they cannot allow well-known standard garbage collection techniques to interfere with their execution; garbage collection allocates and frees memory continuously, and its timing is unbounded. To ensure that operations occur in a predefined time window, the DPE 501 pre-allocates blocks B at startup and keeps track of them in a free list 529. Memory may be divided and allocated into fixed size blocks B at start-up. In one embodiment, the memory is divided into small blocks B to avoid memory fragmentation. After creation, frames 526a-b, 527a-b, 528 may be consumed by the protocol stack 422, after which blocks B of memory may be recycled.
  • the size of the queues 524a-b, 525a-b may be determined at startup by the protocol stack 422 so that any one layer 523a-c does not consume too many of the blocks B in the free list 529 and so that there are enough free blocks B for other layers, frames, or queues. Because all blocks B are statically preallocated in the freelist 529, with the present invention garbage collection need not be relied upon to manage blocks of memory. After startup, because the DPE 501 includes a closed reference to all its objects and doesn't have to allocate objects, for example blocks B, and because the DPE's threads operate at a higher priority than the garbage collector thread, it may operate independently and asynchronously of garbage collection.
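The startup preallocation described above can be sketched as follows; this is a hypothetical illustration (class and method names are assumed), showing a fixed pool of fixed-size blocks carved out once and recycled through the free list rather than left to the garbage collector.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of static preallocation: every block that will ever be used is
// allocated here, at startup; afterwards blocks are only recycled.
final class FreeList {
    private final Deque<byte[]> free = new ArrayDeque<>();

    FreeList(int blockCount, int blockSize) {
        for (int i = 0; i < blockCount; i++) {
            free.push(new byte[blockSize]); // one-time allocation at startup
        }
    }

    byte[] allocate() { return free.poll(); } // null when the list is depleted
    void release(byte[] block) { free.push(block); }
    int available() { return free.size(); }
}
```

In steady state, allocate() and release() only move references, so no object is ever created or collected on the data path.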
  • the DPE 501 buffers information transferred between a source and destination and allows information to be passed by one or more queues 524a-b, 525a-b without having to copy the information, thereby freeing up bottlenecks to the processing of the information.
  • Each layer 523a-c may process the information as needed without having to copy or recopy the information. Once information is allocated to a block B, it may remain in the memory location defining the block.
  • Each layer 523a-c may add or remove headers and trailers from frames 526a-b, 527a-b, 528, as well as remove, add, modify blocks B in a frame through methods which are part of the Frame class instantiated in the layers 523a-c.
  • information in an output 530 or input 531 stream may be processed from that block B throughout the layers 523a-c of protocol stack 422, then streamed out of the block B to an application or other software or device.
  • information from a baseband circuit 307 needs to be copied to a memory location only once before use by an application, the protocol stack 422, or other software. Because in the DPE 501 different layers and their threads may read and write the same queue and, thus, the same frame and block, methods and blocks of code which access the memory location defining the queue, frame, or block would normally need to remain synchronized to guarantee coherency of the DPE 501 when making read-modify-write operations to the memory location.
  • synchronization is the process in Java that allows only one thread at a time to run a method or a block of code on a given instance.
  • the DPE 501 provides that if different threads do read-modify-write operations on the same memory location, the information in the memory location, for example, global variables, does not get corrupted.
  • a frame may comprise a plurality of blocks B, each block comprising a fixed block size.
  • a block B may comprise a completely full block of information or a partially full block of information.
  • a byte array comprising a contiguous portion of memory may be an element of Block.
  • a partially filled block B may be referenced by a start and end offset.
  • a frame may no longer comprise contiguous information.
  • a frame may comprise multiple blocks B linked together by a linked list.
  • the first block B in a block chain may reference the frame.
  • Leading and trailing empty blocks B may be removed from a frame as needed.
  • the number of blocks B in a frame may therefore change as processed by different layers.
  • Adding or removing information to or from a block B may be implemented through Block class methods and Block class method data structures.
  • Block class may comprise the following variables:
  • the payload can end anywhere in a block provided it is not before the start of the payload. This allows unused space at the end of the block in a frame.
  • the last block B in a frame if this is the first block of a frame, null otherwise. This variable may serve two purposes. First, it allows efficient access to the tail of the frame. Second, it allows delimiting frames if multiple frames are chained together.
  • information in a block B is at the end of the block.
  • the information could also be at the start of the block.
  • the first time information is written to a block B determines to which end of the block it will be put.
  • the 3 blocks B comprising the frame in
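The Block variables discussed above can be sketched as a minimal class; the field names are assumed for illustration (the specification names the concepts, not the identifiers). The payload is delimited by start and end offsets, the next reference chains blocks into a frame, and the last reference on a frame's first block gives efficient access to the frame's tail.

```java
// Sketch of a Block: a fixed-size byte array plus the chaining fields
// described in the text above.
final class Block {
    final byte[] data;
    int start;   // payload begins at data[start]
    int end;     // payload ends just before data[end]
    Block next;  // next block in the chain, null at the end of the chain
    Block last;  // last block of the frame if this is the first block, else null

    Block(int size) { data = new byte[size]; }

    int payloadLength() { return end - start; }
}
```

A partially filled block simply carries offsets into its array, so information never moves once written; adding or trimming blocks only rewires the next and last references.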
  • Queue data structures may be used to manage frames.
  • an executing thread may put the frame onto a queue to make the frame available for processing by another layer, application, or hardware.
  • a protocol stack may define more than one queue for each layer.
  • the blocks B of a frame may be linked together using the next block reference within Block class and the last block references may be used to delimit the frames.
  • member variables of the Queue Class may include:
    • Private. Maximum size of the queue in blocks.
    • Private. Flow control low threshold in blocks.
  • Putting to and getting from queues can be a blocking or non-blocking event for threads as specified in a parameter in enqueue() and dequeue() methods of the Queue class that take frames on and off a queue. If non-blocking has been specified and a queue is empty before a get, then a null block reference may be returned. If non-blocking has been specified and a queue is full before a put, then a status of false may be returned. If the access to the queue is blocking, then the wait will always have a loop around it, and a notify all instruction may be used. Waits and notifies can be for queue empty / full or for flow control.
  • a thread may be unblocked if its condition is satisfied, for example, queue-not-empty if waiting on an empty queue and queue-not-full if waiting to put to a full queue.
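The blocking / non-blocking semantics above can be sketched as follows. Note the DPE itself brackets queue access with interrupt disabling rather than monitors; on a standard JVM the equivalent shape, shown here, uses a monitor with a wait loop and notifyAll, matching the text's "the wait will always have a loop around it." Class and parameter names are assumed.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of enqueue()/dequeue() with a blocking flag: non-blocking calls
// return false/null instead of waiting; blocking calls wait in a loop.
final class FrameQueue {
    private final Deque<Object> frames = new ArrayDeque<>();
    private final int maxSize;

    FrameQueue(int maxSize) { this.maxSize = maxSize; }

    // Returns false instead of blocking when non-blocking and the queue is full.
    synchronized boolean enqueue(Object frame, boolean blocking) {
        while (frames.size() >= maxSize) {
            if (!blocking) return false;
            try { wait(); }                 // wait loop: re-check on every wakeup
            catch (InterruptedException e) { Thread.currentThread().interrupt(); return false; }
        }
        frames.addLast(frame);
        notifyAll();                        // signal queue-not-empty
        return true;
    }

    // Returns null instead of blocking when non-blocking and the queue is empty.
    synchronized Object dequeue(boolean blocking) {
        while (frames.isEmpty()) {
            if (!blocking) return null;
            try { wait(); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); return null; }
        }
        Object frame = frames.pollFirst();
        notifyAll();                        // signal queue-not-full
        return frame;
    }
}
```

The notifyAll on each transition is what unblocks a waiter whose condition (queue-not-empty or queue-not-full) has just been satisfied.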
  • Referring to FIGs 7a-b, there are seen block diagram representations of subsystems of the DPE implemented as a memory management subsystem and a frame processing subsystem, respectively.
  • the subsystems may be implemented with the software data structures disclosed herein, including, but not limited to, Block, Frame, Queue, FreeList, QueueEndpoint.
  • Figure 7a shows a representation of a memory management subsystem responsible for the exchange of Block handles/pointers between Queue, FreeList, and Frame.
  • Figure 7b shows a representation of a processing subsystem responsible for the functions of inspecting a frame, modifying a frame, and forwarding a frame with Frame.
  • the Block data structure is used to transfer basic units of information (i.e., blocks B).
  • a block B uniquely belongs either to FreeList if it is free, Frame if it is currently held by a protocol layer, or Queue if it is currently across a thread boundary.
  • More than one block B may be chained together into a block chain to form a frame.
  • An instance of the Frame class data structure is a container class for Block or a chain of Blocks. More than one frame may also be chained together.
  • the Block data structure may comprise two fields to point to the next block B and the last block B in a block chain. The next block B after the last block of a block chain indicates the start of the next block chain.
  • a block chain may comprise a payload of information embodied as information to be transported and a header that identifies what the information is or what to do with it.
  • Queue may be modified with QueueEndpoint.
  • Blocks B in a block chain may be freed or allocated to or from FreeList with QueueEndpoint. All blocks B to be used are allocated at system startup inside FreeList, allowing the memory for chaining blocks B to be available in real time and not subject to garbage collection.
  • the Queue data structure may be used to transfer a block chain from one thread to another in a FIFO manner. Queue exchanges Blocks with Frame by moving a reference to the first block of a chain of Blocks from Frame to Queue or vice versa. Queue is tied to two instances of QueueEndpoints.
  • the Frame data structure comprises a basic container class that allows protocols to inspect, modify, and forward block chains.
  • Frame may be thought of as an add/drop MUX for blocks B. All block chain manipulations may be done through the Frame data structure in order to guarantee integrity of the blocks B.
  • the Frame data structure abstracts Block operations from the protocol stack. To process information provided by more than one frame simultaneously, Frame instances are private members of QueueEndpoint instances.
  • instances of Frame may contain one chain of Blocks. All frames and queues may be allocated at startup, just like blocks; however, unlike blocks B that are allocated as actual memory, Frame and Queue may be instantiated with a null handle that can be used later to point to a chain of blocks.
  • FreeList comprises a container class for free blocks B.
  • FreeList comprises a chain of all free blocks B. There is typically only one FreeList per protocol stack 422. Operations on instances of Frame that allocate or release blocks B interact with the FreeList. All blocks B within the freelist preferably have the same size.
  • the FreeList may cover all layers of a protocol stack, from a physical hardware layer to an application layer. FreeList may be used when allocating, freeing, or adding information to/from a frame. In one embodiment, synchronization may be provided on the instance of FreeList. Every time a block B crosses a thread boundary, interrupts are disabled and then enabled, for example, every time a block B goes into the freelist or a queue, or a queueendpoint, layer, or thread boundary is crossed.
  • Referring to FIG 8, there is seen an illustration of multiple queues interacting with queueendpoint threads.
  • a queueendpoint preferably waits on one object (optionally itself) and all queues notify that object (optionally the queueendpoint).
  • Referring to Figures 7b and 9, there is seen a frame processing subsystem responsible for dequeuing a frame, inspecting its header, and consuming or forwarding the contents of a frame. A frame may be modified before being forwarded.
  • InnerQueueEndpoint holds handles to instances of Queue, which may contain instances of Frame.
  • InnerQueueEndpoint comprises its own thread to process Frame instances originating from Queue instances. Once it has completed its tasks, an InnerQueueEndpoint thread may wait for something to do. Notifications come from instances of Queue, which notify a destination QueueEndpoint that it just changed from empty to not empty, or a source QueueEndpoint that it crossed a low threshold or that it changed from congested to not congested.
  • a queue may be bounded by two queueendpoints, and may be serviced by different threads of execution.
  • Instances of Queue may provide an interface for notification that can be used by QueueEndpoint.
  • Instances of Queue may also hold a reference to both queueendpoints, which the DPE 501 can use for notifications when queue events occur.
  • Queue may specify control thresholds (hi-low) as well as a maximum number of blocks B to help debug conditions that could deplete the freelist.
  • Flow control ensures that the other end of a communication path is notified if an end can't keep up, i.e., if a queue is filling up it can be emptied.
  • InnerQueueEndpoint is responsible for creating, processing, or terminating block chains.
  • QueueEndpoint class may contain two fields "queueCongested" and "queueNotEmpty".
  • QueueEndpoint may comprise an array with which it can readily access queueCongested and queueNotEmpty, where the status elements of the array are shared with respective queues.
  • a queue may set one of these fields, which may be used to notify a queueendpoint that it has a reason to inspect the queue.
  • QueueEndpoint allows optimizations of queue operations, for example, queueendpoints are able to determine which queue needs to be serviced from notifications provided by a queue.
  • the DPE 501 provides a means by which every queue need not be polled to see if there is something to do based on a queue event.
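The notification optimization above can be sketched as a status array owned by the endpoint and shared with its queues; a queue sets its slot when it gains work, so the endpoint inspects only flagged queues instead of polling every queue. All names here are assumed for illustration.

```java
// Sketch of a queueendpoint's shared status array: queues raise flags,
// the endpoint services only the queues that flagged themselves.
final class Endpoint {
    private final boolean[] queueNotEmpty;

    Endpoint(int queueCount) { queueNotEmpty = new boolean[queueCount]; }

    // Called by queue i when it changes from empty to not empty.
    void notifyNotEmpty(int i) { queueNotEmpty[i] = true; }

    // Returns the index of the next queue needing service, or -1 if none.
    int nextToService() {
        for (int i = 0; i < queueNotEmpty.length; i++) {
            if (queueNotEmpty[i]) {
                queueNotEmpty[i] = false; // clear the flag once serviced
                return i;
            }
        }
        return -1;
    }
}
```

A -1 result means the endpoint can go back to waiting without touching any queue, which is the polling it avoids.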
  • a framer may operate on the information from the FIFO.
  • the framer may comprise hardware or software.
  • a software framer comprises interrupt service threads that are activated by hardware interrupts when information is received by the FIFO from input 531 or output 530 streams.
  • the Frame data structure is filled or emptied with information from an output 530 or input 531 stream at the hardware level by the framer in block B sized increments.
  • the queueendpoint closest to the hardware services hardware interrupts and DMA requests from peripherals by a QueueEndpoint interface to the transmit and receive buffers 312, 311 which may be accessed by the software support layer's kernel.
  • QueueEndpoint registers to a particular hardware interrupt by making itself known to the kernel. QueueEndpoint is notified by the interrupts it is registered to.
  • the kernel has a reference to a QueueEndpoint in its interrupt table, which is used to notify a thread whenever a corresponding interrupt occurs.
  • Circuit 300 may utilize a software protocol stack 422 and DPE 501, as described previously herein, when communicating with peripherals or devices.
  • the communications may occur over a baseband circuit 307 that is compliant with a BluetoothTM communications protocol.
  • BluetoothTM is available from the Bluetooth Special Interest Group (SIG) founded by Ericsson, IBM, Intel, Lucent, Microsoft, Motorola, Nokia, and Toshiba, and is available as of this writing at www.bluetooth.com/developer/specification/specification.asp. It is understood that although the specifications for the Bluetooth communications protocol may change from time to time, such changes would still be within the scope and spirit of the present invention.
  • wireless communications protocols are within the scope of the present invention and the skill of those skilled in the art, including 802.11, HomeRF, IrDA, CDMA, GSM, HDR, and so-called 3rd Generation wireless protocols such as those defined in the Third Generation Partnership Project (3GPP).
  • communications with circuit 300 may also occur using wired communication protocols such as TCP/IP, which are also within the scope of the present invention.
  • wireless or wired data transfer may be facilitated as ISDN, Ethernet, and Cable Modem data transfers.
  • a radio module 308 may be used to provide RF wireless capability to baseband circuit 307.
  • radio module 308 may be included as part of the baseband circuit 307, or may be external to it.
  • circuit 300 may include baseband circuit 307 and processor core 302 functionality on one chip die to conserve power and reduce manufacturing costs. In one embodiment, the circuit 300 may include the baseband circuit 307, processor core 302, and radio module 308 on one chip die.
  • peripheral's/device's functionality may be accomplished through lower level languages.
  • Java "native methods" JNI
  • JNI Java "native methods”
  • the embodiments described herein provide applications or other software residing on or off circuit 300 direct access to the functionality and features of peripherals or devices, for example, access to the data reception/transmission functionality of baseband circuit 307.
  • memory 362 of circuit 300 may be embodied as any of a number of memory types, for example: a SRAM memory 309 and/or a Flash memory 304.
  • the memory 362 may be defined by an address space, the address space comprising a plurality of locations.
  • the software data structures described previously herein (indicated generally by descriptor 310) may be mapped to the plurality of locations.
  • the software data structures 310 may span a contiguous address space of the memory 362.
  • Data received by baseband circuit 307 may be tied to the data structures 310 and may be accessed or used by an application program or other software. In one embodiment data may be accessed at an application layer program level through API 419.
  • the software data structures 310 may comprise objects.
  • the objects may comprise Java objects or Java-like objects.
  • the data structures 310 may comprise one or more Queues, Frames, Blocks, ByteArrays and other software data structures as described herein.
  • circuit 300 may comprise a receive 312 (Rx) and transmit 311 (Tx) buffer.
  • the receive 312 (Rx) and transmit 311 (Tx) buffers may be embodied as part of the baseband circuit 307.
  • information residing in the baseband receive 312 (Rx) and transmit 311 (Tx) buffers may be tied to the data structures 310 with minimal software intervention and minimal physical copying of data, thereby eliminating the need for time consuming translations of the data between the baseband circuit 307 and applications or other software.
  • an application or other software may utilize the received information directly as stored in the locations in memory 362.
  • the stored information may comprise byte-codes.
  • the byte-codes may comprise Java or Java-like byte- codes.
  • information as described herein is not limited to byte-codes, but may also include other data, for example, bytes, words, multi-bytes, and information streams to be processed and displayed to a user, for example, an information stream such as an audio data stream, or database access results.
  • the information may comprise a binary executable file (binary representations of application programs) that may be executed by processor core 302.
  • Unlike prior art solutions, the embodiments described herein enable transparent, direct, and dynamic transfer of data, reducing the number of times the information needs to be copied/recopied before utilization or execution by applications, the protocol stack, other software, and/or the processor core 302.
  • the software data structures 310 in memory 362 may be constructs representing one or more blocks B in queues 524a-b, 525a-b that act as FIFOs for the information streams 530, 531.
  • Data or information received by radio module 308 may be communicated to the baseband circuit 307 from where it may be transferred from the receive 312 buffer to a queue 524a-b, 525a-b by the DMA controller 305; or may originate in a queue 524a-b, 525a-b from where it may be transferred by the DMA controller 305 to the transmit buffer 311 and from the transmit buffer to the radio module 308 for transmission.
  • Setup of a transfer of data may rely on low level software interaction, with "low level software” referring to software instructions used to control the circuit 300, including, the processor core 302, the DMA controller 305, and the interrupt request IRQ controller 306.
  • data in a block B of a queue 524a-b, 525a-b is in memory 362, and the baseband circuit 307 is a peripheral.
  • DMA transfers may occur without software intervention after the low level software specifies a start address, a number of bytes to transfer, a peripheral, and a direction.
  • the DMA controller 305 may fill up or empty a receive- or transmit- buffer when needed until a number of units of data to transfer has been reached.
  • Events requiring the attention of low- level software control may be identified by an IRQ request generated by IRQ controller 306.
  • Types of events that may generate an IRQ request include: the reception of a control packet, the reception of the first fragment of a new packet of data, and the completion of a DMA transfer (number of bytes to transfer has been reached).
  • the baseband receive buffer 312 may hold data received by the radio module 308 until needed.
  • circuit 300 may comprise a framer 313.
  • the framer 313 may be embodied as hardware of the baseband circuit 307 and/or may comprise part of the low level software. The framer 313 may be used to detect the occurrence of events, which may include, the reception of a control packet or a first fragment of a new packet of data in the receive buffer 312. Upon detection, the framer 313 may generate an IRQ request.
  • an application or other software in memory 362 may use high level software protocols to listen to a peer application, for example, a web server application on an external device acting as an access point for communicating over a Bluetooth link to the baseband circuit 307.
  • Low- level software routines may be used to set up a data transfer path between the baseband circuit 307 and the peer application.
  • Data received from a peer application may comprise packets, which may be received in fragments.
  • the framer 313 may inspect the header of a fragment to determine how to handle it.
  • low- level software may perform control functions such as establishing or tearing down connections indicated by the start or end of a data packet.
  • the framer 313 may generate an interrupt allowing the low level software to allocate the fragment in an input stream to a block B.
  • the framer may then issue DMA 305 requests to transfer all the fragments of the packet from the baseband receive buffer 312 to the same block B. If a block in the queue 525a-b fills up, the DMA 305 may generate an interrupt and the low level software may allocate another block B to a queue.
  • the framer 313 may generate another interrupt to transfer the data to another block B in the same queue.
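As a simplified, purely illustrative model of the receive flow above: the real path uses a hardware framer, DMA requests, and interrupts, whereas this sketch simply splits an incoming packet into block-sized pieces, with a new block started each time one fills and the final piece possibly partial. All names are assumed.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of packet fragments being packed into fixed-size blocks B.
final class ReceivePath {
    static List<byte[]> frame(byte[] packet, int blockSize) {
        List<byte[]> blocks = new ArrayList<>();
        for (int off = 0; off < packet.length; off += blockSize) {
            int len = Math.min(blockSize, packet.length - off);
            byte[] block = new byte[len];
            System.arraycopy(packet, off, block, 0, len); // fill one block
            blocks.add(block);
        }
        return blocks;
    }
}
```

In the DPE the copy happens once, from the baseband receive buffer into the block; every later layer then works on references to these same blocks.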
  • the baseband circuit 307 transmit buffer 311 may receive data from an application or other software executing under control of the processor core 302, and when received, may send the data to the radio module 308 in its entirety or in chunks at every transmit opportunity. In a time division multiplexed system, a transmit time slot may be viewed as a transmit opportunity.
  • the low level software may configure the DMA 305 and tie that queue to the baseband transmit buffer 311.
  • the baseband transmit buffer 311, if empty, may issue a request to get filled up by the DMA 305.
  • the baseband transmit buffer 311 may issue another DMA request until the first block B that was allocated in the queue in the transmit chain has been completely transferred to the buffer, at which point the DMA 305 may request an interrupt.
  • the low level software may service the interrupt by providing the DMA 305 with another block B as filled with data from an application or other software.
  • the processor core 302 may be switched into a power saving mode between reception or transmission of two data packets.
  • a web application program when transmitting, may communicate using high level software protocols via baseband circuit 307 with other applications, software, or peripherals or devices, for example, a web server application located on an external device. Layered on top of this communication may be a high level HTTP protocol.
  • the external device may be a mobile wireless device or an access point providing a wireless link to web servers, the Internet, other data networks, service provider, or another wireless device.
  • the memory 362 may comprise a Flash memory 304, which could be used to store application programs, VM executive, or APIs.
  • the contents of the Flash memory 304 may be executed directly or loaded by a boot loader into RAM 309 at boot-up.
  • an updated application, VM, and/or API provided by an external radio module 308 could be uploaded to RAM 309.
  • the updated software could be stored to the Flash memory 304 for subsequent use (from RAM or Flash memory) upon subsequent boot-up.
  • updated applications, software, APIs, or enhancements to the VM may be stored in Flash memory or RAM for immediate use or later use without a boot-up step.
  • circuit 300 and a DMA 305 are configured to allow the transfer of data from a peripheral or device directly into a software data structure.
  • Once data is transferred into the data structure 310, it may be utilized by an application program, other software, or hardware without any further movement of the data from its location in memory 362.
  • the data comprises Java or Java-like byte-codes
  • the byte-codes may be executed directly from their location in memory.
  • a data transfer may occur in the following steps:
  • a packet of data from a peripheral or device may be received and stored in receive buffer 312 of a device or peripheral.
  • the peripheral or device may comprise an on or off circuit 300 peripheral (on circuit shown).
  • the peripheral or device may comprise baseband circuit 307.
  • Reception of data in the receive buffer 312 may generate a DMA 305 request.
  • the DMA request may flush receive buffer 312 directly into data structure 391.
  • the processor core 302 may be notified to hand the data off to an application or other software.
  • a DMA 305 is described herein in one embodiment as being used to control the direct transfer and execution of data from a peripheral or device with a minimal number of intervening processor core 302 instruction steps, it is understood that the DMA 305 comprises one possible means for transferring of data to the memory, and that other possible physical methods of data transfer between a peripheral or device and the memory 362 could be implemented by those skilled in the art in accordance with the description provided herein.
  • One such embodiment could make use of an instruction execution means, for example, the processor core 302, to execute instructions to perform a read of data provided by a peripheral or device and to store the data temporarily prior to writing the data to the memory 362, for example, in a programmable register in the processor core 302.
  • the programmable register could also be used to write data directly to a data structure 310 in memory 362 to effectuate operations using the data in as few processor instruction steps as possible.
  • the processor core 302 may need to execute two instructions per unit of data stored in the peripheral or device receive buffer 312, for example, per word.
  • the two instructions may include an instruction to read the unit of data from the peripheral or device and an instruction to write the unit of data from the temporary position to memory 362.
  • the methodology of Figure 2 requires the transfer of a unit of data from a peripheral or device to memory, including at least the following steps: a transfer of the data from the FIFO 198 to a register in the processor core 196, a transfer of the data from the core to the receive buffer 192, a transfer of the data from the buffer to the processor core 196, and finally, a transfer of the data from the core into a Java object 191, which would necessitate the execution of at least four processor instructions (read-write-read-write) per unit of data.
  • a software data structure 391 may comprise a Block data structure, as described herein previously.
  • the Block data structure may comprise a Java or Java-like software data structure, for example, a Block object.
  • the Block object may comprise a ByteArray object.
  • the Block object's handle/pointer may be referenced and saved to a FreeList data structure. The handle may be used to access the ByteArray object. With the ByteArray object pushed to the top of the stack (TOS), the base address of the ByteArray object may be referenced by a pointer.
  • the (TOS) value may be stored in a memory mapped DMA buffer base address register.
  • circuit 300 may include registers that may be read and written using an extended byte-code instruction not normally supported by standard Java or Java-like virtual machine instruction sets, for example, with an instruction providing functionality similar to a PicoJava register store instruction.
  • the predefined size may be written to a DMA "word count register" to specify how many transfers to conduct every time the DMA is triggered to service a peripheral or device, for example, the baseband circuit 307.
  • the word count register would need to be initialized only once, whereas the DMA buffer base address register would need to be modified for every new Block object, for example:

        void Native setUpDMA( nameOfByteArray, sizeOfByteArray ) {
            write nameOfByteArray to the DMA memory buffer register
            write sizeOfByteArray to the DMA word count register
            return
        }
  • setUpDMA( aByteArray, sizeOf(aByteArray) )
  • a ByteArray data structure may be set up to receive data from a peripheral or device in the following steps:
    a. An application or other software 394 may obtain a handle of, or reference to, a ByteArray data structure in a current execution context as a local variable.
    b. The handle may be pushed onto a stack 393, for example, onto a stack cache or onto a stack in memory, thereby becoming the top of stack (TOS) element.
    c. The TOS element may be written to an appropriate DMA 305 buffer base address register.
    d. A peripheral or device 395 may initiate a DMA transfer, writing information to or from the peripheral or device directly into the pre-instantiated ByteArray data structure as specified by the DMA buffer base address register.
  • circuit 300 may operate as or with a wireless device, a wired device, or a combination thereof.
  • the circuit 300 may be implemented to operate with or in a fixed device, for example a processor based device, computer, or the like, architectures of which are many, varied, and well known to those skilled in the art.
  • the circuit 300 may be implemented to work with or in a portable device, for example, a cellular phone or PDA, architectures of which are many, varied, and well known to those skilled in the art.
  • the circuit 300 may be included to function with and/or as part of an embedded device, architectures of which are many, varied, and well known to those skilled in the art.
  • While some embodiments described herein may be used with data comprising Java or Java-like data and byte-codes, and Java or Java-like objects or data structures including, but not limited to, those used in J2SE, J2ME, PicoJava, PersonalJava and EmbeddedJava environments available from Sun Microsystems Inc, Palo Alto, it is understood that with appropriate modifications and alterations, the scope of the present invention encompasses embodiments that utilize other similar programming environments, codes, objects, and data structures, for example, the C# programming language as part of the .NET and .NET compact framework, available from Microsoft Corporation, Redmond, Washington; Binary Run-time Environment for Wireless (BREW) from Qualcomm Inc., San Diego; or the MicrochaiVM environment from Hewlett-Packard Corporation, Palo Alto, California.
  • The Windows operating systems described herein are also not meant to be limiting, as other operating systems/environments may be contemplated for use with the present invention, for example, Unix, Macintosh OS, Linux, DOS, PalmOS, and Real Time Operating Systems (RTOS) available from manufacturers such as Acorn, Chorus, GeoWorks, Lucent Technologies, Microware, QNX, and WindRiver Systems, any of which may be utilized on a host and/or a target device.
  • The operation of the processor and processor core described herein is also not meant to be limiting, as other processor architectures may be contemplated for use with the present invention, for example, a RISC architecture, including those available from ARM Limited or MIPS Technologies, Inc.
  • Wireless communications protocols and circuits, for example, HDR, DECT, iDEN, iMode, GSM, GPRS, EDGE, UMTS, CDMA, TDMA, WCDMA, CDMAone, CDMA2000, IS-95B, UWC-136, IMT-2000, IEEE 802.11, IEEE 802.15, WiFi, IrDA, HomeRF, 3GPP, and 3GPP2, and wired communications protocols, for example, Ethernet, HomePNA, serial, USB, parallel, Firewire, and SCSI, all well known by those skilled in the art, may also be within the scope of the present invention.
  • The present invention should thus not be limited by the description contained herein, but by the claims that follow.
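The byte-array "handle" steps described above — an application obtaining a reference to Byte Array data as a local variable, the reference then being pushed onto a stack — can be sketched in plain Java. This is an illustrative sketch only: the class and variable names are hypothetical, and the stack 393 / stack cache of the description are implementation details below the language level (in bytecode, reading the local variable compiles to an `aload` that pushes the reference onto the operand stack).

```java
// Hypothetical illustration of holding a handle (reference) to byte
// array data as a local variable, per the description above.
public class ByteArrayHandleExample {
    public static void main(String[] args) {
        byte[] buffer = new byte[64];   // Byte Array data structure in memory
        byte[] handle = buffer;         // local variable holds a reference, not a copy
        // Using 'handle' compiles to an 'aload' bytecode that pushes the
        // reference onto the operand stack (which an implementation may
        // back with a stack cache).
        handle[0] = 42;                 // write through the handle...
        System.out.println(buffer[0]);  // ...is visible through the original
    }
}
```

Because both local variables refer to the same array object, a write through either reference is observable through the other, which is what makes passing handles (rather than copies) cheap.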

Abstract

The invention relates to a method and apparatus for performing operations on data transferred between a peripheral device and a memory, directly in a data structure stored in that memory. The data structure may comprise a Java or Java-like data structure.
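As a rough illustration of the idea in the abstract — transferring data from a peripheral directly into a Java data structure in memory — the sketch below uses a `ByteArrayInputStream` as a stand-in for a peripheral's data stream. All names here are hypothetical, and this models only the software-visible effect (filling the Java array in place, with no intermediate application-level copy), not the patented data path engine hardware itself.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class DirectTransferSketch {
    public static void main(String[] args) throws IOException {
        // Stand-in for a peripheral's incoming data stream.
        InputStream peripheral = new ByteArrayInputStream(new byte[] {1, 2, 3, 4});

        // The Java data structure that receives the data directly.
        byte[] javaDataStructure = new byte[4];

        // Fill the array in place from the device stream.
        int read = peripheral.read(javaDataStructure, 0, javaDataStructure.length);

        System.out.println(read);                 // bytes transferred
        System.out.println(javaDataStructure[2]); // third byte landed in place
    }
}
```

In the patented scheme the transfer would be performed by the data path engine rather than by a software `read` loop, but the end state is the same: the device data resides directly in the Java-visible structure.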
EP01944224A 2000-06-02 2001-06-01 Moteur de chemin de donnees (dpe) Withdrawn EP1377904A2 (fr)

Applications Claiming Priority (23)

Application Number Priority Date Filing Date Title
US871481 1997-06-09
US20896700P 2000-06-02 2000-06-02
US208967P 2000-06-02
US21340800P 2000-06-22 2000-06-22
US213408P 2000-06-22
US21781100P 2000-07-12 2000-07-12
US217811P 2000-07-12
US22004700P 2000-07-21 2000-07-21
US220047P 2000-07-21
US22854000P 2000-08-28 2000-08-28
US228540P 2000-08-28
US23932000P 2000-10-10 2000-10-10
US239320P 2000-10-10
US25755300P 2000-12-22 2000-12-22
US257553P 2000-12-22
US26755501P 2001-02-09 2001-02-09
US267555P 2001-02-09
US28271501P 2001-04-10 2001-04-10
US282715P 2001-04-10
US849648 2001-05-04
US09/849,648 US20020012329A1 (en) 2000-06-02 2001-05-04 Communications apparatus interface and method for discovery of remote devices
US09/871,481 US20010049726A1 (en) 2000-06-02 2001-05-31 Data path engine
PCT/US2001/017817 WO2001095096A2 (fr) 2000-06-02 2001-06-01 Moteur de chemin de donnees (dpe)

Publications (1)

Publication Number Publication Date
EP1377904A2 true EP1377904A2 (fr) 2004-01-07

Family

ID=27582724

Family Applications (1)

Application Number Title Priority Date Filing Date
EP01944224A Withdrawn EP1377904A2 (fr) 2000-06-02 2001-06-01 Moteur de chemin de donnees (dpe)

Country Status (3)

Country Link
EP (1) EP1377904A2 (fr)
AU (1) AU2001266656A1 (fr)
WO (1) WO2001095096A2 (fr)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10915296B2 (en) 2000-11-01 2021-02-09 Flexiworld Technologies, Inc. Information apparatus that includes a touch sensitive screen interface for managing or replying to e-mails
US20020051200A1 (en) 2000-11-01 2002-05-02 Chang William Ho Controller for device-to-device pervasive digital output
US10860290B2 (en) 2000-11-01 2020-12-08 Flexiworld Technologies, Inc. Mobile information apparatuses that include a digital camera, a touch sensitive screen interface, support for voice activated commands, and a wireless communication chip or chipset supporting IEEE 802.11
US6947995B2 (en) 2000-11-20 2005-09-20 Flexiworld Technologies, Inc. Mobile and pervasive output server
US20020097418A1 (en) 2001-01-19 2002-07-25 Chang William Ho Raster image processor and processing method for universal data output

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6034963A (en) * 1996-10-31 2000-03-07 Iready Corporation Multiple network protocol encoder/decoder and data processor
WO1999026377A2 (fr) * 1997-11-17 1999-05-27 Mcmz Technology Innovations Llc Architecture adaptable de communication entre reseaux presentant une capacite elevee
US6385643B1 (en) * 1998-11-05 2002-05-07 Bea Systems, Inc. Clustered enterprise Java™ having a message passing kernel in a distributed processing system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO0195096A3 *

Also Published As

Publication number Publication date
AU2001266656A1 (en) 2001-12-17
WO2001095096A3 (fr) 2003-10-30
WO2001095096A2 (fr) 2001-12-13

Similar Documents

Publication Publication Date Title
US20010049726A1 (en) Data path engine
US20020103942A1 (en) Wireless java device
US20020016869A1 (en) Data path engine
US5742825A (en) Operating system for office machines
US20050060705A1 (en) Optimizing critical section microblocks by controlling thread execution
US20080263554A1 (en) Method and System for Scheduling User-Level I/O Threads
CN112639741A (zh) Method and apparatus for controlling a jointly shared memory-mapped region
US7526579B2 (en) Configurable input/output interface for an application specific product
EP4002119A1 (fr) Système, appareil et procédé de transmission en continu de données d'entrée/sortie
US7913255B2 (en) Background thread processing in a multithread digital signal processor
CN112491426B (zh) Service-component communication architecture, task scheduling, and data interaction method for multi-core DSPs
EP3067796A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations procédé, support d'enregistrement, dispositif de calcul, procédé de calcul
US20070088938A1 (en) Shared interrupt control method and system for a digital signal processor
US6986028B2 (en) Repeat block with zero cycle overhead nesting
CN110874336A (zh) Low-latency control method and system for distributed block storage based on the Sunway platform
US9235417B2 (en) Real time instruction tracing compression of RET instructions
EP1377904A2 (fr) Moteur de chemin de donnees (dpe)
US7680967B2 (en) Configurable application specific standard product with configurable I/O
US7434223B2 (en) System and method for allowing a current context to change an event sensitivity of a future context
JP2001034484A (ja) Method for executing real-time tasks with a digital signal processor
EP1548591B1 (fr) Méthode, appareils et jeu d'instructions d'un accélérateur pour communications orientés objet
Welsh et al. U‐Net/SLE: A Java‐based user‐customizable virtual network interface
Mauroner et al. EventQueue: An event based and priority aware interprocess communication for embedded systems
Mor et al. Improving the Scaling of an Asynchronous Many-Task Runtime with a Lightweight Communication Engine
US20240103842A1 (en) Apparatuses, Devices, Methods and Computer Programs for Modifying a Target Application

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20020301

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): DE GB IT

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20040103