WO2000074368A2 - Platform layer for device drivers - Google Patents


Info

Publication number
WO2000074368A2
Authority
WO
WIPO (PCT)
Prior art keywords
device driver
memory access
direct memory
providing
set forth
Prior art date
Application number
PCT/US2000/015416
Other languages
English (en)
Other versions
WO2000074368A3 (fr)
Inventor
Steve Brooks
Jason Murray
David Richards
Original Assignee
Bsquare Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bsquare Corporation filed Critical Bsquare Corporation
Priority to AU54638/00A
Publication of WO2000074368A2
Publication of WO2000074368A3

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/10Program control for peripheral devices
    • G06F13/102Program control for peripheral devices where the programme performs an interfacing function, e.g. device driver

Definitions

  • the present invention relates generally to computer peripheral device driver controls, particularly to the Microsoft™ WINDOWS CE™ operating system and device drivers compatible therewith and, more specifically, to a platform layer providing a universal interface in a WINDOWS CE operating system. Description of Related Art.
  • operating systems such as DOS (disk operating system), WINDOWS 98™, WINDOWS NT™, and WINDOWS CE™
  • applications programs such as graphics programs, word processing programs, spreadsheet programs, and the like.
  • the operating systems also use integrating programs to connect and control a variety of hardware devices to the computer, such as printers, scanners, digital cameras, and the like.
  • Device drivers can generally be categorized as graphics-oriented (e.g., for WINDOWS operating systems using the Microsoft provided graphics windowing event subsystem, "GWES"), such as video monitors, or all other peripheral devices, such as serial port devices, audio drivers, NDIS, and the like.
  • FIGURE 1 is a graphical representation of the current WINDOWS CE operating system architecture 100 (more simply referred to as the "CE/OS” hereinafter).
  • the Windows CE Kernel 102 program provided by Microsoft houses a library of operating system function subroutines. Basically, the Platform Layer provides the standard hardware abstraction to support the WINDOWS CE Kernel 102, which dictates the required exported primitives.
  • as used herein, "exported primitives" shall mean basic functional units implemented by a software module and available for use by external software modules; and "abstraction" shall mean a software representation of physical hardware or processes.
  • a Platform Layer 101 contains all source code and files required to run the CE/OS on a given Hardware Platform 110, typically, a sub-notebook computer, sometimes referred to as a "palmtop" computer.
  • the Platform Layer 101 is the only hardware platform-dependent component of the CE/OS.
  • the Platform Layer 101 includes loader technology for placing a CE/OS graphical user interface (GUI) image on the target Hardware Platform 110, and an Original Equipment Manufacturer ("OEM") Adaptation Layer ("OAL") 108.
  • OEM-OAL functions and structures provide a hardware abstraction that allows the CE/OS Kernel 102 to run on any platform.
  • Peripheral DEVICE(s) 104 and GWES device(s) 106 are linked to the Hardware Platform 110 directly via respective device drivers 103, 211, 213, 215, 217. Therefore, each individual DEVICE 104 and GWES 106 requires a CE/OS compatible driver 103, 211, 213, 215, 217 for operation.
  • each device driver 103, 211, 213, 215, 217 must run as a Dynamic Link Library ("DLL") using industry standard inter-process control mechanisms, such as messages, call-backs, synchronization objects, and shared memory. That means that the device driver 103, 211, 213, 215, 217 written by the OEM to the OAL 108 must include a Microsoft-provided routine called the CE Device Driver Kit ("CEDDK"). It is still another requirement of CE/OS that the Windows CE Kernel 102 must be linked with a library containing power up/down functions, interrupt functions, serial port debug functions, and other common peripheral device functions as would be known in the art.
  • the present invention provides a method for facilitating communication between computer platforms, each of the computer platforms using a like predetermined operating system, and computer peripheral devices, each of the devices having a respective individual device driver system, via a platform layer interface system, the platform layer interface system including a device driver adaptation subsystem associated with the predetermined operating system.
  • the method includes the steps of: providing a device driver adaptation subsystem having a library of computer hardware functional abstractions wherein the functional abstractions are substantially common to a plurality of differing types of the computer platforms; providing a modified platform layer interface system having the device driver adaptation subsystem interfaced with the device driver adaptation subsystem; and interfacing communications between respective the hardware platforms and the computer peripheral devices, respectively, via the modified platform layer interface system using the device driver adaptation subsystem such that each the device driver system written to the device driver adaptation subsystem is transportable across a plurality of differing types of the computer platforms.
  • the present invention provides a computerized apparatus for conducting interfaced operations between a computer platform, having a predetermined computer operating system wherein the operating system has a computerized platform layer interface system including adaptation mechanisms for conforming computer peripheral devices to operate in conjunction with the computer platform, and computer peripheral devices, each having a device driver system.
  • the computerized apparatus includes: mechanisms for providing a device driver adaptation subsystem having a library of computer hardware functional abstractions wherein the functional abstractions are substantially common to a plurality of differing types of each the computer platform; mechanisms for providing a modified platform layer interface system having the device driver adaptation subsystem therein interfaced with the device driver adaptation subsystem; and mechanisms for interfacing communications between the computer platform and the computer peripheral devices connected thereto, respectively, via the modified platform layer interface system using the device driver adaptation subsystem such that each the device driver system written to the device driver adaptation subsystem is transportable across the plurality of differing types of the computer platforms.
  • the present invention provides a computer-to-peripheral platform layer construct for interfacing peripheral device drivers to a variety of types of computer hardware platforms, each of the platforms having a common operating system, including: associated with the operating system, predetermined adaptation layer mechanisms for adapting a peripheral device to a predetermined one of the computer platforms; and interface mechanisms for providing an interface between the drivers and each of the types of computer hardware platforms, the interface mechanisms, including memory management mechanisms for allocating and for mapping memory substantially concurrently to a call for memory by each of the drivers, interrupt mechanisms for handling system interrupt allocation and for connecting each of the driver to any free interrupt vector in the operating system substantially concurrently to a system interrupt for each of the drivers, direct memory access mechanisms for abstracting platform direct memory access requirements for adaptation by each of the drivers, and input-output mechanisms for interfacing the platform layer construct and each of the platforms, wherein the construct provides a transparent interface having localized computer hardware functionality such that any of the device drivers programmed to the construct is transportable to a plurality of hardware platforms.
  • a further advantage of the present invention is that it allows all platform layer implementation code to be structured similarly, making installation and documentation easier.
  • FIGURE 1 (Prior Art) is a graphical depiction of a generic Hardware Platform adapted for use with the Microsoft WINDOWS CE Operating System.
  • FIGURE 2 is a graphical depiction of the present invention as conformed to the WINDOWS CE operating system structure.
  • FIGURE 3 is a graphical depiction of the BCEDDK component of the present invention as shown in FIGURE 2.
  • FIGURE 4 is a flow chart illustrating a physical memory allocation subroutine in accordance with the present invention as shown in FIGURE 3.
  • FIGURE 5 is a flow chart illustrating an interrupt processing subroutine in accordance with the present invention as shown in FIGURE 3.
  • FIGURE 6 is a flow chart illustrating a slave direct memory access, without auto-initialization, subroutine in accordance with the present invention as shown in FIGURE 3.
  • FIGURE 7 is a flow chart illustrating a slave direct memory access, with auto-initialization, subroutine in accordance with the present invention as shown in FIGURE 3.
  • FIGURE 8 is a flow chart illustrating a BusMaster direct memory access, with auto-initialization, subroutine in accordance with the present invention as shown in FIGURE 3.
  • FIGURE 2 is a graphical depiction of the present invention CE/OS device driver system 200, providing a universal interface - also referred to hereinafter as the bSquare™ Platform Layer, or "BPL," 201 - conformed to the WINDOWS CE Operating System Structure for a generic Hardware Platform 110.
  • a microfiche appendix to this Detailed Description provides programming details in terms of object code, syntax, parameters, returns, and the like.
  • the existing Windows CE Kernel 102, a peripheral DEVICE 104, and GWES peripheral 106 are given to be the same exemplary apparatus as shown in the prior art of FIGURE 1.
  • the BPL 201 provides an interface that imposes structure on the CE/OS platform layer.
  • the BPL 201 incorporates the OAL 108, as in the prior art of FIGURE 1, as a static library.
  • the BPL 201 further includes a restructured CEDDK, referred to also as the bSquare-CEDDK, or more simply the "BCEDDK," 207 component.
  • "component" is defined as referring to any library (dynamic or static) that performs well-defined operations and exports a well-defined applications program interface ("API") set.
  • the BPL architecture divides CE/OS devices 104, 106 into one of several basic functional categories:
  • 1) CPU Device Drivers 211, 213 are drivers for devices that exist as part of the CPU chip set core or are integrated on the same silicon chip as the CPU - that is, CPU Devices are CPU-dependent and exist only when the CPU is installed on the target Hardware Platform 110, e.g., a video controller included on a CPU chip;
  • 2) Bus Device Drivers 215, 217 are drivers for peripheral devices that reside on an architected bus (e.g., ISA, PCI and USB), but are CPU independent; and, 3) Wired Device Drivers 203, 205 are drivers for peripheral devices connected to the CPU over a non-standard bus and are intimately tied to the Platform 110.
  • common drivers such as "Serial Port Driver 1 and Driver 2" of FIGURE 1, which communicated directly with the Hardware Platform 110, now have an indirect interface through APIs of the present invention provided by the BCEDDK 207, shown as "CPU Device Serial Driver" 211 and "CPU Device Keyboard or Mouse Driver" 213 of FIGURE 2.
  • Hardware Platforms that share the same CPU share the same CPU Device Drivers. Therefore, any core CPU driver code developed with the BPL is usable on all Hardware Platforms using that CPU.
  • Bus Device Drivers, e.g., "Bus Device Ethernet Driver 1" and "Bus Device Display Driver," now have an indirect interface through the APIs provided by the BCEDDK 207.
  • By providing the BCEDDK 207 component, once a particular Bus Device Driver is written with the BPL 201, it may be shared by simply recompiling the driver for a different instruction set architecture.
  • FIGURE 3 is a graphical depiction of the components as integrated into the BCEDDK 207. Subroutines corresponding to further information found in the microfiche appendix are designated by being enclosed in brackets, [ ], hereinafter.
  • the BCEDDK 207 component provides localization of functionality such that OEMs of a Hardware Platform 110 or a peripheral device 104, 106 need program only to the BCEDDK.
  • the BCEDDK 207 component provides a library of application program interfaces ("APIs") to which device drivers are written so as to be BCEDDK compatible.
  • the BCEDDK 207 provides several functionalities as individual components of the BPL 201 used by the CE/OS Device Driver 301 :
  • a Memory Management component 303 (see also microfiche Design Guide Appendix A., Memory Manager Functions, for syntax, fields, parameters, and the like),
  • DMA: Direct Memory Access
  • microfiche Design Guide Appendix D provides data structures used by the BPL, specifically the attributes of a specific device 104, 106 corresponding to a retrievable adapter object, as defined hereinafter.
  • Creating specific mappings between physical and virtual memory is one of the most common memory operations performed by device drivers using the Memory Management 303 component; memory mapped registers and memory buffers in physical memory must be mapped into a virtual address space to be accessed.
  • CE/OS device drivers run as a DLL in a user space process where memory accesses map into the virtual address space.
  • the Hardware Platform 110 generally includes its own CPU memory management unit ("MMU").
  • the BCEDDK Memory Management 303 component provides universal functions necessary for allocating and mapping memory. Memory Management 303 centralizes a particular Device Driver 301's requirements for random access memory. Without this function, namely under a strict CE/OS-only scheme, a block of memory in the Hardware Platform 110 would always have to be reserved for device drivers.
  • a device driver initialization, step 401, is performed by the Device Driver 301.
  • a subroutine, [MmAllocatePhysicalMemory], obtains and allocates a range of physically contiguous, cache-aligned memory from a non-paged memory pool of the Hardware Platform 110; once obtained, this function returns to the Device Driver 301 a mapped virtual address pointer to the base virtual address for the allocated memory and writes a pointer to a physically contiguous block of memory of the requested size, step 405.
  • This block of memory is positioned below the maximum acceptable physical address and conforms to the requested alignment requirement. Because the virtual pointer returned is associated directly with a physical address range, the pages comprising the buffer are locked. If no memory is currently available, a NULL flag is set and the Device Driver 301 is notified. A [MmFreePhysicalMemory] subroutine releases a range of previously allocated memory.
  • a memory descriptor list (“Mdl”) is a data structure that completely describes a contiguous virtual buffer in terms of the physical pages that comprise the buffer. The Mdl keeps track of the virtual base of the buffer, the buffer's size, and the offset into the first physical page where the buffer begins.
  • a Device Driver 301 uses Mdls when it needs to know the physical pages that make up a virtual buffer. The Mdls dictate virtual-to-physical and physical-to-virtual address tracking.
  • a [MmCreateMdl] subroutine allocates a new memory descriptor list, describing either a virtual, contiguous user buffer or a common buffer, by initializing an information data header, locking the corresponding buffer, and filling in the corresponding physical pages for each virtual page in the described buffer.
  • a pointer is provided to the Device Driver 301 of the initialized Mdl. The physical pages in the buffer described by the Mdl are locked.
  • a [MmFreeMdl] subroutine merely frees a previously allocated Mdl. Once the Mdl is created, the Device Driver 301 can use other functions.
  • a [MmGetMdlByteCount] retrieves and informs the Device Driver 301 of the length of the buffer described by the Mdl.
  • a [MmGetMdlByteOffset] subroutine retrieves any offset in the page of the buffer described by the Mdl.
  • a [MmGetMdlStartVa] subroutine retrieves starting virtual address - the virtual address of the buffer that the Mdl describes rounded to the nearest page - of the Mdl and provides a pointer to that starting virtual address to the Device Driver 301.
  • a [MmGetMdlVirtualAddress] subroutine retrieves the virtual address of the buffer described by the Mdl and returns a pointer to that virtual address to the Device Driver 301.
  • the Device Driver 301 requires information for mapping from virtual memory to physical memory. That is, memory mapped registers and memory buffers in physical memory must be mapped to a process' virtual address space to be accessed.
  • a [MmMapIoSpace] subroutine, initiated by the Device Driver 301 call for mapping, specifies the starting physical address and size of the I/O range to map and whether the memory is cacheable.
  • a virtual pointer to the base virtual address that maps to the base physical address for the specified range is returned to the Device Driver 301; as this pointer is associated directly with a physical address range, the pages comprising the buffer are locked. If space for mapping the range is insufficient, a NULL signal is returned in place of the pointer.
  • a [MmUnmapIoSpace] subroutine releases a specified range of physical addresses mapped by [MmMapIoSpace].
  • each system interrupt signal (“SYSINTR”) is assigned to a certain bus interrupt.
  • Windows CE/OS limits the number of SYSINTRs to twenty-four; that is, in the standard implementation of CE/OS compatible applications it is not possible to provide drivers with access to more than twenty-four interrupts on one Hardware Platform 110.
  • BPL allows Device Drivers 301 access to an arbitrary number of interrupts, allowing a Device Driver 301 to allocate and connect to any free interrupt vector in the OS.
  • a driver is only interested in vectors for the particular architected bus on which it runs.
  • BCEDDK provides an Interrupt Allocation 305 abstraction to relate the bus to SYSINTRs.
  • An [InterruptConnect] subroutine allocates a SYSINTR and associates it with a specified Hardware Platform 110 system interrupt vector; it returns a valid SYSINTR value to the Device Driver 301.
  • Device Driver 301 requests a SYSINTR by specifying which bus interrupts the Driver wants to use, step 501.
  • BCEDDK 207 updates a table that associates a SYSINTR with a particular bus interrupt, step 503.
  • the Device Driver 301 associates the SYSINTR with an event and enters a wait state, step 505.
  • BCEDDK 207 is called whenever a CPU's interrupt occurs, step 507.
  • BCEDDK 207 determines which bus interrupt occurred and what SYSINTR that interrupt is associated with, reporting to the Driver if it is the associated SYSINTR, step 509.
  • the Device Driver 301 releases the wait state and processes the SYSINTR, step 511. Once processed, step 513, the Device Driver 301 returns to the wait state for the next associated SYSINTR.
  • BCEDDK 207 specifies an interrupt in terms of a bus-related interrupt vector rather than in terms of a system interrupt vector.
  • Standard OAL 108 timers for Interrupt, Disable, Done, Enable, and Initialize are adapted for BPL 201. Additional TIMERS 307 are provided for specific BCEDDK 207 functions. To provide an interface for the hardware timers, the BCEDDK 207 defines both a timer object and a set of routines to manipulate that object.
  • the timer object consists of four main parts.
  • Hardware timer: Only one timer object may be allocated for each hardware timer, which is specific to the CPU and Platform 110. Each timer has a given granularity, e.g., 100 ns, subject to CPU and Platform 110 limitations. Behavior of timer objects during a suspended power state is Platform 110 dependent, as hardware timer behavior depends on the Platform's power state.
  • Up counter: Represents the number of given timer object intervals, e.g., 100 ns, the timer runs. When the timer object is stopped, the up counter is frozen until the timer object is restarted. In the preferred embodiment, the up counter is a 64-bit unsigned integer.
  • Period: Specifies at what frequency a timer will generate interrupts. This value must be set to ensure that the system is not overloaded with interrupts. In the preferred embodiment, the maximum period is 2³² × 100 ns (429.5 seconds).
  • SYSINTR: The interrupt value allocated to the timer object. If the timer object is not allocated, the SYSINTR value is irrelevant. The timer object has three states: running, stopped, and unallocated. If unallocated, the timer object does nothing. If allocated, the timer object is either running or stopped.
  • When running, the timer object's up counter adds up each elapsed timer object interval, generating interrupts on the specified SYSINTR at the specified period. When stopped, it retains its current value but no longer generates interrupts.
  • An [InterruptConnectTimer] subroutine is provided for allocating a new SYSINTR for a system timer. It is used to connect to timer interrupt sources.
  • An [InterruptDisconnectTimer] subroutine disassociates the specified SYSINTR from its corresponding timer. The SYSINTR and timer can then be reallocated.
  • An [InterruptStartTimer] subroutine starts the timer associated with the specified SYSINTR. This routine is only valid to call on a SYSINTR allocated by the [InterruptConnectTimer].
  • An [InterruptStopTimer] subroutine stops the timer associated with the specified SYSINTR. The timer stops generating interrupts but all resources remain allocated.
  • An [InterruptQueryTimer] subroutine reads the timer associated with the specified SYSINTR and returns how many intervals the timer has run (e.g., a given number of 100 ns intervals).
  • [InterruptConnectTimer] and [InterruptDisconnectTimer] subroutines operate much like [InterruptConnect] and [InterruptDisconnect], the difference being that the [InterruptConnectTimer] subroutine does not need a specified interrupt vector in order to connect, but will allocate from a pool of timer objects. Once the timer object has been allocated, and a SYSINTR corresponding to that object returned, the SYSINTR must be associated with an event by calling [InterruptInitialize]. Only after a timer object has been allocated and its SYSINTR associated with an event is it valid to call the other timer routines. Following is an example that sets up a periodic interrupt (e.g., of 5 ms) and frees the timer object via the present invention.
  • EXAMPLE 1:
  • the Device Driver 301 calls [InterruptConnectTimer] to allocate a timer object and a SYSINTR.
  • the Device Driver 301 calls [InterruptInitialize] with the returned SYSINTR and an event that is to be set when the SYSINTR occurs.
  • the Device Driver 301 calls [InterruptStartTimer] with the SYSINTR and a period of 50000.
  • the timer object will begin generating interrupts every 5 ms.
  • the timer automatically resets and will generate another interrupt in 5 ms regardless of any processing by the IST (interrupt service thread) that is handling the event associated with the SYSINTR.
  • the Device Driver 301 calls [InterruptStopTimer] with the SYSINTR.
  • It is undesirable to use the Hardware Platform 110 CPU to move the data, as doing so would degrade overall system performance. Direct Memory Access ("DMA") is therefore employed.
  • DMA modes include Packet and Common Buffer BusMaster DMA, and Packet and Common Buffer Slave DMA, where slave modes use the Hardware Platform 110 system (or an auxiliary processor) DMA controller, and BusMaster modes arbitrate for a system bus by having device 104, 106 proprietary DMA hardware move data between Platform 110 memory and the device 104, 106.
  • DMA code is generally complex.
  • the CE/OS does not provide any mechanism for allocating DMA channels and setting up DMA operations for slave mode DMA.
  • a Device Driver 301 that needs to use a slave mode DMA must make assumptions about the Hardware Platform 110.
  • BCEDDK 207 removes Hardware Platform 110 dependencies from the DMA code, providing a universal DMA Abstraction 309 by platform-dependent representations that are transparent to the Device Driver 301 in the form of adapter objects, map registers, and adapter channels.
  • Adapter objects are data structures that describe the available hardware that the Device Driver 301 can use for DMA operations.
  • a map register represents a mapping from a bus-related address to a system-accessible physical address. [The actual function that a map register performs is hardware platform-dependent; some platforms do not use map registers, while others use hardware registers to map bus addresses to system-accessible physical addresses, and others maintain a virtual map.]
  • To actually perform a DMA operation via BCEDDK 207, a Device Driver 301 must allocate an adapter channel or a common buffer, where an adapter channel allocation represents ownership of a system DMA controller channel and the availability of map registers. Such allocation prevents multiple drivers from trying to use the same DMA controller at the same time.
  • BCEDDK 207 maintains the complete state of the system DMA controller, divorcing the Device Driver 301 from controller accesses and making the driver platform independent. Packet-based slave mode DMA does not use the controller standard Auto-Initialize mode.
  • the DMA controller is set up to transfer on each page in the buffer being sent or received.
  • the packet-based slave mode DMA process can be generalized as follows and as shown in FIGURE 6.
  • the Device Driver 301 initialization routine, step 401, is called, with the Driver informing BCEDDK 207 of the packet-based slave mode DMA choice and which DMA channel the Driver will use, step 601; this handshake also negotiates the maximum DMA transfer size during one DMA operation, step 603. The Device Driver 301 signals readiness to transfer data packets to or from a buffer, step 605. If the DMA channel is not available, step 607-NO, a WAIT state is entered, step 609. When no other Driver is using the DMA channel, step 607-YES, BCEDDK 207 acknowledges access availability. The Driver calculates whether all or some of the data can be transferred based upon the handshake negotiation. When ready, the Device Driver 301 instructs the BCEDDK 207 to begin the DMA transaction on the slave DMA controller, step 611. If the Driver's buffer is outside of a memory area that the DMA controller can reach, the BCEDDK 207 copies the buffered data to accessible memory.
  • the BCEDDK 207 programs the DMA controller to make the transfer, step 613.
  • upon completion of the transfer, an interrupt is sent to the Device Driver 301.
  • the next transfer is initiated, step 615-NO, or, step 615-YES, the Driver waits, step 617, for its next DMA operation.
  • the common buffer slave DMA routine uses the system DMA controller with the Auto-Initialize mode enabled, as depicted in FIGURE 7.
  • the Driver sets up the DMA controller to continually read from a common buffer that is accessible from both the CPU of the Hardware Platform 110 and the Device Driver 301.
  • the common buffer slave mode DMA operation can be generalized as follows.
  • the Device Driver 301 initialization routine, step 401, is performed.
  • the Driver informs BCEDDK 207 of the slave mode DMA with Auto-Initialize choice, step 701.
  • the Driver call includes which DMA channel the Driver will use; this handshake also negotiates the maximum DMA transfer size during one DMA operation, step 703, and the Device Driver 301 requests allocation of a buffer accessible from both the Driver and the bus.
  • the Device Driver 301 signals readiness to BCEDDK 207, step 705. If the channel is occupied, step 707-NO, a WAIT state is initiated, step 709.
  • when the channel is free, step 707-YES, BCEDDK 207 authorizes exclusive access to the Driver.
  • the Device Driver 301 signals the BCEDDK 207 to begin the DMA transmission using the slave DMA controller, step 711. Again, the BCEDDK 207 programs the slave controller, step 713. The transmission initiates and continues, step 715, until the end of the buffer, in which case the DMA controller will reset to the beginning of the buffer; the Device Driver 301, having exclusive access at this time, is free to copy data into or out of the buffer as needed. Once finished, the Driver 301 instructs BCEDDK 207 to stop DMA control and WAITs for the next DMA operation, step 717.
  • the following is an example of the Device Driver 301 performing a packet-based Slave DMA via the present invention.
  • the Device Driver 301 calls [HalGetAdapter] to allocate an adapter object.
  • the driver specifies a DEVICE_DESCRIPTION structure. For a packet-based slave DMA device, the driver must set Master to FALSE and Auto-Initialize to FALSE.
  • the Device Driver 301 calls [MmCreateMdl] to create an Mdl.
  • the function locks down the virtual buffer and determines the physical pages that comprise the buffer to access.
  • the BCEDDK 207 uses these physical page numbers to set up the hardware.
  • the Device Driver 301 calls [HalAllocateAdapterChannel] when ready to set up the device 104, 106 for DMA transfer.
  • the device 104, 106 must have exclusive access to needed map registers and the DMA controller. When the driver requests these from [HalAllocateAdapterChannel], the function blocks until the resources are available.
  • the Device Driver 301 calls [HalMapTransfer] to set up the system DMA hardware to perform the DMA. This call is repeated until the entire buffer has been transferred.
  • the Device Driver 301 calls [HalFreeAdapterChannel].
  • the Device Driver 301 calls [MmFreeMdl] when finished with the MDL.
  • the transfer operation for packet-based Slave DMA operation in accordance with the present invention is exemplified as follows.
  • the Device Driver 301 calls [HalMapTransfer] to instruct the hardware to perform a DMA operation.
  • Inputs to the function include the adapter object, the map registers, the virtual buffer MDL, and an index specifying how much of the buffer has been transferred.
  • the Device Driver 301 now repeats the calls to [HalMapTransfer] and [HalFlushAdapterBuffers] as needed until the entire buffer is transferred. On each of the repeat calls to [HalMapTransfer], the Driver 301 should provide the virtual index value returned from the preceding call to [HalMapTransfer].
  • The following is an example of the Driver 301 performing a common buffer Slave DMA call via the present invention.
  1. Upon loading, the Device Driver 301 calls [HalGetAdapter] to allocate an adapter object. The driver specifies a DEVICE_DESCRIPTION structure; for a common buffer slave DMA device, the Driver 301 must set Master to FALSE and AutoInitialize to TRUE. Also at startup, the Device Driver 301 calls [HalAllocateCommonBuffer] to allocate a buffer common to both the CPU and the device.
  2. If [HalAllocateCommonBuffer] returns NULL, the Device Driver 301 should free resources, unload, and report failure.
  3. The Device Driver 301 calls [MmCreateMdl] to create an MDL for the common buffer. The function locks down the virtual buffer and determines the physical pages that comprise the buffer to access. The BCEDDK 207 uses these physical page numbers to set up the hardware.
  4. The Device Driver 301 calls [HalAllocateAdapterChannel] when ready to set up the device for the DMA transfer. The device 104, 106 must have exclusive access to the needed map registers and the DMA controller, which the Driver 301 requests from [HalAllocateAdapterChannel]; the function blocks until these resources are available.
  5. The Device Driver 301 calls [HalMapTransfer] to set up the system DMA hardware to perform the DMA.
  6. The Device Driver 301 calls [HalFreeAdapterChannel] to release the adapter channel.
  7. The Device Driver 301 calls [MmFreeMdl] when finished with the MDL.
  • The transfer operation for a common buffer Slave DMA operation in accordance with the present invention is exemplified as follows. The Device Driver 301 calls [HalMapTransfer] one time to instruct the hardware to perform a DMA operation. Slave DMA devices ignore the address returned from [HalMapTransfer].
  • BusMaster DMA Mode. CE/OS does not provide any mechanism for setting up BusMaster DMA data transfers. A Device Driver 301 that needs to use BusMaster DMA operations must make assumptions about the Hardware Platform 110 it is running on.
  • BCEDDK 207 merely obtains mappings from bus addresses to system memory.
  • The Device Driver 301 sets up its DMA controller for a transfer of each page of the data being sent or received.
  • The BusMaster DMA process can be generalized as follows and as illustrated in FIGURE 8.
  • The Device Driver 301 initializes, step 401.
  • The Device Driver 301 informs the BCEDDK 207 that it is about to perform a BusMaster DMA operation, step 801; again, this handshake also negotiates the maximum DMA transfer size for one DMA operation.
  • The Driver can choose whether to allocate its own buffer, step 803-YES, or have the BCEDDK 207 allocate a physical buffer, steps 803-NO and 805 (see FIGURE 4).
  • The BCEDDK 207 takes control of the selected buffer, step 807, providing the address and, if the Driver allocated its own buffer, ensuring that the buffer used in the DMA transaction is smaller than the negotiated maximum DMA transfer size.
  • BCEDDK 207 establishes mapping registers or calculates Hardware Platform 110 specific offsets and reports where the buffer is addressable on the Device's bus, step 809.
  • As needed, step 811-NO, the request and mapping steps are repeated before the transfer, step 813 (for packet-based BusMaster DMA, if the Device has scatter/gather support, the driver sets up its DMA controller to transfer multiple pages; if not, the Driver transfers one page at a time).
  • Once the DMA is completed, a wait state, step 815, is entered until the next BusMaster DMA call, step 801.
  • A Device Driver 301 must create an adapter object for each slave DMA controller and for each BusMaster device in the operating system.
  • The following is an example of a typical calling sequence for packet-based BusMaster DMA to or from a virtual buffer.
  1. Upon loading, the Device Driver 301 calls [HalGetAdapter] to allocate an adapter object. The Driver 301 specifies a DEVICE_DESCRIPTION structure; for a packet-based BusMaster DMA device, the driver must set Master to TRUE.
  2. To start DMA to or from a virtual buffer, the Device Driver 301 calls [MmCreateMdl] to create an MDL. The function locks down the virtual buffer and determines the physical pages that comprise the buffer to access. When the actual DMA transfer is about to commence, the BCEDDK 207 uses these physical page numbers to set up the hardware.
  3. The Device Driver 301 calls [HalAllocateAdapterChannel] when ready to set up the device for the DMA transfer. The device 104 must have exclusive access to the needed map registers, which the Driver 301 requests from [HalAllocateAdapterChannel]; the function blocks until the registers are available.
  4. The Device Driver 301 calls [HalMapTransfer] to set up map registers for the DMA transfer.
  5. Step 4 is repeated until the entire buffer has been transferred.
  6. The Device Driver 301 calls [HalFreeMapRegisters] to release the map registers allocated through [HalAllocateAdapterChannel].
  7. The Device Driver 301 calls [MmFreeMdl] when finished with the MDL.
  • The transfer operation in accordance with the present invention for packet-based BusMaster DMA is exemplified as follows.
  1. The Device Driver 301 calls [HalMapTransfer] with a value of NULL for an adapter object. The DMA hardware is in the device 104 itself, so [HalMapTransfer] only sets up the map registers. [HalMapTransfer] returns the bus address and the length of the region mapped; the Driver 301 can use this information to set up its DMA hardware.
  2. If the device 104 has scatter/gather support, the Driver 301 calls [HalMapTransfer] multiple times. In its first call, the Driver 301 should provide a virtual pointer to the head of the buffer being transferred. After calling this function, the Driver 301 must call [HalFlushAdapterBuffers] to ensure that any cached data has been flushed. The Device Driver 301 then repeats the calls to [HalMapTransfer] and [HalFlushAdapterBuffers] as needed until the available map registers have been exhausted or until the scatter/gather hardware is full, at which point the Driver 301 should instruct the device 104 to start its DMA. On each of the repeated calls to [HalMapTransfer], the Driver 301 should provide the virtual index value returned from the preceding call to [HalMapTransfer].
  3. If the device 104 does not have scatter/gather support, the Driver 301 calls [HalMapTransfer] and [HalFlushAdapterBuffers] one page at a time.
  • For common buffer BusMaster DMA, the Device Driver 301 calls [HalGetAdapter] to allocate an adapter object. The Driver 301 specifies a DEVICE_DESCRIPTION structure (Appendix D); for a common buffer BusMaster DMA device, the Driver 301 must set Master to TRUE.
  • The Driver 301 calls [HalAllocateCommonBuffer] to allocate a buffer common to both the CPU and the device. The virtual pointer returned from [HalAllocateCommonBuffer] can be used by the Driver 301 to move data directly to or from the buffer with the CPU. If [HalAllocateCommonBuffer] returns NULL, the Driver 301 should free resources, unload, and report failure.
  • The Driver 301 can use the logical bus address of the common buffer to set up the DMA device.
  • The Input-Output Access Routines 311 are defined on, and adapted directly from, the common operating system, e.g., the Windows NT and Windows CE programs. Again, further details are provided in the microfiched Design Guide Appendix C, as well as being available commercially from the operating system supplier; e.g., see the Microsoft Windows CE InfoViewer Documentation.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)

Abstract

The invention relates to a platform layer interface for device drivers (referred to in what follows as the bSquaretm platform layer, or 'BPL') that adapts a computer operating system, for example an operating system such as WINDOWS CE, and constitutes a portable system for connecting device drivers to a range of computer hardware platforms. The BPL provides an interface that imposes the structure of the operating system's platform layer, with several functionalities implemented by individual BPL components used by the drivers, including in particular: 1) a memory management component, 2) an interrupt allocation component, 3) an interrupt synchronization component, 4) a direct memory access ('DMA') component, including bus master and slave operations, and 5) an input-output access routines component.
PCT/US2000/015416 1999-06-01 2000-06-01 Device driver platform layer WO2000074368A2 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU54638/00A AU5463800A (en) 1999-06-01 2000-06-01 Device driver platform layer

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US32407399A 1999-06-01 1999-06-01
US09/324,073 1999-06-01

Publications (2)

Publication Number Publication Date
WO2000074368A2 true WO2000074368A2 (fr) 2000-12-07
WO2000074368A3 WO2000074368A3 (fr) 2001-08-09

Family

ID=23261947

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/015416 WO2000074368A2 (fr) 1999-06-01 2000-06-01 Device driver platform layer

Country Status (2)

Country Link
AU (1) AU5463800A (fr)
WO (1) WO2000074368A2 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2862459A1 (fr) * 2003-11-14 2005-05-20 Hewlett Packard Development Co DMA slot allocation
EP1540472A1 (fr) * 2002-08-26 2005-06-15 Interdigital Technology Corporation Application programmer's interface (API) for wireless device operating system (OS)
CN115481397B (zh) * 2022-08-31 2023-06-06 中国人民解放军战略支援部队信息工程大学 Code injection attack forensics detection method and system based on reverse analysis of memory structure

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5265252A (en) * 1991-03-26 1993-11-23 International Business Machines Corporation Device driver system having generic operating system interface
US5430845A (en) * 1990-06-07 1995-07-04 Unisys Corporation Peripheral device interface for dynamically selecting boot disk device driver

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5430845A (en) * 1990-06-07 1995-07-04 Unisys Corporation Peripheral device interface for dynamically selecting boot disk device driver
US5265252A (en) * 1991-03-26 1993-11-23 International Business Machines Corporation Device driver system having generic operating system interface

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BAKER A.: 'The windows NT device driver book: A guide for programmers', 1997, PRENTICE HALL XP002937685 pages 76-77, 86-89, 258-267, 350-360 *
MCLEMAN J.: 'Alternative models for windows CE drivers' PORTABLE DESIGN April 1999, pages 26 - 27, XP002937686 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1540472A1 (fr) * 2002-08-26 2005-06-15 Interdigital Technology Corporation Application programmer's interface (API) for wireless device operating system (OS)
EP1540472A4 (fr) * 2002-08-26 2005-12-21 Interdigital Tech Corp Application programmer's interface (API) for wireless device operating system (OS)
US7409682B2 (en) 2002-08-26 2008-08-05 Interdigital Technology Corporation Software porting layer
US7506340B2 (en) 2002-08-26 2009-03-17 Interdigital Technology Corporation Operating system (OS) abstraction layer
US7526777B2 (en) 2002-08-26 2009-04-28 Interdigital Technology Corporation Wireless device operating system (OS) application programmer's interface (API)
FR2862459A1 (fr) * 2003-11-14 2005-05-20 Hewlett Packard Development Co DMA slot allocation
US7188195B2 (en) 2003-11-14 2007-03-06 Hewlett-Packard Development Company, L.P. DMA slot allocation
CN115481397B (zh) * 2022-08-31 2023-06-06 中国人民解放军战略支援部队信息工程大学 Code injection attack forensics detection method and system based on reverse analysis of memory structure

Also Published As

Publication number Publication date
AU5463800A (en) 2000-12-18
WO2000074368A3 (fr) 2001-08-09

Similar Documents

Publication Publication Date Title
US5935228A (en) Method for automatically enabling peripheral devices and a storage medium for storing automatic enable program for peripheral devices
US20060253682A1 (en) Managing computer memory in a computing environment with dynamic logical partitioning
US5953516A (en) Method and apparatus for emulating a peripheral device to allow device driver development before availability of the peripheral device
US7526578B2 (en) Option ROM characterization
RU2532708C2 (ru) Способ и устройство для осуществления операции ввода/вывода в среде виртуализации
US6591358B2 (en) Computer system with operating system functions distributed among plural microcontrollers for managing device resources and CPU
US5758182A (en) DMA controller translates virtual I/O device address received directly from application program command to physical i/o device address of I/O device on device bus
US6363409B1 (en) Automatic client/server translation and execution of non-native applications
US5623692A (en) Architecture for providing input/output operations in a computer system
EP0752646B1 (fr) Réalisation d'interface de gestion périphérique par accès de données
CN101308466B (zh) 数据处理方法
US5721947A (en) Apparatus adapted to be joined between the system I/O bus and I/O devices which translates addresses furnished directly by an application program
EP1734444A2 (fr) Échange de données entre un système d'exploitation hôte et un système d'exploitation de commande via des E/S mappées en mémoire
US7840773B1 (en) Providing memory management within a system management mode
JP2002517034A (ja) エミュレーションコプロセッサ
US5918050A (en) Apparatus accessed at a physical I/O address for address and data translation and for context switching of I/O devices in response to commands from application programs
WO1999012095A1 (fr) Procede et dispositif destines a une execution concurrente de systemes d'exploitation
KR100764921B1 (ko) 장치 이뉴머레이션을 위한 가상 rom
US8930568B1 (en) Method and apparatus for enabling access to storage
US5696990A (en) Method and apparatus for providing improved flow control for input/output operations in a computer system having a FIFO circuit and an overflow storage area
JP2002530778A (ja) 複数の仮想ダイレクトメモリアクセスチャネルをサポートするためのダイレクトメモリアクセスエンジン
JP2002539524A (ja) 周辺デバイス割込みを処理するための装置および方法
KR100265679B1 (ko) 리얼 타임 제어 시스템
US5640591A (en) Method and apparatus for naming input/output devices in a computer system
US7130982B2 (en) Logical memory tags for redirected DMA operations

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
AK Designated states

Kind code of ref document: A3

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase in:

Ref country code: JP