EP1872576A2 - Integrated wireless multimedia transmission system - Google Patents
Integrated wireless multimedia transmission system
Info
- Publication number
- EP1872576A2 (application EP06750566A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- video data
- video
- data
- audio
- media
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W28/00—Network traffic management; Network resource management
- H04W28/02—Traffic management, e.g. flow control or congestion control
- H04W28/06—Optimizing the usage of the radio link, e.g. header compression, information sizing, discarding information
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/70—Media network packetisation
Definitions
- the present invention relates generally to methods and systems for the wireless real time transmission of data from a source to a monitor.
- the present invention further relates generally to the substantially automatic configuration of wireless devices.
- Prior attempts at enabling the integration of computing devices with televisions have focused on a) transforming the television into a networked computing appliance that directly accesses the Internet to obtain media, b) creating a specialized hardware device that receives media from a computing device, stores it, and, through a wired connection, transfers it to the television, and/or c) integrating into the television a means to accept storage devices, such as memory sticks.
- these conventional approaches suffer from having to substantially modify existing equipment, i.e. replacing existing computing devices and/or televisions, or purchasing expensive new hardware.
- these approaches have typically required the use of multiple physical hard-wired connections to transmit graphics, text, audio, and video.
- the present invention is a method of capturing media from a source and wirelessly transmitting said media, comprising the steps of: playing said media, comprising at least audio data and video data, on a computing device; capturing said video data using a mirror display driver; capturing said audio data from an input source; compressing said captured audio and video data; and transmitting said compressed audio and video data using a transmitter.
- the method further comprises the step of receiving said media at a receiver, decompressing said captured audio and video data, and playing said decompressed audio and video data on a display remote from said source.
- the transmitter and receiver establish a connection using TCP and the transmitter transmits packets of video data using UDP.
- the media further comprises graphics and text data and wherein said graphics and text data is captured together with said video data using the mirror display driver.
- the method further comprising the step of processing said video data using a CODEC.
- the CODEC removes temporal redundancy from the video data using a motion estimation block.
- the CODEC converts a frame of video data into 8*8 blocks or 4*4 blocks of pixels using a DCT transform block.
- the CODEC codes video content into shorter words using a VLC coding circuit.
- the CODEC converts back spatial frequencies of the video data into the pixel domain using an IDCT block.
- the CODEC comprises a rate control mechanism for speeding up the transmission of media.
- the present invention comprises a program stored on a computer-readable substrate for capturing media, comprising at least video data, from a source and wirelessly transmitting said media, comprising a mirror display driver operating in a kernel mode for capturing said video data; a CODEC for processing said video data; and a transmitter for transmitting said processed video data.
- the program further comprises a virtual display driver.
- the transmitter establishes a connection with a receiver using TCP and the transmitter transmits packets of video data using UDP.
- the media further comprises graphics and text data and said mirror display driver captures graphics and text data together with said video data.
- the CODEC comprises a motion estimation block for removing temporal redundancy from the video data.
- the CODEC comprises a DCT block for converting a frame of video data into 8*8 or 4*4 blocks of pixels.
- the CODEC comprises a VLC coding circuit for coding video content into shorter words.
- the CODEC comprises an IDCT block for converting back spatial frequencies of the video data into the pixel domain.
- the CODEC comprises a rate control mechanism for speeding up the transmission of media.
- Figure 1 depicts a block diagram of the integrated wireless media transmission system of the present invention
- Figure 2 depicts the components of a transmitter of one embodiment of the present invention
- Figure 3 depicts a plurality of software modules comprising one embodiment of a software implementation of the present invention
- Figure 4 depicts the components of a receiver of one embodiment of the present invention
- Figure 5 is a flowchart depicting an exemplary operation of the present invention.
- Figure 6 depicts one embodiment of the TCP/UDP RT hybrid protocol header structures of the present invention
- Figure 7 is a flowchart depicting exemplary functional steps of the TCP/UDP RT transmission protocol of the present invention
- Figure 8 depicts a block diagram of an exemplary codec used in the present invention
- Figure 9 is a functional diagram of an exemplary motion estimation block used in the present invention.
- Figure 10 depicts one embodiment of the digital signal waveform and the corresponding data transfer;
- Figure 11 is a block diagram of an exemplary video processing and selective optimization of the IDCT block of the present invention
- Figure 12 is a block diagram depicting the components of the synchronization circuit for synchronizing audio and video data of the present invention
- Figure 13 is a flowchart depicting another embodiment of synchronizing audio and video signals of the present invention.
- Figure 14 depicts another embodiment of the audio and video synchronization circuit of the present invention.
- Figure 15 depicts an enterprise configuration for automatically downloading and updating the software of the present invention
- Figure 16 is a schematic diagram depicting the communication between a transmitter and plurality of receivers
- Figure 17 depicts a block diagram of a Microsoft Windows framework for developing display drivers
- Figure 18 depicts a block diagram of an interaction between a GDI and a display driver.
- Figure 19 depicts a block diagram of a DirectDraw architecture.
- the present invention is an integrated wireless system for transmitting media wirelessly from one device to another device in real time.
- the present invention will be described with reference to the aforementioned drawings.
- the embodiments described herein are not intended to be exhaustive or to limit the invention to the precise forms disclosed. They are chosen to explain the invention and its application and to enable others skilled in the art to utilize the invention.
- a computing device 101 such as a conventional personal computer, desktop, laptop, PDA, mobile telephone, gaming station, set-top box, satellite receiver, DVD player, personal video recorder, or any other device, operating the novel systems of the present invention communicates through a wireless network 102 to a remote monitor 103.
- the computing device 101 and remote monitor 103 further comprise a processing system on a chip capable of wirelessly transmitting and receiving graphics, audio, text, and video encoded under a plurality of standards.
- the remote monitor 103 can be a television, plasma display device, flat panel LCD, HDTV, projector, or any other electronic display device known in the art capable of rendering graphics, audio and video.
- the processing system on chip can either be integrated into the remote monitor 103 and computing device 101 or incorporated into a standalone device that is in wired communication with the remote monitor 103 or computing device 101.
- An exemplary processing system on a chip is described in PCT/US2006/00622, which is also assigned to the owner of the present application, and incorporated herein by reference.
- Computing device 200 comprises an operating system 201 capable of running the novel software systems of the present invention 202 and a transceiver 203.
- the operating system 201 can be any operating system including but not limited to MS Windows 2000, MS Windows NT, MS Windows XP, Linux, OS/2, Palm-based operating systems, cell phone operating systems, iPod operating systems, and MAC OS.
- the computing device 200 transmits media using appropriate wireless standards for the transmission of graphics, text, video and audio signals, for example, IEEE 802.11a, 802.11g, Bluetooth 2.0, HomeRF 2.0, HiperLAN/2, and Ultra Wideband, among others, along with proprietary extensions to any of these standards.
- the software 300 comprises a module for the real-time capture of media 301, a module for managing a buffer for storing the captured media 302, a codec 303 for compressing and decompressing the media, and a module for packaging the processed media for transmission 304.
- the computing device receives media from a source, whether it be downloaded from the Internet, real-time streamed from the Internet, transmitted from a cable or satellite station, transferred from a storage device, or any other source.
- the media is played on the computing device via a suitable player installed on the computing device.
- the software module 301 captures the data in real time and temporarily stores it in the buffer before transmitting it to the CODEC.
- the CODEC 303 compresses it and prepares it for transmission.
- the receiver 400 comprises a transceiver 401, a CODEC 402, a display device 403 for rendering video and graphics data and an audio device 404 for rendering the audio data.
- the transceiver 401 receives the compressed media data, preferably through a novel transmission protocol used by the present invention.
- the novel transmission protocol is a TCP/UDP hybrid protocol.
- the TCP/UDP hybrid protocol for the real-time transmission of packets combines the security services of TCP with the simplicity and lower processing requirements of UDP.
- the content received by the receiver is then transmitted to the CODEC 402 for decompression.
- the CODEC decompresses the media and prepares the video and audio signals, which are then transmitted to the display device 403 and speakers 404 for rendering.
- the personal computer plays 501 the media using an appropriate media player on its console.
- the media player can include players from Apple (iPod), RealNetworks (RealPlayer), Microsoft (Windows Media Player), or any other media player.
- the software of the present invention captures 502 the real-time video directly from the video buffer. The captured video is then compressed 503 using the CODEC. Similarly, the audio is captured 504 using the audio software operating on the computing device and is compressed using the CODEC.
- the software of the present invention captures video through the implementation of software modules comprising a mirror display driver and a virtual display driver.
- the mirror display driver and virtual display driver are installed as components in the kernel mode of the operating system running on the computer that hosts the software of the present invention.
- a mirror display driver for a virtual device duplicates the operations of a physical display device driver.
- a mirror display driver is used for capturing the contents of a primary display associated with the computer while a virtual display driver is used to capture the contents of an "extended desktop" or a secondary display device associated with the computer.
- the operating system renders graphics and video content onto the video memory of a virtual display driver and a mirror display driver. Therefore, any media being played by the computer using, for example, a media player is also rendered on one of these drivers.
- An application component of the software of the present invention maps the video memory of virtual display driver and mirror display driver in the application space. In this manner, the application of the present inventions obtains a pointer to the video memory.
- the application of the present invention captures the real-time images projected on the display (and, therefore, the real-time graphics or video content that is being displayed) by copying the memory from the mapped video memory to locally allocated memory.
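In outline, the capture step reduces to a memory copy. The sketch below is a minimal illustration; map_driver_video_memory and get_display_mode are assumed stand-ins for the driver's actual mapping interface, which the patent does not spell out.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical helpers: in the real system these would come from mapping
 * the mirror/virtual driver's video memory into the application's address
 * space (e.g. via an IOCTL to the driver). */
extern uint8_t *map_driver_video_memory(void);
extern void     get_display_mode(int *width, int *height, int *bytes_per_pixel);

/* Copy one frame from the mapped video memory into a locally allocated
 * buffer, giving the application a snapshot of the display at this instant. */
uint8_t *capture_frame(size_t *out_size)
{
    int w, h, bpp;
    get_display_mode(&w, &h, &bpp);

    uint8_t *mapped = map_driver_video_memory();
    size_t frame_bytes = (size_t)w * h * bpp;

    uint8_t *local = malloc(frame_bytes);
    if (!local)
        return NULL;

    /* The capture itself is just a memory copy: the OS has already
     * rendered the desktop (GDI/DirectDraw output) into this memory. */
    memcpy(local, mapped, frame_bytes);

    *out_size = frame_bytes;
    return local;   /* caller hands this to the CODEC for compression */
}
```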
- the mirror display driver and virtual display driver operate in the kernel space of a Microsoft operating system, such as a Windows 2000/NT compatible operating system.
- the GDI 1702 issues graphics output requests. These requests are routed to software operating in the kernel space, including a kernel-mode GDI 1705.
- Kernel-mode GDI 1705 is an intermediary support between a kernel-mode graphics driver 1706 and an application 1701. Kernel-mode GDI 1705 sends these requests to an appropriate miniport 1709 or graphics driver, such as a display driver 1706 or printer driver (not shown).
- the miniport driver 1709 is written for one graphics adapter (or family of adapters).
- the display driver 1706 can be written for any number of adapters that share a common drawing interface. This is because the display driver draws, while the miniport driver performs operations such as mode sets and provides information about the hardware to the driver. It is also possible for more than one display driver to work with a particular miniport driver.
- the active component in this architecture is the Win32-GDI process 1702 and the application 1701. The rest of the components 1705-1710 are called from the Win32-GDI process 1702.
- the video miniport driver 1709 generally handles operations that interact with other kernel components 1703. For example, operations such as hardware initialization and memory mapping require action by the NT I/O subsystem. Video miniport driver 1709 responsibilities include resource management, such as hardware configuration, and physical device memory mapping. The video miniport driver 1709 is specific to the video hardware. The display driver 1706 uses the video miniport driver 1709 for operations that are not frequently requested; for example, to manage resources, perform physical device memory mapping, ensure that register outputs occur in close proximity, or respond to interrupts. The video miniport driver 1709 also handles mode set interaction with the graphics card, multiple hardware types, and mapping of the video registers into the display driver's 1706 address space.
- there are certain functions that a driver writer should implement in order to write a miniport. These functions are exported to the video port with which the miniport interacts.
- the driver writer specifies, in the miniport, the absolute addresses of the video memory and registers present on the video card. These addresses are first converted to bus-relative addresses and then to virtual addresses in the address space of the calling process.
- the display driver's 1706 primary responsibility is rendering.
- the Graphics Device Interface (GDI) 1705 interprets these instructions and calls the display driver 1706.
- the display driver 1706 then translates these requests into commands for the video hardware to draw graphics on the screen.
- the display driver 1706 can access the hardware directly.
- GDI 1705 handles drawing operations on standard format bitmaps, such as on hardware that includes a frame buffer.
- a display driver 1706 can hook and implement any of the drawing functions for which the hardware offers special support.
- the driver 1706 can push functions back to GDI 1705 and allow GDI 1705 to do the operations.
- the display driver 1706 has direct access to video hardware registers.
- the VGA display driver for x86 systems uses optimized assembly code to implement direct access to hardware registers for some drawing and text operations.
- display driver 1706 performs other operations such as surface management and palette management.
- GDI 1801 issues a DrvEnableDriver command 1810 to the display driver 1802.
- GDI 1801 then issues a DrvEnablePDEV command 1811 to the display driver 1802.
- GDI 1801 receives an EngCreatePalette command 1812 from the display driver 1802.
- GDI 1801 issues a DrvCompletePDEV command 1813 to the display driver 1802.
- GDI 1801 then issues a DrvEnableSurface command 1814 to the display driver 1802.
- GDI 1801 receives an EngCreateDeviceSurface command 1815 from the display driver 1802 and an EngModifySurface command 1816 from the display driver 1802.
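For orientation, the DrvEnableDriver call (step 1810) is where a Windows 2000/NT display driver publishes the entry points used in the sequence above. The following is a hedged sketch against the DDK's winddi.h, not the patent's actual driver source; the other entry points are assumed to be implemented elsewhere in the driver.

```c
#include <winddi.h>   /* Windows 2000/NT DDK display-driver interface */

/* Entry points implemented elsewhere in the driver (prototypes only). */
DHPDEV DrvEnablePDEV(DEVMODEW *, LPWSTR, ULONG, HSURF *, ULONG, ULONG *,
                     ULONG, DEVINFO *, HDEV, LPWSTR, HANDLE);
VOID   DrvCompletePDEV(DHPDEV, HDEV);
HSURF  DrvEnableSurface(DHPDEV);

/* Table of DDI functions this driver exposes; GDI calls them in the
 * order shown in Figure 18 (EnablePDEV -> CompletePDEV -> EnableSurface). */
static DRVFN gDrvFn[] = {
    { INDEX_DrvEnablePDEV,    (PFN)DrvEnablePDEV    },
    { INDEX_DrvCompletePDEV,  (PFN)DrvCompletePDEV  },
    { INDEX_DrvEnableSurface, (PFN)DrvEnableSurface },
};

/* DrvEnableDriver (step 1810): the first call GDI makes into the driver;
 * it reports which DDI functions the driver implements. */
BOOL DrvEnableDriver(ULONG iEngineVersion, ULONG cj, DRVENABLEDATA *pded)
{
    (void)iEngineVersion;
    if (cj < sizeof(DRVENABLEDATA))
        return FALSE;
    pded->iDriverVersion = DDI_DRIVER_VERSION_NT5;
    pded->c = sizeof(gDrvFn) / sizeof(gDrvFn[0]);
    pded->pdrvfn = gDrvFn;
    return TRUE;
}
```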
- the software architecture 1900 represents Microsoft's DirectDraw, which includes the following components:
- User-mode DirectDraw, which is loaded and called by DirectDraw applications. This component provides hardware emulation, manages the various DirectDraw objects, and provides display memory and display hardware management services.
- Kernel-mode DirectDraw, the system-supplied graphics engine that is loaded by a kernel-mode display driver. This portion of DirectDraw performs parameter validation for the driver, making it easier to implement more robust drivers. Kernel-mode DirectDraw also handles synchronization with GDI and all cross-process states.
- the DirectDraw portion of the display driver, which, along with the rest of the display driver, is implemented by graphics card hardware vendors. Other portions of the display driver handle GDI and other non-DirectDraw related calls.
- when DirectDraw 1900 is invoked, it accesses the graphics card directly through the DirectDraw driver 1902.
- DirectDraw 1900 calls the DirectDraw driver 1902 for supported hardware functions, or the hardware emulation layer (HEL) 1903 for functions that must be emulated in software.
- GDI 1905 calls are sent to the driver.
- at initialization time and during mode changes, the display driver returns capability bits to DirectDraw 1900. This enables DirectDraw 1900 to access information about the available driver functions, their addresses, and the capabilities of the display card and driver (such as stretching, transparent bits, display pitch, and other advanced characteristics). Once DirectDraw 1900 has this information, it can use the DirectDraw driver to access the display card directly, without making GDI calls or using the GDI-specific portions of the display driver.
- the virtual display driver and mirror display driver are derived from the architecture of a normal display driver and include a miniport driver and corresponding display driver.
- in conventional display drivers, there is a physical device attached to either the PCI bus or an AGP slot.
- video memory and registers are physically present on the video card and are mapped into the address space of the GDI process, or of the capturing application when using DirectDraw. In the present embodiment, however, there is no physical video memory.
- the operating system assumes the existence of a physical device (referred to as a virtual device) and its memory by allocating memory in the main memory, representing video memory and registers.
- a chunk of memory, such as 2.5 MB, is reserved from the non-paged pool memory. This memory serves as video memory. This memory is then mapped into the virtual address space of the GDI process (or of the application, in the case of a graphics draw operation).
- when the display driver of the present invention requests a pointer to the memory, the miniport returns a pointer to the video memory reserved in RAM. It is therefore transparent to the GDI and display device interface (DDI) (or to the application, in the case of DirectDraw) whether the video memory is in RAM or on a video card. The DDI or GDI performs the rendering at this memory location.
- the miniport of the present invention also allocates separate memory for overlays. Certain applications and video players, such as PowerDVD and WinDVD, use overlay memory for video rendering.
- rendering is performed by the DDI and GDI.
- GDI provides the generic device independent rendering operations while DDI performs the device specific operation.
- the display architecture layers GDI over DDI and provides a facility by which DDI can delegate its responsibilities to GDI.
- because there is no physical device in the display driver of the present invention, there are no device-specific operations. The DDI provides GDI with the video memory pointer, and GDI performs the rendering based on the request received from the Win32 GDI process.
- the rendering operations are delegated to the HEL (Hardware emulation layer) by DDI.
- the present invention comprises a mirror driver which, when loaded, attaches itself to a primary display driver. Therefore, all the rendering calls to the primary display driver are also routed to the mirror driver and whatever data is rendered on the video memory of the primary display driver is also rendered on the video memory of the mirror driver. In this manner, the mirror driver is used for computer display duplication.
- the present invention comprises a virtual driver which, when loaded, operates as an extended virtual driver.
- when the virtual driver is installed, it is shown as a secondary driver in the display properties of the computer, and the user has the option to extend the display onto this display driver.
- the mirror driver and virtual driver support the following resolutions: 640 * 480, 800 * 600, 1024 * 768, and 1280 * 1024. For each of these resolutions, the drivers support 8-, 16-, 24-, and 32-bit color depths and 60 and 75 Hz refresh rates. Rendering on the overlay surface is done in YUV 4:2:0 format.
- a software library is used to support the capturing of a computer display using the mirror or virtual device drivers.
- the library maps the video memory allocated in the mirror and virtual device drivers in the application space when it is initialized. In the capture function, the library copies the mapped video buffer in the application buffer. In this manner, the application has a copy of the computer display at that particular instance.
- the library maps the video buffer in the application space.
- a pointer is also mapped in the application space which holds the address of the overlay surface that was last rendered. This pointer is updated in the driver.
- the library obtains a notification from the virtual display driver when rendering on the overlay memory starts. The display driver informs the capture library of the color key value.
- a software module copies the last overlay surface rendered using the pointer which was mapped from the driver space. It does the YUV to RGB conversion and pastes the RGB data, after stretching to the required dimensions, on the rectangular area of the main video memory where the color key value is present.
- the color key value is a special value which is pasted on the main video memory by the GDI to represent the region on which the data rendered on the overlay should be copied.
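This copy-convert-paste step lends itself to a short sketch. Below is a minimal illustration of the YUV-to-RGB conversion and color-key paste described above, assuming a 32-bpp frame, a planar YUV 4:2:0 overlay, and nearest-neighbour stretching; the function names and layout are illustrative, not the patent library's interface.

```c
#include <stdint.h>

/* Convert one YUV 4:2:0 sample to packed RGB using integer BT.601-style
 * coefficients (an assumed choice; the patent does not give the matrix). */
static uint32_t yuv_to_rgb(int y, int u, int v)
{
    int c = y - 16, d = u - 128, e = v - 128;
    int r = (298 * c + 409 * e + 128) >> 8;
    int g = (298 * c - 100 * d - 208 * e + 128) >> 8;
    int b = (298 * c + 516 * d + 128) >> 8;
    if (r < 0) r = 0; if (r > 255) r = 255;
    if (g < 0) g = 0; if (g > 255) g = 255;
    if (b < 0) b = 0; if (b > 255) b = 255;
    return (uint32_t)((r << 16) | (g << 8) | b);
}

/* frame: captured 32-bpp primary surface; overlay: last-rendered YUV
 * 4:2:0 overlay surface (Y plane followed by subsampled U and V planes). */
void composite_overlay(uint32_t *frame, int fw, int fh,
                       const uint8_t *overlay, int ow, int oh,
                       uint32_t color_key)
{
    const uint8_t *yp = overlay;
    const uint8_t *up = overlay + ow * oh;
    const uint8_t *vp = up + (ow / 2) * (oh / 2);

    for (int y = 0; y < fh; y++) {
        for (int x = 0; x < fw; x++) {
            if (frame[y * fw + x] != color_key)
                continue;               /* GDI content: leave as-is */
            /* Nearest-neighbour stretch of the overlay to frame size. */
            int sx = x * ow / fw, sy = y * oh / fh;
            int luma = yp[sy * ow + sx];
            int cb = up[(sy / 2) * (ow / 2) + sx / 2];
            int cr = vp[(sy / 2) * (ow / 2) + sx / 2];
            frame[y * fw + x] = yuv_to_rgb(luma, cb, cr);
        }
    }
}
```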
- in use on computers running current Windows/NT operating systems, overlays apply only to the extended virtual device driver and not to the mirror driver because, when the mirror driver is attached, DirectDraw is automatically disabled.
- audio is captured through an interface used by conventional computer-based audio players to play audio data.
- audio is captured using Microsoft Windows Multimedia API, which is a software module compatible with Microsoft Windows and NT operating systems.
- Microsoft Windows Multimedia Library provides an interface to the applications to play audio data on an audio device using waveOut calls. Similarly, it also provides interfaces to record audio data from an audio device.
- the source for the recording device can be Line In, microphone, or any other source designation.
- the application can specify the format (sampling frequency, bits per sample) in which it wants to record the data.
- an application opens the audio device using the waveInOpen() function. It specifies the audio format in which to record, the size of audio data to capture at a time, and the callback function to call when the specified amount of audio data is available.
- the application passes a number of empty audio buffers to the Windows audio subsystem using the waveInAddBuffer() call.
- the Windows audio subsystem calls the callback function, through which it passes the audio data to the application in one of the audio buffers that were passed by the application.
- the application copies the audio data into its local buffer and, if it needs to continue capturing, passes the empty audio buffer back to the Windows audio subsystem through waveInAddBuffer().
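A minimal sketch of this waveIn capture loop follows. The buffer count, buffer size, and PCM format are illustrative choices; a real implementation would also unprepare the headers on shutdown and select the recording source as discussed below.

```c
#include <windows.h>
#include <mmsystem.h>   /* waveIn* API; link with winmm.lib */

#define NUM_BUFS  4
#define BUF_BYTES 16384

static char    gBuf[NUM_BUFS][BUF_BYTES];
static WAVEHDR gHdr[NUM_BUFS];

/* Called by the Windows audio subsystem when a buffer is full (WIM_DATA). */
static void CALLBACK waveInProc(HWAVEIN hwi, UINT uMsg, DWORD_PTR inst,
                                DWORD_PTR p1, DWORD_PTR p2)
{
    if (uMsg != WIM_DATA)
        return;
    WAVEHDR *hdr = (WAVEHDR *)p1;
    /* hdr->lpData holds hdr->dwBytesRecorded bytes of captured PCM:
     * copy it out for compression, then recycle the buffer (simplified;
     * production code defers the requeue to a worker thread). */
    waveInAddBuffer(hwi, hdr, sizeof(WAVEHDR));
}

int main(void)
{
    WAVEFORMATEX fmt = {0};
    fmt.wFormatTag      = WAVE_FORMAT_PCM;     /* 16-bit stereo PCM   */
    fmt.nChannels       = 2;
    fmt.nSamplesPerSec  = 44100;               /* 44.1 kHz (assumed)  */
    fmt.wBitsPerSample  = 16;
    fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
    fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

    HWAVEIN hwi;
    waveInOpen(&hwi, WAVE_MAPPER, &fmt,
               (DWORD_PTR)waveInProc, 0, CALLBACK_FUNCTION);

    for (int i = 0; i < NUM_BUFS; i++) {       /* queue empty buffers */
        gHdr[i].lpData = gBuf[i];
        gHdr[i].dwBufferLength = BUF_BYTES;
        waveInPrepareHeader(hwi, &gHdr[i], sizeof(WAVEHDR));
        waveInAddBuffer(hwi, &gHdr[i], sizeof(WAVEHDR));
    }

    waveInStart(hwi);
    Sleep(10000);                              /* capture for 10 s */
    waveInStop(hwi);
    waveInClose(hwi);
    return 0;
}
```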
- a stereo mix option is selected in a media playback application and audio is captured in the process.
- Audio devices typically have the capability to route audio being played on an output pin back to an input pin. While named differently on different systems, this is generally referred to as a "stereo mix". If the stereo mix option is selected in the playback options, and audio is recorded from the default audio device using waveIn calls, then everything being played on the system can be recorded, i.e., the audio being played on the system can be captured. It should be appreciated that the specific approach depends on the capabilities of the particular audio device being used and that one of ordinary skill in the art would know how to capture the audio stream in accordance with the above teaching. It should also be appreciated that, to prevent the concurrent playback of audio from the computer and the remote device, the local audio (on the computer) should be muted, provided that such muting does not also mute the audio routing to the input pin.
- a virtual audio driver, referred to as a virtual audio cable (VAC), can also be used for audio capture.
- a feature of VAC is that, by default, it routes all the audio going to its audio output pin to its input pin. Therefore, if VAC is selected as the default playback device, all audio played on the system can be captured from the VAC input pin.
- the media is then transmitted 505 simultaneously, in a synchronized manner, wirelessly to a receiver, as previously described.
- the receiver, which is in data communication with the remote monitoring device, receives 506 the compressed media data.
- the media data is then uncompressed 507 using the CODEC.
- the data is then finally rendered 508 on the display device.
- any transmission protocol may be employed. However, it is preferred to transmit separate video and audio data streams, in accordance with a hybrid TCP/UDP protocol, that are synchronized using a clock or counter. Specifically, a clock or counter sequences forward to provide a reference against which each data stream is timed.
- the TCP/UDP hybrid protocol 600 comprises a TCP packet header 601 of a size equivalent to a 20-byte TCP header, a 20-byte IP header, and a physical layer header, and a UDP packet header 602 of a size equivalent to an 8-byte UDP header, a 20-byte IP header, and a physical layer header.
- FIG. 7 is a flow diagram that depicts the functional steps of the TCP/UDP real-time (RT) transmission protocol implemented in the present invention.
- the transmitter and receiver, as previously described, establish 701 a connection using TCP, and the transmitter sends 702 all the reference frames using TCP. Thereafter, the transmitter uses 703 the same TCP port that was used to establish the connection in step 701 to send the rest of the real-time packets, but switches 704 to UDP as the transport protocol. While transmitting real-time packets using UDP, the transmitter further checks for the presence of an RT packet that is overdue for transmission. The transmitter discards 705 the overdue frame at the transmitter itself, between the IP and MAC layers. However, an overdue reference frame/packet is always sent. Thus, the TCP/UDP protocol significantly reduces collisions while substantially improving the performance of RT traffic and network throughput.
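As a rough sketch of steps 702-705 in portable socket code (an assumption for illustration; the patent gives no source, and the real packets would carry the Figure 6 header structures):

```c
#include <stdint.h>
#include <stddef.h>
#include <time.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Illustrative frame descriptor for one real-time packet. */
struct rt_packet {
    int            is_reference;  /* reference frame: must arrive */
    long           deadline_ms;   /* send-by time for RT data     */
    const uint8_t *data;
    size_t         len;
};

static long now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000L + ts.tv_nsec / 1000000L;
}

/* Hybrid send: reference frames always go over the already-established
 * TCP connection; other RT packets go over UDP to the same receiver,
 * and an overdue non-reference packet is discarded at the transmitter. */
void send_rt_packet(int tcp_sock, int udp_sock,
                    const struct sockaddr_in *peer,
                    const struct rt_packet *p)
{
    if (p->is_reference) {
        send(tcp_sock, p->data, p->len, 0);          /* reliable path */
    } else if (now_ms() <= p->deadline_ms) {
        sendto(udp_sock, p->data, p->len, 0,
               (const struct sockaddr *)peer, sizeof(*peer));
    }
    /* else: overdue RT packet, dropped before it reaches the MAC */
}
```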
- the TCP/UDP protocol is additionally adapted to use ACK spoofing as a congestion-signaling method for RT transmission over wireless networks.
- Sending RT traffic over wireless networks can be sluggish.
- TCP conventionally requires the reception of an ACK signal from the destination/receiver before resuming the transmission of the next block or frame of data.
- in IP networks, specifically wireless ones, there remains a high probability of ACK signals getting lost due to network congestion, particularly with RT traffic.
- this congestion control causes connections to break over wireless networks owing to scenarios such as non-receipt of ACK signals from the receiver.
- the present invention uses ACK spoofing for RT traffic sent over networks.
- with ACK spoofing, if the transmitter does not receive an ACK within a certain period of time, it generates a false ACK for its TCP stack so that the sending process resumes.
- the connection between the transmitter and receiver is broken and a new TCP connection is opened to the same receiver. This clears the congestion problems associated with the previous connection. It should be appreciated that this transmission method is just one of several transmission methods that could be used and is intended to describe an exemplary operation.
- the block diagram depicts the components of the CODEC of the integrated wireless system.
- the CODEC 800 comprises a motion estimation block 801, which removes the temporal redundancy from the streaming content; a DCT block 802, which converts the frame into 8*8 blocks of pixels to perform the DCT; a VLC coding circuit 803, which further codes the content into shorter words; an IDCT block 804, which converts the spatial frequencies back to the pixel domain; and a rate control mechanism 805 for speeding up the transmission of media.
- the motion estimation block 801 is used to compress the video by exploiting the temporal redundancy between the adjacent frames of the video.
- the algorithm used in the motion estimation is preferably a full search algorithm, where each block of the reference frame is compared with the current frame to obtain the best matching block.
- the full search algorithm takes every point of a search region as a checking point, and compares all pixels between the blocks corresponding to all checking points of the reference frame and the block of the current frame. Then the best checking point is determined to obtain a motion vector value.
- Figure 9 depicts the functional steps of the one embodiment of the motion estimation block.
- the checking points A and A1 shown in the figure correspond to the blocks 902 and 904 in a reference frame, respectively. If the checking point A is moved left and downward by one pixel, it becomes the checking point A1. In this way, when the block 902 is shifted left and downward by one pixel, it results in the block 904.
- the comparison is performed by computing the difference in the image information of all corresponding pixels and then summing the absolute values of these differences, yielding the sum of absolute differences (SAD). Then, among all checking points, the checking point with the lowest SAD is determined to be the best checking point.
- the block that corresponds to the best checking point is the block of the reference frame that best matches the block of the current frame to be encoded, and these two blocks define a motion vector.
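A minimal sketch of this full search follows. The 8*8 block size is an assumption chosen to match the DCT blocks used elsewhere in the description, and the reference frame is assumed padded so the search window never leaves the image.

```c
#include <stdint.h>
#include <limits.h>

#define BLK 8   /* block size; assumed 8*8 to match the DCT blocks */

/* Sum of absolute differences between the current block and one
 * candidate (checking-point) block of the reference frame. */
static int sad_block(const uint8_t *cur, const uint8_t *ref, int stride)
{
    int sad = 0;
    for (int y = 0; y < BLK; y++)
        for (int x = 0; x < BLK; x++) {
            int d = cur[y * stride + x] - ref[y * stride + x];
            sad += d < 0 ? -d : d;
        }
    return sad;
}

/* Full search: every point of the +/-range search region is a checking
 * point; the lowest-SAD checking point yields the motion vector. */
void full_search(const uint8_t *cur, const uint8_t *ref, int stride,
                 int bx, int by, int range, int *mvx, int *mvy)
{
    int best = INT_MAX;
    *mvx = *mvy = 0;
    for (int dy = -range; dy <= range; dy++)
        for (int dx = -range; dx <= range; dx++) {
            int s = sad_block(cur + by * stride + bx,
                              ref + (by + dy) * stride + (bx + dx),
                              stride);
            if (s < best) { best = s; *mvx = dx; *mvy = dy; }
        }
}
```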
- the picture is coded using a discrete cosine transform (DCT) via the DCT block 802.
- the DCT coding scheme transforms pixels (or error terms) into a set of coefficients corresponding to the amplitudes of specific cosine basis functions.
- the discrete cosine transform (DCT) is typically regarded as the most effective transform coding technique for video compression and is applied to the sampled data, such as digital image data, rather than to a continuous waveform.
- the transform converts N-point, highly correlated input spatial vectors, in the form of rows and columns of pixels, into N-point DCT coefficient vectors comprising rows and columns of DCT coefficients in which the high-frequency coefficients are typically zero-valued.
- the energy of a spatial vector, which is defined by the squared values of each element of the vector, is preserved by the DCT transform, so that all the energy of a typical, low-frequency, highly-correlated spatial image is compacted into the lowest-frequency DCT coefficients.
- the human psychovisual system is less sensitive to high-frequency signals, so a reduction in precision in the expression of high-frequency DCT coefficients results in a minimal reduction in perceived image quality.
- each 8*8 block resulting from the DCT block is divided by a quantizing matrix to reduce the magnitude of the DCT coefficients.
- in this way, the information associated with the highest frequencies, which are the least visible to human sight, tends to be removed.
- the result is reordered and sent to the variable length-coding block 803.
- VLC block 803 is a statistical coding block that assigns codewords to the values to be encoded. Values with a high frequency of occurrence are assigned short codewords, and those of infrequent occurrence are assigned long codewords. On average, the more frequent shorter codewords dominate, so that the code string is shorter than the original data.
- VLC coding, which generates a code made up of DCT coefficient value levels and run lengths of the number of pixels between nonzero DCT coefficients, generates a highly compressed code when the number of zero-valued DCT coefficients is greatest.
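A sketch of the run-length stage feeding the VLC is shown below; it assumes a zigzag-ordered, quantized 8*8 block, and a real VLC table (e.g. Huffman codes) would then map each (run, level) pair to a short codeword.

```c
#include <stdio.h>

/* Reduce a zigzag-ordered, quantized 8*8 block to (run, level) pairs:
 * the count of zero coefficients skipped, then the next nonzero level. */
void run_level_encode(const int zz[64])
{
    int run = 0;
    for (int i = 0; i < 64; i++) {
        if (zz[i] == 0) {
            run++;                      /* extend the zero run */
        } else {
            printf("(run=%d, level=%d)\n", run, zz[i]);
            run = 0;
        }
    }
    printf("EOB\n");                    /* trailing zeros end the block */
}
```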
- the data obtained from the VLC coding block is transferred to the transmitter at an appropriate bit rate. The amount of data transferred per second is known as the bit rate.
- Figure 10 depicts the exemplary digital signal waveform and data transfer.
- the vertical axis 1001 represents voltage and the horizontal axis 1002 represents time.
- the digital waveform has a pulse width of N and a period (or cycle) of 2N, where N represents the bit time of the pulse (i.e., the time during which information is transferred).
- the pulse width N may be in any unit of time, such as nanoseconds, microseconds, or picoseconds.
- the maximum data rate that may be transmitted in this manner is 1/N transfers per second, or one bit of data per half cycle (the quantity of time labeled N).
- the fundamental frequency of the digital waveform is 1/2N hertz.
- simplified rate control is employed, which increases the bit rate of the data by 50% compared to MPEG-2 using the method described above. Consequently, a larger amount of data is transferred to the transmitter in less time, making the process real time.
- the compressed data is then transmitted, in accordance with the above-described transmission protocol, and wirelessly received by the receiver.
- compressed video information must be quickly and efficiently decoded.
- the aspect of the decoding process used in the preferred embodiment is the inverse discrete cosine transform (IDCT).
- a commonly used two-dimensional data block size is 8*8 pixels, which furnishes a good compromise between coding efficiency and hardware complexity.
- the inverse DCT circuit performs an inverse discrete cosine transform on the decoded video signal on a block-by-block basis to provide a decompressed video signal.
- the circuit 1100 includes a preprocess DCT coefficient block (hereinafter PDCT) 1101, an evaluate coefficients block 1102, a select IDCT block 1103, a compute IDCT block 1104, a monitor frame rate block 1105 and an adjust IDCT parameters block 1106.
- the wirelessly transmitted media received from the transmitter includes various coded DCT coefficients, which are routed to the PDCT block 1101.
- the PDCT block 1101 selectively sets various DCT coefficients to a zero value to increase processing speed of the inverse discrete cosine transform procedure with a slight reduction or no reduction in video quality.
- the DCT coefficient-evaluating block 1102 then receives the preprocessed DCT coefficients from the PDCT 1101.
- the evaluating circuit 1102 examines the coefficients in a DCT coefficient block before computation of the inverse discrete cosine transform operation.
- an inverse discrete cosine transform (IDCT) selection circuit 1103 selects an optimal IDCT procedure for processing of the coefficients.
- the computation of the coefficients is done by the compute IDCT block 1104.
- several inverse discrete cosine transform (IDCT) engines are available for selective activation by the selection circuit 1103.
- the inverse discrete cosine transformed coefficients are combined with other data prior to display.
- the monitor frame rate block 1105 thereafter determines an appropriate frame rate of the video system, for example by reading a system clock register (not shown) and comparing the elapsed time with a prestored frame interval corresponding to a desired frame rate.
- the adjust IDCT parameters block 1106 then adjusts parameters, including the non-zero coefficient threshold, frequency, and magnitude, according to the desired frame rate.
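As a rough illustration of how the PDCT and selection blocks might cooperate (the threshold and path boundaries below are invented for the sketch; the patent gives no concrete values):

```c
#include <stdlib.h>

enum idct_path { IDCT_DC_ONLY, IDCT_SPARSE, IDCT_FULL };

/* Zero out small high-frequency coefficients (barely visible, per the
 * discussion above), then choose a cheaper IDCT engine when the block
 * has become sparse. */
enum idct_path preprocess_and_select(int coeff[8][8], int threshold)
{
    int nonzero = 0;
    for (int v = 0; v < 8; v++)
        for (int u = 0; u < 8; u++) {
            if ((u + v) > 0 && abs(coeff[v][u]) < threshold)
                coeff[v][u] = 0;        /* PDCT: drop the coefficient */
            if (coeff[v][u] != 0)
                nonzero++;
        }
    if (nonzero == 0 || (nonzero == 1 && coeff[0][0] != 0))
        return IDCT_DC_ONLY;            /* flat block: trivial IDCT   */
    if (nonzero <= 6)
        return IDCT_SPARSE;             /* few terms: sum few kernels */
    return IDCT_FULL;                   /* dense block: full 8*8 IDCT */
}
```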
- the abovementioned IDCT block computes an inverse discrete cosine transform in accordance with the appropriate selected IDCT method. For example, an 8*8 forward discrete cosine transform (DCT) is defined by the following equation:
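The equation itself did not survive extraction; the standard 8*8 forward DCT consistent with the definitions below is:

$$X(u,v) = \frac{C(u)\,C(v)}{4} \sum_{i=0}^{7}\sum_{j=0}^{7} x(i,j)\,\cos\frac{(2i+1)u\pi}{16}\,\cos\frac{(2j+1)v\pi}{16}, \qquad C(k)=\begin{cases}1/\sqrt{2}, & k=0\\ 1, & k\neq 0\end{cases}$$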
- x(i,j) is a pixel value in an 8*8 image block in spatial domains i and j
- X (u,v) is a transformed coefficient in an 8*8 transform block in transform domains u,v.
- an 8*8 IDCT is considered to be a combination of a set of 64 orthogonal DCT basis matrices, one basis matrix for each two-dimensional frequency (v, u). Furthermore, each basis matrix is considered to be the two-dimensional IDCT transform of a single transform coefficient set to one. Since there are 64 transform coefficients in an 8*8 IDCT, there are 64 basis matrices.
- the IDCT kernel K(v, u), also called a DCT basis matrix, represents a transform coefficient at frequency (v, u) according to the equation:
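The original equation image is missing; in standard form, the kernel for the 8*8 transform is:

$$K_{(v,u)}(j,i) = \frac{C(u)\,C(v)}{4}\,\cos\frac{(2i+1)u\pi}{16}\,\cos\frac{(2j+1)v\pi}{16}$$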
- the IDCT is computed by scaling each kernel by the transform coefficient at that location and summing the scaled kernels.
- the spatial domain matrix S is then obtained using the following equation:
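Again reconstructed in standard form (the equation did not survive extraction): the spatial block is the coefficient-weighted sum of the kernels,

$$S = \sum_{v=0}^{7}\sum_{u=0}^{7} X(u,v)\,K_{(v,u)}$$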
- the synchronization circuit 1200 comprises, at the transmitter 1206, a buffer 1201 holding the video and audio media, a first socket 1202 for transmitting video, a second socket 1203 for transmitting audio, and first and second counters 1204, 1205; and, at the receiver end 1213, a first receiver 1207 for video data, a second receiver 1208 for audio data, first and second counters 1209, 1210, a mixer 1211, and a buffer 1212.
- the buffered audio and video data 1201 at the transmitter 1206, after compression, is transmitted separately on the first socket 1202 and the second socket 1203.
- the counters 1204, 1205 add an identical sequence number to both the video and audio data prior to transmission.
- the audio data is preferably routed via the User Datagram Protocol (UDP), whereas the video data is routed via the Transmission Control Protocol (TCP).
- the UDP protocol and the TCP protocol, implemented by the audio receiver block 1208 and the video receiver block 1207, receive the audio and video signals.
- the counters 1209, 1210 determine the sequence number from the audio and video signals and provide it to the mixer 1211 to enable the accurate mixing of signals.
- the mixed data is buffered 1212 and then rendered by the remote monitor.
- the flowchart depicts another embodiment of synchronizing audio and video signals of the integrated wireless system of the present invention.
- the receiver receives 1301 a stream of encoded video data and encoded audio data wirelessly.
- the receiver then ascertains 1302 the time required to process the video portion and the audio portion of the encoded stream.
- the receiver determines 1303 the difference in time to process the video portion of the encoded stream as compared to the audio portion of the encoded stream.
- the receiver subsequently establishes 1304 which processing time is greater (i.e., the video processing time or the audio processing time) .
- the video presentation is delayed 1305 by the difference determined, thereby synchronizing the decoded video data with the decoded audio data.
- the audio presentation is not delayed and is played at its constant rate 1306.
- the video presentation tries to catch up with the audio presentation by discarding video frames at regular intervals.
- the data is then finally rendered 1307 on the remote monitor. Therefore, audio "leads" video, meaning that the video synchronizes itself with the audio.
- the decoded video data is substantially synchronized with the decoded audio data.
- substantially synchronized means that, while there may be a slight, theoretically measurable difference between the presentation of the video data and the presentation of the corresponding audio data, such a small difference is not likely to be perceived by a user watching and listening to the presented video and audio data.
- a typical transport stream is received at a substantially constant rate.
- the delay that is applied to the video presentation or the audio presentation is not likely to change frequently.
- the aforementioned procedure may be performed periodically (e.g., every few seconds or every 30 received video frames) to ensure that the delay currently being applied to the video presentation or the audio presentation is still within a particular threshold (e.g., not visually or audibly perceptible).
- the procedure may be performed for each new frame of video data received from the transport stream.
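A compact sketch of this audio-leads-video rule follows; the field names are assumptions for illustration, since the patent describes the flowchart rather than an API.

```c
/* Measured per-stream processing times for the current portion of the
 * encoded stream (steps 1302-1303). */
typedef struct {
    long video_proc_ms;   /* time to process the video portion */
    long audio_proc_ms;   /* time to process the audio portion */
} av_timing;

/* Returns how long to hold back each decoded video frame (step 1305).
 * Audio is never delayed and plays at its constant rate; when the
 * result is 0 and video lags instead, frames are discarded at regular
 * intervals (step 1306), so audio "leads" video. */
long video_delay_ms(const av_timing *t)
{
    long diff = t->audio_proc_ms - t->video_proc_ms;
    return diff > 0 ? diff : 0;
}
```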
- the synchronization circuit 1400 at the transmitter end 1401 comprises buffer 1402 having media data, multiplexer 1403 for combining the media data signals, such as graphics, text, audio, and video signals, and a clock 1404 for providing the timestamps to the media content for synchronization.
- the demultiplexer 1406, using clock 1407, separates the data stream into the individual media data streams.
- the timestamps provided by the clocks help synchronize the audio and video at the receiver end.
- the clock is set at the same frequency as that of the receiver.
- the demultiplexed audio and video are routed to the speakers 1408 and display device 1409 for rendering.
- the present invention provides a system and method of automatically downloading, installing, and updating the novel software of the present invention on the computing device or remote monitor.
- No software CD is required to install software programs on the remote monitor, the receiver in the remote monitor, the computing device, or the transmitter in the computing device.
- a personal computer communicating with a wireless projector is provided as an example, although the description is generic and will apply to any combination of computing device and remote monitor. It is assumed that both the personal computer and the wireless projector are in data communication with a processing system on chip, as previously described.
- the wireless projector runs a script to configure itself as an access point.
- the WP-AP sets the SSID as QWPxxxxxx, where 'xxxxxx' is the lower 6 bytes of the AP's MAC address.
- the WP-AP sets its IP Address as 10.0.0.1.
- WP-AP starts an HTTP server.
- WP-AP starts the DHCP server, with the following settings in the configuration file
- the WP-AP starts a small DNS server, configured to reply 10.0.0.1 (i.e., the WP-AP's address) to any DNS query.
- the IP address in the response will be changed if the WP-AP's IP address is changed.
- the default page of HTTP server has a small software program, such as a Java Applet, that conducts the automatic software update.
- the WP-AP, through its system on chip and transceiver, communicates its presence as an access point.
- the user's computing device has a transceiver capable of wirelessly transmitting and receiving information in accordance with known wireless transmission protocols and standards.
- the user's computing device recognizes the presence of the wireless projector, as an access point, and the user instructs the computing device to join the access point through graphical user interfaces that are well known to persons of ordinary skill in the art.
- the user opens a web browser application on the computing device and types any URL into a dialog box, or permits the browser to revert to a default URL.
- the opening of the web browser accesses the default page of the WP-AP HTTP server and results in the initiation of the software program (e.g., a Java Applet).
- the software program checks if the user's browser supports it in order to conduct an automatic software update.
- the rest of the example will be described in relation to Java but it should be appreciated that any software programming language could be used.
- the applet will check whether the software and drivers necessary to implement the media transmission methods described herein are already installed. If already present, the Java Applet compares the versions and automatically initiates installation if the computing device's software versions are older than the versions on the remote monitor.
- if Java is not supported by the browser, the user's web page is redirected to an installation executable, prompting the user to save it or run it.
- the page will also display instructions of how to save and run the installation.
- the installation program also checks whether the user has already installed the software and whether the version needs to be upgraded. In this case, the user will be advised to install Java.
- the start address for the WP-AP's DNS server is 10.0.0.2.
- WP-AP runs the DHCP client for its Ethernet connection and obtains IP, gateway, subnet, and DNS addresses from the DHCP server on the local area network. If DHCP is disabled, it uses static values.
- the installation program installs the application, uninstaller, and drivers.
- the application is launched automatically. On connection, the application obtains the DNS address of the WP-AP's Ethernet port and sets it on the local machine.
- WP-AP enables IP Forwarding and sets the firewall such that it only forwards packets from the connected application to the Ethernet and vice versa.
- these settings enable the user to access the Ethernet local area network of the WP-AP and access the Internet.
- the firewall ensures that only the user whose application is connected to the WP-AP can access the LAN/Ethernet.
- WP-AP disables IP Forwarding and restores the firewall settings.
- the application running on the user's system sets the DNS setting to 10.0.0.1.
- the DNS setting is set to DHCP.
- the user is prompted to select whether the computing device will act as a gateway.
- the appropriate drivers, software, and scripts are installed.
- the wireless projector access point has a pre-assigned IP address of 10.A.B.1 and the gateway system has a pre-assigned IP address of 10.A.B.2, where the A and B octets can be changed by the user.
- the WP-AP is booted.
- the user's computing device scans for available wireless networks and selects QWPxxxxxx.
- the computing device's wireless configuration should have automatic TCP/IP configuration enabled, i.e., the 'Obtain an IP address automatically' and 'Obtain DNS server address automatically' options should be checked.
- the computing device will automatically get an IP address from 10.0.0.3 to 10.0.0.254.
- the default gateway and DNS will be set as 10.0.0.1.
- the user opens the browser, and, if Java is supported, the automatic software update begins. If Java is not supported, the user will be prompted to save the installation and will have to run it manually. If the computing device will not act as a gateway to a network, such as the Internet, during the installation, the user selects 'No' to the Gateway option.
- the installation runs a script to set the DNS as 10.0.0.2, so that subsequent DNS queries are appropriately directed.
- An application link is created on the desktop.
- the user runs the application, which starts transmitting the exact contents of the user's screen to the projector. If required, the user can now change the WP-AP configuration (SSID, channel, IP address settings: the second and third octets of 10.0.0.x can be changed).
- if the computing device will act as a gateway to a network, such as the Internet, during the installation, the user selects 'Yes' to the Gateway option when prompted.
- the installation then enables Internet sharing (IP Forwarding) on the Ethernet interface (sharing is an option in the properties of the network interface in both Windows 2000 and Windows XP), sets the system's wireless interface IP as 10.0.0.2, sets the system's wireless interface netmask as 255.255.255.0, and sets the system's wireless interface gateway as 10.0.0.1.
- An application link is created on the desktop.
- the user runs the application, which starts transmitting the exact contents of the user's screen to the projector. If required, the user can now change the WP-AP configuration (SSID, channel, IP address settings: the second and third octets of 10.0.0.x can be changed).
- the present invention enables the real-time transmission of media from a computing device to one or more remote monitoring devices or other computing devices.
- in FIG. 16, another arrangement of the integrated wireless multimedia system of the present invention is depicted.
- the transmitter 1601 wirelessly transmits the media to a receiver integrated into, or in data communication with, multiple devices 1601, 1602, and 1603 for real-time rendering.
- the abovementioned software can also be used in both the mirror capture mode and the extended mode. In mirror capture mode, real-time streaming of the content takes place, with identical content displayed at both the transmitter and receiver ends. In extended mode, however, the user can work in another application on the transmitter side while the transmission continues as a background process.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Mobile Radio Communication Systems (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US67343105P | 2005-04-21 | 2005-04-21 | |
PCT/US2006/014559 WO2006113711A2 (en) | 2005-04-21 | 2006-04-18 | Integrated wireless multimedia transmission system |
Publications (2)
Publication Number | Publication Date |
---|---|
EP1872576A2 true EP1872576A2 (de) | 2008-01-02 |
EP1872576A4 EP1872576A4 (de) | 2010-06-09 |
Family
ID=37115862
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP06750566A Withdrawn EP1872576A4 (de) | 2005-04-21 | 2006-04-18 | Integriertes drahtloses multimedia-übertragungssystem |
Country Status (6)
Country | Link |
---|---|
EP (1) | EP1872576A4 (de) |
JP (1) | JP2008539614A (de) |
CN (1) | CN101273630A (de) |
AU (1) | AU2006236394B2 (de) |
CA (1) | CA2603579A1 (de) |
WO (1) | WO2006113711A2 (de) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100158146A1 (en) * | 2007-04-05 | 2010-06-24 | Sharp Kabushiki Kaisha | Communication scheme determining apparatus, transmission apparatus, reception apparatus, ofdm adaptive modulation system and communication scheme determining method |
KR20090030681A (ko) | 2007-09-20 | 2009-03-25 | 삼성전자주식회사 | 영상처리장치, 디스플레이장치, 디스플레이 시스템 및 그제어방법 |
US8645579B2 (en) | 2008-05-29 | 2014-02-04 | Microsoft Corporation | Virtual media device |
JP2014063259A (ja) * | 2012-09-20 | 2014-04-10 | Fujitsu Ltd | 端末装置,及び処理プログラム |
WO2014077837A1 (en) * | 2012-11-16 | 2014-05-22 | Empire Technology Development, Llc | Routing web rendering to secondary display at gateway |
TWI511104B (zh) | 2014-10-07 | 2015-12-01 | Wistron Corp | 互動式電子白板操作方法以及使用該方法的裝置 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030017846A1 (en) * | 2001-06-12 | 2003-01-23 | Estevez Leonardo W. | Wireless display |
EP1463283A2 (de) * | 2003-03-04 | 2004-09-29 | Kabushiki Kaisha Toshiba | Informationsverarbeitungsgerät und Programm |
US20050036509A1 (en) * | 2003-06-03 | 2005-02-17 | Shrikant Acharya | Wireless presentation system |
US20060010392A1 (en) * | 2004-06-08 | 2006-01-12 | Noel Vicki E | Desktop sharing method and system |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001333410A (ja) * | 2000-05-22 | 2001-11-30 | Sony Corp | メディアデータの提供を最適化するためのメタデータ使用方法及びシステム |
US6647061B1 (en) * | 2000-06-09 | 2003-11-11 | General Instrument Corporation | Video size conversion and transcoding from MPEG-2 to MPEG-4 |
US9317241B2 (en) * | 2000-10-27 | 2016-04-19 | Voxx International Corporation | Vehicle console capable of wireless reception and transmission of audio and video data |
US20040205116A1 (en) * | 2001-08-09 | 2004-10-14 | Greg Pulier | Computer-based multimedia creation, management, and deployment platform |
-
2006
- 2006-04-18 CA CA002603579A patent/CA2603579A1/en not_active Abandoned
- 2006-04-18 CN CN200680013457.XA patent/CN101273630A/zh active Pending
- 2006-04-18 JP JP2008507808A patent/JP2008539614A/ja active Pending
- 2006-04-18 EP EP06750566A patent/EP1872576A4/de not_active Withdrawn
- 2006-04-18 AU AU2006236394A patent/AU2006236394B2/en not_active Ceased
- 2006-04-18 WO PCT/US2006/014559 patent/WO2006113711A2/en active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030017846A1 (en) * | 2001-06-12 | 2003-01-23 | Estevez Leonardo W. | Wireless display |
EP1463283A2 (de) * | 2003-03-04 | 2004-09-29 | Kabushiki Kaisha Toshiba | Informationsverarbeitungsgerät und Programm |
US20050036509A1 (en) * | 2003-06-03 | 2005-02-17 | Shrikant Acharya | Wireless presentation system |
US20060010392A1 (en) * | 2004-06-08 | 2006-01-12 | Noel Vicki E | Desktop sharing method and system |
Non-Patent Citations (3)
Title |
---|
RAMAN A ET AL: "Low-cost wireless projector interface device using TI TMS320DM270" 2004 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME)- TAIPEI, TAIWAN, IEEE - PISCATAWAY, NJ, USA, vol. 1, 27 June 2004 (2004-06-27), - 30 June 2004 (2004-06-30) page 321, XP031259602 ISBN: 978-0-7803-8603-7 * |
SANG-HYONG KIM ET AL: "A reliable layered data transmisson method for MPEG-4 seamless streaming service" THE 2004 JOINT CONFERENCE OF THE 10TH ASIA-PACIFIC CONFERENCE ON COMMUNICATIONS AND THE 5TH INTERNATIONAL SYMPOSIUM ON MULTI-DIMENSIONAL MOBILE COMMUNICATIONS PROCEEDINGS, BEIJING, CHINA 29 AUG.-1 SEPT. 2004, PISCATAWAY, NJ, USA,IEEE, US LNKD, vol. 2, 29 August 2004 (2004-08-29), pages 696-699, XP010765052 ISBN: 978-0-7803-8601-3 * |
See also references of WO2006113711A2 * |
Also Published As
Publication number | Publication date |
---|---|
JP2008539614A (ja) | 2008-11-13 |
AU2006236394A1 (en) | 2006-10-26 |
CN101273630A (zh) | 2008-09-24 |
AU2006236394B2 (en) | 2010-06-17 |
EP1872576A4 (de) | 2010-06-09 |
WO2006113711A2 (en) | 2006-10-26 |
CA2603579A1 (en) | 2006-10-26 |
WO2006113711A3 (en) | 2007-03-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10721282B2 (en) | Media acceleration for virtual computing services | |
US7069573B1 (en) | Personal broadcasting and viewing method of audio and video data using a wide area network | |
US20080201751A1 (en) | Wireless Media Transmission Systems and Methods | |
US9800939B2 (en) | Virtual desktop services with available applications customized according to user type | |
US20080288992A1 (en) | Systems and Methods for Improving Image Responsivity in a Multimedia Transmission System | |
JP4257967B2 (ja) | ユニバーサルなステートレスのデジタルおよびコンピュータ・サービスを提供するためのシステムと方法 | |
AU2006236394B2 (en) | Integrated wireless multimedia transmission system | |
US11197051B2 (en) | Systems and methods for achieving optimal network bitrate | |
KR20060007044A (ko) | 무선 디지털 비디오 프리젠테이션을 위한 방법 및 시스템 | |
CN102387187A (zh) | 服务器、客户端及利用其远程播放视频文件的方法和系统 | |
US8432966B2 (en) | Communication apparatus and control method for communication apparatus | |
KR20180086112A (ko) | 웹 브라우저에서 미디어를 재생하고 탐색하는 장치 및 방법 | |
US10404606B2 (en) | Method and apparatus for acquiring video bitstream | |
US20170019870A1 (en) | Method and apparatus for synchronization in a network | |
CN113014950A (zh) | 一种直播同步的方法、系统和电子设备 | |
US11140442B1 (en) | Content delivery to playback systems with connected display devices | |
CA2410748A1 (en) | Audio-video-over-ip method, system and apparatus | |
JP5627413B2 (ja) | 放送受信装置及び放送受信システム | |
JP6137257B2 (ja) | 放送受信システム及び放送受信方法 | |
US20240205469A1 (en) | Apparatus and method for processing cloud streaming low latency playback | |
JP2003037837A (ja) | 画像データ転送装置及び画像データ転送方法、情報処理装置及び情報処理方法、情報表示装置及び情報表示方法、画像情報配信システム及び画像情報配信方法、プログラム格納媒体、並びにプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20071005 |
|
AK | Designated contracting states |
Kind code of ref document: A2 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR |
|
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: SAQIB, MALIK Inventor name: RAULT, PATRICK Inventor name: COWAN, ANTHONY, J. Inventor name: ABHISHEK, JOSHI Inventor name: USMAN, MOHAMMAD Inventor name: SIDDIQUI, MUDEEM, I. Inventor name: AHMED, SHERJIL |
|
DAX | Request for extension of the european patent (deleted) | ||
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: COWAN, ANTHONY, J. Inventor name: ABHISHEK, JOSHI Inventor name: AHMED, SHERJIL Inventor name: SIDDIQUI, MUDEEM, I. Inventor name: RAULT, PATRICK Inventor name: USMAN, MOHAMMAD Inventor name: SAQIB, MALIK |
|
A4 | Supplementary search report drawn up and despatched |
Effective date: 20100511 |
|
17Q | First examination report despatched |
Effective date: 20121106 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20121101 |