WO2012162079A2 - Digital rack interface pod system and method incorporating lossless, partial video refresh feature - Google Patents

Digital rack interface pod system and method incorporating lossless, partial video refresh feature

Info

Publication number
WO2012162079A2
WO2012162079A2 (PCT/US2012/038293)
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
video frame
pixel locations
ones
locations
Prior art date
Application number
PCT/US2012/038293
Other languages
French (fr)
Other versions
WO2012162079A3 (en)
WO2012162079A4 (en)
Inventor
Albert-Mark N. OJERIO
Chad E. LYLE
Original Assignee
Avocent
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Avocent filed Critical Avocent
Publication of WO2012162079A2 publication Critical patent/WO2012162079A2/en
Publication of WO2012162079A3 publication Critical patent/WO2012162079A3/en
Publication of WO2012162079A4 publication Critical patent/WO2012162079A4/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • G06F3/1462Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay with means for detecting differences between the image stored in the host and the images displayed on the remote displays
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00Aspects of data communication
    • G09G2370/24Keyboard-Video-Mouse [KVM] switch

Definitions

  • FIG. 11 shows the result of a lossless, partial refresh digital video compression (DVC) operation (i.e., algorithm) in accordance with the present disclosure.
  • the lossless, partial refresh compression algorithm may be embodied in the video compression engine 108 in place of the conventional DVC algorithm and makes use of the Make Pixel (MP), Copy Above (CA) and CL commands discussed above, as well as an additional command: "No Change" (NC).
  • the NC command is used to indicate that no pixel change is present for a specific pixel location, as well as to indicate a horizontally contiguous group of pixel locations where no pixel change is present.
  • this command may be used to designate a string of pixel locations in one or more contiguous rows of pixel locations in the second video frame 102 which have the same pixel values as corresponding pixel locations in the first video frame 100.
  • the NC command only requires a single byte of data, similar to the CA and CL commands.
  • Figure 11 shows the rows of the Live video frame 102 where the NC, MP, CA and CL commands are applied.
  • the NC command is used to denote 8 contiguous pixel locations (pixel locations 15-16 in row 2 through pixel locations 17-22 in row 3) with a single (one byte) NC command, indicating that the values of pixels in these pixel locations in frame 102 have not changed from those present in the corresponding locations of Client video frame 100.
  • the lossless, partial video refresh algorithm of the present disclosure enables the Live video frame 102 to be expressed using only 31 total bytes of information.
  • the 31 bytes of information are transmitted in a byte stream from the DRIP 10 to the user's electronic device (for example, laptop 16 in Figure 1) for display.
  • a flowchart 200 illustrating various operations that may be performed by the lossless, partial refresh algorithm is shown.
  • all pixel values in all pixel locations may be read in the new video frame (e.g., Live video frame 102).
  • all pixel locations are identified where a pixel value differs from the pixel value in the corresponding pixel location of the previously transmitted video frame (i.e., Client frame 100), and an MP command may be generated for each such pixel location.
  • NC commands may be generated for the individual pixel locations, and for groups of contiguous pixel locations, whose pixel values have not changed from those of the previously transmitted video frame.
  • all instances of two or more horizontally contiguous pixel locations in the new video frame (i.e., Live video frame 102) that have the same pixel values may be identified, and then all the CL commands may be generated.
  • the byte stream for the compressed new video frame (e.g., Live video frame 102) is generated with all the MP, NC, CL and CA commands inserted at the appropriate locations in the byte stream.
  • the byte stream is then transmitted from the DRIP 10 to the user's electronic device (e.g., laptop 16 in Figure 1); a decoder-side sketch of how such a byte stream may be applied follows this list.
  • the lossless, partial video refresh algorithm described herein may significantly reduce the number of bytes in a lossless, DVC byte stream (e.g., in an Ethernet byte stream) that may be required to refresh a display of video data on a user's display system, as compared to the compression done by a traditional DVC algorithm. This can provide significant savings in network bandwidth when such video data is being transmitted over a network.
  • the example given herein provides a greater than 50% reduction in the number of bytes required to perform a partial refresh of the video data on the user's electronic device.
  • a particular advantage of the lossless, partial video refresh algorithm described herein is the ability to remove artifacts that may appear in a given video frame without requiring as much of the video data to be retransmitted to the user's device to recreate (i.e., refresh) the video frame being viewed. To the user it appears that the exact same video data is being displayed, but after the partial refresh the artifacts that would otherwise have been present will be eliminated.
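A decoder-side sketch of how a receiving device might apply such a byte stream to its copy of the previously displayed frame is given below. The exact byte layout of the commands is not spelled out in the text, so symbolic command tuples are used; the tuple shapes, function name and example values are assumptions for illustration only.

# A decoder-side sketch: the receiving device applies the NC/MP/CL/CA commands, in scan
# order, to its copy of the previously displayed frame to produce the refreshed frame.
# Symbolic command tuples are used because the exact byte layout is not given here; the
# tuple shapes, function name and example values are assumptions for illustration only.
from typing import List, Tuple

Command = Tuple  # ("NC", run_length) | ("MP", value) | ("CL",) | ("CA",)

def apply_commands(previous: List[List[int]], commands: List[Command]) -> List[List[int]]:
    frame = [row[:] for row in previous]      # start from the previously displayed (client) frame
    width = len(frame[0])
    pos = 0                                   # linear pixel location in scan order
    for cmd in commands:
        r, c = divmod(pos, width)
        if cmd[0] == "NC":
            pos += cmd[1]                     # leave a run of pixel locations untouched
            continue
        if cmd[0] == "MP":
            frame[r][c] = cmd[1]              # write a new pixel value
        elif cmd[0] == "CL":
            frame[r][c] = frame[r][c - 1]     # copy the pixel immediately to the left
        elif cmd[0] == "CA":
            frame[r][c] = frame[r - 1][c]     # copy the pixel immediately above
        pos += 1
    return frame

old = [[0] * 8 for _ in range(8)]
new = apply_commands(old, [("NC", 28), ("MP", 9), ("CL",), ("NC", 34)])
print(new[3][4], new[3][5])                   # -> 9 9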

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

An apparatus and method for lossless compression of digital video data being transmitted from a first electronic device to a second electronic device. Pixel locations of grids of first and second video frames are analyzed to determine which pixel locations of the second video frame have pixel values that have not changed from those in corresponding ones of the first video frame. Commands are sent identifying individual pixel locations in the second video frame that have pixel values that have not changed from those present in corresponding ones of the first video frame. The commands are used when updating the first video frame to match the second video frame. Additional commands identify those pixel locations in the second video frame that have pixel values that differ from corresponding ones of the pixel locations in the first video frame.

Description

DIGITAL RACK INTERFACE POD SYSTEM AND METHOD INCORPORATING LOSSLESS, PARTIAL VIDEO REFRESH FEATURE
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional Application No. 61/488,570, filed on May 20, 2011. The disclosure of the above application is incorporated herein by reference.
FIELD
[0002] The present disclosure relates to systems and methods for performing digital video compression, and more particularly to a system and method for providing a lossless, partial video refresh feature that provides an even greater degree of digital video compression than conventional compression algorithms are capable of.
BACKGROUND
[0003] The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
[0004] In a data center environment, a remote access appliance is often used to access and communicate with a wide variety of electronic devices, for example servers, power distribution units (PDUs) and electronic sensors, just to name a few such electronic components. In some instances the remote access appliance may form an appliance that allows keyboard and mouse commands from a remote computer to be transmitted to the server, and video signals to be transmitted from the server back to the remotely located computer. Typically a rack interface pod is used to interface the remote access appliance to the server. The rack interface pod may typically be a digital rack interface pod and may be coupled between an Ethernet port of the remote access appliance and the serial and video ports of the server. Typically one rack interface pod is used per server. The rack interface pod operates to digitize analog video signals being output from the video port of the server and to place the digitized video signals into Ethernet protocol format. The digitized video signals are then transmitted to an end user's electronic device, typically a display terminal for displaying the video information represented by the digitized video signals. Such transmission of the digitized video signals often occurs over a network before the digitized video signals are received at a user's display system. Thus, it will be appreciated that the transmission of digitized video information requires some bandwidth of the network. A scheme by which the digitized video may be compressed by a digital rack interface pod, without incurring any loss of digitized data, would be highly valuable in reducing the network bandwidth used when transmitting the digitized video signals. In turn, this would help free up valuable network bandwidth for other devices making use of the network.
SUMMARY
[0005] In one aspect the present disclosure relates to a method for compressing digital video data being transmitted from a first electronic device to a second electronic device. The method may comprise analyzing a first video frame made up of a grid of pixel locations, where each pixel location is defined by a row number and a column number, and each pixel location includes a pixel value. A second video frame may be analyzed which is made up of the same grid of pixel locations as the first video frame, and where certain ones of the pixel locations in the second video frame have pixel values that differ from the pixel values of corresponding pixel locations in the first video frame. A byte stream comprised of compressed digital video data may be transmitted from the first electronic device to the second electronic device. The compressed digital video data may include commands identifying which ones of the pixel locations in the second video frame have pixel values that are identical to pixel values in corresponding ones of the pixel locations in the first video frame. The byte stream may also include commands which identify which specific ones of the pixel locations in the second video frame have a pixel value that differs from a pixel value present in corresponding ones of the pixel locations of the first video frame.
[0006] In another aspect the present disclosure relates to a method for compressing digital video data being transmitted from a first electronic device to a second electronic device. The method may comprise analyzing a first video frame made up of a plurality of pixel locations, where each pixel location is defined by a row number and a column number, and each pixel location includes a pixel value. A second video frame may be analyzed which is made up of the same pixel locations as the first video frame, where certain ones of the pixel locations have pixel values that differ from the pixel values of corresponding pixel locations in the first video frame. Ones of the pixel locations that have identical ones of the pixel values in the first and second video frames are identified. A no change (NC) command may be used to identify individual pixel locations, as well as groups of contiguous pixel locations, in the second video frame which have pixel values that match the pixel values in corresponding pixel locations of the first video frame, with each NC command representing a first byte size of data. A Make Pixel (MP) command may be generated for each pixel location in the second video frame that has a pixel value that differs from the pixel value in the corresponding pixel location of the first video frame. Each MP command may represent a second byte size of data greater than the first byte size of data. The NC commands and the MP commands may be transmitted in a byte stream to the second electronic device. The byte stream may be used by the second electronic device to modify the first video frame to match the second video frame.
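The byte accounting implied by this aspect can be illustrated with a short sketch. Only the NC and MP command names and their one-byte and two-byte sizes come from the text above; the run-based frame description, the function name and the example numbers below are illustrative assumptions, not part of the disclosure.

# A minimal sketch (not from the patent text) of the byte accounting in this aspect:
# each NC command is one byte and can stand for a whole run of unchanged pixel
# locations, while each MP command is two bytes and covers a single changed location.
from typing import List, Tuple

def compressed_size(runs: List[Tuple[str, int]]) -> int:
    """Estimate the byte-stream size for a frame described as ("NC" | "MP", run_length) runs."""
    total = 0
    for kind, length in runs:
        if kind == "NC":
            total += 1              # one 1-byte NC command covers the entire unchanged run
        elif kind == "MP":
            total += 2 * length     # one 2-byte MP command per changed pixel location
        else:
            raise ValueError(f"unknown command kind: {kind}")
    return total

# Hypothetical 8x8 frame (64 pixel locations) in which only 4 pixels changed:
print(compressed_size([("NC", 20), ("MP", 2), ("NC", 30), ("MP", 2), ("NC", 10)]))  # -> 11 bytes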
[0007] In still another aspect the present disclosure relates to a digital rack interface pod apparatus that interfaces a first electronic device to a second electronic device. The digital rack interface pod may comprise an application specific integrated circuit (ASIC). The ASIC may include a processor, a memory in communication with the processor, a video compression engine, and a digital video compression algorithm used by the video compression engine. The algorithm may be adapted to perform a partial refresh operation to compress digital video data being transmitted from the digital rack interface pod to a remote electronic device. The video compression engine may be configured to use the digital video compression algorithm to analyze sequential first and second digital video frames and to identify ones of the pixel locations in the second digital video frame that have pixel values that match corresponding ones of the pixel locations in the first digital video frame. The video compression engine may also be used to generate no change (NC) commands which are transmitted as part of a digital byte stream to the remote electronic device. The NC commands cause no change to the groups of contiguous ones of the pixel locations in the first digital video frame when the first digital video frame is refreshed to form the second digital video frame.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] The drawings described herein are for illustration purposes only and are not intended to limit the scope of the present disclosure in any way.
[0009] Figure 1 is a perspective view of a digital rack interface pod ("DRIP") in accordance with one embodiment of the present disclosure;
[0010] Figure 2 is a perspective view of one embodiment of the DRIP of Figure 1 ;
[0011] Figure 3 is a high level block diagram of major subsystems of the DRIP of Figure 1 ;
[0012] Figure 3A is a high level block diagram of major components of the ASIC shown in Figure 3;
[0013] Figure 4 is a high level block diagram showing a flow of data between the remote access appliance and the server through the DRIP;
[0014] Figure 5 is a high level block diagram of the DRIP showing how data packets flow between the two Ethernet ports on the DRIP;
[0015] Figures 6A and 6B are high level block diagrams showing how either of the two DRIP Ethernet ports may be configured to act as the port communicating directly with the appliance;
[0016] Figure 7 is a flowchart illustrating one example of a sequence of operations that may be used to detect communications from the appliance on either one of the two Ethernet ports of the DRIP;
[0017] Figure 8 illustrates an example of a Prior Art "client" frame of data made up of an 8x8 grid of pixel locations, with each pixel location having a specific pixel value representing video data; [0018] Figure 9 illustrates a Prior Art "Live" frame of data that will be used to refresh the data frame shown in Figure 8, and where the Live frame of data uses the same 8x8 grid of pixel locations as the Client data frame shown in Figure 8, with shading in the pixel locations indicating where pixel values have changed from those present in the Client data frame;
[0019] Figure 10 illustrates a Prior Art sequence of the commands used to represent the changes present in the Live data frame, for each row of the Live data frame;
[0020] Figure 11 illustrates the Live data frame of Figure 10 but with the commands used in accordance with a lossless, partial video refresh system and method in accordance with the present disclosure; and
[0021] Figure 12 is a flowchart illustrating various operations that may be performed by the lossless, partial refresh algorithm of the present disclosure.
DETAILED DESCRIPTION
[0022] The following description is merely exemplary in nature and is not intended to limit the present disclosure, application, or uses. It should be understood that throughout the drawings, corresponding reference numerals indicate like or corresponding parts and features.
[0023] Referring to Figure 1 there is shown a digital rack interface pod ("DRIP") 10 in accordance with one embodiment of the present application. For convenience, the digital rack interface pod will be referred to throughout the following discussion as the "DRIP" 10. In this example the DRIP 10 is used to interface a remote access appliance 12 with a server 14. However, it will be appreciated that the DRIP 10 could potentially be used to interface to any component having a video source and/or USB keyboard/mouse port.
[0024] A remote computer 16 or other form of computing device may be in communication with the remote access appliance 12 either via a wide area network, such as the Internet, or via a hard wired connection. The computer 16 may be located remotely from the remote access appliance 12 or may be located in close proximity to the remote access appliance 12. The DRIP 10 may be interfaced to the remote access appliance by a suitable cable 18, such as a Cat5 cable, that connects to an Ethernet port 20 of the remote access appliance 12. The other end of the cable 18 may be coupled to a first port 22 of the DRIP 10. The first port 22 may be formed by any suitable jack, but in one preferred form the first port is formed by an RJ-45 jack.
[0025] The DRIP also may include a second port 24, which may also be formed by an RJ-45 jack. In one embodiment the first and second ports 22 and 24 may be formed as a single, custom modular jack. The second port 24 may be interfaced to an Ethernet port 26 of the server 14 by a suitable cable 28. The cable 28 may also be a Cat5 cable. The Ethernet port 26 may also be formed by an RJ-45 jack. The Ethernet port 26 is in communication with a service processor 30 of the server 14. The service processor 30 monitors a plurality of important operating parameters of the server 14 such as temperature, cooling fan speeds, power status, operating system status, just to name a few. The service processor 30 provides information relating to these metrics to the DRIP 10 via signals output from its associated Ethernet port 26.
[0026] The DRIP 10 also includes a cable assembly 32 having a portion 34 that interfaces to a serial port, for example a USB port 36, of the server 14, and a portion 38 that interfaces to a video port 40 (typically a VGA port) of the server 14. A major function of the DRIP 10 is to receive keyboard and mouse commands from the computer 16 and to format such commands into serial format, and then to forward them in serial format to the server 14 via cable portion 34. Another major function of the DRIP 10 is to receive analog video signals from the server's video port 40, digitize and format them into Ethernet protocol, and then output the video information in Ethernet format from first port 22 to the remote access appliance 12.
[0027] With further reference to Figure 1, the cable assembly 32 may also include a portion 42 having a suitable plug or jack 44 for receiving DC power from an external DC power transformer. This provides the ability to power the DRIP 10's internal components via the external DC power transformer. Typically, however, the DRIP 10 will be powered by DC power supplied from the USB port 36 of the server 14.
[0028] Referring briefly to Figure 2, an illustration of one embodiment of the DRIP 10 is shown. The DRIP 10 includes a housing 48 within which the first and second ports 22 and 24, respectively, are housed. The housing 48 may include one or more LEDs 50 for status notification (e.g., flashing during firmware upgrade), or to locate the DRIP 10 by turning on an LED on demand by a user. Cable portion 34 may include a USB connector 52 and cable portion 38 may include a VGA connector 54. Cable portion 42 may include the jack 44, which is shown as a barrel jack in this implementation. A conventional ferrite bead 56 may be included to help reduce RF emissions from the cable 38.
[0029] A principal feature of the DRIP 10 is its ability to handle communications between the remote access appliance 12 and the service processor 30 of the server 14. Previously used rack interface pods have required the use of a separate cable, shown by dashed lines 46, that interfaced to a separate port on a remote access appliance. In other words, the communications between the service processor 30 and the remote access appliance 12 did not pass through the DRIP 10. From a practical standpoint, this arrangement can necessitate significant extra cabling (i.e., such as cable 46). This is because the DRIP 10 is typically located in close physical proximity to the server 14, and often within 2-4 feet of the server's USB port 36 and video port 40. So the additional cabling required to interface the DRIP 10 to the Ethernet port 26 is quite minimal (typically less than 6 feet). However, the remote access appliance 12 may be located at some distance, possibly up to 300 feet or more, from the server 14. For example, if the server 14 was located about 250 feet from the remote access appliance 12, then cable 46 shown in Figure 1 would require a length of at least about 250 feet. Cable 18 shown in Figure 1 would also typically require a similar length. If the remote access appliance 12 was interfaced to 40 servers, with each server needing to have its Ethernet port interfaced to the remote access appliance 12, then 40 lengths of cabling 46 would be required with each having a length of about 250 feet. In effect, by enabling the DRIP 10 to handle the Ethernet communications between the remote access appliance 12 and the server 14, the amount of cabling required to accomplish this interfacing is dramatically reduced. The degree of reduction may be on the order of about one-half of the cabling that would otherwise be required if the DRIP 10 was not able to handle the Ethernet communications to/from the service processor 30. It will also be appreciated that the dramatic reduction in cabling required when the DRIP 10 is used makes for a significantly less cluttered installation.
[0030] Referring now to Figure 3, a high level block diagram of the internal components of the DRIP 10 is shown. The DRIP 10 may include a dual Ethernet jack 60 which includes the first and second ports 22 and 24. The dual Ethernet jack 60 communicates with an application specific integrated circuit (hereinafter "ASIC") 62. The ASIC 62 includes ports 64 and 66 for receiving the Ethernet communications present on ports 22 and 24. The ASIC has access to a double data rate (DDR) memory 68 for storing information during processing. A conventional USB hub 70 interfaces the ASIC 62 to the USB cable portion 34 of cable assembly 32. The ASIC 62 is also interfaced to the video cable portion 38 of cable assembly 32. A power subsystem 72 receives DC power from the jack 44 over cable 42 when the jack 44 is coupled to an external DC power transformer, or DC power over cable portion 34 of the cable assembly 32 when the DRIP 10 is being powered in its typical operation state by power from the server's USB port 36.
[0031] Figure 3A illustrates a high level block diagram of major components of the ASIC 62. The ASIC 62 may include a processor 100, a plurality of embedded memories 102 and an encryption engine 104. The encryption engine 104 is used for hardware encryption of data communicated to the appliance 12. A DDR controller 106 communicates with a DDR memory 114 and with a video compression engine 108. The video compression engine 108 is in communication with a video capture subsystem 110. The video compression engine 108 provides compression of live PC video such that the compressed video data can be stored into memory and transferred across a bus or network using minimal memory and bandwidth. As will be described in greater detail in the following paragraphs, the video compression engine 108 may store and use a new lossless, partial video refresh algorithm. The video capture subsystem 110 performs synchronization, detection and scaling of a video signal received by the ASIC 62. The video capture subsystem 110 receives an output from a video analog-to-digital conversion subsystem 112, which in turn receives an analog video input signal. The video capture subsystem 110 takes the analog output from the server 14 and converts it to digital format. A flash controller 116 communicates with an embedded flash memory 118 to handle the execution of application code for boot up of the DRIP 10. An I2C bus interface controller communicates with a DDC interface bus, which informs the server 14 what video capabilities are supported by the DRIP 10 (e.g., screen resolution).
[0032] Referring now to Figures 4-7, a description of how the DRIP 10 monitors for, and routes, Ethernet packets received from the appliance 12 will be presented. Initially in Figure 4 it can be seen that port 22 has been labeled "APP ETH" and port 24 has been labeled "SP ETH". Ethernet packets flow bidirectionally between the appliance 12 and the DRIP 10, as indicated by bidirectional arrow 80. A bidirectional serial communications link exists between the DRIP USB port and the server's 14 USB port, as evidenced by bidirectional arrow 82. Video signals are received from the RGB (i.e., video) port 40 of the server 14, as indicated by arrow 84. Ethernet packets are also transmitted bidirectionally between the Ethernet port 26 of the server 14 and the SP ETH port 24 of the DRIP 10, as indicated by arrow 86. Arrow 88 indicates that Ethernet packets originating from the server's Ethernet port 26, or being transmitted to the port 26, may be transmitted between the two ports 22 and 24 of the DRIP 10.
[0033] In Figure 5, within the DRIP 10 Ethernet packets are able to flow in both directions between the APP ETH port 22 and the SP ETH port 24. All packets that are received from the server 14 at SP ETH port 24 of the DRIP 10 are routed within the DRIP 10 to its port APP ETH 22. However, for packets received from the appliance 12 at the DRIP's APP ETH port 22, only those packets that have a MAC address that is not equal to the MAC address of the DRIP's APP ETH port 22 will be routed to the SP ETH port 24. Those Ethernet packets that are not routed to the SP ETH port 24 will have the DRIP's 10 MAC address. These packets will be converted to serial protocol signals.
[0034] Referring briefly to Figures 6A and 6B, it can be seen that the appliance 12 may be coupled to either the Ethernet port 22 or the Ethernet port 24. As will be described in greater detail below, the DRIP 10 is able to dynamically and automatically assign either of the ETH1 port 22 or the ETH2 port 24 as "APP ETH" (i.e., the port that the DRIP 10 recognizes as being directly coupled to the appliance 12).
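The routing rule of paragraph [0033] can be summarized in a short sketch. The APP ETH/SP ETH forwarding behavior and the MAC-address test are taken from the text above; the function name, port labels and example MAC address are illustrative assumptions only.

# A minimal sketch of the routing rule of paragraph [0033]; the DRIP MAC address,
# function name and port labels are illustrative assumptions only.
DRIP_MAC = "00:11:22:33:44:55"   # hypothetical MAC address of the DRIP's APP ETH port

def route(ingress_port: str, dest_mac: str) -> str:
    if ingress_port == "SP_ETH":
        return "APP_ETH"                 # all service processor traffic goes to the appliance
    if ingress_port == "APP_ETH":
        if dest_mac.lower() != DRIP_MAC:
            return "SP_ETH"              # pass-through traffic destined for the service processor
        return "LOCAL_SERIAL"            # addressed to the DRIP itself: convert to serial protocol
    raise ValueError(f"unknown ingress port: {ingress_port}")

print(route("APP_ETH", "aa:bb:cc:dd:ee:ff"))  # -> SP_ETH
print(route("APP_ETH", DRIP_MAC))             # -> LOCAL_SERIAL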
[0035] Referring to Figure 7, a flowchart 200 is shown illustrating one example of various operations of a discovery and communications protocol referred to as "ADDP" (Avocent DRIP Discovery Protocol) that may be performed by the DRIP 10 in detecting a communication from the appliance 12. This sequence of operations assumes that the DRIP has not yet been coupled via a Cat5 or other suitable cable to the output port 20 of the appliance 12. Initially (as shown in Figure 6A) at operation 202, Ethernet port 22 is set to "ETH1" and Ethernet port 24 is set to "ETH2". At operation 204 the ASIC 62 sets a variable "n" equal to "1" and a variable "n'" equal to "2". At operation 206 ETHn is then enabled as "APP ETH" on the DRIP 10 and the other Ethernet port on the DRIP is disabled. Thus, in this configuration, as shown in Figure 6A, port 22 (i.e., ETH1) will be set as the port (i.e., port APP ETH) which the DRIP 10 will initially start monitoring for communications coming from the appliance 12.
[0036] At operation 208 a timeout counter is initialized and then the timeout counter is started at operation 210. At operation 212 the ASIC 62 of the DRIP 10 begins monitoring port ETHn of the DRIP 10 (i.e., port ETH1 or port 22, which is now acting as the APP ETH port) for a signal from the appliance 12. If a communication is received from the appliance 12 at port ETHn, then port ETHn is maintained as the APP ETH port, as indicated at operation 214, and port ETHn' is maintained as the SP ETH port (i.e., assigned to communicate with the Ethernet port 26 of the server 14), as indicated at operation 216. At operation 218 the timeout counter is reset to zero. At operation 220 the ASIC 62 restarts the timeout counter and operation 212 is repeated.
[0037] If the check made by the ASIC 62 at operation 212 does not detect a communication with the appliance 12, then a check is made at operation 222 to determine if a predetermined timeout period has been reached by the timeout counter. In practice this timeout period may typically be on the order of 500 ms, although this period could be adjusted if desired. If the check at operation 222 indicates that the timeout counter has not timed out, then operation 212 is repeated. If the check at operation 222 indicates that the timeout counter has timed out, then at operation 224 the ASIC 62 toggles the Ethernet port assignments on the DRIP 10 by swapping the values of n and n' (i.e., from n=1 and n'=2 to n=2 and n'=1). At operation 206 this has the effect of setting Ethernet port 24 on the DRIP as the APP ETH port and deactivating port 22. The timeout counter is then reset to zero at operation 208, the timeout counter is restarted at operation 210 and operation 212 is repeated by the ASIC 62. However, at this point the ASIC 62 will be checking for communications from the appliance 12 at port 24 (which is now assigned as the APP ETH port). If a communication from the appliance 12 is detected at operation 212 before the timeout counter times out at operation 222, then the DRIP 10 will continue using Ethernet port 24 as the APP ETH port. Thus, from the foregoing description it will be appreciated that the ASIC 62 will toggle the ports 22 and 24 when the timeout counter has timed out. In this manner the ASIC 62 will intermittently be looking at one Ethernet port (22 or 24) on the DRIP 10 and then the other until it detects a communication from the appliance 12.
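A simplified sketch of the discovery loop of flowchart 200 is given below. Only the roughly 500 ms timeout and the toggle-on-timeout behavior are taken from the text; check_for_appliance() is a hypothetical placeholder, and the polling interval and function names are assumptions.

# A simplified sketch of the port discovery loop of flowchart 200; check_for_appliance()
# is a hypothetical placeholder, and only the ~500 ms timeout and the toggle-on-timeout
# behavior are taken from the text.
import time

TIMEOUT_S = 0.5      # the ~500 ms timeout period mentioned in paragraph [0037]
POLL_S = 0.01        # assumed polling granularity

def check_for_appliance(port: int) -> bool:
    """Placeholder: return True when a communication from the appliance is seen on `port`."""
    return False

def discover_app_eth(ports=(22, 24)) -> int:
    n, n_prime = 0, 1                              # indices playing the roles of n and n'
    while True:                                    # loop until the appliance is found
        app_eth = ports[n]                         # ETHn is enabled as APP ETH
        deadline = time.monotonic() + TIMEOUT_S    # initialize and start the timeout counter
        while time.monotonic() < deadline:
            if check_for_appliance(app_eth):
                return app_eth                     # keep this port as APP ETH; the other stays SP ETH
            time.sleep(POLL_S)
        n, n_prime = n_prime, n                    # timeout: toggle the port assignments and retry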
[0038] The ASIC 62 thus provides the advantage that the user is able to hook up the output Ethernet port 20 of the appliance 12 to either of the two Ethernet ports 22 or 24 on the DRIP 10, and the ASIC 62 will still be able to detect when it is receiving a communication from the appliance 12. If the user should disconnect the Ethernet output port 20 of the appliance 12 from, for example, port 22, and then reconnect it using port 24 of the DRIP 10, the ASIC 62 will be able to detect this change as it periodically toggles the two ports 22 and 24 looking for a communication from the appliance 12. Once it receives a communication from the appliance 12 on one of the Ethernet ports 22 or 24, it dynamically makes the needed adjustment of the port assignments on the DRIP 10 so that that port will thereafter be designated as the APP ETH port on the DRIP 10. Thus, the Ethernet port assignment implemented on the DRIP 10 by the ASIC 62 is not only automatic but it is dynamic as well. If the Ethernet cables are swapped at run-time, the Ethernet port assignments will be swapped by the ASIC 62 without the need for the DRIP 10 or the appliance 12 to be rebooted.
[0039] Referring now to Figures 8-10, a brief review of a prior art "full refresh" digital video compression ("DVC") algorithm will be provided. The DVC algorithm may be embodied in the ASIC 62, and more specifically may be stored in the video compression engine 108 of Figure 3A. Figure 8 shows a "Client" video data frame 100 (which may be referred to as a "first" video frame) and Figure 9 shows a "Live" video data frame 102 (which may be referred to as a "second" video frame). By "Live" it will be understood that video data frame 102 is a new data frame that follows video frame 100. The video frames 100 and 102 in this example are made up of eight rows and eight columns of pixel locations (i.e., an 8x8 grid), with each pixel location including a number representing the data that makes up the pixel value for that specific pixel location. Thus, the first row of frame 100 may be viewed as having pixel locations 1-8, row two has pixel locations 9-16, and so forth.
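Purely as an aid to the description (and not part of the original disclosure), the 8x8 grids of Figures 8 and 9 can be modelled as flat lists of 64 pixel values using the 1-based location numbering given above; the helper names in this Python sketch are assumptions.

WIDTH = HEIGHT = 8   # the 8x8 example grid of Figures 8 and 9

def pixel_location(row, col):
    # 1-based row/column to the 1-based pixel location numbering used above,
    # so row 1 covers locations 1-8, row 2 covers locations 9-16, and so on.
    return (row - 1) * WIDTH + col

def changed_locations(client_frame, live_frame):
    # Pixel locations (1-based) whose value in the Live frame differs from the
    # Client frame; both frames are flat lists of WIDTH * HEIGHT pixel values.
    return [i for i, (c, v) in enumerate(zip(client_frame, live_frame), start=1) if c != v]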
[0040] In this example the video frame 100 (i.e., the Client frame) has already been transmitted to a user's electronic device from the DRIP 10 and is being displayed, and video frame 102 (i.e., the Live data frame) is the next video frame of data that will be used to refresh frame 100. Cross hatching in Figures 8 and 9 indicates which pixel locations have a pixel value in frame 102 that differs from the pixel value at the corresponding pixel location in frame 100. Figure 10 indicates, at the bottom of each pixel location (i.e., pixel box), the type of command that may be applied during a conventional full refresh operation. The commands that may be used in a conventional full refresh operation are "Copy Above" (CA), "Copy Left" (CL) and "Make Pixel" (MP), as defined below:
[0041] "Copy Above" (CA): a command indicating that a pixel value immediately elevationally above a specific pixel location in the second frame 102 is to be copied for one or more elevationally contiguous pixel locations (requires 1 byte);
[0042] "Copy Left" (CL): a command indicating that a pixel value immediately to the left of a specific pixel in the second frame 102 is to be copied for one or more following, contiguous pixel locations (requires 1 byte); and [0043] "Make Pixel" (MP): a command indicating a new pixel value for a specific pixel location (requires 2 byes).
[0044] In Figure 10 the results of the conventional, prior art DVC algorithm are apparent. The various shadings indicate where the CA, CL and MP commands are used. The result is that the Live video frame 102 may be represented by 73 bytes of data with the full refresh scheme. With no compression, 128 bytes would have been required (i.e., an MP command, requiring 2 bytes, for each one of the 64 pixel locations).
[0045] Referring now to Figure 11, the result of a lossless, partial refresh digital video compression (DVC) operation (i.e., algorithm) in accordance with the present disclosure is shown. The lossless, partial refresh compression algorithm may be embodied in the video compression engine 108 in place of the conventional DVC algorithm and makes use of the Make Pixel (MP), Copy Above (CA) and Copy Left (CL) commands discussed above, as well as an additional command: "No Change" (NC). The NC command is used to indicate that no pixel change is present for a specific pixel location, as well as to indicate a horizontally contiguous group of pixel locations where no pixel change is present. It is a significant advantage of this command that it may be used to designate a string of pixel locations in one or more contiguous rows of pixel locations in the second video frame 102 which have the same pixel values as the corresponding pixel locations in the first video frame 100. The NC command requires only a single byte of data, similar to the CA and CL commands.
[0046] To illustrate the reduction in bytes that the lossless, partial refresh compression algorithm of the present disclosure provides, reference is made to Figure 11. Figure 11 shows the rows of the Live video frame 102 where the NC, MP, CA and CL commands are applied. For example, in rows 2 and 3 the NC command is used to denote 8 contiguous pixel locations (pixel locations 15-16 in row 2 through pixel locations 17-22 in row 3) with a single (one byte) NC command, indicating that the values of the pixels in these pixel locations in frame 102 have not changed from those present in the corresponding locations of Client video frame 100. In the example of video frames 100 and 102, by using the NC commands in addition to the CA, CL and MP commands, the lossless, partial video refresh algorithm of the present disclosure enables the Live video frame 102 to be expressed using only 31 total bytes of information. These 31 bytes of information are transmitted in a byte stream from the DRIP 10 to the user's electronic device (for example laptop 16 in Figure 1) for display.
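The row-spanning behaviour of the NC command can be pictured with the short, illustrative Python sketch below (not part of the original disclosure). Frames are the flat lists used in the earlier sketch, the 31-location cap follows the one-byte limit suggested by claims 10-12 below, and the helper name is an assumption.

MAX_RUN = 31   # claims 10-12 suggest a one-byte command can cover up to 31 locations

def nc_runs(client_frame, live_frame):
    # Collapse unchanged pixel locations, scanned in raster order so that a run
    # may spill from the end of one row into the next, into (start_location, length)
    # pairs; each pair costs a single NC byte.
    runs, start, length = [], None, 0
    for i, (c, v) in enumerate(zip(client_frame, live_frame), start=1):
        if c == v and length < MAX_RUN:
            if start is None:
                start = i
            length += 1
        else:
            if length:
                runs.append((start, length))
            start, length = (i, 1) if c == v else (None, 0)
    if length:
        runs.append((start, length))
    return runs

Each (start, length) pair returned costs one NC byte, which is how a run such as pixel locations 15-22 of the example, spanning rows 2 and 3, collapses to a single byte and helps bring the frame down from 73 bytes to 31.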
[0047] With brief reference to Figure 12, a flowchart 200 illustrating various operations that may be performed by the lossless, partial refresh algorithm is shown. At operation 202 all pixel values in all pixel locations may be read in the new video frame (e.g., Live video frame 102). At operation 204, all pixel locations are identified where a pixel value differs from the pixel value in the corresponding pixel location of the previously transmitted video frame (i.e., Client frame 100), and then the MP commands may be generated. At operation 206 all instances of two or more horizontally contiguous pixel locations in the new video frame (i.e., Live video frame 102) that have the same pixel values may be identified, and then all of the CL commands may be generated. At operation 208 all instances where two or more vertically contiguous pixel locations in the new video frame (Live video frame 102) have the same pixel value are identified, and then all of the CA commands are generated. At operation 210 all pixel locations are identified in the new video frame (Live video frame 102) where either a single pixel location has a pixel value that does not differ from the pixel value in the corresponding pixel location in the previously transmitted video frame (i.e., in the Client video frame 100), or a contiguous string of pixel locations has pixel values that have not changed, and then the NC commands may be generated. As noted above in the discussion of Figure 11, a single NC command may potentially encompass pixel locations in two or more contiguous rows.
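The flowchart operations can be summarized with the following illustrative Python sketch, which is not the ASIC implementation: the precedence shown (NC runs first, then CL/CA where a value repeats a neighbour, then MP) is only one plausible reading of operations 202-210, and for brevity CL and CA are emitted per pixel here even though the actual commands may cover contiguous runs.

def encode_partial_refresh(client, live, width):
    # Walk the Live frame in raster order and pick one command per position:
    # NC for runs of unchanged pixels, CL/CA where the value repeats a
    # neighbour, and MP (carrying the new value) otherwise.
    commands, i, n = [], 0, len(live)
    while i < n:
        if live[i] == client[i]:                          # unchanged: extend an NC run
            run = 0
            while i + run < n and run < 31 and live[i + run] == client[i + run]:
                run += 1
            commands.append(("NC", run))
            i += run
        elif i % width and live[i] == live[i - 1]:        # same as the pixel to the left
            commands.append(("CL", 1))
            i += 1
        elif i >= width and live[i] == live[i - width]:   # same as the pixel directly above
            commands.append(("CA", 1))
            i += 1
        else:                                             # a genuinely new value
            commands.append(("MP", live[i]))
            i += 1
    return commands

def byte_count(commands):
    # NC, CL and CA cost one byte each; MP costs two (opcode plus pixel value).
    return sum(2 if cmd == "MP" else 1 for cmd, _ in commands)

Running byte_count over the commands produced for the example frames should give totals of the kind discussed above (31 bytes with NC available, more without it), although the exact figures depend on the pixel data, which is not reproduced here.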
[0048] At operation 212 the byte stream for the compressed new video frame (e.g., Live video frame 102) is generated, with all of the MP, NC, CL and CA commands inserted at the appropriate locations in the byte stream. At operation 214 the byte stream is then transmitted from the DRIP 10 to the user's electronic device (e.g., laptop 16 in Figure 1).
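A brief usage sketch of these two final operations follows, under the same assumptions as the encoder sketch above. The opcode values and field packing are invented for illustration, since the wire format is not specified here, and the transport shown (a plain TCP socket) is likewise only an assumption.

import socket
import struct

OPCODES = {"NC": 0x00, "CL": 0x40, "CA": 0x80, "MP": 0xC0}   # invented opcode values

def to_byte_stream(commands):
    # Flatten (command, payload) pairs into bytes: one byte for NC/CL/CA
    # (opcode plus a small run count), two bytes for MP (opcode plus new value).
    out = bytearray()
    for cmd, payload in commands:
        if cmd == "MP":
            out += struct.pack(">BB", OPCODES[cmd], payload & 0xFF)
        else:
            out.append(OPCODES[cmd] | (payload & 0x3F))
    return bytes(out)

def send_frame(commands, host, port):
    # Push the compressed frame to the viewing device over a TCP connection.
    with socket.create_connection((host, port)) as sock:
        sock.sendall(to_byte_stream(commands))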
[0049] It will be appreciated, then, that the lossless, partial video refresh algorithm described herein may significantly reduce the number of bytes in a lossless DVC byte stream (e.g., in an Ethernet byte stream) that may be required to refresh a display of video data on a user's display system, as compared to the compression performed by a traditional DVC algorithm. This can provide significant savings in network bandwidth when such video data is being transmitted over a network. The example given herein provides a greater than 50% reduction in the number of bytes required to perform a partial refresh of the video data on the user's electronic device. A particular advantage of the lossless, partial video refresh algorithm described herein is the ability to remove artifacts that may appear in a given video frame without requiring as much of the video data to be retransmitted to the user's device to recreate (i.e., refresh) the video frame being viewed. To the user it appears that the exact same video data is being displayed, but after the partial refresh the artifacts that would otherwise have been present will be eliminated.
[0050] While various embodiments have been described, those skilled in the art will recognize modifications or variations which might be made without departing from the present disclosure. The examples illustrate the various embodiments and are not intended to limit the present disclosure. Therefore, the description and claims should be interpreted liberally with only such limitation as is necessary in view of the pertinent prior art.

Claims

What is claimed is:
1. A method for compressing digital video data being transmitted from a first electronic device to a second electronic device, the method comprising: analyzing a first video frame made up of a grid of pixel locations, where each said pixel location is defined by a row number and a column number, and each said pixel location includes a pixel value;
analyzing a second video frame made up of a grid of the same pixel locations as the first video frame, where certain ones of the pixel locations in the second video frame have pixel values that differ from the pixel values of corresponding said pixel locations in the first video frame;
transmitting a byte stream of compressed digital video data to said second electronic device from said first electronic device, said byte stream of compressed digital video data adapted to refresh the first video frame and including commands identifying:
which ones of said pixel locations in said second video frame have pixel values that are identical to pixel values in corresponding ones of said pixel locations in said first video frame; and
which specific ones of said pixel locations in said second video frame have pixel values that differ from pixel values present in corresponding ones of said pixel locations of said first video frame.
2. The method of claim 1, wherein said commands to identify which specific ones of said pixel locations in said second video frame have pixel values that differ from corresponding ones of said pixel locations in said first video frame include forming a copy left (CL) command that identifies horizontally contiguous ones of said pixel locations in said second video frame that include pixel values that match corresponding ones of said pixel locations in said first video frame.
3. The method of claim 1, wherein said commands to identify which specific ones of said pixel locations in said second video frame have pixel values that differ from corresponding ones of said pixel locations in said first video frame include forming a copy above (CA) command that identifies elevationally contiguous ones of said pixel locations in said second video frame that include pixel values that match corresponding ones of said pixel locations in said first video frame.
4. The method of claim 1, wherein said commands to identify which ones of said pixel locations in said second video frame have pixel values that differ from corresponding ones of said pixel locations in said first video frame include make pixel (MP) commands, and where each said MP command identifies a single specific one of said pixel locations in said second video frame that includes a pixel value that differs from a pixel value of its said corresponding pixel location in said first video frame, and includes a pixel value present in said single specific one of said pixel locations.
5. A method for compressing digital video data being transmitted from a first electronic device to a second electronic device, the method comprising: analyzing a first video frame made up of a plurality of pixel locations, where each said pixel location is defined by a row number and a column number, and each said pixel location includes a pixel value;
analyzing a second video frame made up of the same pixel locations as the first video frame, where certain ones of the pixel locations have pixel values that differ from the pixel values of corresponding said pixel locations in the first video frame;
identifying ones of said pixel locations in said second video frame that have pixel values that are identical to pixel values in corresponding ones of said pixel locations in said first video frame;
using a no change (NC) command to identify a number of contiguous ones of said pixel locations in each said row of said second video frame where identical ones of said pixel values are present, with each said NC command representing a first byte size of data;
generating a make pixel (MP) command, for each said pixel location in said second video frame that has a pixel value that differs from said pixel value in said first video frame, each said MP command representing a second byte size of data greater than said first byte size of data;
transmitting said NC commands and said MP commands in a byte stream to said second electronic device; and
using said byte stream at said second electronic device to modify said first video frame to match said second video frame.
6. The method of claim 5, further comprising successively displaying said first video frame and said modified first video frame on a video display component.
7. The method of claim 5, further comprising:
generating a copy left (CL) command to designate one or more contiguous ones of said pixel locations in one or more rows of said second video frame that each include a pixel value that matches said pixel value in corresponding ones of said pixel locations of said first video frame; and
transmitting said CL command as part of said byte stream.
8. The method of claim 5, further comprising:
generating a copy above (CA) command to designate one or more contiguous ones of said pixel locations in a single one of said columns of said second video frame, which include pixel values that match pixel values in corresponding ones of said pixel locations of a corresponding column of said first video frame; and
transmitting said CA command as part of said byte stream.
9. The method of claim 5, wherein said NC command comprises one byte of data.
10. The method of claim 5, wherein said NC command comprises one byte of data representing up to 31 ones of said pixel locations.
11. The method of claim 8, wherein said CA command comprises one byte of data representing up to 31 ones of said pixel locations.
12. The method of claim 7, wherein said CL command comprises one byte of data representing up to 31 ones of said pixel locations.
13. The method of claim 5, wherein said MP command comprises two bytes of data representing 1 pixel location.
14. The method of claim 5, wherein said NC command includes ones of said pixel locations that are present in two contiguous rows of said pixel locations.
15. A method for compressing digital video data being transmitted from a first electronic device to a second electronic device, the method comprising: analyzing a first video frame made up of a plurality of pixel locations, where each said pixel location is defined by a row number and a column number, and each said pixel location includes a pixel value;
analyzing a second video frame made up of the same pixel locations as the first video frame, where certain ones of the pixel locations have pixel values that differ from the pixel values of corresponding said pixel locations in the first video frame;
identifying ones of said pixel locations that have identical ones of said pixel values in said first and second video frames;
using a no change (NC) command to identify individual ones of said pixel locations, as well as contiguous groups of pixel locations, in said second video frame that have pixel values that match pixel values in corresponding ones of said pixel locations of said first video frame, with each said NC command representing a first byte size of data;
generating a make pixel (MP) command, for each said pixel location in said second video frame that has a pixel value that differs from said pixel value in said corresponding pixel location of said first video frame, each said MP command representing a second byte size of data greater than said first byte size of data;
generating a command that designates that two or more contiguous ones of said pixel locations, either in one or more rows of said second video frame or in a single one of said columns of said pixel locations of said second video frame, have the same pixel values;
transmitting said NC command, said MP command and said command in a byte stream to said second electronic device; and
using said byte stream at said second electronic device to update said pixel values in said pixel locations of said first video frame to match all of said pixel values in said pixel locations of said second video frame.
16. The method of claim 15, wherein said generating a command comprises generating a copy left (CL) command, for a specific one of said pixel locations in said second video frame, that designates a total of how many ones of said pixel locations horizontally contiguous to said specific one of said pixel locations, are to have a pixel value that matches a pixel value in said pixel location immediately to a left of said specific one of said pixel locations.
17. The method of claim 15, wherein said generating a command comprises generating a copy above (CA) command, for a specific one of said pixel locations in said second video frame, that designates a total of how many ones of said pixel locations elevationally contiguous to said specific one of said pixel locations, are to have a pixel value that matches a pixel value in said specific one of said pixel locations.
18. A digital rack interface pod apparatus that interfaces a first electronic device to a second electronic device, the digital rack interface pod comprising: an application specific integrated circuit (ASIC), the ASIC including:
a processor;
a memory in communication with said processor;
a video compression engine;
a digital video compression algorithm used by said video compression engine, the algorithm adapted to perform a partial refresh operation to compress digital video data being transmitted from the digital rack interface pod to the second electronic device;
the video compression engine being configured to use the digital video compression algorithm to analyze sequential first and second digital video frames and identify groups of contiguous ones of said pixel locations in said second digital video frame that have pixel values that match corresponding ones of said pixel locations in said first digital video frame, and to generate no change (NC) commands which are transmitted as part of a digital byte stream to said second electronic device, which thus cause no change to ones of said pixel locations in said first digital video frame when said first digital video frame is refreshed to form said second digital video frame.
19. The apparatus of claim 18, wherein said video compression engine is further configured to use said digital video compression algorithm to identify pixel locations in said second digital video frame that have a pixel value which differs from corresponding ones of said pixel locations in said first digital video frame, and to generate make pixel (MP) commands, which are integrated into said digital byte stream, identifying new pixel values to be used for specific ones of said pixel locations in said first digital video frame when said first digital video frame is refreshed to form said second digital video frame.
20. The apparatus of claim 18, wherein said video compression engine is further configured to use said digital video compression algorithm to identify pixel locations in said second digital video frame that have a common pixel value in contiguous pixel locations, and to generate a copy command, which is integrated into said digital byte stream, identifying a specific pixel value that is to be copied into corresponding contiguous ones of said pixel locations in said first digital video frame when said first digital video frame is modified to form said second digital video frame.
PCT/US2012/038293 2011-05-20 2012-05-17 Digital rack interface pod system and method incorporating lossless, partial video refresh feature WO2012162079A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201161488570P 2011-05-20 2011-05-20
US61/488,570 2011-05-20

Publications (3)

Publication Number Publication Date
WO2012162079A2 true WO2012162079A2 (en) 2012-11-29
WO2012162079A3 WO2012162079A3 (en) 2013-01-31
WO2012162079A4 WO2012162079A4 (en) 2013-03-28

Family

ID=47217984

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2012/038293 WO2012162079A2 (en) 2011-05-20 2012-05-17 Digital rack interface pod system and method incorporating lossless, partial video refresh feature

Country Status (1)

Country Link
WO (1) WO2012162079A2 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7336839B2 (en) * 2004-06-25 2008-02-26 Avocent Corporation Digital video compression command priority
WO2008021172A2 (en) * 2006-08-10 2008-02-21 Avocent Huntsville Corporation Video compression algorithm

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3707785A4 (en) * 2017-11-09 2021-08-11 Vertiv IT Systems, Inc. Kvm extension device self-contained within a video connector
CN113364888A (en) * 2021-06-30 2021-09-07 重庆紫光华山智安科技有限公司 Service scheduling method, system, electronic device and computer readable storage medium
CN113364888B (en) * 2021-06-30 2022-05-31 重庆紫光华山智安科技有限公司 Service scheduling method, system, electronic device and computer readable storage medium

Also Published As

Publication number Publication date
WO2012162079A3 (en) 2013-01-31
WO2012162079A4 (en) 2013-03-28


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12789698

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12789698

Country of ref document: EP

Kind code of ref document: A2