RU2599959C2 - Dram compression scheme to reduce power consumption in motion compensation and display refresh - Google Patents


Publication number
RU2599959C2
Authority
RU
Russia
Prior art keywords
motion compensation
data
module
display
compensation module
Application number
RU2014126348/08A
Other languages
Russian (ru)
Other versions
RU2014126348A
Inventor
Zhen Fang
Nitin B. Gupte
Xiaowei Jiang
Original Assignee
Intel Corporation
Application filed by Intel Corporation
Priority to PCT/US2011/066556 (published as WO2013095448A1)
Publication of RU2014126348A
Application granted
Publication of RU2599959C2

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00: Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14: Handling requests for interconnection or transfer
    • G06F13/16: Handling requests for interconnection or transfer for access to memory bus
    • G06F13/1668: Details of memory controller
    • G06F13/1673: Details of memory controller using buffers
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • G06T1/60: Memory management
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42: characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/423: characterised by memory arrangements
    • H04N19/426: using memory downsizing methods
    • H04N19/428: Recompression, e.g. by spatial or temporal decimation
    • H04N19/44: Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N19/90: using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/91: Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
    • H04N9/00: Details of colour television systems
    • H04N9/79: Processing of colour television signals in connection with recording
    • H04N9/80: Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/804: involving pulse code modulation of the colour picture signal components
    • H04N9/8042: involving data reduction
    • Y02D10/14

Abstract

FIELD: computer engineering.
SUBSTANCE: a computer-implemented method of operating a memory controller comprising a compression module. The compression module receives a write request from a motion compensation module, wherein the write request contains video data and is received by the memory controller either directly or indirectly from the motion compensation module. The compression module compresses the video data to obtain compressed data, wherein the compression of the video data is transparent to the motion compensation module, and the compressed data are stored in one or more memory chips. A decompression module receives a read request, transmitted either directly or indirectly from the motion compensation module or a display device, retrieves the stored data from at least one of the one or more memory chips in response to the read request, and decompresses the stored data to obtain decompressed data.
EFFECT: reduced power consumption due to more efficient access to frame buffers.
21 cl, 12 dwg

Description

State of the art

Some mobile devices are able to play video from a variety of sources. A typical solution for video playback on a mobile device may use motion compensation techniques to decode video data before storing it in a DRAM (dynamic random access memory) frame buffer, from which a display controller reads the frame buffer data for output to the display device. Conventional attempts to reduce the memory bandwidth consumed by motion compensation operations can have negative consequences for the power efficiency of display-related memory traffic. Conventional techniques for reducing display-related memory traffic, on the other hand, can be problematic from the video decoding perspective.

Brief Description of the Drawings

Various advantages of embodiments of the present invention will become apparent to those skilled in the art upon reading the following description and the appended claims and referring to the following drawings, in which:

FIG. 1A is an illustration of an example of frame data associated with a video decoding operation in accordance with an embodiment;

FIG. 1B is a block diagram of an example of the use of frame data in a video playback architecture in accordance with an embodiment;

FIG. 2A is an illustration of an example of a motion compensation frame buffer access order in accordance with an embodiment;

FIG. 2B is an illustration of an example of a display output frame buffer access order in accordance with an embodiment;

FIG. 3 is a block diagram of an example memory controller according to an embodiment;

FIG. 4 is an example of a compression scheme according to an embodiment;

FIG. 5 is a block diagram of an example compression/decompression architecture in accordance with an embodiment;

FIG. 6A is a flowchart of an example method for processing memory write requests in accordance with an embodiment;

FIG. 6B is a flowchart of an example method for processing memory read requests in accordance with an embodiment;

FIG. 7 is a block diagram of an example system according to an embodiment;

FIG. 8 is a block diagram of an example system having a navigation controller in accordance with an embodiment; and

FIG. 9 is a block diagram of an example system having small design parameters according to an embodiment.

Detailed description

Embodiments may include a memory controller having a compression module to receive a write request from a motion compensation module, wherein the write request contains video data. The compression module may also compress the video data to obtain compressed data and store the compressed data in one or more memory chips. In one example, the memory controller also has a decompression module.

Embodiments may also comprise a system including a display device, one or more memory chips, and a processor chip having a motion compensation module and a memory controller. The memory controller may include a compression module to receive a write request from the motion compensation module, wherein the write request contains video data. In addition, the compression module may compress the video data to obtain compressed data and store the compressed data in at least one of the one or more memory chips.

Other embodiments may comprise a computer-implemented method of operating a memory controller, in which a write request is received from a motion compensation module. The write request may comprise video data, and the method may further comprise compressing the video data to obtain compressed data and storing the compressed data in one or more memory chips.

Additionally, embodiments may comprise a computer-implemented method of operating a memory controller in which a write request is received from a motion compensation module. The write request may comprise video data, and the method may further provide for compressing the video data to obtain compressed data, wherein the compression can be transparent to the motion compensation module. The method may further provide for storing the compressed data in one or more memory chips and receiving a read request. The stored data may be retrieved from at least one of the one or more memory chips in response to the read request. In addition, the stored data may be decompressed to obtain decompressed data. In one example, the decompression is transparent to the requester of the stored data.

Turning now to FIG. 1A and 1B, frame data and a video playback architecture 16 are shown, respectively, for video content decoded on a platform such as a mobile device. In particular, the memory system 26 of the architecture may comprise a DRAM frame buffer 30 that holds reconstructed pixels, which are read by a display controller (not shown) and output to the display device 28 in accordance with a video protocol such as, for example, the MPEG2 (Moving Picture Experts Group 2) protocol. In the example shown, the frame data comprises an I-frame (intra-coded frame) 10, a set of B-frames (bidirectionally predicted frames) 12 (12a-12c), and a P-frame (predicted frame) 14. Each B-frame 12 can be decoded using the I-frame 10 as a reference frame that occurs earlier in time and the P-frame 14 as a reference frame that occurs later in time. Thus, reconstruction of the B-frames 12 can be relatively memory intensive due to the need to repeatedly access both the I-frame 10 data and the P-frame 14 data in the memory system 26.

For example, the video playback architecture 16 may comprise a variable length decoder (VLD) 18, which supplies decoded video data both to an inverse discrete cosine transform (IDCT) module 20 and to a motion compensation (MC) module 22. The MC module 22 can reconstruct frames one macroblock (e.g., 16×16 pixels) at a time using motion vectors, which are essentially pointers to pixel coordinates in the reference frames (e.g., the I-frame and the P-frame). As will be discussed in greater detail, the techniques described herein enable more efficient access to the frame buffer 30 from the point of view of both the MC module 22 and the display device 28, and the increased memory efficiency can reduce power consumption and extend battery life.
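The role of a motion vector as a pointer into a reference frame can be sketched in Python. This is an illustrative model only, not the patent's hardware: the frame contents, block size, and function name are invented for the example, and a real decoder would also add a decoded residual to the copied block.

```python
def motion_compensate(ref_frame, mv, block_pos, size=4):
    """Rebuild one block of the current frame by copying pixels from a
    reference frame at the coordinates pointed to by the motion vector."""
    (by, bx), (dy, dx) = block_pos, mv
    return [row[bx + dx : bx + dx + size]
            for row in ref_frame[by + dy : by + dy + size]]

# Illustrative 8x8 reference frame whose pixel value encodes its position.
ref = [[10 * y + x for x in range(8)] for y in range(8)]

# Rebuild the 4x4 block at (4, 4) from the reference region at (2, 1),
# i.e. with motion vector (-2, -3).
block = motion_compensate(ref, mv=(-2, -3), block_pos=(4, 4))
assert block[0][0] == ref[2][1]          # pixel copied from the reference
assert len(block) == len(block[0]) == 4  # one full block reconstructed
```

Each such copy is a read of reference-frame data from the frame buffer, which is why B-frame reconstruction, with two reference frames, is memory intensive.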

FIGS. 2A and 2B show frame buffer access orders from the point of view of the motion compensation module and the display device, respectively. In particular, the frame buffer access order shown for the motion compensation module proceeds macroblock 32 by macroblock, while the frame buffer access order for the display device proceeds line 34 by line. Thus, there can be a mismatch between the two access orders, which can create difficulties that are resolved by the techniques described herein.
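The mismatch between the two access orders can be made concrete by sketching the sequence of frame-buffer addresses each client generates. The dimensions, 4-pixel macroblock, and function names below are invented for illustration:

```python
def macroblock_order(width, height, mb=4):
    """Addresses in the order the motion compensation module visits them:
    one whole macroblock at a time."""
    for mby in range(0, height, mb):
        for mbx in range(0, width, mb):
            for y in range(mby, mby + mb):
                for x in range(mbx, mbx + mb):
                    yield y * width + x

def raster_order(width, height):
    """Addresses in the order the display controller visits them:
    one full scan line at a time."""
    for y in range(height):
        for x in range(width):
            yield y * width + x

a = list(macroblock_order(8, 8))
b = list(raster_order(8, 8))
assert sorted(a) == b   # the same addresses are touched...
assert a != b           # ...but in a different order
```

Both clients eventually touch every pixel, so any compression scheme placed in the memory controller must serve reads in either order, which is one reason the patent compresses independently addressable units rather than the whole frame.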

In particular, FIG. 3 shows a memory controller 36 that controls the transfer of video data to and from one or more DRAM chips 38, wherein the DRAM chips 38 can be used to implement a frame buffer such as the frame buffer 30 (FIG. 1B) already discussed. In the example shown, the memory controller 36 includes video efficiency logic 40 having a compression module 42 to process write requests 44 coming (directly or indirectly) from the motion compensation module 22, and a decompression module 46 to process read requests 48 (48a, 48b) coming (directly or indirectly) from the motion compensation module 22 and the display device 28. Thus, write requests 44 from the motion compensation module 22 and read requests 48b from the display device 28 may involve transfer of I-frame 10, B-frame 12, and P-frame 14 data (FIG. 1A), while read requests 48a from the motion compensation module 22 may involve transfer of I-frame 10 and P-frame 14 data (FIG. 1A) (for example, as reference frames).

The compression module 42 shown is configured to receive write requests 44 from the motion compensation module 22, compress the video data associated with the write requests 44, and write the compressed data to the DRAM chips 38 as appropriate. Thus, the compression module 42 can compress the I-, B-, and P-frames received from the motion compensation module 22 on a macroblock basis, wherein the compression can be transparent to the motion compensation module 22.
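The transparent compress-on-write, decompress-on-read behavior can be sketched as a software model. This is not the patent's hardware: the class, addresses, and use of `zlib` as a stand-in for the DPCM/Huffman scheme are all invented for the example. The point is that requesters see ordinary reads and writes of raw macroblocks, while fewer bytes actually move to and from DRAM.

```python
import zlib

class CompressingMemoryController:
    """Model of a memory controller that compresses macroblock writes and
    decompresses reads, transparently to the requesters."""

    def __init__(self):
        self.dram = {}  # address -> compressed bytes, standing in for DRAM

    def write(self, addr, macroblock: bytes):
        self.dram[addr] = zlib.compress(macroblock)

    def read(self, addr) -> bytes:
        return zlib.decompress(self.dram[addr])

ctrl = CompressingMemoryController()
pixels = bytes([128] * 256)  # a flat 16x16 luma macroblock
ctrl.write(0x1000, pixels)
assert ctrl.read(0x1000) == pixels           # round trip is transparent
assert len(ctrl.dram[0x1000]) < len(pixels)  # fewer bytes stored in "DRAM"
```

Because each macroblock is compressed independently, either requester can read any block in its own order without the controller having to decompress the whole frame.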

FIG. 4 shows one approach to performing the compression using a differential pulse code modulation (DPCM) process and a Huffman coding process. In particular, a macroblock row of sixteen luma values (e.g., expressed in 16 bytes) can be expressed as DPCM "slopes", which can in turn be converted into a set of DPCM "slope deltas". The Huffman coding process can then generate a code 50 that is compressed by 50%. If more than 50% compression is required, one or more trailing AC (alternating current, i.e., non-zero frequency) DCT coefficients can be truncated to produce lossy compression, wherein such truncation can be rare and may not be noticeable to the average viewer.
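The DPCM-plus-Huffman idea can be sketched in Python. This is an illustrative model only: the patent's actual code tables, fixed 50% rate target, and AC-coefficient truncation are not reproduced, and the `slope_deltas` and `huffman_code` helpers are invented for the example. It shows why second-order DPCM helps: on a smooth pixel ramp the slope deltas cluster near zero, so an entropy code spends far fewer bits than the raw 8 bits per sample.

```python
import heapq
from collections import Counter

def slope_deltas(samples):
    """Second-order DPCM: slopes (first differences), then deltas between
    consecutive slopes. Smooth runs yield deltas clustered near zero."""
    slopes = [b - a for a, b in zip(samples, samples[1:])]
    deltas = [b - a for a, b in zip(slopes, slopes[1:])]
    return samples[0], slopes[0], deltas  # values needed to reconstruct

def huffman_code(symbols):
    """Build a Huffman code (symbol -> bit string) from symbol frequencies."""
    counts = Counter(symbols)
    if len(counts) == 1:
        return {next(iter(counts)): "0"}
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(counts.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        n1, i1, c1 = heapq.heappop(heap)
        n2, i2, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (n1 + n2, min(i1, i2), merged))
    return heap[0][2]

# One macroblock row: sixteen luma values rising on a smooth ramp.
row = [100, 102, 104, 106, 108, 110, 112, 114,
       116, 118, 120, 121, 122, 123, 124, 125]
first, slope0, deltas = slope_deltas(row)
code = huffman_code(deltas)
bits = sum(len(code[d]) for d in deltas)
assert set(deltas) == {0, -1}  # the ramp collapses to near-zero deltas
assert bits < 14 * 8           # far fewer bits than 14 raw 8-bit samples
```

The decoder would run the inverse: decode the Huffman bits back to deltas, integrate once to recover the slopes, and integrate again from `first` to recover the samples.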

Returning to FIG. 3, the decompression module 46 shown is adapted to receive read requests 48 from the motion compensation module 22 and the display device 28, retrieve the stored data from the DRAM chips 38 in response to the read requests 48, and decompress the stored/retrieved data to obtain decompressed data. If the decompressed data corresponds to a read request 48a from the motion compensation module 22, the decompressed data may be transmitted to the motion compensation module 22, wherein, in the illustrated example, the decompression of the stored data is transparent to the motion compensation module 22. If the decompressed data corresponds to a read request 48b (e.g., a display refresh) from the display device 28, the decompressed data may be transmitted to the display device 28, wherein, in the example shown, the decompression of the stored data is transparent to the display device 28. The decompression process can essentially be the inverse of the compression process. The memory controller 36 may also handle other, non-playback transfers 49 to and from the DRAM chips 38.

FIG. 5 shows a compression architecture 52 in which the virtual view 54 of the memory architecture, from the point of view of the motion compensation module and the display device, is of data stored and retrieved in macroblocks. The actual view 56 of the memory architecture, however, reflects that the video data can occupy significantly less memory. The reduced memory usage can, in turn, yield significant savings in the energy consumed by memory accesses. Notably, the presented approach requires neither additional buffers in DRAM nor additional memory copy operations. Thus, implementing the presented solution in the memory controller can improve the efficiency of memory accesses while remaining transparent to all system components other than the memory architecture itself.

FIG. 6A shows a method 60 of processing write requests. The method 60 can be implemented in a memory controller as a set of logic instructions stored on a machine- or computer-readable storage medium such as RAM, read-only memory (ROM), programmable ROM (PROM), flash memory, etc.; in configurable logic such as, for example, programmable logic arrays (PLAs), field programmable gate arrays (FPGAs), or complex programmable logic devices (CPLDs); in fixed-functionality logic hardware using circuit technology such as, for example, application specific integrated circuit (ASIC), complementary metal oxide semiconductor (CMOS), or transistor-transistor logic (TTL) technology; or in any combination thereof. For example, computer program code to carry out the operations shown in the method 60 may be written in any combination of one or more programming languages, including an object-oriented programming language such as C++ or the like and conventional procedural programming languages such as the "C" programming language or similar programming languages. Alternatively, the method 60 may be implemented using any of the aforementioned circuit technologies.

The processing block 62 shown receives a write request from a motion compensation module, wherein the write request may comprise video data. Block 64 may compress the video data to obtain compressed data, wherein the compression is transparent to the motion compensation module. The compressed data may be stored in one or more memory chips at block 66.

FIG. 6B shows a method 68 of processing read requests. The method 68 may be implemented in a memory controller as a set of logic instructions stored on a machine- or computer-readable storage medium such as RAM, ROM, PROM, flash memory, etc.; in configurable logic such as, for example, PLAs, FPGAs, or CPLDs; in fixed-functionality logic hardware using circuit technology such as ASIC, CMOS, or TTL technology; or in any combination thereof. The processing block 70 shown receives a read request from a system component such as a motion compensation module or a display controller. The stored data can be retrieved from at least one of the memory chips at block 72, and block 74 can decompress the stored data. The decompressed data may then be transmitted to the requester of the data.

Turning now to FIG. 7, a computer system 76 is shown that supports video playback with memory-controller-based compression and decompression. The computer system 76 may be part of a mobile platform such as, for example, a laptop, PDA, wireless smartphone, media player, imaging device, or MID, any smart device such as a smartphone or smart tablet, etc., or any combination thereof. The computer system 76 may also be part of a fixed platform such as a personal computer (PC), smart TV, server, workstation, etc. The illustrated computer system 76 comprises one or more processors 78, a display device 80 having a display controller 82, and system memory 84, which may comprise, for example, double data rate (DDR) synchronous DRAM (SDRAM) modules (e.g., DDR3 SDRAM per JEDEC standard JESD79-3C, April 2008). The modules of the system memory 84 may be incorporated into one or more chips of a single inline memory module (SIMM), dual inline memory module (DIMM), small outline DIMM (SODIMM), etc.

The processor 78 may have a video decoder 86, an integrated memory controller 88, and one or more processor cores (not shown) to execute one or more drivers associated with a host OS (operating system) and/or application software, wherein each core may be fully functional with instruction fetch units, instruction decoders, level one (L1) cache, execution units, etc. The processor 78 could alternatively communicate with an off-chip version of the memory controller 88, also known as a northbridge, via a system bus. The processor 78 shown communicates with a platform controller hub (PCH) 90, also known as a southbridge, via a hub bus. The memory controller 88/processor 78 combination and the PCH 90 are sometimes referred to as a chipset. The PCH 90 may be coupled to a network controller 92 and/or a mass storage device 94 (e.g., hard disk drive/HDD, optical disk drive, etc.).

The memory controller 88 shown comprises efficiency logic 96, such as the efficiency logic 40 (FIG. 3) already discussed. Thus, the memory controller 88 may be configured to receive write requests from the motion compensation module (not shown) of the decoder 86, compress the video data associated with the write requests, and store the compressed data in the system memory 84. In addition, the memory controller 88 may be configured to receive read requests from the motion compensation module of the decoder 86 and from the display controller 82, retrieve the stored data from the system memory 84 in response to the read requests, and decompress the retrieved data before transmitting the decompressed data to the requester. The compression and decompression processes can be transparent to all system components except the memory controller 88 and the system memory 84.

FIG. 8 illustrates an embodiment of a system 700. In embodiments, the system 700 may be a media system, although the system 700 is not limited to this context. For example, the system 700 may be incorporated into a personal computer (PC), laptop, ultra-laptop, tablet, touchpad, portable computer, handheld computer, personal digital assistant (PDA), mobile phone, combination mobile phone/PDA, television, smart device (e.g., smartphone, smart tablet or smart TV), mobile Internet device (MID), messaging device, data communication device, etc.

In embodiments, the system 700 comprises a platform 702 coupled to a display 720. The platform 702 may receive content from a content device such as content services device(s) 730, content delivery device(s) 740, or other similar content sources. A navigation controller 750 comprising one or more navigation features may be used to interact with, for example, the platform 702 and/or the display 720. Each of these components is described in more detail below.

In embodiments, the platform 702 may comprise any combination of a chipset 705, a processor 710, memory 712, storage 714, a graphics subsystem 715, applications 716, and/or a radio 718. The chipset 705 may provide intercommunication among the processor 710, the memory 712, the storage 714, the graphics subsystem 715, the applications 716, and/or the radio 718. For example, the chipset 705 may include a storage adapter (not shown) capable of providing intercommunication with the storage 714.

The processor 710 may be implemented as a complex instruction set computer (CISC) or reduced instruction set computer (RISC) processor, an x86 instruction set compatible processor, a multi-core processor, or any other microprocessor or central processing unit (CPU). In embodiments, the processor 710 may comprise dual-core processor(s), dual-core mobile processor(s), and so forth.

The memory 712 may be implemented as a volatile memory device such as, without limitation, a random access memory (RAM), dynamic random access memory (DRAM), or static RAM (SRAM).

The storage 714 may be implemented as a non-volatile storage device such as, without limitation, a magnetic disk drive, optical disk drive, tape drive, internal storage device, attached storage device, flash memory, battery backed-up SDRAM (synchronous DRAM), and/or a network accessible storage device. In embodiments, the storage 714 may comprise technology to increase the storage performance or enhanced protection for valuable digital media when, for example, multiple hard drives are included.

The graphics subsystem 715 may perform processing of images such as still or video images for display. The graphics subsystem 715 may be, for example, a graphics processing unit (GPU) or a visual processing unit (VPU). An analog or digital interface may be used to communicatively couple the graphics subsystem 715 and the display 720. For example, the interface may be any of a High-Definition Multimedia Interface (HDMI), DisplayPort, wireless HDMI, and/or wireless HD compliant techniques. The graphics subsystem 715 could be integrated into the processor 710 or the chipset 705. The graphics subsystem 715 could alternatively be a stand-alone card communicatively coupled to the chipset 705.

The graphics and/or video processing techniques described herein may be implemented in various hardware architectures. For example, graphics and/or video functionality may be integrated within a chipset. Alternatively, a discrete graphics and/or video processor may be used. As still another embodiment, the graphics and/or video functions may be implemented by a general purpose processor, including a multi-core processor. In a further embodiment, the functions may be implemented in a consumer electronics device.

The radio 718 may comprise one or more radios capable of transmitting and receiving signals using various suitable wireless communications techniques. Such techniques may involve communications across one or more wireless networks. Exemplary wireless networks include, without limitation, wireless local area networks (WLANs), wireless personal area networks (WPANs), wireless metropolitan area networks (WMANs), cellular networks, and satellite networks. In communicating across such networks, the radio 718 may operate in accordance with one or more applicable standards in any version.

In embodiments, the display 720 may comprise any television-type monitor or display. The display 720 may comprise, for example, a computer display screen, touch screen display, video monitor, television-like device, and/or a television. The display 720 may be digital and/or analog. In embodiments, the display 720 may be a holographic display. Also, the display 720 may be a transparent surface that may receive a visual projection. Such projections may convey various forms of information, images, and/or objects. For example, such projections may be a visual overlay for a mobile augmented reality (MAR) application. Under the control of one or more software applications 716, the platform 702 may display a user interface 722 on the display 720.

In embodiments, the content services device(s) 730 may be hosted by any national, international, and/or independent service and thus accessible to the platform 702 via the Internet, for example. The content services device(s) 730 may be coupled to the platform 702 and/or to the display 720. The platform 702 and/or the content services device(s) 730 may be coupled to a network 760 to communicate (e.g., send and/or receive) media information to and from the network 760. The content delivery device(s) 740 also may be coupled to the platform 702 and/or to the display 720.

In embodiments, the content services device(s) 730 may comprise a cable television box, personal computer, network, telephone, Internet-enabled device or appliance capable of delivering digital information and/or content, and any other similar device capable of unidirectionally or bidirectionally communicating content between content providers and the platform 702 and/or the display 720, via the network 760 or directly. It will be appreciated that the content may be communicated unidirectionally and/or bidirectionally to and from any one of the components in the system 700 and a content provider via the network 760. Examples of content may include any media information including, for example, video, music, medical and gaming information, and so forth.

The content services device(s) 730 receive content such as cable television programming including media information, digital information, and/or other content. Examples of content providers may include any cable or satellite television providers, radio providers, or Internet content providers. The provided examples are not meant to limit embodiments of the invention.

In embodiments, the platform 702 may receive control signals from the navigation controller 750 having one or more navigation features. The navigation features of the controller 750 may be used to interact with the user interface 722, for example. In embodiments, the navigation controller 750 may be a pointing device, which may be a computer hardware component (specifically a human interface device) that allows a user to input spatial (e.g., continuous and multi-dimensional) data into a computer. Many systems such as graphical user interfaces (GUIs), televisions, and monitors allow the user to control and provide data to the computer or television using physical gestures.

Movements of the navigation features of the controller 750 may be echoed on a display (e.g., the display 720) by movements of a pointer, cursor, focus ring, or other visual indicators displayed on the display. For example, under the control of software applications 716, the navigation features located on the navigation controller 750 may be mapped to virtual navigation features displayed on the user interface 722, for example. In embodiments, the controller 750 may not be a separate component but integrated into the platform 702 and/or the display 720. Embodiments, however, are not limited to the elements or in the context shown or described herein.

In embodiments, drivers (not shown) may comprise technology to enable users to instantly turn on and off the platform 702, like a television, with the touch of a button after initial boot-up, when enabled. Program logic may allow the platform 702 to stream content to media adaptors or other content services device(s) 730 or content delivery device(s) 740 when the platform is turned "off." In addition, the chipset 705 may comprise hardware and/or software support for, for example, 5.1 surround sound audio and/or high definition 7.1 surround sound audio. Drivers may include a graphics driver for integrated graphics platforms. In embodiments, the graphics driver may comprise a peripheral component interconnect (PCI) Express graphics card.

In various embodiments, one or more of the components shown in system 700 may be integrated. For example, the platform 702 and the content services device(s) 730 may be integrated, or the platform 702 and the content delivery device(s) 740 may be integrated, or the platform 702, the content services device(s) 730, and the content delivery device(s) 740 may be integrated, for example. In various embodiments, the platform 702 and the display device 720 may be an integrated unit. The display device 720 and the content services device(s) 730 may be integrated, or the display device 720 and the content delivery device(s) 740 may be integrated, for example. These examples are not intended to limit the invention.

In various embodiments, the system 700 may be implemented as a wireless system, a wired system, or a combination of both. When implemented as a wireless system, the system 700 may comprise components and interfaces suitable for communicating over a wireless shared medium, such as one or more antennas, transmitters, receivers, transceivers, amplifiers, filters, control logic, and so forth. An example of a wireless shared medium may include portions of a wireless spectrum, such as the radio frequency (RF) spectrum, and so forth. When implemented as a wired system, the system 700 may comprise components and interfaces suitable for communicating over a wired communications medium, such as input/output (I/O) adapters, physical connectors to connect the I/O adapter with a corresponding wired communications medium, a network interface card (NIC), a disc controller, a video controller, an audio controller, and so forth. Examples of wired communications media may include a wire, a cable, metal leads, a printed circuit board (PCB), a backplane, a switch fabric, semiconductor material, twisted-pair wire, co-axial cable, fiber optics, and so forth.

Platform 702 may form one or more logical or physical channels for transmitting information. The information may include media information and control information. Media information may refer to any data representing content intended for the user. Examples of content may include, for example, voice conversation data, video conferencing, video streaming, email message (“email”), voicemail message, alphanumeric characters, graphics, image, video, text, etc. Voice conversation data may be, for example, voice information, periods of silence, background noise, comfort noise, tones, etc. Control information may refer to any data representing commands or control words intended for an automated system. For example, control information can be used to direct media information through a system or to send commands to a node in order to process media information in a given way. Embodiments, however, are not limited to the elements or context shown or described in FIG. 8.

As described above, the system 700 may be embodied in varying physical styles or form factors. FIG. 9 shows embodiments of a small form factor device 800 in which the system 700 may be embodied. In embodiments, for example, the device 800 may be implemented as a mobile computing device having wireless capabilities. A mobile computing device may refer to any device having a processing system and a mobile power source or supply, such as one or more batteries, for example.

As described above, examples of a mobile computing device may include a personal computer (PC), laptop computer, ultra-laptop computer, tablet, touch pad, portable computer, handheld computer, palmtop computer, personal digital assistant (PDA), cellular telephone, combination cellular telephone/PDA, television, smart device (e.g., smartphone, smart tablet, or smart television), mobile internet device (MID), messaging device, data communication device, and so forth.

Examples of a mobile computing device may also include computers that are arranged to be worn by a person, such as wrist computers, finger computers, ring computers, eyeglass computers, belt-clip computers, arm-band computers, shoe computers, clothing computers, and other wearable computers. In embodiments, for example, a mobile computing device may be implemented as a smartphone capable of executing computer applications, as well as voice communications and/or data communications. Although some embodiments may be described with a mobile computing device implemented as a smartphone by way of example, it may be appreciated that other embodiments may be implemented using other wireless mobile computing devices as well. The embodiments are not limited in this context.

As shown in FIG. 9, the device 800 may comprise a housing 802, a display 804, an input/output (I/O) device 806, and an antenna 808. The device 800 may also comprise navigation features 812. The display 804 may comprise any suitable display unit for displaying information appropriate for a mobile computing device. The I/O device 806 may comprise any suitable I/O device for entering information into a mobile computing device. Examples of the I/O device 806 may include an alphanumeric keyboard, a numeric keypad, a touch pad, input keys, buttons, switches, rocker switches, microphones, speakers, a voice recognition device and software, and so forth. Information may also be entered into the device 800 by way of a microphone. Such information may be digitized by a voice recognition device. The embodiments are not limited in this context.

Various embodiments may be implemented using hardware elements, software elements, or a combination of both. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASICs), programmable logic devices (PLDs), digital signal processors (DSPs), field programmable gate arrays (FPGAs), logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. Examples of software may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (APIs), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds, and other design or performance constraints.

One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which, when read by a machine, cause the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

The techniques described herein may therefore provide a feed-forward control system that guarantees both real-time operation of the consumer video pipeline and dynamic updating of the operating pipeline to deliver optimal visual perceptual quality and viewing experience. In particular, a discrete control system for the video pipeline can dynamically adapt operating points in order to optimize a global configuration of the interacting component modules that together determine perceptual quality. In a serial configuration, the perceptual quality analysis module may be placed before the video processing pipeline, and the parameters determined for the downstream processing pipeline may be used for the same frame. In the case of distributed computation of the quality analysis block, or when perceptual quality analysis must be performed at intermediate points in the pipeline, the parameters determined using a given frame may be applied to the next frame in order to guarantee real-time operation. Distributed computation is sometimes preferable for reduced complexity, since some elements used to compute perceptual quality may already be computed in the post-processing pipeline and can be reused. The illustrated approaches may also be compatible with closed-loop control, wherein the perceptual quality analysis is repeated at the output of the video processing pipeline to evaluate the output quality, and that evaluation is also used by the control mechanism.
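The serial versus distributed timing described above can be illustrated with a minimal sketch. All function names and the quality metric here are hypothetical illustrations, not part of the patent; the point is only the frame on which the analysis parameters are applied.

```python
def analyze_quality(frame):
    """Hypothetical perceptual-quality analysis; returns pipeline parameters."""
    return {"sharpen": 0.2 if sum(frame) / len(frame) > 100 else 0.5}

def process(frame, params):
    """Hypothetical post-processing stage driven by those parameters."""
    return [min(255, int(p * (1 + params["sharpen"]))) for p in frame]

def run_serial(frames):
    # Serial configuration: analysis runs before the pipeline, so every
    # frame is processed with parameters measured on that same frame.
    return [process(f, analyze_quality(f)) for f in frames]

def run_distributed(frames):
    # Distributed configuration: analysis overlaps the pipeline, so the
    # parameters measured on frame N are applied to frame N+1, preserving
    # real-time operation at the cost of one frame of parameter latency.
    out, params = [], {"sharpen": 0.0}   # neutral defaults for frame 0
    for f in frames:
        out.append(process(f, params))
        params = analyze_quality(f)      # ready in time for the next frame
    return out
```

Closed-loop control, as mentioned above, would additionally feed an analysis of `process()`'s output back into the parameter update.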

Embodiments of the present invention are applicable for use with all types of semiconductor integrated circuit ("IC") chips. Examples of these IC chips include, but are not limited to, processors, controllers, chipset components, programmable logic arrays (PLAs), memory chips, network chips, and the like. In addition, in some of the drawings, signal conductor lines are represented with lines. Some may be different, to indicate more constituent signal paths; have a number label, to indicate a number of constituent signal paths; and/or have arrows at one or more ends, to indicate primary information flow direction. This, however, should not be construed in a limiting manner. Rather, such added detail may be used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit. Any represented signal lines, whether or not they carry additional information, may actually comprise one or more signals that may travel in multiple directions and may be implemented with any suitable type of signal scheme, for example, digital or analog lines implemented with differential pairs, optical fiber lines, and/or single-ended lines.

Example sizes/models/values/ranges may have been given, although embodiments of the present invention are not limited to the same. As manufacturing techniques (e.g., photolithography) mature over time, it is expected that devices of smaller size could be manufactured. In addition, well-known power/ground connections to IC chips and other components may or may not be shown within the figures, for simplicity of illustration and discussion, and so as not to obscure certain aspects of the embodiments of the invention. Further, arrangements may be shown in block diagram form in order to avoid obscuring embodiments of the invention, and also in view of the fact that specifics with respect to implementation of such block diagram arrangements are highly dependent upon the platform within which the embodiment is to be implemented, i.e., such specifics should be well within the purview of one skilled in the art. Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the invention, it should be apparent to one skilled in the art that embodiments of the invention can be practiced without, or with variation of, these specific details. The description is thus to be regarded as illustrative instead of limiting.

Some embodiments may be implemented, for example, using a machine-readable or tangible computer-readable medium or article which may store an instruction or a set of instructions that, if executed by a machine, may cause the machine to perform a method and/or operations in accordance with the embodiments. Such a machine may include, for example, any suitable processing platform, computing platform, computing device, processing device, computing system, processing system, computer, processor, or the like, and may be implemented using any suitable combination of hardware and/or software. The machine-readable medium or article may include, for example, any suitable type of memory unit, memory device, memory article, memory medium, storage device, storage article, storage medium, and/or storage unit, for example, memory, removable or non-removable media, erasable or non-erasable media, writeable or re-writeable media, digital or analog media, hard disk, floppy disk, Compact Disk Read Only Memory (CD-ROM), Compact Disk Recordable (CD-R), Compact Disk Rewriteable (CD-RW), optical disk, magnetic media, magneto-optical media, removable memory cards or disks, various types of Digital Versatile Disk (DVD), a tape, a cassette, or the like. The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, encrypted code, and the like, implemented using any suitable high-level, low-level, object-oriented, visual, compiled, and/or interpreted programming language.

Unless specifically stated otherwise, it may be appreciated that terms such as "processing," "computing," "calculating," "determining," or the like refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulates and/or transforms data represented as physical quantities (e.g., electronic) within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers, or other such information storage, transmission, or display devices. The embodiments are not limited in this context.

The term "coupled" may be used herein to refer to any type of relationship, direct or indirect, between the components in question, and may apply to electrical, mechanical, fluid, optical, electromagnetic, or other connections. In addition, the terms "first," "second," etc. may be used herein only to facilitate discussion, and carry no particular temporal or chronological significance unless otherwise indicated.

Those skilled in the art will appreciate from the foregoing description that the broad techniques of the embodiments of the present invention can be implemented in a variety of forms. Therefore, while the embodiments of this invention have been described in connection with particular examples thereof, the true scope of the embodiments of the invention should not be so limited, since other modifications will become apparent to the skilled practitioner upon a study of the drawings, specification, and following claims.

Claims (21)

1. A computer-implemented method of operating a memory controller comprising a compression module, comprising the steps of:
receiving, using the compression module, a write request from the motion compensation module, wherein the write request contains video data and is received by the memory controller either directly or indirectly from the motion compensation module;
compressing, using the compression module, the video data to obtain compressed data, wherein the compression of the video data is transparent to the motion compensation module;
storing, using the compression module, the compressed data in one or more memory chips;
receiving a read request using the decompression module, said read request arriving either directly or indirectly from the motion compensation module or the display device;
retrieving, using the decompression module, the stored data from at least one of the one or more memory chips in response to the read request; and
decompressing, using the decompression module, the stored data to obtain decompressed data.
2. The method of claim 1, wherein the read request is received from the motion compensation module, the method further comprising transmitting the decompressed data to the motion compensation module, and unpacking the stored data is transparent to the motion compensation module.
3. The method of claim 1, wherein the read request is received from the controller of the display device, the method further comprising transmitting the decompressed data to the controller of the display device, and unpacking the stored data is transparent to the controller of the display device.
4. The method of claim 1, further comprising using one or more of a differential pulse code modulation process and a Huffman process to compress the video data and decompress the stored data.
5. A memory controller containing:
a compression module configured to
receiving a write request from the motion compensation module, wherein the write request contains video data and is received by the memory controller either directly or indirectly from the motion compensation module,
performing compression of the video data to obtain compressed data, and
storing the compressed data in at least one of the one or more memory chips; and
a decompression module configured to
receive a read request, said read request being received either directly or indirectly from the motion compensation module or the display device;
restore the stored data from at least one of the one or more memory chips in response to the read request; and
perform decompression of the stored data to obtain decompressed data.
6. The memory controller according to claim 5, wherein the video compression is transparent to the motion compensation module.
7. The memory controller according to claim 5, wherein the read request is to be received from the motion compensation module, wherein the decompression module is configured to transmit the decompressed data to the motion compensation module, and the decompression of the stored data is transparent to the motion compensation module.
8. The memory controller according to claim 5, wherein the read request is to be received from the display device controller, wherein the decompression module is configured to transmit the decompressed data to the display device controller, and the decompression of the stored data is transparent to the display device controller.
9. The memory controller according to claim 5, wherein the memory controller is located on a chip other than one or more memory chips.
10. The memory controller of claim 9, wherein the chip other than one or more memory chips comprises a motion compensation module.
11. The memory controller according to claim 5, wherein the compression module is configured to use one or more of a differential pulse code modulation process and a Huffman process to perform video compression.
12. A data compression system comprising:
display device;
one or more memory chips; and
a processor chip comprising a motion compensation module and a memory controller, the memory controller comprising a compression module configured to
receiving a write request from the motion compensation module, wherein the write request contains video data and is received either directly or indirectly from the motion compensation module,
performing compression of the video data to obtain compressed data, and
storing the compressed data in at least one of the one or more memory chips, and
a decompression module configured to
receive a read request, wherein said read request is received either directly or indirectly from the motion compensation module or the display device;
restoring stored data from at least one of the one or more memory chips in response to a read request; and
performing decompression of the stored data to obtain the decompressed data.
13. The system of claim 12, wherein the video compression is transparent to the motion compensation module.
14. The system of claim 12, wherein the read request is to be received from the motion compensation module, and the decompression module is configured to transmit the decompressed data to the motion compensation module, while unpacking the stored data is transparent to the motion compensation module.
15. The system of claim 12, further comprising a display device controller coupled to the display device and the processor chip, wherein the read request is to be received from the display device controller, and the decompression module is configured to transmit the decompressed data to the display device controller, wherein the decompression of the stored data is transparent to the display device controller.
16. The system of claim 12, wherein the compression module is configured to use one or more of a differential pulse code modulation process and a Huffman process to perform video compression.
17. A computer-implemented method of operating a memory controller comprising a compression module, comprising the steps of:
receiving, using the compression module, a write request from the motion compensation module, wherein the write request contains video data and is received by the memory controller either directly or indirectly from the motion compensation module;
compressing, using the compression module, the video data to obtain compressed data;
storing, using the compression module, the compressed data in one or more memory chips;
receiving a read request, wherein said read request is received either directly or indirectly from the motion compensation module or the display device;
retrieving the stored data from at least one of the one or more memory chips in response to the read request; and
decompressing the stored data to obtain decompressed data.
18. The method of claim 17, wherein the video compression is transparent to the motion compensation module.
19. The method of claim 17, wherein the read request is received from the motion compensation module, further comprising the step of transmitting the decompressed data to the motion compensation module, wherein unpacking the stored data is transparent to the motion compensation module.
20. The method according to claim 17, wherein the read request is received from the display device controller, further comprising the step of transmitting the decompressed data to the display device controller, wherein unpacking the stored data is transparent to the display device controller.
21. The method of claim 17, further comprising the step of using one or more of a differential pulse code modulation process and a Huffman process to perform the video compression.
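Claims 4, 11, 16, and 21 name a differential pulse code modulation (DPCM) process and a Huffman process as the compression techniques. The sketch below illustrates that combination on a row of pixel samples; it is an illustrative reconstruction only, since the claims do not specify an implementation, and all function names are the author's own.

```python
import heapq
from collections import Counter

def dpcm_encode(samples):
    """Differential pulse code modulation: store each sample as the
    difference from its predecessor, which clusters values near zero."""
    prev, deltas = 0, []
    for s in samples:
        deltas.append(s - prev)
        prev = s
    return deltas

def dpcm_decode(deltas):
    """Invert DPCM by accumulating the differences."""
    out, prev = [], 0
    for d in deltas:
        prev += d
        out.append(prev)
    return out

def huffman_code(symbols):
    """Build a Huffman code {symbol: bitstring} from symbol frequencies."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Heap entries carry a unique tiebreaker so tuples never compare symbols.
    heap = [(n, i, sym) for i, (sym, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:                     # merge two least-frequent trees
        n1, _, t1 = heapq.heappop(heap)
        n2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (n1 + n2, tie, (t1, t2)))
        tie += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):          # internal node: descend
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                                # leaf: record its code
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

# A slowly varying luma row compresses well: DPCM yields mostly small
# deltas, which the Huffman code then maps to short bit patterns.
row = [100, 101, 101, 102, 104, 104, 103, 103]
deltas = dpcm_encode(row)
codes = huffman_code(deltas)
bits = "".join(codes[d] for d in deltas)
assert dpcm_decode(deltas) == row        # lossless round trip
assert len(bits) < 8 * len(row)          # fewer bits than raw 8-bit samples
```

Because both stages are lossless, the decompressed data returned to the motion compensation module or display controller is bit-identical to what was written, which is what makes the scheme transparent to those modules.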
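The transparency property that runs through the claims, where the compression module intercepts writes and the decompression module intercepts reads without the requester's knowledge, can be modeled with a toy memory controller. This is a behavioral sketch only: `zlib` stands in for the DPCM + Huffman pipeline named in the claims, and the class and address names are hypothetical.

```python
import zlib

class CompressingMemoryController:
    """Toy model of the claimed memory controller: writes are compressed
    before reaching 'DRAM', reads are decompressed on the way back, and
    the requester (motion compensation module or display controller)
    never observes the difference."""

    def __init__(self):
        self._dram = {}                              # address -> compressed bytes

    def write(self, addr, data: bytes):
        self._dram[addr] = zlib.compress(data)       # compression module path

    def read(self, addr) -> bytes:
        return zlib.decompress(self._dram[addr])     # decompression module path

mc = CompressingMemoryController()
frame = bytes([128] * 4096)        # flat reference frame: highly compressible
mc.write(0x1000, frame)
assert mc.read(0x1000) == frame                 # transparent to the requester
assert len(mc._dram[0x1000]) < len(frame)       # fewer DRAM bits stored/moved
```

The power saving claimed for motion compensation and display refresh follows from the second assertion: fewer bits cross the memory interface per reference-frame fetch or refresh scan.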
RU2014126348/08A 2011-12-21 2011-12-21 Dram compression scheme to reduce power consumption in motion compensation and display refresh RU2599959C2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2011/066556 WO2013095448A1 (en) 2011-12-21 2011-12-21 Dram compression scheme to reduce power consumption in motion compensation and display refresh

Publications (2)

Publication Number Publication Date
RU2014126348A RU2014126348A (en) 2016-01-27
RU2599959C2 true RU2599959C2 (en) 2016-10-20

Family

ID=48638976

Family Applications (1)

Application Number Title Priority Date Filing Date
RU2014126348/08A RU2599959C2 (en) 2011-12-21 2011-12-21 Dram compression scheme to reduce power consumption in motion compensation and display refresh

Country Status (9)

Country Link
US (1) US9268723B2 (en)
EP (1) EP2795896A4 (en)
JP (1) JP5639144B2 (en)
KR (2) KR101605047B1 (en)
CN (1) CN103179393B (en)
IN (1) IN2014CN03371A (en)
RU (1) RU2599959C2 (en)
TW (1) TWI524326B (en)
WO (1) WO2013095448A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IN2014CN03371A (en) 2011-12-21 2015-07-03 Intel Corporation
US20140355665A1 (en) * 2013-05-31 2014-12-04 Altera Corporation Adaptive Video Reference Frame Compression with Control Elements
US9864536B2 (en) 2013-10-24 2018-01-09 Qualcomm Incorporated System and method for conserving power consumption in a memory system
US20150121111A1 (en) * 2013-10-24 2015-04-30 Qualcomm Incorporated System and method for providing multi-user power saving codebook optmization
US10080028B2 (en) 2014-11-26 2018-09-18 Samsung Display Co., Ltd. System and method of compensating for image compression errors
KR20170053373A (en) 2015-11-06 2017-05-16 삼성전자주식회사 Memory Device and Memory System Performing Request-based Refresh and Operating Method of Memory Device
US10168909B1 (en) * 2016-03-29 2019-01-01 Amazon Technologies, Inc. Compression hardware acceleration

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2119727C1 (en) * 1993-03-01 1998-09-27 Сони Корпорейшн Methods and devices for processing of transform coefficients, methods and devices for reverse orthogonal transform of transform coefficients, methods and devices for compression and expanding of moving image signal, record medium for compressed signal which represents moving image
US6157740A (en) * 1997-11-17 2000-12-05 International Business Machines Corporation Compression/decompression engine for enhanced memory storage in MPEG decoder
EP0782345B1 (en) * 1995-12-27 2003-03-05 Thomson Consumer Electronics, Inc. Memory management for a video decoder

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08116539A (en) 1994-10-17 1996-05-07 Hitachi Ltd Dynamic image coder and dynamic image coding method
US5812791A (en) * 1995-05-10 1998-09-22 Cagent Technologies, Inc. Multiple sequence MPEG decoder
US5668599A (en) * 1996-03-19 1997-09-16 International Business Machines Corporation Memory management for an MPEG2 compliant decoder
US6278735B1 (en) 1998-03-19 2001-08-21 International Business Machines Corporation Real-time single pass variable bit rate control strategy and encoder
US6628714B1 (en) 1998-12-18 2003-09-30 Zenith Electronics Corporation Down converting MPEG encoded high definition sequences to lower resolution with reduced memory in decoder loop
US6510178B1 (en) 1999-12-15 2003-01-21 Zenith Electronics Corporation Compensating for drift in the down conversion of high definition sequences to lower resolution sequences
US20020176507A1 (en) * 2001-03-26 2002-11-28 Mediatek Inc. Method and an apparatus for reordering a decoded picture sequence using virtual picture
KR100598093B1 (en) * 2003-01-29 2006-07-07 삼성전자주식회사 Apparatus and method with low memory bandwidth for video data compression
KR100771401B1 (en) * 2005-08-01 2007-10-30 (주)펄서스 테크놀러지 Computing circuits and method for running an mpeg-2 aac or mpeg-4 aac audio decoding algorithm on programmable processors
JP4384130B2 (en) * 2006-03-28 2009-12-16 株式会社東芝 Video decoding method and apparatus
JP5245794B2 (en) 2008-12-15 2013-07-24 富士通株式会社 Image processing apparatus and method
JP5504885B2 (en) 2009-12-25 2014-05-28 富士通株式会社 Image processing apparatus and image processing method
IN2014CN03371A (en) 2011-12-21 2015-07-03 Intel Corporation

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2119727C1 (en) * 1993-03-01 1998-09-27 Сони Корпорейшн Methods and devices for processing of transform coefficients, methods and devices for reverse orthogonal transform of transform coefficients, methods and devices for compression and expanding of moving image signal, record medium for compressed signal which represents moving image
EP0782345B1 (en) * 1995-12-27 2003-03-05 Thomson Consumer Electronics, Inc. Memory management for a video decoder
US6157740A (en) * 1997-11-17 2000-12-05 International Business Machines Corporation Compression/decompression engine for enhanced memory storage in MPEG decoder

Also Published As

Publication number Publication date
RU2014126348A (en) 2016-01-27
KR20140099501A (en) 2014-08-12
CN103179393A (en) 2013-06-26
CN103179393B (en) 2016-08-24
IN2014CN03371A (en) 2015-07-03
KR20150081373A (en) 2015-07-13
US9268723B2 (en) 2016-02-23
EP2795896A1 (en) 2014-10-29
WO2013095448A1 (en) 2013-06-27
JP5639144B2 (en) 2014-12-10
JP2013132056A (en) 2013-07-04
KR101605047B1 (en) 2016-03-21
TW201404158A (en) 2014-01-16
EP2795896A4 (en) 2015-05-20
TWI524326B (en) 2016-03-01
US20140204105A1 (en) 2014-07-24


Legal Events

Date Code Title Description
MM4A The patent is invalid due to non-payment of fees

Effective date: 20171222