WO2006100532A1 - Processing device, system and method for multi-processor implementation with common content visualization - Google Patents

Processing device, system and method for multi-processor implementation with common content visualization Download PDF

Info

Publication number
WO2006100532A1
Authority
WO
WIPO (PCT)
Prior art keywords
image data
processing module
display
interface
additional
Prior art date
Application number
PCT/IB2005/000740
Other languages
French (fr)
Inventor
Klaus Kunze
Volker Schütz
Jens König
Original Assignee
Nokia Corporation
Priority date
Filing date
Publication date
Application filed by Nokia Corporation filed Critical Nokia Corporation
Priority to PCT/IB2005/000740 priority Critical patent/WO2006100532A1/en
Priority to CNA200580049219XA priority patent/CN101147120A/en
Priority to EP05718242A priority patent/EP1861773A1/en
Publication of WO2006100532A1 publication Critical patent/WO2006100532A1/en

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G3/2092 Details of a display terminals using a flat panel, the details relating to the control arrangement of the display terminal and to the interfaces thereto
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2300/00 Aspects of the constitution of display devices
    • G09G2300/04 Structural and physical details of display devices
    • G09G2300/0404 Matrix technologies
    • G09G2300/0408 Integration of the drivers onto the display substrate
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2330/00 Aspects of power supply; Aspects of display protection and defect management
    • G09G2330/02 Details of power systems and of start or stop of display operation
    • G09G2330/021 Power management, e.g. power saving
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/39 Control of the bit-mapped memory
    • G09G5/395 Arrangements specially adapted for transferring the contents of the bit-mapped memory to the screen
    • G09G5/397 Arrangements specially adapted for transferring the contents of two or more bit-mapped memories to the screen simultaneously, e.g. for mixing or overlay

Definitions

  • Fig. 1 schematically illustrates an example block diagram for a portable CE device embodied exemplarily on the basis of a cellular terminal device
  • Fig. 2a schematically illustrates a block diagram of a system on a chip (SoC) connected to a display
  • Fig. 2b schematically illustrates a first state of the art implementation on the basis of a block diagram for connecting two systems on a chip (SoCs) to a common display;
  • Fig. 2c schematically illustrates further two state of the art implementations on the basis of a block diagram for connecting two systems on a chip (SoCs) to a common display
  • Fig. 3a schematically illustrates a basic block diagram of a first embodiment of the invention
  • Fig. 3b schematically illustrates a block diagram of a smart display module according to an embodiment of the invention
  • Fig. 4 schematically illustrates a block diagram of a system on a chip (SoC) implementation enabling connectivity of several systems on a chip (SoCs) to a common display according to an embodiment of the present invention.
  • Fig.5 schematically illustrates a flow chart illustrating operational steps of a method in accordance with the present invention.
  • Fig. 1 depicts a typical mobile device according to an embodiment of the present invention.
  • The mobile device 10 shown in Fig. 1 is capable of cellular data and voice communications. It should be noted that the present invention is not limited to this specific embodiment, which is presented by way of illustration as one embodiment out of a multiplicity of embodiments.
  • the mobile device 10 includes a (main) microprocessor or microcontroller 100 as well as components associated with the microprocessor controlling the operation of the mobile device.
  • These components include a display controller 130 connecting to a display module 135, a non-volatile memory 140, a volatile memory 150 such as a random access memory (RAM), an audio input/output (I/O) interface 160 connecting to a microphone 161, a speaker 162 and/or a headset 163, a keypad controller 170 connected to a keypad 175 or keyboard, any auxiliary input/output (I/O) interface 200, and a short-range communications interface 180.
  • Such a device also typically includes other device subsystems shown generally at 190.
  • the mobile device 10 may communicate over a voice network and/or may likewise communicate over a data network, such as any public land mobile networks (PLMNs) in form of e.g. digital cellular networks, especially GSM (global system for mobile communication) or UMTS (universal mobile telecommunications system).
  • the voice and/or data communication is operated via an air interface, i.e. a cellular communication interface subsystem in cooperation with further components (see above) to a base station (BS) or node B (not shown) being part of a radio access network (RAN) of the infrastructure of the cellular network.
  • the digital signal processor (DSP) 120 sends communication signals 124 to the transmitter (TX) 122 and receives communication signals 125 from the receiver (RX) 121.
  • the digital signal processor 120 also provides for receiver control signals 126 and transmitter control signal 127.
  • the gain levels applied to communication signals in the receiver (RX) 121 and transmitter (TX) 122 may be adaptively controlled through automatic gain control algorithms implemented in the digital signal processor (DSP) 120.
  • Other transceiver control algorithms could also be implemented in the digital signal processor (DSP) 120 in order to provide more sophisticated control of the transceiver.
  • a single local oscillator (LO) 128 may be used in conjunction with the transmitter (TX) 122 and receiver (RX) 121.
  • a plurality of local oscillators 128 can be used to generate a plurality of corresponding frequencies.
  • Although the antenna 129 depicted in Fig. 1 may be a diversity antenna system, the mobile device 10 could alternatively be used with a single antenna structure for signal reception as well as transmission.
  • Information, which includes both voice and data information, is communicated to and from the cellular interface 110 via a data link to the digital signal processor (DSP) 120.
  • the mobile device 10 may then send and receive communication signals, including both voice and data signals, over the wireless network.
  • Signals received by the antenna 129 from the wireless network are routed to the receiver 121, which provides for such operations as signal amplification, frequency down conversion, filtering, channel selection, and analog to digital conversion. Analog to digital conversion of a received signal allows more complex communication functions, such as digital demodulation and decoding, to be performed using the digital signal processor (DSP) 120.
  • signals to be transmitted to the network are processed, including modulation and encoding, for example, by the digital signal processor (DSP) 120 and are then provided to the transmitter 122 for digital to analog conversion, frequency up conversion, filtering, amplification, and transmission to the wireless network via the antenna 129.
  • The microprocessor / microcontroller (µC) 100, which may also be designated as a device platform microprocessor, manages the functions of the mobile device 10.
  • Operating system software 149 used by the processor 100 is preferably stored in a persistent store such as the non-volatile memory 140, which may be implemented, for example, as a Flash memory, battery backed-up RAM, any other non-volatile storage technology, or any combination thereof.
  • The non-volatile memory 140 includes a plurality of high-level software application programs or modules, such as a voice communication software application 142, a data communication software application 141, an organizer module (not shown), or any other type of software module (not shown).
  • These modules are executed by the processor 100 and provide a high-level interface between a user of the mobile device and the mobile device 10.
  • This interface typically includes a graphical component provided through the display module 135 controlled by a display controller 130 and input/output components provided through a keypad 175 connected via a keypad controller 170 to the processor 100, an auxiliary input/output (I/O) interface 200, and/or a short-range (SR) communication interface 180.
  • The auxiliary I/O interface 200 comprises in particular a USB (universal serial bus) interface, a serial interface, an MMC (multimedia card) interface and related interface technologies/standards, and any other standardized or proprietary data communication bus technology, whereas the short-range communication interface 180 is a radio frequency (RF) low-power interface including in particular WLAN (wireless local area network) and/or Bluetooth communication technology, or an IrDA (Infrared Data Association) interface.
  • The RF low-power interface technology referred to herein should especially be understood to include any IEEE 802.xx standard technology, a description of which is obtainable from the Institute of Electrical and Electronics Engineers.
  • the auxiliary I/O interface 200 as well as the short-range communication interface 180 may each represent one or more interfaces supporting one or more input/output interface technologies and communication interface technologies, respectively.
  • The operating system, specific device software applications or modules, or parts thereof, may be temporarily loaded into a volatile store 150 such as a random access memory (typically implemented on the basis of DRAM (dynamic random access memory) technology) for faster operation.
  • Received communication signals may also be temporarily stored to volatile memory 150, before permanently writing them to a file system located in the non-volatile memory 140 or to any mass storage, preferably detachably connected via the auxiliary I/O interface, for storing data.
  • An exemplary software application module of the mobile device 10 is a personal information manager application providing PDA functionality, typically including a contact manager, a calendar, a task manager, and the like. Such a personal information manager is executed by the processor 100, may have access to the components of the mobile device 10, and may interact with other software application modules. For instance, interaction with the voice communication software application allows for managing phone calls, voice mails, etc., and interaction with the data communication software application enables managing SMS (Short Message Service), MMS (Multimedia Messaging Service), e-mail communications and other data transmissions.
  • The non-volatile memory 140 preferably provides a file system to facilitate permanent storage of data items on the device, including particularly calendar entries, contacts, etc.
  • The ability for data communication with networks, e.g. via the cellular interface, the short-range communication interface, or the auxiliary I/O interface, enables upload, download, and synchronization via such networks.
  • the application modules 141 to 149 represent device functions or software applications that are configured to be executed by the processor 100.
  • a single processor manages and controls the overall operation of the mobile device as well as all device functions and software applications.
  • Such a concept is applicable for today's mobile devices.
  • In particular, the implementation of enhanced multimedia functionality, including for example the reproduction of streaming video, the manipulation of digital images and of video sequences captured by an integrated or detachably connected digital camera, but also gaming applications with sophisticated graphics, drives the requirement for computational power.
  • One way to deal with the requirement for computational power, which has been pursued in the past, is to increase computational power by implementing powerful and universal processor cores.
  • Another approach for providing computational power is to implement two or more independent processor cores, which is a well known methodology in the art.
  • a universal processor is designed for carrying out a multiplicity of different tasks without specialization to a preselection of distinct tasks
  • a multi-processor arrangement may include one or more universal processors and one or more specialized processors adapted for processing a predefined set of tasks. Nevertheless, the implementation of several processors within one device, especially a mobile device such as mobile device 10, requires traditionally a complete and sophisticated re-design of the components.
  • A typical processing device comprises a number of integrated circuits that perform different tasks.
  • These integrated circuits may include especially microprocessor, memory, universal asynchronous receiver-transmitters (UARTs), serial/parallel ports, direct memory access (DMA) controllers, and the like.
  • FIG. 2a illustrates schematically the starting point, where an original system-on-a-chip (SoC) 320 is connected via a display interface 305 to a display 300.
  • This implementation is to be extended by an additional system-on-a-chip (SoC).
  • Two principal approaches may be applicable.
  • The first approach provides for an additional display that is controlled and used by an additional SoC, besides the original SoC connected to the original (primary) display.
  • This approach requires only minor modifications to an existing hardware design to enable implementation of the additional SoC connected to its own additional display.
  • However, an additional (secondary) display is not necessarily wanted in the device design, and the usage of the additional (secondary) display may require adaptation of one or more software applications, the user interface, and the operating system.
  • the second approach provides for a common display used for displaying data by both an original SoC and an additional SoC.
  • A multiplexer (MuX) 310 is interposed between the display 300 with display interface 305 and the original SoC 320 as well as the additional SoC 330, each connected via its own display interface 305 to the multiplexer (MuX) 310.
  • The multiplexer (MuX) 310 is operated to control the switching between the display 300 with display interface 305 and one of the SoCs, i.e. the original SoC 320 with display interface 305 or the additional SoC 330 with display interface 305, respectively.
  • The multiplexer solution as presented in Fig. 2b involves serious drawbacks. In particular, overlaying display data provided by both SoCs in parallel, in order to obtain display content which contains display data contributions from both SoCs, is difficult if not impossible. Random read access to display data provided by the other SoC is also difficult if not impossible.
  • Moreover, a control entity has to be implemented which controls bus arbitration. Those skilled in the art will appreciate that such a control entity has to be partly implemented in both SoCs, and interaction between both SoCs is required to enable the decision on display ownership at each moment.
  • the original SoC 320 and the additional SoC 330 are interconnected via a data interface 315, wherein either the original SoC 320 or the additional SoC 330 is connected to the display 300 by the means of display interface 305.
  • the display 300 is connected to the additional SoC 330 via respective display interfaces 305.
  • The additional SoC 330 is connected with the original SoC 320 via data interfaces 315, which data interfaces 315 are adapted for exchanging display data.
  • the display 300 is connected to the original SoC 320 via respective display interfaces 305.
  • The original SoC 320 is connected with the additional SoC 330 via data interfaces 315, which data interfaces 315 are adapted for exchanging display data.
  • The concatenation solution as presented with reference to Fig. 2c is also subject to several serious drawbacks.
  • The SoC which provides the display interface 305 connecting to the display with its corresponding display interface 305 needs to be powered whenever display access is needed, i.e. for writing display data to the display and/or reading display data from the display via the display interfaces 305.
  • An additional data interface 315 for sharing display data between the original SoC 320 and the additional SoC 330 has to be designed and implemented. During the design and implementation of such a data interface 315, specific requirements have to be considered, including especially a high data throughput addressing the bandwidth required for display data exchange (e.g. when considering video playback at typical frame rates).
  • The principal inventive idea, on which the present invention is based, is schematically illustrated in Fig. 3a.
  • The inventive concept allows the drawbacks described in detail above for the traditional implementations to be overcome.
  • The inventive concept, which will be described in detail in the following, has several advantages over the state of the art solutions presented above.
  • The inventive concept preserves a common look and feel of the original user interface, which addresses the usability requirements of mobile CE devices that are elementary for business success.
  • the integration efforts of an additional SoC into an existing hardware design are limited to a minimum.
  • the inventive concept will additionally enable power reduction mechanism and display overlaying functionality.
  • Fig. 3a shows a stacked arrangement of the SoCs, i.e. the original SoC 320 and the additional SoC 330.
  • The SoCs are interconnected by means of display interfaces 305.
  • The additional SoC 330 provides the same display interface 305 to the original SoC 320 as the display 300 does.
  • the basic inventive concept allows provision of additional computational performance and additional functionality e.g. with respect to additional integer and/or floating point computing performance, additional interfaces and dedicated hardware acceleration and the like.
  • the basic inventive concept may be assumed to be based on design constraints to be satisfied.
  • The design constraints may include: a fixed original SoC design, which means that the additional SoC 330 should be implemented without modification of the design of the original SoC 320; only slight adaptations of the software application modules developed for being carried out on the original SoC 320, which means that these software application modules should be left untouched as far as possible; simultaneous display access, which means that both the original SoC 320 and the additional SoC 330 are enabled to access the display in parallel for displaying data; and power saving, which means that the additional SoC 330 should provide one or more power-down or power-reduction modes.
  • The power-down or power-reduction modes of the additional SoC 330 should make it possible to operate at least a selection of components of the additional SoC 330 in a power-reduced state or a power-down mode for overall power reduction (power saving) of the mobile device in case the functionality of the additional SoC 330 is not required.
  • the additional SoC 330 is supplied with the image data of the original SoC 320 via the display interface 305. Consequently, any read and/or write access of display data originating from the original SoC 320 can be registered by the additional SoC 330.
  • The ability of the additional SoC 330 to track and to access the image data provided by the original SoC 320 allows, for instance, merging of display data provided by the original SoC 320 with display data provided by the additional SoC 330.
  • the merging of display data will be referred to as overlaying of display data.
  • A bypass functionality, implemented therein or in parallel thereto, allows at least a selection of components of the additional SoC 330 to be bypassed.
  • The bypassed components of the additional SoC 330 can be switched into any power reduction state, ranging from reduced power consumption (for instance where register states have to be preserved) to complete power-down.
  • The one or more selections of components of the additional SoC 330 may be grouped into so-called power islands, which are controlled by a power controller for power-state control.
  • Since the additional SoC 330 appears to the original SoC 320 as a display, the adaptation of software application modules developed for being carried out on the original SoC 320 is reduced to a minimum.
  • Although the basic inventive concept is illustrated and described on the basis of an original SoC 320 and an additional SoC 330, those skilled in the art will appreciate that the present invention is not limited to this specific embodiment comprising two interconnected SoCs.
  • The stacked arrangement is also applicable for the integration of further SoCs. This means that any number of additional SoCs 330 may be arranged interposed between the original SoC 320 and the display 300.
  • Additional SoCs 330 are distinguished by two display interfaces 305, one for at least receiving image data from the original SoC 320 and the other for at least transmitting image data to the display 300.
  • The image data may be fed through any number of additional SoCs interposed in between.
  • A smart display module 400 is proposed, which is connectable with a SoC, herein an original SoC 320, via a display interface 305 for receiving digital display data from the SoC and for supplying digital display data to the SoC.
  • the smart display module 400 comprises a (hardware) input interface 410, which is adapted to operate as the display interface 305 and is interoperable with the display interface 305 of the original SoC 320.
  • the display interface 305 allows for both image data and control data transmissions from the (original) SoC to the smart display module 400 and its input interface 410, respectively.
  • Image data 415 received by the input interface 410 is supplied to a frame buffer 420, whereas control data 435 received is supplied to a pixel pipeline 430.
  • the frame buffer 420 is typically implemented as a volatile random access memory (RAM) and allocated for storing image data for one frame to be displayed on the display, preferably in a display pixel-organized manner.
  • the image data typically comprises color values for every pixel (point that can be displayed) on the display.
  • A frame buffer may be operated in different operation modes including off-screen, i.e. image data written to the frame buffer do not appear on the visible screen of the display, and on-screen, i.e. the frame buffer is directly coupled to the display and its image data is visible.
  • the frame buffer acts as buffer storage for the image data received via the input interface 410.
  • the image data may be also accessed (by the original SoC 320) for reading via the input interface 410.
  • the image data, i.e. the pixel-organized image data 425, buffered in the frame buffer 420 is read out by the pixel pipeline 430, which may manipulate image values corresponding to pixels of the display, if required.
  • the manipulation operations operable with the pixel pipeline include for instance color lookup, gamma correction, flipping, rotating, and the like.
  • the operation of the pixel pipeline 430 is controlled by the control data 435 received via the input interface 410 and supplied thereby. Finally, the display data is supplied to the display 440 for displaying to the user.
  • the specific implementations of the frame buffer and the pixel pipeline are out of the scope of the present invention. It should be noted that the present invention is not limited to any specific implementations of the frame buffer and the pixel pipeline. Any embodiments thereof are merely illustrative and for the sake of completeness.
  • Fig. 4 illustrates a schematic component diagram of components required to enable the inventive concept of the present invention.
  • the additional SoC 330 comprises further components, which are typical for a system-on-a-chip such as microprocessor, memory, universal asynchronous receiver- transmitters (UARTs), serial/parallel ports, direct memory access (DMA) controllers, and the like.
  • the design of system-on-a-chip is well known in the art.
  • the designs of SoCs are typically carried out in consideration of the processing tasks to be operated by the SoCs such that the designs differ between each other.
  • The illustration of Fig. 4 shows an original SoC 320 connected via the display interface 305 to the additional SoC 330, which is in turn connected via the display interface 305 to the display module 400, which is preferably a smart display module as described above with reference to Fig. 3b.
  • the additional SoC 330 comprises besides its typical components, an input interface 410, a frame buffer 420, and a pixel pipeline 430.
  • the input interface 410 is adapted to operate as the display interface 305 and is interoperable with the display interface 305 of the original SoC 320.
  • the display interface 305 allows for both image data and control data transmissions from the original SoC 320 to the additional SoC 330 and its input interface 410, respectively.
  • Image data 415 received by the input interface 410 is supplied to a frame buffer 420, whereas control data 435 received is supplied to the pixel pipeline 430.
  • The image data, i.e. the pixel-organized image data 425, buffered in the frame buffer 420 is read out by the pixel pipeline 430, which may manipulate image values corresponding to pixels of the display, if required.
  • the manipulation operations operable with the pixel pipeline include for instance color lookup, gamma correction, flipping, rotating, and the like.
  • the operation of the pixel pipeline 430 is controlled by the control data 435 received via the input interface 410 and supplied thereby.
  • the image data buffered in the frame buffer 420 may be also accessed via the input interface 410 (for instance by the original SoC 320 or any other additional SoC 330 connected directly or indirectly via one or more additional SoCs interposed) for being read out.
  • the additional SoC 330 comprises one or more additional frame buffers 421 and one or more additional pixel pipelines 431.
  • the additional frame buffers 421 and the additional pixel pipelines 431 are included for receiving, buffering and processing image data 416 originating from the additional SoC 330.
  • the image data, i.e. the pixel-organized image data 426, buffered in the additional frame buffers 421 is read out by the additional pixel pipelines 431 , which may manipulate image values corresponding to pixels of the display, if required.
  • the operation of the additional pixel pipelines 431 is controlled by the control data 436 provided by the additional SoC 330 and supplied thereby.
  • the functionality and operation of the additional frame buffers 421 and the additional pixel pipelines 431 is analogous to those described above with reference to the frame buffer 420 and the pixel pipeline 430.
  • The control data 435 may also effect control over the additional pixel pipelines 431.
  • An overlaying and post-processing module 450 manages finally which pixel pipeline has to be read out for composing the final image to be displayed. This means that the overlaying and post-processing module can produce an image to be displayed originating from the pixel pipeline 430, from one of the additional pixel pipelines 431, or from any combination thereof.
  • By considering any combination of the pixel pipelines 430 and 431, the overlaying and post-processing module enables an overlay image resulting from parts of the image data provided by the pixel pipeline 430 as well as by one or more additional pixel pipelines 431, which parts contribute to an overall composed image.
  • the image data resulting from the overlaying and post-processing management operated by the module 450 is provided via an output interface 460 which acts as a display interface 305.
  • The output display interface 460 of the additional SoC 330 may be connected to the input interface of a further additional SoC 330 comprising an analogous implementation, or to the display module 400 with display interface 305 as depicted in the embodiment illustrated in Fig. 4.
  • The pixel pipelines of the additional SoC 330, which comprise the pixel pipeline 430 and the additional pixel pipelines 431, and the overlaying and post-processing module 450 are preferably arranged in a display controller module 350 of the additional SoC 330.
  • the display controller module 350 comprising the pixel pipelines 430 and 431 as well as the overlaying and post-processing module 450, is provided with inputs comprising an input for the pixel data 425 originating from the frame buffer 420 and terminating at the pixel pipeline 430, inputs for the pixel data 426 originating from the additional frame buffers 421 and terminating at the respective additional pixel pipelines 431, an input for control data 435 originating from the input interface 410 and an input for control data 436 provided by the additional SoC 330.
  • A bypass, such as the bypass module 500 illustrated by way of example in Fig. 4, can serve to route the image data along a path bypassing the additional SoC 330, such that the image data can be supplied by the original SoC 320 to the display module 400 even in case the additional SoC 330 is completely out of operation (see the sketch at the end of this list).
  • the inventive concept proposed provides for several advantages over the state of the art implementations.
  • the advantages relate especially to display overlaying, display data access, power islands and integration efforts.
  • The visual content delivered by the original SoC, or by any additional SoC arranged logically before the additional SoC 330 in question (i.e. connected directly or indirectly to the input interface 410), may be employed as an overlay within a visual content rendered by the additional SoC 330 in question, and vice versa.
  • the additional SoC 330 (in question) has access to all display data received by its input interface 410, i.e. from the original SoC or any additional SoC arranged logically before the additional SoC 330 in question.
  • If only the original SoC 320 requires access to the display, it is possible either to power down the whole additional SoC 330, in which case the image data is routed via the bypass 500, or to power down the whole additional SoC 330 with the exception of the display control block, which includes at least the input interface 410, the frame buffer 420, the pixel pipeline 430 and the output interface 460.
  • the latter possibility does not require any bypass 500.
  • The design of the additional SoC 330 according to an embodiment of the present invention behaves, from an external point of view, like a display module such as the display module 400 described above.
  • the software application modules developed for being carried out on the original SoC 320 do not need any adaptation to the new architecture including one or more additional SoCs 330 according to an embodiment of the present invention arranged in series or stacked manner. Arrangement in series means that the output interface (such as interface 460) of an additional SoC is connectable to an input interface (such as interface 410) of a next additional SoC.
  • the operation of the additional SoC 330 starts and one or more essentially parallel or time shifted operational sequences are operated.
  • The set of operations S100 to S130 relates to the handling of image data originating from the original SoC 320. Firstly, the image data is received via the input interface 410. Next, the received image data is buffered in the frame buffer 420 associated with the input interface 410. Then, the image data is read out by the pixel pipeline 430, which is also associated with the input interface 410 and the frame buffer 420. The processing of the pixel pipeline 430 is controlled at least by the control data 435 received via the input interface and supplied to the pixel pipeline 430 for controlling purposes.
  • In parallel or in a time-shifted manner, the set of operations S200 to S230 is operated, which relates to the handling of image data provided by the additional SoC 330.
  • The image data is provided by the additional SoC 330 and next buffered in an additional frame buffer 421, which is dedicated to image data provided by the additional SoC 330.
  • The image data is read out by the additional pixel pipeline 431, which is likewise dedicated to image data provided by the additional SoC 330.
  • The processing of the pixel pipeline 431 is controlled at least by the control data 436, which is likewise provided by the additional SoC 330 and supplied to the pixel pipeline 431 for controlling purposes.
  • The frame buffer is a display-pixel-orientated data storage, which preferably stores a pixel value of pre-definable representation and maximal co-domain for each display pixel.
  • the size of the frame buffer may correspond to the pixel size of the display or may extend the pixel size of the display.
  • The pixel manipulation enables the controllable manipulation of each pixel, if required and/or desired, including color lookup, gamma correction, flipping, rotation, scaling, trimming, and the like.
  • The content visualization of each pixel pipeline used for visualization is then consolidated in operation S400, where the overlaying and post-processing management is operated by means of the overlaying and post-processing module 450.
  • The overlay and post-processing management implements a decision logic that determines which pixel pipelines (430, 431) have to be at least partially read out to generate a final common visualization from the individual visualization content provided by the pixel pipelines (430, 431) on the basis of the image data handled thereby.
  • the consolidated image data representing the final common content visualization is provided to the output interface (460) in the operation S410 and can be supplied to the display module 400 for reproducing.
  • The operational sequence is then completed. Those skilled in the art will appreciate that the operational sequence may be at least partially repeated whenever new image data is received via the input interface 410 and/or provided by the additional SoC 330 to the additional frame buffers 421.
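The bypass and power-island behaviour described in this list can be pictured, very roughly, with the following C sketch: when the power island containing the local frame buffers and overlay logic is switched off, an incoming frame is simply passed from the input interface to the output interface, and when it is powered, the local contribution is overlaid first. The flag-based power model, the zero-as-transparent convention and all identifiers are assumptions for illustration only and do not reflect an actual register-level implementation.

```c
/* Illustrative model of bypass (cf. bypass 500) and power-island handling
 * in an additional SoC.  All names and conventions are assumptions. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define N 8

struct additional_soc {
    bool    island_powered;     /* overlay/frame-buffer power island on?  */
    uint8_t local_frame[N];     /* image data originating here            */
};

/* forward the frame arriving at the input interface to the output
 * interface, optionally consolidating the local contribution             */
static void forward_frame(const struct additional_soc *soc,
                          const uint8_t in[N], uint8_t out[N])
{
    if (!soc->island_powered) {             /* island off: pure bypass    */
        for (int i = 0; i < N; ++i)
            out[i] = in[i];
        return;
    }
    for (int i = 0; i < N; ++i)             /* overlay where non-zero     */
        out[i] = soc->local_frame[i] ? soc->local_frame[i] : in[i];
}

int main(void)
{
    struct additional_soc soc = { .island_powered = false };
    uint8_t in[N] = { 1, 2, 3, 4, 5, 6, 7, 8 };
    uint8_t out[N];

    forward_frame(&soc, in, out);           /* powered down: bypassed     */
    printf("bypassed out[0]=%u\n", (unsigned)out[0]);

    soc.island_powered = true;              /* island up: overlay applied */
    soc.local_frame[0] = 99;
    forward_frame(&soc, in, out);
    printf("overlaid out[0]=%u out[1]=%u\n", (unsigned)out[0], (unsigned)out[1]);
    return 0;
}
```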

Abstract

A processing module connectable between an original processing module and a display module for enabling multi-processor implementation having access to a common display for displaying common visual content is provided. The processing module includes at least an input interface, which is adapted for receiving image data from the output display interface of the original processing module, and an output interface, which is adapted to output image data intended for being displayed and connectable to a display interface. The processing module is operable to provide image data representing a common visual content at the output interface. The common visual content is obtainable from image data received via the input interface, image data originating from the additional processing module and/or any combination of image data thereof.

Description

PROCESSING DEVICE, SYSTEM AND METHOD FOR MULTI-PROCESSOR IMPLEMENTATION WITH COMMON CONTENT VISUALIZATION
The present invention relates to the field of multi-processor based devices. In particular, the present invention relates to display handling in multi-processor based devices.
At the latest, the implementation of multimedia applications, such as digital music players, digital cameras, sophisticated gaming applications, and the like, into today's portable CE devices, e.g. including particularly cellular phones with enhanced multimedia functionality, drives the requirement for providing computational performance. Typical portable CE devices are based on a single-processor hardware design, where a universally operable single processor carries out the applications. It is well known in the art to additionally implement specialized processing modules into such a single-processor based design. Such specialized processing modules are typically adapted to specific operational tasks and provide additional processing capacity, additional functionality, and/or additional interfaces.
Nevertheless, the known implementations suffer from several drawbacks when the main processor and the one or more specialized processing modules have to have common access to a single common display. A serious drawback is the typically required re-design of the hardware implementation, which is typically oriented towards a single-processor design. A re-design is time- and cost-intensive. Another serious drawback relates to the software applications developed for being carried out on the single-processor hardware implementation. Typically, great efforts have to be expended to adapt existing software applications to the new multi-processor based hardware design. Both the hardware re-design and the software adaptation are associated with great economic risks, which may not be justifiable even if the multi-processor hardware design represents the better concept compared with the implementation of a more powerful single processor core. The present invention provides an inter-connectable processing module which enables a multi-processor implementation having access to a common display for displaying common visual content. The processing module overcomes the disadvantages to which traditional multi-processor implementation designs are subjected. In addition, the present invention provides a system, a processing device, and a method for operating the processing device.
According to a first aspect of the present invention, a system enabling a multi-processor implementation having access to a common display for displaying common visual content is provided. An original processing module is provided with an output display interface. The original processing module is operable with at least one software application module, which is able to generate image data. The image data is provided by the output display interface. The image data is intended for being displayed. An additional processing module is included, which comprises at least an input interface adapted for receiving image data from the output display interface of the original processing module and an output interface adapted to output image data intended for being displayed and connectable to a display interface. A display module is provided with a display interface, which is connectable to the additional processing module for receiving image data therefrom. The additional processing module is operable to provide image data representing the common visual content at the output interface. The common visual content is obtainable from image data received via the input interface, image data originating from the additional processing module, and/or any combination of image data thereof.
According to an embodiment of the present invention, the additional processing module includes a display controller module, which is operable to consolidate the image data received via the input interface and image data originating from the additional processing module.
According to another embodiment of the present invention, the additional processing module includes a frame buffer, which buffers the image data received via the input interface, and one or more additional frame buffers, which are dedicated for storing image data originating from the additional processing module.
According to yet another embodiment of the present invention, the additional processing module includes one or more pixel pipelines, each of which is associated with the respectively corresponding one or more frame buffers. The pixel pipelines are adapted to read out pixel data from the respectively corresponding frame buffers and the pixel pipelines are operable to manipulate the pixel data for each pixel.
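As an illustration of how such a pixel pipeline might read out and manipulate pixel data, the following C sketch applies a per-pixel lookup table and an optional horizontal flip to the contents of a frame buffer. The 8-bit grayscale pixel format, the frame dimensions and all identifiers are assumptions made for the example only; the embodiments described here do not prescribe any particular format or implementation.

```c
/* Minimal sketch of a pixel pipeline: read pixel data from a frame buffer
 * and manipulate each pixel (lookup table plus optional horizontal flip).
 * Pixel format, frame size and all names are illustrative assumptions. */
#include <stdint.h>
#include <stdio.h>

#define FB_W 8
#define FB_H 4

/* control data steering the pipeline (cf. control data received via the
 * input interface or provided locally by the processing module) */
struct pipe_ctrl {
    int     flip_horizontal;   /* one of the listed manipulations (flip) */
    uint8_t lut[256];          /* e.g. a gamma or color lookup table     */
};

/* read out the frame buffer and manipulate every pixel */
static void pixel_pipeline(const uint8_t fb[FB_H][FB_W],
                           const struct pipe_ctrl *ctrl,
                           uint8_t out[FB_H][FB_W])
{
    for (int y = 0; y < FB_H; ++y) {
        for (int x = 0; x < FB_W; ++x) {
            int src_x = ctrl->flip_horizontal ? FB_W - 1 - x : x;
            out[y][x] = ctrl->lut[fb[y][src_x]];   /* per-pixel lookup   */
        }
    }
}

int main(void)
{
    uint8_t fb[FB_H][FB_W] = { { 255 } };          /* one bright pixel   */
    uint8_t out[FB_H][FB_W];
    struct pipe_ctrl ctrl = { .flip_horizontal = 1 };

    for (int i = 0; i < 256; ++i)
        ctrl.lut[i] = (uint8_t)(i / 2);            /* halve intensity    */

    pixel_pipeline(fb, &ctrl, out);
    printf("flipped, dimmed pixel: %u\n", (unsigned)out[0][FB_W - 1]);
    return 0;
}
```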
According to yet another embodiment of the present invention, the additional processing module includes a post-processing module, which is operable for consolidating the pixel data resulting from the pixel pipelines.
According to a further embodiment of the present invention, the consolidation includes overlaying of visual content delivered by one or more of the pixel pipelines, resulting in a common image visualization to be displayed.
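One possible way to picture this consolidation is a simple color-key overlay, sketched below in C: wherever the second pipeline delivers a non-key pixel, it replaces the corresponding pixel of the first pipeline. The color key, the frame size and the function names are illustrative assumptions, not part of the described design.

```c
/* Sketch of consolidating two pixel-pipeline outputs by overlaying:
 * non-key pixels from the overlay pipeline replace the base pixels. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define W 8
#define H 4
#define COLOR_KEY 0x00            /* assumed "transparent" pixel value   */

static void consolidate(const uint8_t base[H][W],
                        const uint8_t overlay[H][W],
                        uint8_t out[H][W])
{
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            out[y][x] = (overlay[y][x] != COLOR_KEY) ? overlay[y][x]
                                                     : base[y][x];
}

int main(void)
{
    uint8_t base[H][W], overlay[H][W], out[H][W];

    memset(base, 100, sizeof base);      /* content from the input path   */
    memset(overlay, COLOR_KEY, sizeof overlay);
    overlay[0][0] = 200;                 /* locally rendered contribution */

    consolidate(base, overlay, out);
    printf("out[0][0]=%u out[0][1]=%u\n",
           (unsigned)out[0][0], (unsigned)out[0][1]);  /* 200 and 100 */
    return 0;
}
```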
According to yet a further embodiment of the present invention, the system comprises several additional processing modules. One of the additional processing modules is connectable via its input interface to the display interface of the original processing module. Another one of the additional processing modules is connectable via its output interface to the display interface of the display module. The remaining additional processing modules are interposed between the original processing module and the display module. The remaining additional processing modules are connectable in series via their input interfaces and their output interfaces.
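Because each additional processing module exposes an input interface and an output interface of the same kind, such a series connection can be pictured as a linked chain through which a frame propagates, possibly augmented by each module, until it reaches the display. The sketch below is a purely illustrative software model with assumed names and a toy frame format, not the hardware interface itself.

```c
/* Illustrative software model of additional processing modules connected
 * in series: each module may consolidate its own image data into the
 * frame before passing it on towards the display. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define W 4
#define H 2

struct frame { uint8_t px[H][W]; };

/* stands in for frame buffer, pixel pipelines and overlaying inside one
 * additional processing module */
typedef void (*process_fn)(struct frame *f);

struct proc_module {
    const char         *name;
    process_fn          process;   /* local contribution (may be NULL)   */
    struct proc_module *next;      /* output interface -> next input     */
};

static void push_frame(struct proc_module *m, struct frame *f)
{
    for (; m != NULL; m = m->next) {
        printf("%s forwards the frame\n", m->name);
        if (m->process)
            m->process(f);         /* consolidate local image data       */
    }
    /* the last output interface connects to the display interface */
    printf("frame delivered to display, first pixel = %u\n",
           (unsigned)f->px[0][0]);
}

static void add_status_bar(struct frame *f)
{
    for (int x = 0; x < W; ++x)
        f->px[0][x] = 200;         /* overlay a one-line status bar      */
}

int main(void)
{
    struct proc_module last  = { "additional module B", NULL, NULL };
    struct proc_module first = { "additional module A", add_status_bar, &last };
    struct frame f;

    memset(&f, 0, sizeof f);       /* image data from the original module */
    push_frame(&first, &f);        /* traverses the series of modules     */
    return 0;
}
```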
According to an additional embodiment of the present invention, the original processing module and/or the additional processing module are systems on a chip (SoCs).
According to yet an additional embodiment of the present invention, the additional processing module is dedicated for image processing tasks.
According to a second aspect of the present invention, a processing module connectable between an original processing module and a display module for enabling multi-processor implementation having access to a common display for displaying common visual content is provided. The processing module includes at least an input interface, which is adapted for receiving image data from the output display interface of the original processing module, and an output interface, which is adapted to output image data intended for being displayed and connectable to a display interface. The processing module is operable to provide image data representing a common visual content at the output interface. The common visual content is obtainable from image data received via the input interface, image data originating from the additional processing module and/or any combination of image data thereof.
It should be noted that the processing module corresponds to the additional processing module described above with respect to any system according to an embodiment of the present invention.
According to an embodiment of the present invention, the processing module is connectable in series via its input interface and its output interface to further (additional) processing modules.
According to a third aspect of the present invention, a processing device enabled for multi-processor implementation having access to a common display for displaying common visual content is provided. An original processing module is provided with an output display interface. The original processing module is operable with at least one software application module, which is capable of generating image data, which is provided by the output display interface. The image data is intended for being displayed. An additional processing module is included in the processing device. The additional processing module includes at least an input interface, which is adapted for receiving image data from the output display interface of the original processing module, and an output interface, which is adapted to output image data intended for being displayed. The output interface is connectable to a display interface. The processing device also includes a display module, which is provided with a display interface, which is connectable to the additional processing module. The additional processing module is operable to provide image data representing the common visual content at the output interface. The common visual content is obtainable from image data received via the input interface, image data originating from the additional processing module and/or any combination of image data thereof.
Further embodiments of the additional processing module of the processing device according to the present invention are described below in detail.
According to a fourth aspect of the present invention, a method of enabling multi-processor implementation having access to a common display for displaying a common visual content is provided. Image data is received via an input interface of a processing module. In parallel, image data is provided by the processing module. The received image data and the provided image data are consolidated to obtain a common visual content. The common visual content is obtainable from the received image data, the provided image data and/or any combination thereof. The consolidated image data representing the common visual content is provided via an output interface. The consolidated image data is intended to be displayed by a display module.
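By way of a non-limiting illustration only, the following C sketch shows the four method steps (receive, provide, consolidate, output) in their simplest form. The frame size, the 32-bit pixel format and the rule that a non-zero locally provided pixel overrides the received pixel are assumptions made for this sketch and are not part of the claimed method.

```c
/* Minimal sketch of the method, under the assumptions stated above. */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

enum { W = 320, H = 240, NPIX = W * H };
typedef uint32_t pixel_t;                       /* assumed 32-bit pixel value */

static pixel_t rx_frame[NPIX];    /* image data received via the input interface */
static pixel_t own_frame[NPIX];   /* image data provided by the module itself    */
static pixel_t out_frame[NPIX];   /* consolidated common visual content          */

/* Consolidation: a non-zero locally rendered pixel overrides the received one,
 * a zero (here treated as transparent) lets the received content show through. */
static void consolidate(void)
{
    for (size_t i = 0; i < (size_t)NPIX; ++i)
        out_frame[i] = own_frame[i] ? own_frame[i] : rx_frame[i];
}

int main(void)
{
    memset(rx_frame, 0x20, sizeof rx_frame);    /* stand-in for received data */
    memset(own_frame, 0x00, sizeof own_frame);  /* stand-in for local data    */
    consolidate();
    /* out_frame would now be handed to the output interface / display module */
    return out_frame[0] == 0x20202020u ? 0 : 1;
}
```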
Further functions of the method according to an embodiment of the present invention can be obtained from the detailed description below.
According to an embodiment of the present invention, the received image data is at least temporarily buffered in a frame buffer and the received, buffered image data is read out therefrom by a pixel pipeline. The received, read-out image data is manipulated by means of the pixel pipeline and in accordance with control data received via the input interface.
According to another embodiment of the present invention, the provided image data is at least temporarily buffered in one or more frame buffers of the processing module, and the provided, buffered image data is read out by one or more pixel pipelines of the processing module. The provided, read-out image data is manipulated by means of the pixel pipelines and in accordance with control data provided by the processing module.
According to yet another embodiment of the present invention, the operations relating to the received image data (i.e. buffering, reading out and manipulating the image data received via the input interface) and the operations relating to the provided image data (i.e. buffering, reading out and manipulating the image data provided by the processing module) are operable essentially simultaneously or with a time shift.
According to a further embodiment of the present invention, the image data to be consolidated is obtained from the pixel pipelines. The consolidation especially includes deciding which pixel pipelines have to be at least partially read out to generate the common visual content from the individual visual content provided by the pixel pipelines on the basis of the image data handled thereby.
The foregoing and other aspects of various embodiments of the present invention will be apparent through examination of the following detailed description thereof in conjunction with the accompanying drawings, in which
Fig. 1 schematically illustrates an example block diagram for a portable CE device embodied exemplarily on the basis of a cellular terminal device;
Fig. 2a schematically illustrates a block diagram of a system on a chip (SoC) connected to a display;
Fig. 2b schematically illustrates a first state of the art implementation on the basis of a block diagram for connecting two systems on a chip (SoCs) to a common display;
Fig. 2c schematically illustrates two further state of the art implementations on the basis of a block diagram for connecting two systems on a chip (SoCs) to a common display;
Fig. 3a schematically illustrates a basic block diagram of a first embodiment of the invention;
Fig. 3b schematically illustrates a block diagram of a smart display module according to an embodiment of the invention;
Fig. 4 schematically illustrates a block diagram of a system on a chip (SoC) implementation enabling connectivity of several systems on a chip (SoCs) to a common display according to an embodiment of the present invention; and
Fig. 5 schematically illustrates a flow chart illustrating operational steps of a method in accordance with the present invention.
In the following description of the various embodiments, reference is made to the accompanying drawings which form a part thereof, and in which are shown by way of illustration various embodiments in which the invention may be practiced. It is to be understood that other embodiments may be utilized and structural and functional modifications may be made without departing from the scope of the invention. Wherever possible, the same reference numbers are used throughout the drawings and the description to refer to similar or like parts.
Fig. 1 depicts a typical mobile device according to an embodiment of the present invention. The mobile device 10 shown in Fig. 1 is capable of cellular data and voice communications. It should be noted that the present invention is not limited to this specific embodiment, which represents, by way of illustration, one embodiment out of a multiplicity of embodiments. The mobile device 10 includes a (main) microprocessor or microcontroller 100 as well as components associated with the microprocessor controlling the operation of the mobile device. These components include a display controller 130 connecting to a display module 135, a non-volatile memory 140, a volatile memory 150 such as a random access memory (RAM), an audio input/output (I/O) interface 160 connecting to a microphone 161, a speaker 162 and/or a headset 163, a keypad controller 170 connected to a keypad 175 or keyboard, an auxiliary input/output (I/O) interface 200, and a short-range communications interface 180. Such a device also typically includes other device subsystems shown generally at 190.
The mobile device 10 may communicate over a voice network and/or may likewise communicate over a data network, such as any public land mobile networks (PLMNs) in the form of e.g. digital cellular networks, especially GSM (global system for mobile communication) or UMTS (universal mobile telecommunications system). Typically, the voice and/or data communication is operated via an air interface, i.e. a cellular communication interface subsystem in cooperation with further components (see above), to a base station (BS) or node B (not shown) being part of a radio access network (RAN) of the infrastructure of the cellular network. The cellular communication interface subsystem as depicted illustratively with reference to Fig. 1 comprises the cellular communication interface 110, a digital signal processor (DSP) 120, a receiver (RX) 121, a transmitter (TX) 122, and one or more local oscillators (LOs) 123, and enables the communication with one or more public land mobile networks (PLMNs). The digital signal processor (DSP) 120 sends communication signals 124 to the transmitter (TX) 122 and receives communication signals 125 from the receiver (RX) 121. In addition to processing communication signals, the digital signal processor 120 also provides for receiver control signals 126 and transmitter control signals 127. For example, besides the modulation and demodulation of the signals to be transmitted and signals received, respectively, the gain levels applied to communication signals in the receiver (RX) 121 and transmitter (TX) 122 may be adaptively controlled through automatic gain control algorithms implemented in the digital signal processor (DSP) 120. Other transceiver control algorithms could also be implemented in the digital signal processor (DSP) 120 in order to provide more sophisticated control of the transceiver 122.
In case the communications of the mobile device 10 through the PLMN occur at a single frequency or a closely-spaced set of frequencies, a single local oscillator (LO) 128 may be used in conjunction with the transmitter (TX) 122 and receiver (RX) 121. Alternatively, if different frequencies are utilized for voice/data communications or for transmission versus reception, then a plurality of local oscillators 128 can be used to generate a plurality of corresponding frequencies. Although the antenna 129 depicted in Fig. 1 may be part of a diversity antenna system (not shown), the mobile device 10 could also be used with a single antenna structure for signal reception as well as transmission. Information, which includes both voice and data information, is communicated to and from the cellular interface 110 via a data link to the digital signal processor (DSP) 120. The detailed design of the cellular interface 110, such as frequency band, component selection, power level, etc., will be dependent upon the wireless network in which the mobile device 10 is intended to operate. After any required network registration or activation procedures, which may involve the subscriber identification module (SIM) 210 required for registration in cellular networks, have been completed, the mobile device 10 may then send and receive communication signals, including both voice and data signals, over the wireless network. Signals received by the antenna 129 from the wireless network are routed to the receiver 121, which provides for such operations as signal amplification, frequency down conversion, filtering, channel selection, and analog to digital conversion. Analog to digital conversion of a received signal allows more complex communication functions, such as digital demodulation and decoding, to be performed using the digital signal processor (DSP) 120. In a similar manner, signals to be transmitted to the network are processed, including modulation and encoding, for example, by the digital signal processor (DSP) 120 and are then provided to the transmitter 122 for digital to analog conversion, frequency up conversion, filtering, amplification, and transmission to the wireless network via the antenna 129.
The microprocessor / microcontroller (μC) 100, which may also be designated as a device platform microprocessor, manages the functions of the mobile device 10. Operating system software 149 used by the processor 100 is preferably stored in a persistent store such as the non-volatile memory 140, which may be implemented, for example, as a Flash memory, battery backed-up RAM, any other non-volatile storage technology, or any combination thereof. In addition to the operating system 149, which controls low-level functions as well as (graphical) basic user interface functions of the mobile device 10, the non-volatile memory 140 includes a plurality of high-level software application programs or modules, such as a voice communication software application 142, a data communication software application 141, an organizer module (not shown), or any other type of software module (not shown). These modules are executed by the processor 100 and provide a high-level interface between a user of the mobile device and the mobile device 10. This interface typically includes a graphical component provided through the display module 135 controlled by a display controller 130 and input/output components provided through a keypad 175 connected via a keypad controller 170 to the processor 100, an auxiliary input/output (I/O) interface 200, and/or a short-range (SR) communication interface 180. The auxiliary I/O interface 200 comprises especially a USB (universal serial bus) interface, a serial interface, an MMC (multimedia card) interface and related interface technologies/standards, and any other standardized or proprietary data communication bus technology, whereas the short-range communication interface 180 is a radio frequency (RF) low-power interface, including especially WLAN (wireless local area network) and/or Bluetooth communication technology, or an IrDA (Infrared Data Association) interface. The RF low-power interface technology referred to herein should especially be understood to include any IEEE 802.xx standard technology, a description of which is obtainable from the Institute of Electrical and Electronics Engineers. Moreover, the auxiliary I/O interface 200 as well as the short-range communication interface 180 may each represent one or more interfaces supporting one or more input/output interface technologies and communication interface technologies, respectively.
The operating system, specific device software applications or modules, or parts thereof, may be temporarily loaded into a volatile store 150 such as a random access memory (typically implemented on the basis of DRAM (dynamic random access memory) technology) for faster operation. Moreover, received communication signals may also be temporarily stored to the volatile memory 150, before permanently writing them to a file system located in the non-volatile memory 140 or any mass storage preferably detachably connected via the auxiliary I/O interface for storing data. It should be understood that the components described above represent typical components of a traditional mobile device 10 embodied herein in the form of a cellular phone. The present invention is not limited to these specific components, and their implementation is depicted merely by way of illustration and for the sake of completeness.
An exemplary software application module of the mobile device 10 is a personal information manager application providing PDA functionality, typically including a contact manager, a calendar, a task manager, and the like. Such a personal information manager is executed by the processor 100, may have access to the components of the mobile device 10, and may interact with other software application modules. For instance, interaction with the voice communication software application allows for managing phone calls, voice mails, etc., and interaction with the data communication software application enables managing SMS (Short Message Service), MMS (Multimedia Messaging Service), e-mail communications and other data transmissions. The non-volatile memory 140 preferably provides a file system to facilitate permanent storage of data items on the device, including particularly calendar entries, contacts, etc. The ability for data communication with networks, e.g. via the cellular interface, the short-range communication interface, or the auxiliary I/O interface, enables upload, download, and synchronization via such networks.
The application modules 141 to 149 represent device functions or software applications that are configured to be executed by the processor 100. In most known mobile devices, a single processor manages and controls the overall operation of the mobile device as well as all device functions and software applications. Such a concept is applicable for today's mobile devices. However, especially the implementation of enhanced multimedia functionality, including for example the reproduction of streaming video, the manipulation of digital images and video sequences captured by an integrated or detachably connected digital camera, but also gaming applications with sophisticated graphics, drives the requirement for computational power. One way to deal with the requirement for computational power, which has been pursued in the past, is to implement increasingly powerful and universal processor cores. Another approach for providing computational power is to implement two or more independent processor cores, which is a well known methodology in the art. The advantages of several independent processor cores can be immediately appreciated by those skilled in the art. Whereas a universal processor is designed for carrying out a multiplicity of different tasks without specialization to a preselection of distinct tasks, a multi-processor arrangement may include one or more universal processors and one or more specialized processors adapted for processing a predefined set of tasks. Nevertheless, the implementation of several processors within one device, especially a mobile device such as the mobile device 10, traditionally requires a complete and sophisticated re-design of the components.
In the following, the present invention provides a concept which allows simple integration of additional processor cores into an existing processing device implementation, avoiding an expensive, complete and sophisticated redesign. The inventive concept will be described with reference to system-on-a-chip (SoC) design. System-on-a-chip (SoC) is a concept of integrating numerous (or all) components of a processing device into a single highly integrated chip. Such a system-on-a-chip can contain digital, analog, mixed-signal, and often radio-frequency functions - all on one chip. A typical processing device comprises a number of integrated circuits that perform different tasks. These integrated circuits may include especially a microprocessor, memory, universal asynchronous receiver-transmitters (UARTs), serial/parallel ports, direct memory access (DMA) controllers, and the like. A universal asynchronous receiver-transmitter (UART) translates between parallel bits of data and serial bits. Recent improvements in semiconductor technology have allowed very-large-scale integration (VLSI) circuits to grow significantly in complexity, making it possible to integrate numerous components of a system in a single chip. With reference to Fig. 1, one or more components thereof, e.g. the controllers 130 and 160, the memory components 150 and 140, and one or more of the interfaces 200, 180 and 110, can be integrated together with the processor 100 in a single chip, which finally forms a system-on-a-chip (SoC).
With reference to Figs. 2a to 2c, traditional ways of implementing an additional SoC into an existing design with an (original) SoC in one device are picked out as a central theme. Fig. 2a illustrates schematically the starting point, where an original system-on-a-chip (SoC) 320 is connected via a display interface 305 to a display 300. This implementation is to be extended by an additional system-on-a-chip (SoC). Two principal approaches may be applicable. The first approach provides for an additional display controlled and used by an additional SoC besides the original SoC connected to an original (primary) display. Advantageously, this approach requires only minor modifications of an existing hardware design to enable implementation of the additional SoC connected to its own additional display. However, an additional (secondary) display is not necessarily wanted in the device design, and the usage of the additional (secondary) display may require adaptation of one or more software applications, the user interface, and the operating system. The second approach provides for a common display used for displaying data by both an original SoC and an additional SoC. With reference to Figs. 2b and 2c, two typical known solutions for two SoCs and a common display are schematically depicted. In Fig. 2b, a multiplexer (MuX) 310 is interposed between the display 300 with display interface 305 and the original SoC 320 as well as the additional SoC 330, each connected via their own display interfaces 305 to the multiplexer (MuX) 310. The multiplexer (MuX) 310 is operated to control the switching between the display 300 with its display interface 305 and one of the SoCs, i.e. the original SoC 320 with its display interface 305 or the additional SoC 330 with its display interface 305, respectively. The multiplexer solution as presented in Fig. 2b involves serious drawbacks. Especially, overlaying display data provided by both SoCs in parallel in order to obtain a display content which contains display data contributions from both SoCs is difficult if not impossible. Random read access to display data provided by the other SoC is also difficult if not impossible. A control entity has to be implemented which controls bus arbitration. Those skilled in the art will appreciate that such a control entity has to be partly implemented in both SoCs, and interaction between both SoCs is required to enable the decision on display ownership at each moment.
In Fig. 2c, the original SoC 320 and the additional SoC 330 are interconnected via a data interface 315, wherein either the original SoC 320 or the additional SoC 330 is connected to the display 300 by means of the display interface 305. With reference to the first option depicted in Fig. 2c, the display 300 is connected to the additional SoC 330 via respective display interfaces 305. The additional SoC 330 is connected with the original SoC 320 via data interfaces 315, which data interfaces 315 are adapted for exchanging display data. With reference to the second option depicted in Fig. 2c, the display 300 is connected to the original SoC 320 via respective display interfaces 305. The original SoC 320 is connected with the additional SoC 330 via data interfaces 315, which data interfaces 315 are adapted for exchanging display data. The concatenation solution as presented with reference to Fig. 2c is also subject to several serious drawbacks. The SoC which provides the display interface 305 connecting the display with its corresponding display interface 305 needs to be powered whenever display access is needed, i.e. whenever display data is written to the display and/or read from the display via the display interfaces 305. An additional data interface 315 for sharing display data between the original SoC 320 and the additional SoC 330 has to be designed and implemented. During the design and implementation of such a data interface 315, specific requirements have to be considered, including especially a high data throughput addressing the bandwidth required for display data exchange (e.g. when considering video playback at typical frame rates).
The principal inventive idea, on which the present invention is based, is schematically illustrated in Fig. 3a. The inventive concept makes it possible to overcome the drawbacks described above in detail for the aforementioned traditional implementations. The inventive concept, which will be described in detail in the following, has several advantages over the state of the art solutions presented above. The inventive concept preserves a common look and feel of the original user interface, which addresses the usability requirements of mobile CE devices that are elementary for business success. The effort of integrating an additional SoC into an existing hardware design is limited to a minimum. Moreover, the inventive concept additionally enables power reduction mechanisms and display overlaying functionality. These and further advantages will be described below in detail and appreciated by those skilled in the art on the basis of the description.
With reference to Fig. 3a, a stacked arrangement of the SoCs, i.e. the original SoC 320 and the additional SoC 330, is proposed. In contrast to the traditional implementation, the SoCs are interconnected by means of display interfaces 305. From the point of view of the original SoC 320, the additional SoC 330 provides the same display interface 305 to the original SoC 320 as the display 300. The basic inventive concept allows the provision of additional computational performance and additional functionality, e.g. with respect to additional integer and/or floating point computing performance, additional interfaces, dedicated hardware acceleration, and the like. The basic inventive concept may be assumed to be based on design constraints to be satisfied. For instance, the design constraints may include: a fixed original SoC design, which means that the additional SoC 330 should be implemented without modification of the design of the original SoC 320; only slight adaptations of the software application modules developed for being carried out on the original SoC 320, which means that the software application modules provided for the original SoC 320 should be left untouched as far as possible; simultaneous display access, which means that both the original SoC 320 and the additional SoC 330 are enabled to access the display in parallel for displaying data; and power saving, which means that the additional SoC 330 should provide one or more power-down or power-reduction modes. With respect to the simultaneous display access, there are several use cases which may take advantage of such simultaneous display access, especially imaging applications like image displaying, image and video manipulation, image and video sequence reproduction, and the like. The power-down or power-reduction modes of the additional SoC 330 should make it possible to operate at least a selection of components of the additional SoC 330 in a power-reduced state or a power-down mode for overall power reduction (power saving) of the mobile device in case the functionality of the additional SoC 330 is not required.
Referring now back to Fig. 3a, which schematically illustrates the basic inventive concept of the present invention, those skilled in the art will appreciate that several advantages can be identified. The additional SoC 330 is supplied with the image data of the original SoC 320 via the display interface 305. Consequently, any read and/or write access of display data originating from the original SoC 320 can be registered by the additional SoC 330. The ability of the additional SoC 330 to track and to access the image data provided by the original SoC 320 (including modification of the image data originating from the original SoC 320 or terminating at the original SoC 320) allows for instance merging of display data provided by the original SoC 320 with display data provided by the additional SoC 330. The merging of display data will be referred to as overlaying of display data. In case the functionality of the additional SoC 330 is not required, a bypass functionality implemented therein, or implemented in parallel thereto, allows at least a selection of components of the additional SoC 330 to be bypassed. The bypassed components of the additional SoC 330 can be switched into any power reduction state, ranging from reduced power consumption (for instance in case register states have to be preserved) to even power-down. The one or more selections of components of the additional SoC 330 may be grouped into so-called power islands, which are controlled by a power controller for power state control. Due to the fact that the additional SoC 330 appears to the original SoC 320 as a display, the adaptation of software application modules developed for being carried out on the original SoC 320 is reduced to a minimum. Although the basic inventive concept is illustrated and described on the basis of an original SoC 320 and an additional SoC 330, those skilled in the art will appreciate that the present invention is not limited to this specific embodiment comprising two interconnected SoCs. The stacked arrangement is also applicable for the integration of further SoCs. This means that any number of additional SoCs 330 may be arranged interposed between the original SoC 320 and the display 300. These additional SoCs 330 are distinguished by two display interfaces 305, one for at least receiving image data from the original SoC 320 and the other one for at least transmitting image data to the display 300. The image data may be fed through any number of additional SoCs interposed in between.
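Purely as an illustration of the stacked arrangement discussed above, the following C sketch models a chain of additional SoCs through which a frame is fed towards the display, each SoC either feeding the data through or overlaying its own content. The structure layout, the zero-means-transparent convention and the function names are assumptions of the sketch, not features of the disclosed hardware.

```c
/* Sketch of a chain of stacked SoCs feeding image data towards the display. */
#include <stdint.h>
#include <stddef.h>

enum { NPIX = 320 * 240 };
typedef uint32_t pixel_t;

struct stacked_soc {
    const pixel_t      *own_content;  /* frame rendered by this SoC, NULL if idle/bypassed */
    struct stacked_soc *next;         /* next SoC towards the display, NULL = display      */
};

/* Feed a frame from the original SoC through the chain towards the display. */
void forward_frame(struct stacked_soc *m, pixel_t frame[NPIX])
{
    for (; m != NULL; m = m->next) {
        if (m->own_content != NULL) {
            for (size_t i = 0; i < (size_t)NPIX; ++i)
                if (m->own_content[i] != 0u)       /* zero treated as transparent */
                    frame[i] = m->own_content[i];  /* overlay this SoC's pixel    */
        }
        /* an SoC without own content simply feeds the frame through */
    }
    /* frame now holds the common visual content presented at the display interface */
}
```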
A more detailed implementation of the additional SoC 330, which enables the aforementioned functionality according to an embodiment of the present invention, is described in the following. The following embodiment of the additional SoC 330 will be understood by those skilled in the art when referring to the so-called smart display architecture illustrated schematically in Fig. 3b. A smart display module 400 is proposed, which is connectable with a SoC, herein an original SoC 320, via a display interface 305 for receiving digital display data from the SoC and for supplying digital display data to the SoC. The smart display module 400 comprises a (hardware) input interface 410, which is adapted to operate as the display interface 305 and is interoperable with the display interface 305 of the original SoC 320. The display interface 305 allows for both image data and control data transmissions from the (original) SoC to the smart display module 400 and its input interface 410, respectively. Image data 415 received by the input interface 410 is supplied to a frame buffer 420, whereas control data 435 received is supplied to a pixel pipeline 430. The frame buffer 420 is typically implemented as a volatile random access memory (RAM) and allocated for storing image data for one frame to be displayed on the display, preferably in a display pixel-organized manner. The image data typically comprises color values for every pixel (point that can be displayed) on the display. A frame buffer may be operated in different operation modes, including off-screen, i.e. image data written to the frame buffer does not appear on the visible screen of the display, and on-screen, i.e. the frame buffer is directly coupled to the display and its image data is visible. In principle, the frame buffer acts as buffer storage for the image data received via the input interface 410. The image data may also be accessed (by the original SoC 320) for reading via the input interface 410. The image data, i.e. the pixel-organized image data 425, buffered in the frame buffer 420 is read out by the pixel pipeline 430, which may manipulate image values corresponding to pixels of the display, if required. The manipulation operations operable with the pixel pipeline include for instance color lookup, gamma correction, flipping, rotating, and the like. The operation of the pixel pipeline 430 is controlled by the control data 435 received via the input interface 410 and supplied thereby. Finally, the display data is supplied to the display 440 for displaying to the user. The specific implementations of the frame buffer and the pixel pipeline are outside the scope of the present invention. It should be noted that the present invention is not limited to any specific implementations of the frame buffer and the pixel pipeline. Any embodiments thereof are merely illustrative and given for the sake of completeness.
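As a non-limiting illustration of how a pixel pipeline might read out a pixel-organized frame buffer and apply manipulations such as color lookup and flipping under the control data, consider the following C sketch. The single-byte-per-pixel frame buffer, the control structure and the line-by-line processing are simplifying assumptions made only for this sketch.

```c
/* Sketch of a pixel pipeline stage, under the assumptions stated above. */
#include <stdint.h>

enum { FB_W = 320, FB_H = 240 };        /* assumed frame buffer dimensions */

struct pipeline_ctrl {
    uint8_t gamma_lut[256];   /* colour / gamma lookup table                */
    int     flip_horizontal;  /* non-zero: mirror the line (flipping)       */
};

/* Read one display line out of the frame buffer, apply the manipulations
 * selected by the control data, and write the result to 'out'. */
void pipeline_read_line(const uint8_t fb[FB_H][FB_W], int line,
                        const struct pipeline_ctrl *ctrl, uint8_t out[FB_W])
{
    for (int x = 0; x < FB_W; ++x) {
        int src_x = ctrl->flip_horizontal ? (FB_W - 1 - x) : x;  /* flipping     */
        out[x] = ctrl->gamma_lut[fb[line][src_x]];               /* color lookup */
    }
}
```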
Now referring to Fig. 4, the additional SoC 330 according to an embodiment of the present invention is presented. Fig. 4 illustrates a schematic component diagram of components required to enable the inventive concept of the present invention. Those skilled in the art will appreciate that the additional SoC 330 comprises further components which are typical for a system-on-a-chip, such as a microprocessor, memory, universal asynchronous receiver-transmitters (UARTs), serial/parallel ports, direct memory access (DMA) controllers, and the like. The design of systems-on-a-chip is well known in the art. The designs of SoCs are typically carried out in consideration of the processing tasks to be operated by the SoCs, such that the designs differ from one another.
In accordance with the basic inventive concept illustrated in Fig. 3a, the illustration of Fig. 4 shows an original SoC 320 connected via the display interface 305 to the additional SoC 330, which is in turn connected via the display interface 305 to the display module 400, which is preferably a smart display module as described above with reference to Fig. 3b. The additional SoC 330 comprises, besides its typical components, an input interface 410, a frame buffer 420, and a pixel pipeline 430. The input interface 410 is adapted to operate as the display interface 305 and is interoperable with the display interface 305 of the original SoC 320. The display interface 305 allows for both image data and control data transmissions from the original SoC 320 to the additional SoC 330 and its input interface 410, respectively. Image data 415 received by the input interface 410 is supplied to a frame buffer 420, whereas control data 435 received is supplied to the pixel pipeline 430. The image data, i.e. the pixel-organized image data 425, buffered in the frame buffer 420 is read out by the pixel pipeline 430, which may manipulate image values corresponding to pixels of the display, if required. The manipulation operations operable with the pixel pipeline include for instance color lookup, gamma correction, flipping, rotating, and the like. The operation of the pixel pipeline 430 is controlled by the control data 435 received via the input interface 410 and supplied thereby. The image data buffered in the frame buffer 420 may also be accessed via the input interface 410 (for instance by the original SoC 320 or any other additional SoC 330 connected directly or indirectly via one or more additional SoCs interposed) for being read out. In parallel to the frame buffer 420 and the pixel pipeline 430, which are provided for receiving, buffering and processing image data received via the input interface 410 acting as display interface 305, the additional SoC 330 comprises one or more additional frame buffers 421 and one or more additional pixel pipelines 431. The additional frame buffers 421 and the additional pixel pipelines 431 are included for receiving, buffering and processing image data 416 originating from the additional SoC 330. The image data, i.e. the pixel-organized image data 426, buffered in the additional frame buffers 421 is read out by the additional pixel pipelines 431, which may manipulate image values corresponding to pixels of the display, if required. The operation of the additional pixel pipelines 431 is controlled by the control data 436 provided by the additional SoC 330 and supplied thereby. The functionality and operation of the additional frame buffers 421 and the additional pixel pipelines 431 are analogous to those described above with reference to the frame buffer 420 and the pixel pipeline 430. The control data 435 may also effect control over the additional pixel pipelines 431.
An overlaying and post-processing module 450 finally manages which pixel pipeline has to be read out for composing the final image to be displayed. This means that the overlaying and post-processing module can produce an image to be displayed originating from the pixel pipeline 430, from one of the additional pixel pipelines 431, or from any combination thereof. By considering any combination of the pixel pipelines 430 and 431, the overlaying and post-processing module enables an overlay image resulting from parts of the image data provided by the pixel pipeline 430 as well as by one or more additional pixel pipelines 431, which parts contribute to an overall composed image. The image data resulting from the overlaying and post-processing management operated by the module 450 is provided via an output interface 460, which acts as a display interface 305. The output display interface 460 of the additional SoC 330 may be connected to the input interface of a further additional SoC 330 comprising an analogous implementation, or to the display module 400 with its display interface 305, as depicted in the embodiment illustrated in Fig. 4. The pixel pipelines of the additional SoC 330, which comprise the pixel pipeline 430 and the pixel pipelines 431, and the overlaying and post-processing module 450 are preferably arranged in a display controller module 350 of the additional SoC 330. The display controller module 350, comprising the pixel pipelines 430 and 431 as well as the overlaying and post-processing module 450, is provided with inputs comprising an input for the pixel data 425 originating from the frame buffer 420 and terminating at the pixel pipeline 430, inputs for the pixel data 426 originating from the additional frame buffers 421 and terminating at the respective additional pixel pipelines 431, an input for control data 435 originating from the input interface 410, and an input for control data 436 provided by the additional SoC 330. In addition, a bypass such as the bypass module 500 illustrated exemplarily in Fig. 4 can serve to route the image data along a path bypassing the additional SoC 330, such that the image data can be supplied by the original SoC 320 to the display module 400 even in case the additional SoC 330 is completely out of operation.
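Purely for illustration, the following C sketch shows one possible per-pixel rule the overlaying and post-processing stage could apply when composing the final image from the pipeline fed via the input interface and the local pipelines. The colour-key convention and the priority order are assumptions of this sketch, not properties of the module 450 as disclosed.

```c
/* Sketch of per-pixel overlay composition, under the assumptions stated above. */
#include <stdint.h>
#include <stddef.h>

#define COLOR_KEY 0x00000000u        /* assumed "transparent" pixel value */

typedef uint32_t pixel_t;

/* Compose one output pixel: the highest-priority non-transparent local pipeline
 * wins, otherwise the pixel received from the upstream SoC is passed through. */
pixel_t compose_pixel(pixel_t upstream,
                      const pixel_t local[], size_t n_local_pipelines)
{
    for (size_t p = 0; p < n_local_pipelines; ++p)
        if (local[p] != COLOR_KEY)
            return local[p];         /* overlay from a local pixel pipeline */
    return upstream;                 /* fall back to the received content   */
}
```

In a hardware implementation such a decision would more likely be taken per pixel clock inside the display controller module 350 than in software; the sketch only illustrates the composition rule itself.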
As aforementioned, the proposed inventive concept provides for several advantages over the state of the art implementations. The advantages relate especially to display overlaying, display data access, power islands and integration effort. The visual content delivered by the original SoC, or by any additional SoC arranged logically before the additional SoC 330 in question (i.e. connected directly or indirectly to the input interface 410), to the additional SoC 330 in question may be employed as an overlay within a visual content rendered by the additional SoC 330 in question, and vice versa. The additional SoC 330 (in question) has access to all display data received by its input interface 410, i.e. from the original SoC or any additional SoC arranged logically before the additional SoC 330 in question. In case the original SoC 320 requires access to the display, it is possible either to power down the whole additional SoC 330, in which case the image data is routed via the bypass 500, or to power down the whole additional SoC 330 with the exception of the display control block including at least the input interface 410, the frame buffer 420, the pixel pipeline 430 and the output interface 460. The latter possibility does not require any bypass 500. The additional SoC 330 according to an embodiment of the present invention behaves, from a point of view external to the additional SoC, like a display module such as the display module 400 described above. The software application modules developed for being carried out on the original SoC 320 do not need any adaptation to the new architecture including one or more additional SoCs 330 according to an embodiment of the present invention arranged in series or in a stacked manner. Arrangement in series means that the output interface (such as interface 460) of an additional SoC is connectable to an input interface (such as interface 410) of a next additional SoC.
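The power-saving options described above may be pictured with the following C sketch. The grouping into two power islands, the register-like structure and the function name are hypothetical and serve only to illustrate the two alternatives (bypass with full power-down versus keeping the display control block alive).

```c
/* Sketch of power-island control, under the assumptions stated above. */
#include <stdbool.h>

enum power_state { POWER_ON, POWER_RETENTION, POWER_OFF };

struct power_islands {
    enum power_state cpu_island;       /* processor core(s), accelerators, etc.     */
    enum power_state display_island;   /* input interface, frame buffer, pixel
                                          pipeline and output interface             */
    bool             bypass_enabled;   /* route image data around the whole SoC     */
};

/* Only the original SoC needs the display: power the additional SoC down as far
 * as possible, either via the bypass or via the feed-through display path. */
void enter_display_passthrough(struct power_islands *pi, bool use_bypass)
{
    pi->cpu_island = POWER_OFF;                 /* local functionality not required */
    if (use_bypass) {
        pi->bypass_enabled = true;              /* original SoC -> display directly */
        pi->display_island = POWER_OFF;         /* whole SoC may be powered down    */
    } else {
        pi->bypass_enabled = false;
        pi->display_island = POWER_ON;          /* keep the display control block   */
    }
}
```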
With reference to Fig. 5, an operational sequence embodying the basic operation of the additional SoC 330 described above in detail with reference to Fig. 4 is illustrated.
The operation of the additional SoC 330 starts and one or more essentially parallel or time-shifted operational sequences are operated. The set of operations S100 to S130 relates to the handling of image data originating from the original SoC 320. Firstly, the image data is received via the input interface 410. Next, the received image data is buffered in the frame buffer 420 associated with the input interface 410. Then, the image data is read out by the pixel pipeline 430, which is also associated with the input interface 410 and the frame buffer 420. The processing of the pixel pipeline 430 is controlled at least by the control data 435 received via the input interface and supplied to the pixel pipeline 430 for controlling purposes. Substantially in parallel or shifted in time, the set of operations S200 to S230 is operated, which relates to the handling of image data provided by the additional SoC 330. Firstly, the image data is provided by the additional SoC 330 and next buffered in an additional frame buffer 421, dedicated to image data provided by the additional SoC 330. Then (in analogy to the handling of the image data received via the input interface 410), the image data is read out by the pixel pipeline 431, which is also dedicated to image data provided by the additional SoC 330. The processing of the pixel pipeline 431 is controlled at least by the control data 436, also provided by the additional SoC 330 and supplied to the pixel pipeline 431 for controlling purposes. Additional sets of operations S300 to S330, in analogy to the operations S200 to S230, may be carried out for each additional frame buffer 421 and additional pixel pipeline 431 implemented in the additional SoC 330 and used for content visualization by the additional SoC 330. As described above, the frame buffer is a display pixel-orientated data storage, which preferably stores a pixel value of pre-definable representation and maximal co-domain for each display pixel. The size of the frame buffer may correspond to the pixel size of the display or may exceed the pixel size of the display. The pixel manipulation enables the controllable manipulation of each pixel, if required and/or desired, including color lookup, gamma correction, flipping, rotation, scaling, trimming, and the like.
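For illustration only, the following C sketch mirrors the two sets of operations on a per-frame basis. The data types and function names are assumptions, and the two paths are shown sequentially, although, as stated above, they may be operated essentially simultaneously or with a time shift.

```c
/* Sketch of the per-frame handling of received and locally provided image data. */
#include <stdint.h>
#include <stddef.h>

enum { NPIX = 320 * 240 };
typedef uint32_t pixel_t;

/* One processing path: a frame buffer plus the output of its pixel pipeline. */
struct path {
    pixel_t frame_buffer[NPIX];
    pixel_t pipeline_out[NPIX];
};

static void buffer_frame(struct path *p, const pixel_t *src)
{
    for (size_t i = 0; i < (size_t)NPIX; ++i)
        p->frame_buffer[i] = src[i];                           /* buffering stage */
}

static void run_pipeline(struct path *p, pixel_t (*manipulate)(pixel_t))
{
    for (size_t i = 0; i < (size_t)NPIX; ++i)
        p->pipeline_out[i] = manipulate(p->frame_buffer[i]);   /* read-out and manipulation */
}

/* One frame period: service the path for received data (S100 to S130) and the
 * path for locally provided data (S200 to S230) before consolidation (S400). */
void service_frame(struct path *received, const pixel_t *rx_data, pixel_t (*rx_manip)(pixel_t),
                   struct path *provided, const pixel_t *own_data, pixel_t (*own_manip)(pixel_t))
{
    buffer_frame(received, rx_data);
    run_pipeline(received, rx_manip);
    buffer_frame(provided, own_data);
    run_pipeline(provided, own_manip);
}
```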
The content visualization of each pixel pipeline used for visualization is then consolidated in the operation S400, where the overlaying and post-processing management is operated by means of the overlaying and post-processing module 450. The overlaying and post-processing management provides decision logic determining which pixel pipelines (430, 431) have to be at least partially read out to generate a final common visualization from the individual visualization content provided by the pixel pipelines (430, 431) on the basis of the image data handled thereby. Finally, the consolidated image data representing the final common content visualization is provided to the output interface (460) in the operation S410 and can be supplied to the display module 400 for reproduction.
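A minimal sketch of such decision logic, assuming a hypothetical per-pipeline descriptor with an enable flag and a destination window, is given below in C; the selection rule is illustrative only and not part of the disclosed module 450.

```c
/* Sketch of the pipeline-selection step of the overlay management. */
#include <stdbool.h>
#include <stddef.h>

struct rect { int x, y, w, h; };

struct pipeline_desc {
    bool        enabled;       /* pipeline currently contributes visual content */
    struct rect window;        /* display region this pipeline draws into       */
};

/* Mark which pipelines must be read out to compose the common visual content:
 * a pipeline is read out if it is enabled and its window is non-empty. */
size_t select_pipelines(const struct pipeline_desc desc[], size_t n, bool read_out[])
{
    size_t selected = 0;
    for (size_t i = 0; i < n; ++i) {
        read_out[i] = desc[i].enabled && desc[i].window.w > 0 && desc[i].window.h > 0;
        if (read_out[i])
            ++selected;
    }
    return selected;
}
```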
The operational sequence is then completed. Those skilled in the art will appreciate that the operational sequence may be at least partially repeated whenever new image data is received via the input interface 410 and/or provided by the additional SoC 330 to the additional frame buffers 421.
Several features and aspects of the present invention have been illustrated and described in detail with reference to particular embodiments by way of example only, and not by way of limitation. Those skilled in the art will appreciate that alternative implementations and various modifications to the disclosed embodiments are within the scope and contemplation of the invention. Therefore, it is intended that the invention be considered as limited only by the scope of the appended claims.

Claims

1. System for enabling multi-processor implementation having access to a common display for displaying common visual content, comprising:
- an original processing module (320) provided with an output display interface (305), wherein the original processing module (320) is operable with at least one software application module (410 to 419) generating image data provided by the output display interface (305), the image data being intended for display (400);
- an additional processing module (330) including at least an input interface (410) adapted for receiving image data from the output display interface (305) of the original processing module (320) and an output interface (460) adapted to output image data intended for display and connectable to a display interface (305); and
- a display module (400) provided with a display interface (305) connectable to the additional processing module (330),
wherein the additional processing module (330) is operable to provide image data representing the common visual content at the output interface (460); and
wherein the common visual content is obtainable from image data received via the input interface (410), image data originating from the additional processing module (330) and/or any combination of image data thereof.
2. System according to claim 1, wherein the additional processing module (330) includes a display controller module (350), which is operable to consolidate the image data received via the input interface (410) and image data originating from the additional processing module (330).
3. System according to claim 1 or claim 2, wherein the additional processing module (330) includes a frame buffer (420), which buffers the image data received via the input interface (410), and one or more additional frame buffers (421) dedicated for storing image data originating from the additional processing module (330).
4. System according to claim 3, wherein the additional processing module (330) includes one or more pixel pipelines (430, 431) each associated with the respectively corresponding one or more frame buffers (420, 421), wherein the pixel pipelines (430, 431) are adapted to read out pixel data from the frame buffers (420, 421) and operable to manipulate the pixel data for each pixel.
5. System according to claim 4, wherein the additional processing module (330) includes a post-processing module (450), which is operable for consolidating the pixel data resulting from the pixel pipelines (430, 431).
6. System according to claim 2 or claim 4, wherein the consolidation includes overlaying of visual content delivered by one or more of the pixel pipelines, resulting in a common image representation to be displayed.
7. System according to any one of the preceding claims, comprising several additional processing modules (330), wherein one of the additional processing modules (330) is connectable via its input interface (410) to the display interface (305) of the original processing module (320), wherein another one of the additional processing modules (330) is connectable via its output interface (460) to the display interface (305) of the display module (400), and wherein the remaining additional processing modules (330) are interposed between the original processing module (320) and the display module (400) and are connectable in series via the input interfaces (410) and the output interfaces (460).
8. System according to any one of the preceding claims, wherein the original processing module (320) and/or the additional processing module (330) are systems on a chip (SoCs).
9. System according to any one of the preceding claims, wherein the additional processing module (330) is dedicated for image processing tasks.
10. Processing module (330) connectable between an original processing module (320) and a display module (400) for enabling multi-processor implementation having access to a common display for displaying common visual content, comprising:
- at least an input interface (410) adapted for receiving image data from the output display interface (305) of the original processing module (320); and
- an output interface (460) adapted to output image data intended for being displayed and connectable to a display interface (305),
wherein the processing module (330) is operable to provide image data representing a common visual content at the output interface (460); and
wherein the common visual content is obtainable from image data received via the input interface (410), image data originating from the processing module (330) itself and/or any combination of image data thereof.
11. Processing module according to claim 10, including a display controller module (350), which is operable to consolidate the image data received via the input interface (410) and image data originating from the processing module (330).
12. Processing module according to claim 10 or claim 11, including: a frame buffer (420) adapted to buffer the image data received via the input interface (410); and one or more additional frame buffers (421) adapted to buffer image data originating from the processing module (330).
13. Processing module according to claim 12, including: one or more pixel pipelines (430, 431) each associated with the respectively corresponding one or more frame buffers (420, 421), wherein the pixel pipelines (430, 431) are adapted to read out pixel data from the frame buffers (420, 421) and operable to manipulate the pixel data for each pixel.
14. Processing module according to claim 13, including: a post-processing module (450), which is operable for consolidating the pixel data resulting from the pixel pipelines (430, 431).
15. Processing module according to claim 11 or claim 14, wherein the display controller module (350) is adapted for overlaying visual content delivered by one or more of the pixel pipelines, resulting in a common image representation to be displayed.
16. Processing module according to any one of the preceding claims, wherein the processing module (330) is connectable in series via its input interface (410) and its output interface (460) to further processing modules (330).
17. Processing module according to any one of the preceding claims, wherein the processing module (330) and/or the original processing module (320) are systems on a chip (SoCs).
18. Processing module according to any one of the preceding claims, wherein the processing module (330) is dedicated for image processing tasks.
19. Processing device enabled for multi-processor implementation having access to a common display for displaying common visual content, including:
an original processing module (320) provided with an output display interface (305), wherein the original processing module (320) is operable with at least one software application module (410 to 419) generating image data provided by the output display interface (305), the image data being intended for being displayed (400);
an additional processing module (330) including at least an input interface (410) adapted for receiving image data from the output display interface (305) of the original processing module (320) and an output interface (460) adapted to output image data intended for being displayed and connectable to a display interface (305); and
a display module (400) provided with a display interface (305) connectable to the additional processing module (330),
wherein the additional processing module (330) is operable to provide image data representing the common visual content at the output interface (460); and
wherein the common visual content is obtainable from image data received via the input interface (410), image data originating from the additional processing module (330) and/or any combination of image data thereof.
20. Processing device according to claim 19, wherein the additional processing module (330) is a processing module according to any one of claims 10 to 19.
21. Method of enabling multi-processor implementation having access to a common display for displaying a common visual content, comprising:
receiving image data via an input interface (410) of a processing module (330);
providing image data by the processing module (330);
consolidating the received image data and the provided image data to obtain a common visual content, wherein the common visual content is obtainable from the received image data, the provided image data and/or any combination thereof; and
providing the consolidated image data representing the common visual content via an output interface (460), wherein the consolidated image data is intended to be displayed by a display module (400).
22. Method according to claim 21, including: buffering the received image data in a frame buffer (420); reading out the received image data from the frame buffer (420) by a pixel pipeline (430); and manipulating the received image data by means of the pixel pipeline (430) and in accordance with control data (435) received via the input interface (410).
23. Method according to claim 21 or claim 22, including: buffering the provided image data in one or more frame buffers (421); reading out the provided image data from the one or more frame buffers (421) by one or more pixel pipelines (431); and manipulating the provided image data by means of the pixel pipelines (431) and in accordance with control data (436) provided by the processing module (330).
24. Method according to claim 22 or claim 23, wherein the operations relating to the received image data and the operations relating to the provided image data are operable essentially simultaneously or with a time shift.
25. Method according to any one of claims 21 to 23, including: obtaining the image data for consolidating from pixel pipelines (430, 431), wherein the consolidation includes deciding which pixel pipelines (430, 431) have to be at least partially read out to generate the common visual content from the individual visual content provided by the pixel pipelines (430, 431) on the basis of the image data handled thereby.
PCT/IB2005/000740 2005-03-22 2005-03-22 Processing device, system and method for multi-processor implementation with common content visualization WO2006100532A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/IB2005/000740 WO2006100532A1 (en) 2005-03-22 2005-03-22 Processing device, system and method for multi-processor implementation with common content visualization
CNA200580049219XA CN101147120A (en) 2005-03-22 2005-03-22 Processing device, system and method for multi-processor implementation with common content visualization
EP05718242A EP1861773A1 (en) 2005-03-22 2005-03-22 Processing device, system and method for multi-processor implementation with common content visualization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IB2005/000740 WO2006100532A1 (en) 2005-03-22 2005-03-22 Processing device, system and method for multi-processor implementation with common content visualization

Publications (1)

Publication Number Publication Date
WO2006100532A1 true WO2006100532A1 (en) 2006-09-28

Family

ID=37023398

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2005/000740 WO2006100532A1 (en) 2005-03-22 2005-03-22 Processing device, system and method for multi-processor implementation with common content visualization

Country Status (3)

Country Link
EP (1) EP1861773A1 (en)
CN (1) CN101147120A (en)
WO (1) WO2006100532A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11200192B2 * 2015-02-13 2021-12-14 Amazon Technologies, Inc. Multi-mode system on a chip

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0054187A2 (en) * 1980-12-15 1982-06-23 Texas Instruments Incorporated Multiple digital processor system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5877741A (en) * 1995-06-07 1999-03-02 Seiko Epson Corporation System and method for implementing an overlay pathway

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0054187A2 (en) * 1980-12-15 1982-06-23 Texas Instruments Incorporated Multiple digital processor system

Also Published As

Publication number Publication date
CN101147120A (en) 2008-03-19
EP1861773A1 (en) 2007-12-05

Similar Documents

Publication Publication Date Title
US11711623B2 (en) Video stream processing method, device, terminal device, and computer-readable storage medium
CN102007698B (en) Network interface device with shared antenna
US20080119178A1 (en) Allocating Compression-Based Firmware Over the Air
US9716830B2 (en) Image signal processing device performing image signal processing through plural channels
US20080146260A1 (en) Voice data RF disk drive IC
US20100087147A1 (en) Method and System for Input/Output Pads in a Mobile Multimedia Processor
EP1998455B1 (en) Multi-mode IC with multiple processing cores
US20080098211A1 (en) Reconfigurable integrated circuit, circuit reconfiguration method and circuit reconfiguration apparatus
US20100023654A1 (en) Method and system for input/output pads in a mobile multimedia processor
US20060035663A1 (en) Mobile telephone system with media processor
US20090213242A1 (en) Image capture module and applications thereof
WO2006100532A1 (en) Processing device, system and method for multi-processor implementation with common content visualization
CN116074623B (en) Resolution selecting method and device for camera
CN109582511B (en) Controller generation method and device and storage medium
KR100731969B1 (en) Method and apparatus for sharing memory through a plurality of routes
CN1828665B (en) Method and system for information processing in a communication apparatus
US9058668B2 (en) Method and system for inserting software processing in a hardware image sensor pipeline
CN116009763A (en) Storage method, device, equipment and storage medium
CN113849194A (en) Burning method and terminal equipment
KR100529786B1 (en) Wireless communication terminal for providing section display
US20070192565A1 (en) Semiconductor device and mobile phone using the same
EP1202539A2 (en) Display controller for radio communication terminal
JP2008136184A (en) Reconfigurable integrated circuit, circuit reconfiguration method and circuit reconfiguration apparatus
CN100399783C (en) Communication terminal with TV telephone function
CN117251126A (en) Display and control method of folding screen electronic equipment and folding screen electronic equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2005718242

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 200580049219.X

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE

NENP Non-entry into the national phase

Ref country code: RU

WWW Wipo information: withdrawn in national office

Country of ref document: RU

WWP Wipo information: published in national office

Ref document number: 2005718242

Country of ref document: EP