WO2003036465A2 - Efficient communication method and system - Google Patents

Efficient communication method and system Download PDF

Info

Publication number
WO2003036465A2
Authority
WO
WIPO (PCT)
Prior art keywords
streaming
component
control
processor
application
Prior art date
Application number
PCT/IB2002/004322
Other languages
French (fr)
Other versions
WO2003036465A3 (en)
Inventor
Egidius G. P. Van Doren
Hendrikus C. W. Van Heesch
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Priority to AU2002341302A priority Critical patent/AU2002341302A1/en
Priority to EP02775113A priority patent/EP1446719A2/en
Priority to KR10-2004-7006120A priority patent/KR20040044557A/en
Priority to JP2003538887A priority patent/JP2005506629A/en
Publication of WO2003036465A2 publication Critical patent/WO2003036465A2/en
Publication of WO2003036465A3 publication Critical patent/WO2003036465A3/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/54 Interprogram communication
    • G06F9/544 Buffers; Shared memory; Pipes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/30 Definitions, standards or architectural aspects of layered protocol stacks
    • H04L69/32 Architecture of open systems interconnection [OSI] 7-layer type protocol stacks, e.g. the interfaces between the data link level and the physical level
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/54 Interprogram communication
    • G06F9/547 Remote procedure calls [RPC]; Web services

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Communication Control (AREA)

Abstract

A communication mechanism for use between an application and a streaming component, which has on the streaming side a passive interface that can be used to check (poll) whether control commands are pending. The streaming algorithm checks the queue at those points in time at which it makes sense to execute control. The main advantage is that, in the case that the application runs in another process than the streaming component, an activation of an RPC task is saved. In the case that the application runs on another processor than the streaming component, and there is shared memory, an additional advantage is that the streaming processor does not need to be interrupted when control is performed.

Description

Efficient communication method and system
For systems such as Internet audio, digital TV, set-top boxes, and time-shift recording, data processing is used. The input data (from disk, Internet, satellite, etc.) is processed in several steps and finally rendered on a display or loudspeaker. The trend is that more and more of this data processing is done in software. The data processing in software is based on a graph of connected processing nodes. The nodes do the actual processing and when a packet is processed it is passed to the next node in the chain.
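Such a graph of processing nodes can be pictured with a short C++ sketch; the class and member names (Packet, ProcessingNode, handle, process) are assumptions made only for illustration and do not appear in the patent. Each node performs its own processing step and then passes the packet to its successor in the chain.
    #include <vector>
    // A packet of streaming data (illustrative only).
    struct Packet { std::vector<unsigned char> data; };
    // One node in the processing graph: it processes a packet and then
    // hands the packet to the next node in the chain.
    class ProcessingNode {
    public:
        explicit ProcessingNode(ProcessingNode* next = nullptr) : next_(next) {}
        virtual ~ProcessingNode() {}
        void handle(Packet& p) {
            process(p);                    // the actual processing step
            if (next_) next_->handle(p);   // pass the packet on to the next node
        }
    protected:
        virtual void process(Packet& p) = 0;
    private:
        ProcessingNode* next_;
    };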
The processing chain has to be controlled. Initially it has to be created, and during runtime the components in the chain may need to be reconfigured due to interaction with the user or due to changes in the data stream. This control code is called the application. The application translates input from the user/data stream to a command to set a parameter of a streaming component. The streaming component(s) perform data processing according to the settings given by the application. The runtime characteristics of the application and the streaming components are different and in general it holds that streaming components have more real-time constraints. As a result the application and the streaming components will run on different threads/processes/processors and a communication mechanism is needed for the interaction between the application and the streaming components.
Control issued by the application should not disturb the real-time characteristics of the streaming components (e.g. by blocking them or causing priority inversion). The standard solution is that the application and the streaming components are decoupled by an OS primitive, such as a queue, a Remote Procedure Call (RPC), or a semaphore-protected shared variable. From now on the term decouple queue will be used to denote such a decoupling mechanism. The interaction is as follows:
- The application writes into or reads from the decouple queue independently of the streaming task (e.g. using a different execution context).
- The streaming component reads out the decouple queue at specific points in the processing of the streaming data. It is algorithm dependent at which points reconfigurations can be made. Typical examples are just before or after data communication, or at the start or end of the processing loop. It is therefore not useful (or in some cases even erroneous) to take the contents of the decouple queue into account before such a specific point is reached (i.e. directly when the control is issued by the application).
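As an illustration of this interaction, the following C++ fragment is a minimal sketch of a mutex-protected decouple queue; it is not the patent's implementation, and the names DecoupleQueue, Command and try_pop are assumptions. The application pushes commands at any time, while the streaming task drains the queue only at the points in its algorithm where reconfiguration is allowed.
    #include <deque>
    #include <functional>
    #include <mutex>
    using Command = std::function<void()>;     // one pending control command
    // Decouple queue: written by the application, polled by the streaming task.
    class DecoupleQueue {
    public:
        void push(Command c) {                 // application side, callable at any time
            std::lock_guard<std::mutex> g(m_);
            q_.push_back(std::move(c));
        }
        bool try_pop(Command& out) {           // streaming side, at safe points only
            std::lock_guard<std::mutex> g(m_);
            if (q_.empty()) return false;
            out = std::move(q_.front());
            q_.pop_front();
            return true;
        }
    private:
        std::mutex m_;
        std::deque<Command> q_;
    };
    // Streaming loop: pending commands are taken into account only here,
    // just before fetching the next block of streaming data.
    void streaming_loop(DecoupleQueue& queue) {
        for (;;) {
            Command c;
            while (queue.try_pop(c)) c();      // execute pending reconfigurations
            // ... fetch and process the next packet of streaming data ...
        }
    }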
A major disadvantage of using a generic RPC mechanism to cross a processor boundary to control a streaming component is that an RPC task has to be activated to put a message in the decouple queue of the streaming component. Activating an RPC task (with a high priority to get a fast response) has the disadvantage that a streaming task may be preempted. As a result, the data and instruction caches of the processor are partially flushed. This degrades the performance of the streaming components, which are optimized for cache usage.
According to the invention, the streaming component is split into two parts:
1. A control part (top): This part runs in the execution context of the application.
2. A streaming part (bottom): This runs in the execution context of the streaming process.
Although both parts are separated, they are both specific to each streaming component. This separation makes it possible to use a communication channel, shared across processes/processors, that is more efficient than a standard RPC mechanism. From the functional perspective of the application, it looks as if the streaming component runs in the same execution context as the application (as is the case for proxies in RPC).
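A minimal C++ sketch of this split (all class names are assumptions made for illustration): the control part offers the component's control functions in the application's execution context, the streaming part owns the processing loop in the streaming execution context, and the two share only a communication channel.
    class CommandChannel;   // the channel shared by the two parts (assumed)
    // Control part ("top"): runs in the execution context of the application.
    class EqualizerControl {
    public:
        explicit EqualizerControl(CommandChannel* channel) : channel_(channel) {}
        void SetBassLevel(float level);    // forwards the request over the channel
    private:
        CommandChannel* channel_;
    };
    // Streaming part ("bottom"): runs in the execution context of the streaming algorithm.
    class EqualizerStreaming {
    public:
        explicit EqualizerStreaming(CommandChannel* channel) : channel_(channel) {}
        void run();                        // processing loop; polls the channel at safe points
    private:
        CommandChannel* channel_;
    };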
These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments shown in the drawings, in which:
Fig. 1 schematically shows an example data processing system arranged for processing streaming music data in the MP3 format;
Fig. 2 illustrates the basic mechanism for Remote Procedure Calls (RPC);
Fig. 3 illustrates an RPC mechanism being used for controlling streaming components;
Fig. 4 shows how control can be done more efficiently by using a private communication channel for each component in addition to the mechanism of Fig. 2;
Fig. 5 depicts the difference between a "traditional" streaming component and a streaming component operating in accordance with the invention; and
Fig. 6 provides a legend for symbols used in Figs. 2, 3, 4 and 5.
Throughout the Figures, same reference numerals indicate similar or corresponding features. Some of the features indicated in the drawing are typically implemented in software, and as such represent software entities, such as software modules or objects.
Fig. 1 schematically shows an example data processing system 100 arranged for processing streaming music data. An input component 101 receives streaming data, which is in the well-known MP3 format (US 5,579,430). An MP3 decoding component 102 decodes this streaming data to obtain music data, and feeds this to an equalizer component 103. After equalizing, the data is fed to an output component 104, which renders it, e.g. by playing the music data on a loudspeaker. Of course music data in other formats, video data in any format or any other data could easily be substituted for MP3 music. The streaming components 101 - 104 perform their data processing according to settings or parameters given by an application 105, which may change due to interaction with the user or due to changes in the data stream. The runtime characteristics of the application 105 and the streaming components 101 - 104 are different and in general it holds that streaming components have more real-time constraints. As a result the application 105 and the streaming components 101 - 104 will run on different threads/processes/processors and a communication mechanism is needed for the interaction between the application 105 and the streaming components 101 - 104.
Fig. 2 illustrates the basic mechanism for Remote Procedure Calls (RPC).
RPCs normally handle calling functions in another process/processor. In general an RPC-call has the following stages:
1. A client 200 calls a proxy 201 (a local representative for a remote service).
2. The proxy 201 marshals (packs) the arguments in a packet along with the function/method ID, and then adds the packet to a communication channel, such as a queue 210.
3. The communication channel 210 transfers the marshaled data to the other process 220 or processor (the server).
4. In the other process 220, a stub 221 is notified of new packets and unmarshals (unpacks) the arguments and function/method ID.
5. The stub 221 calls the actual function 222 of the service with the unmarshaled arguments.
6. The method executes and returns its return value and arguments to the stub 221.
7. The stub 221 marshals the return arguments and puts a return packet in the communication channel 210.
8. The communication channel 210 transfers the packet to the client process/processor 200.
9. The proxy 201 is notified and unmarshals the return value and arguments.
10. Return value and arguments are returned to the original caller (the client).
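The stages above can be condensed into a small C++ sketch of such a generic RPC mechanism; the packet layout, the function ID value and the helper names are assumptions made for illustration, using a SetBassLevel(float level) control function as the example call (this function reappears in the embodiment described below).
    #include <cstring>
    #include <vector>
    // A marshaled request: a function/method ID followed by the packed arguments.
    struct Packet {
        int function_id;
        std::vector<unsigned char> args;
    };
    // The remote service that the stub eventually calls (step 5).
    class Equalizer {
    public:
        void SetBassLevel(float level) { bass_ = level; }
    private:
        float bass_ = 0.0f;
    };
    // Proxy side (steps 1-2): pack the call into a packet for the communication channel.
    Packet marshal_set_bass_level(float level) {
        Packet p;
        p.function_id = 1;                                // ID assumed for SetBassLevel
        p.args.resize(sizeof(level));
        std::memcpy(p.args.data(), &level, sizeof(level));
        return p;
    }
    // Stub side (steps 4-5): unpack the packet and call the actual function.
    void dispatch(const Packet& p, Equalizer& eq) {
        if (p.function_id == 1) {
            float level;
            std::memcpy(&level, p.args.data(), sizeof(level));
            eq.SetBassLevel(level);
        }
    }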
In Fig. 2, the processes 200, 220 that communicate via RPC are shown as separated by a processor boundary, indicated with a dashed line. This boundary indicates that communication takes place from one processor to another. The two processes 200, 220 might be in two entirely different computer systems, connected via a network, but might also be in one single computer system. The processor boundary could also be virtual; a single processor could switch between the tasks for the client and for the server. The RPC mechanism has a pool with one or more tasks that are used to call the functions on the remote processor via the stub. The proxy is a local representation of the remote functions. For the caller it looks like the functions are local (thus providing location transparency).
The streaming system of Fig. 1 could use RPC to allow communication between application 105 and streaming components 101 - 104. For example, suppose that due to a user event the settings of the equalizer change. As a result the application code 105 calls a control function of the equalizer component 103, for instance
SetBassLevel(float level). For the component there is a local proxy providing this function. The proxy marshals the function ID and the function argument level into a packet and sends that to the streaming processor on which the equalizer component 103 runs.
On the streaming processor an interrupt awakens a worker task of the RPC mechanism, which fetches the packet from the communication channel, unmarshals it and calls the SetBassLevel function of the actual component. The equalizer component implements the SetBassLevel function by putting a message in its decouple queue. Just before the equalizer component fetches more audio data it checks the command queue, finds a pending message and calls the corresponding handler. This handler sets the new bass level, after which the audio streaming is processed using the new equalizer settings.
When a generic RPC mechanism is used for controlling streaming components, the situation as shown in Fig. 3 occurs. After a control command is marshaled and put in the command queue 210, an interrupt is generated on the streaming processor, which triggers an ISR (Interrupt Service Routine). The routine activates a task of the RPC for handling the function call. The function of the actual streaming component 322 is called which puts a message in the decouple queue. The streaming component 322 checks at certain points in its algorithm whether there is a message, and if so, it is executed. A major disadvantage of using a generic RPC mechanism with streaming is that an RPC task has to be activated to put a message in the decouple queue of the streaming component 322. Activating an RPC task (with a high priority to get a fast response) has the disadvantage that a streaming task is pre-empted. As a result, the data and instruction caches are partially flushed. This degrades the performance of the streaming components, which are optimized for cache usage.
Fig. 4 shows two communication channels: a command queue 410 and a conventional RPC mechanism 411 that are both used for control. A shared variable or other mechanism could also be used as communication channel. The command queue mechanism 410 is used for runtime control and reduces the number of context-switches and interruptions on the streaming processor, which makes things more efficient. The RPC mechanism 411 is an active channel, i.e. it initiates communication by itself. The command queue 410 is a passive channel, i.e. it requires activity of the streaming component task to check it and can therefore only be used when the streaming component is running. Creating, destroying, starting, and stopping a streaming component still requires an active communication channel like the conventional RPC mechanism. Fortunately, these commands typically occur with a very low frequency.
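As an illustrative sketch of this division of work (all names and function IDs below are assumptions): infrequent lifecycle commands go through the active RPC channel, whereas frequent runtime parameter changes are written directly into the passive command queue and are picked up the next time the streaming task polls it.
    // Two channels per component (interfaces and IDs are assumptions for this sketch).
    struct RpcChannel   { void call(int /*function_id*/) { /* marshal and interrupt the remote side */ } };
    struct CommandQueue { void push(int /*function_id*/, float /*arg*/) { /* enqueue without interrupting */ } };
    struct ComponentControl {
        RpcChannel*   rpc;     // active channel: create/destroy/start/stop
        CommandQueue* queue;   // passive channel: runtime control
        void Start()                   { rpc->call(0 /* START */); }             // rare, may interrupt
        void SetBassLevel(float level) { queue->push(1 /* SET_BASS */, level); } // frequent, polled later
    };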
In the present invention the streaming component is split into two parts:
1. A control part T (top): This part runs in the execution context of the application.
2. A streaming part B (bottom): This runs in the execution context of the streaming algorithm.
Although both parts are separated, they are both specific to each streaming component. This separation makes it possible to use a communication channel, shared across processes/processors, that is more efficient than a standard RPC mechanism. From the functional perspective of the application, it looks as if the streaming component runs in the same execution context as the application (as was the case for proxies in RPC).
Consider again the streaming system of Fig. 1. If the settings of the equalizer component 103 change as mentioned before, the application code 105 calls the SetBassLevel(float level) function of the component 103. For the component 103 there is a local part (the top) providing this function. The top part T marshals the function ID and the function argument level into a message and sends that directly to the command queue 410 of the streaming component 322 whose bottom part B runs on the streaming processor. Just before the streaming component 322 fetches more audio data it checks the command queue 410, finds a pending message and calls the corresponding handler. This handler sets the new bass level, after which the audio streaming is processed using the new equalizer settings.
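Put together, a minimal C++ sketch of this flow could look as follows; only SetBassLevel(float level) is taken from the description, while the queue, the message layout and the class names are assumptions. The top part turns the call into a message and writes it straight into the command queue, and the bottom part drains the queue just before it fetches the next block of audio.
    #include <deque>
    #include <mutex>
    struct Message { int function_id; float arg; };
    class CommandQueue {                       // the passive channel 410 of Fig. 4
    public:
        void push(const Message& m) {
            std::lock_guard<std::mutex> g(m_);
            q_.push_back(m);
        }
        bool try_pop(Message& out) {
            std::lock_guard<std::mutex> g(m_);
            if (q_.empty()) return false;
            out = q_.front();
            q_.pop_front();
            return true;
        }
    private:
        std::mutex m_;
        std::deque<Message> q_;
    };
    // Top part T: runs in the application's execution context.
    struct EqualizerTop {
        CommandQueue* queue;
        void SetBassLevel(float level) { queue->push({ 1 /* SET_BASS, assumed ID */, level }); }
    };
    // Bottom part B: runs in the streaming execution context.
    struct EqualizerBottom {
        CommandQueue* queue;
        float bass = 0.0f;
        void process_next_block() {
            Message m;
            while (queue->try_pop(m))          // checked just before fetching audio data
                if (m.function_id == 1) bass = m.arg;
            // ... fetch and equalize the next block of audio data ...
        }
    };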
The difference between a "traditional" streaming component and the new situation for a component is depicted in Fig. 5. The communication mechanisms are factored out and the implementation can be instantiated depending on the situation. Examples:
- The RPC mechanism can be reduced to an ordinary function call in the case that the application and streaming component are in the same process.
- A shared variable 501 (e.g. a register) can be used.
- The command queue can simply be implemented by a decouple queue in the case that the application and streaming component are on the same processor (but in different threads or processes). In the case that the command queue crosses a processor boundary, the command queue could be implemented using shared memory.
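One possible realization of the processor-boundary case, given purely as an assumption for illustration, is a single-producer/single-consumer ring buffer placed in shared memory: the control processor writes messages, the streaming processor polls them at its safe points, and no interrupt is needed. A real implementation would also need explicit memory barriers, which are only hinted at in the comments below.
    #include <cstdint>
    // Single-producer/single-consumer ring buffer placed in shared memory (sketch).
    // The control processor only writes 'head', the streaming processor only writes
    // 'tail'; a real implementation needs memory barriers when publishing the indices.
    struct SharedCommandQueue {
        struct Message { uint32_t function_id; float arg; };
        static const uint32_t kSize = 16;
        volatile uint32_t head = 0;    // written by the control processor
        volatile uint32_t tail = 0;    // written by the streaming processor
        Message slots[kSize];
        bool push(const Message& m) {  // control-processor side
            uint32_t next = (head + 1) % kSize;
            if (next == tail) return false;          // queue full
            slots[head] = m;
            head = next;                             // publish (barrier assumed here)
            return true;
        }
        bool try_pop(Message& out) {   // streaming-processor side, polled at safe points
            if (tail == head) return false;          // queue empty
            out = slots[tail];
            tail = (tail + 1) % kSize;
            return true;
        }
    };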
The use of passive communication channels for controlling streaming components in a multi-process/processor system has at least the following advantages:
1. The application thread can immediately write into the passive communication channel independently of whether the control and streaming part are separated by a thread/process/processor boundary.
2. No thread has to be activated in the streaming context to handle the control. The streaming algorithm runs independently and checks at its own defined points whether control is present. So in case of a multiprocessor system this prevents the need to interrupt the streaming processor, which would hurt performance.
3. The response to control is faster since no intermediate RPC task is needed.
4. In the traditional approach it is difficult to assign the right priorities to the RPC tasks in the task pool. In the proposed approach, the runtime commands are automatically handled at the same priority as the streaming component (the ideal situation).
5. Code on a Very Long Instruction Word (VLIW) processor (sometimes used as a streaming processor, such as the Philips TriMedia IC) is more expensive in text size than code on a standard RISC processor (often used as a control processor, such as MIPS or ARM chips). By moving the code of the control part of the component from such a streaming processor to such a control processor, the text size is reduced.
6. Control code (which contains many branches) executes less efficiently on a VLIW processor than on a RISC processor, since a VLIW cannot exploit instruction-level parallelism for such code. By moving the code of the control part of the component from such a streaming processor to such a control processor, the relative performance increases.
7. Subsystems consisting of multiple streaming components have a similar separation between control and streaming. The control code, which is the main added value of a subsystem, runs on the control processor. So the advantages of code text size and relative execution performance also hold for subsystems.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims.
In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps other than those listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer.
In the system claim enumerating several components, several of these components can be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

Claims

CLAIMS:
1. A communication method between a control component and a streaming component, the streaming component having a passive interface for polling whether control commands are pending, which interface is polled at those points in time at which it makes sense to execute a control command.
2. The method of claim 1, in which the interface is polled just before the streaming component fetches data.
3. The method of claim 1, in which the passive interface comprises a command queue.
4. The method of claim 3, in which the passive interface comprises a decouple queue.
5. The method of claim 1, in which the passive interface comprises a shared variable.
6. The method of claim 1, in which the streaming component comprises a control part running in a first execution context, and a streaming part running in a second execution context.
7. The method of claim 6, in which the first execution context is the context of the control component.
8. A computer system arranged for streaming data transmission, comprising a control component and a streaming component, the streaming component having a passive interface for polling whether control commands are pending, which interface is polled at those points in time at which it makes sense to execute a control command.
9. The computer system of claim 8, wherein the control component is a software module.
10. The computer system of claim 8 or 9, wherein the streaming component is a software module.
11. The system of claim 8, in which the interface is polled just before the streaming component fetches data.
12. The system of claim 8, in which the streaming component comprises a control part running in a first execution context, and a streaming part running in a second execution context.
PCT/IB2002/004322 2001-10-24 2002-10-17 Efficient communication method and system WO2003036465A2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
AU2002341302A AU2002341302A1 (en) 2001-10-24 2002-10-17 Efficient communication method and system
EP02775113A EP1446719A2 (en) 2001-10-24 2002-10-17 Efficient communication method and system
KR10-2004-7006120A KR20040044557A (en) 2001-10-24 2002-10-17 Efficient communication method and system
JP2003538887A JP2005506629A (en) 2001-10-24 2002-10-17 Efficient communication method and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP01204039.0 2001-10-24
EP01204039 2001-10-24

Publications (2)

Publication Number Publication Date
WO2003036465A2 true WO2003036465A2 (en) 2003-05-01
WO2003036465A3 WO2003036465A3 (en) 2004-04-22

Family

ID=8181127

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2002/004322 WO2003036465A2 (en) 2001-10-24 2002-10-17 Efficient communication method and system

Country Status (6)

Country Link
EP (1) EP1446719A2 (en)
JP (1) JP2005506629A (en)
KR (1) KR20040044557A (en)
CN (1) CN1602465A (en)
AU (1) AU2002341302A1 (en)
WO (1) WO2003036465A2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10942888B2 (en) 2019-03-26 2021-03-09 Raytheon Company Data transferring without a network interface configuration
WO2021076213A1 (en) * 2019-10-16 2021-04-22 Raytheon Company Alternate control channel for network protocol stack
US11438300B2 (en) * 2019-10-16 2022-09-06 Raytheon Company Alternate control channel for network protocol stack

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7711746B2 (en) * 2005-12-17 2010-05-04 International Business Machines Corporation System and method for deploying an SQL procedure
WO2023053454A1 (en) * 2021-10-01 2023-04-06 日本電信電話株式会社 Arithmetic processing offload system and arithmetic processing offload method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5664190A (en) * 1994-01-21 1997-09-02 International Business Machines Corp. System and method for enabling an event driven interface to a procedural program
US5682534A (en) * 1995-09-12 1997-10-28 International Business Machines Corporation Transparent local RPC optimization
US6029205A (en) * 1994-12-22 2000-02-22 Unisys Corporation System architecture for improved message passing and process synchronization between concurrently executing processes
EP1122644A1 (en) * 2000-01-14 2001-08-08 Sun Microsystems, Inc. A method and system for dynamically dispatching function calls from a first execution environment to a second execution environment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5664190A (en) * 1994-01-21 1997-09-02 International Business Machines Corp. System and method for enabling an event driven interface to a procedural program
US6029205A (en) * 1994-12-22 2000-02-22 Unisys Corporation System architecture for improved message passing and process synchronization between concurrently executing processes
US5682534A (en) * 1995-09-12 1997-10-28 International Business Machines Corporation Transparent local RPC optimization
EP1122644A1 (en) * 2000-01-14 2001-08-08 Sun Microsystems, Inc. A method and system for dynamically dispatching function calls from a first execution environment to a second execution environment

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"LOCAL REMOTE PROCEDURE CALL EXTENSIONS FOR DISTRIBUTED COMPUTER ENVIRONMENT" IBM TECHNICAL DISCLOSURE BULLETIN, IBM CORP. NEW YORK, US, vol. 37, no. 12, 1 December 1994 (1994-12-01), pages 473-474, XP000487856 ISSN: 0018-8689 *
"REMOTE PROCEDURE CALLS FOR AN ATTACHED PROCESSOR" IBM TECHNICAL DISCLOSURE BULLETIN, IBM CORP. NEW YORK, US, vol. 35, no. 1B, 1 June 1992 (1992-06-01), pages 237-238, XP000309042 ISSN: 0018-8689 *
SMOLENSKI M ET AL: "Design of a personal digital video recorder/player" 2000 IEEE WORKSHOP ON SIGNAL PROCESSING SYSTEMS , 11 - 13 October 2000, pages 1-12, XP010525210 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10942888B2 (en) 2019-03-26 2021-03-09 Raytheon Company Data transferring without a network interface configuration
WO2021076213A1 (en) * 2019-10-16 2021-04-22 Raytheon Company Alternate control channel for network protocol stack
US11412073B2 (en) * 2019-10-16 2022-08-09 Raytheon Company Alternate control channel for network protocol stack
US11438300B2 (en) * 2019-10-16 2022-09-06 Raytheon Company Alternate control channel for network protocol stack

Also Published As

Publication number Publication date
WO2003036465A3 (en) 2004-04-22
JP2005506629A (en) 2005-03-03
AU2002341302A1 (en) 2003-05-06
KR20040044557A (en) 2004-05-28
CN1602465A (en) 2005-03-30
EP1446719A2 (en) 2004-08-18

Similar Documents

Publication Publication Date Title
EP1438674B1 (en) System for integrating java servlets with asynchronous messages
US5721922A (en) Embedding a real-time multi-tasking kernel in a non-real-time operating system
US6886041B2 (en) System for application server messaging with multiple dispatch pools
US5903752A (en) Method and apparatus for embedding a real-time multi-tasking kernel in a non-real-time operating system
WO2009113381A1 (en) Multiprocessor system and method of sharing device among os in multiprocessor system
US20050125789A1 (en) Executing processes in a multiprocessing environment
AU2002362656A1 (en) System for integrating java servlets with asynchronous messages
AU2002362654A1 (en) System for application server messaging with multiple dispatch pools
US20040117793A1 (en) Operating system architecture employing synchronous tasks
JP2004536382A (en) Systems, methods, and articles of manufacture using replaceable components to select network communication channel components with replaceable quality of service features
US20040255305A1 (en) Method of forming a pattern of sub-micron broad features
CN109144698B (en) Data acquisition method, event distributor, device, medium, and unmanned vehicle
US6832266B1 (en) Simplified microkernel application programming interface
EP1446719A2 (en) Efficient communication method and system
US6934953B2 (en) Deferred procedure call in interface description language
US7320044B1 (en) System, method, and computer program product for interrupt scheduling in processing communication
US20230121778A1 (en) Apparatuses, Devices, Methods, Computer Systems and Computer Programs for Handling Remote Procedure Calls
JP2010026575A (en) Scheduling method, scheduling device, and multiprocessor system
WO1996018152A1 (en) An improved method and apparatus for embedding a real-time multi-tasking kernel in a non-real-time operating system
KR100420268B1 (en) Kernel scheduling method using stacks
KR19990053528A (en) Multiple resource sharing method in real time system
KR19990053525A (en) Resource sharing method of real-time system
DNS et al. Half Sync/Half Async
EP0892346A2 (en) Propagation of a command status code from a remote unit to a host unit
JPS6349261B2 (en)

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LU MC NL PT SE SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2003538887

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 2002775113

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 20028210778

Country of ref document: CN

Ref document number: 1020047006120

Country of ref document: KR

WWP Wipo information: published in national office

Ref document number: 2002775113

Country of ref document: EP