MX2008000117A - Multi-point conference system, multi-point conference method, and program. - Google Patents

Multi-point conference system, multi-point conference method, and program.

Info

Publication number
MX2008000117A
MX2008000117A
Authority
MX
Mexico
Prior art keywords
video stream
terminals
video
terminal
transmitted
Prior art date
Application number
MX2008000117A
Other languages
Spanish (es)
Inventor
Kazunori Ozawa
Daisuke Mizuno
Hiroaki Dei
Original Assignee
Nec Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nec Corp filed Critical Nec Corp
Publication of MX2008000117A publication Critical patent/MX2008000117A/en

Links

Abstract

In a multi-point video conference server, it is possible to respond rapidly to a video switching request from a terminal while reducing the amount of computation. The server (300) decodes only the m (1 ≤ m ≤ n) video streams selected by the n terminals and stores each unneeded video stream in a buffer. Upon receiving an instruction from a terminal to switch to another stream, the server (300) uses the data accumulated in the buffer, goes back in time to decode from the nearest I frame, and starts providing video using that video stream.

Description

MULTIPOINT CONFERENCE SYSTEM, MULTIPOINT CONFERENCE METHOD, AND PROGRAM Technical Field The present invention relates to a multipoint conference system, a multipoint conference method, and a program, and in particular to a so-called multipoint videoconferencing system that terminates a plurality of pieces of video data and transmits a video stream to each terminal, to an apparatus using a program for it, and to a multipoint videoconferencing method.
BACKGROUND ART Japanese Laid-Open Patent Application No. 2002-290940 (Patent Document 1) presents a video conferencing system in which a server arranged in a network temporarily receives the video stream transmitted from each terminal and then distributes the video streams to all terminals. In this mode, the server receives the video data of all the terminals and distributes the video data to each terminal. Each terminal decodes the plurality of received video streams and displays them in a display format predetermined for such videoconferencing, such as a display composed of an equally divided screen and a close-up of the current speaker. A mode is also known in which the server in the network decodes all the video data received from each terminal, encodes the video data after performing any necessary image processing, and transmits only one video stream in response to a request from each terminal. According to this mode, the server can process video streams considering the performance of the terminals, and therefore there is an advantage in that the coding method, coding settings, options, and the like can be set arbitrarily. Patent Document 1: Japanese Laid-Open Patent Application No. 2002-290940 Description of the Invention Problem to be Solved by the Invention However, even in the latter mode, in which a server in a network transmits only the necessary video streams, there is a problem in that all video streams must be prepared (e.g. decoded) although only the requested video streams are really necessary. The resulting increase in computational resources leads to restrictions on the number of channels processed by each server and thus is not desirable, and there is also a situation in which decoding cannot be started from an arbitrary point even if a change request is made, because the video stream is compressed in the temporal direction.
Therefore, an object of the present invention is to provide a multipoint conferencing system, a multipoint conferencing method, and a program that, with low computational complexity, can respond quickly to requests from the terminals to change video streams.
Means for Solving the Problem A first aspect in accordance with the present invention provides a multipoint conference server that is connected to a plurality of terminals transmitting video streams and that encodes the video stream requested by each of the terminals before the video stream is transmitted to each of the terminals, wherein only the video streams that are transmitted to each of the terminals are decoded, while other candidate video streams for switching are buffered and, when a change is requested, are decoded by going back in time. The multipoint conference server comprises decoders for decoding only the video streams that are transmitted to each terminal, buffers for accumulating, without decoding, the video streams that are not transmitted, and a switching control part that, in response to a request from a terminal to change the video stream, selects a video stream from among the video streams accumulated in the buffers, decodes that video stream by going back to a predetermined time in the past, and changes the video stream that is transmitted to the terminal. A second aspect according to the present invention provides a program to be executed by a computer that constitutes the multipoint conference server, and a multipoint conference system that can be constituted by connecting the multipoint conference server and a group of terminals.
A third aspect according to the present invention provides a multipoint conferencing method performed using the multipoint conferencing server, characterized in that it comprises (a) a decoding step, wherein the multipoint conference server decodes only a portion of the video streams that are transmitted to each of the terminals; (b) an accumulation step, wherein the multipoint conference server accumulates the video streams that are not transmitted in buffers without decoding them; and (c) a switching step, wherein, in accordance with a request from a terminal to change the video stream, the multipoint conference server selects a video stream accumulated in the buffers, decodes that video stream by going back to a predetermined time in the past, and changes the video stream to be transmitted to the terminal.
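The three steps (a) to (c) amount to a per-packet routing decision for each incoming stream. The following is a minimal illustrative sketch only; the names (Frame, StreamState, route) are invented for this example and are not part of the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    data: bytes
    is_i: bool            # True for a reference (I) frame

@dataclass
class StreamState:
    decoding: bool = False            # decoder's active/inactive flag
    buffer: list = field(default_factory=list)

def route(state: StreamState, frame: Frame):
    """Steps (a)/(b): frames of transmitted streams are decoded immediately;
    the others are accumulated in a buffer, always starting from an I frame."""
    if state.decoding:
        return frame                  # decode now
    if frame.is_i:
        state.buffer = [frame]        # a new I frame replaces older content
    elif state.buffer:
        state.buffer.append(frame)    # keep data that follows a stored I frame
    return None                       # not decoded while inactive
```

Step (c) then corresponds to setting `decoding` to true and draining the buffer from its first (I) frame before switching the transmitted stream.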
Effect of the Invention According to the present invention, the computational resources of a server used as a multipoint conference server can be controlled without losing responsiveness to change requests from terminals. Since the switching processing is performed taking into account the compression of a video stream in the time direction, the image quality is not degraded.
BEST MODE FOR CARRYING OUT THE INVENTION Hereafter, the best mode for carrying out the present invention will be described in detail with reference to the drawings. FIGURE 1 is a diagram showing the overall configuration of a multipoint conference system according to an embodiment of the present invention. Referring to FIGURE 1, the multipoint conference system connects n (hereafter, n denotes an integer equal to or greater than 2) terminals 101 to 10n and a multipoint conference server (hereafter, simply called a server) 200 via a network 500. FIGURE 2 is a diagram showing the connection between each terminal and the server 200 in the multipoint conference system. A terminal 101 shown in FIGURE 2 communicates with each of a video reception part 210, a control signal reception part 220, and a video transmission part 290 of the server 200 via the network 500 to perform the transmission and reception of video streams together with the transmission and reception of predetermined control signals. FIGURE 3 is a diagram showing a detailed configuration when n terminals are connected to the server 200 in the multipoint conference system. In addition to the control signal reception part 220 and a control part 250, the server 200 can communicate with each of the terminals 101 to 10n by means of n video reception parts 211 to 21n, n buffers 231 to 23n, n decoders 241 to 24n, n selection/composition parts 261 to 26n, n resizing parts 271 to 27n, n encoders 281 to 28n, and n video transmission parts 291 to 29n to support n terminals.
The control signal reception part 220 is a means for receiving a control signal from the terminals 101 to 10n and carrying the control signal to the control part 250, and the control part 250 is a means, in addition to controlling the complete server 200, for instructing each unit including the decoders 241 to 24n after determining the video streams that are distributed to each of the terminals 101 to 10n based on the control signals. The video reception parts 211 to 21n are means for receiving packets including video streams from the terminals 101 to 10n via the network 500. The buffers 231 to 23n are destinations for temporary storage of video streams, held in a memory of the server 200. The decoders 241 to 24n are means for decoding video streams to create images and, as described below, hold a flag indicating by active/inactive whether a video stream received from each terminal is currently to be decoded or not. The selection/composition parts 261 to 26n are means for selecting an image produced by the decoders 241 to 24n, or a plurality of images for composition, according to the instructions of the control part 250. The resizing parts 271 to 27n are means for scaling images produced by the selection/composition parts 261 to 26n to the size that fits each of the terminals 101 to 10n. The encoders 281 to 28n are means for encoding images according to the coding method, coding settings, and parameters that fit each of the terminals 101 to 10n, to convert such images into a video stream. The video transmission parts 291 to 29n are means for transmitting a video stream created by the encoders 281 to 28n to each of the correlated terminals 101 to 10n via the network 500. Although not illustrated, to facilitate the understanding of the present invention, the multipoint conference server 200 is also equipped with various processing means for manipulating voice streams.
Next, an overview of the operations of the server 200 will be provided using FIGURE 3. When each of the terminals 101 to 10n transmits a video stream as packets to the server 200, the video reception parts 211 to 21n in the server 200 each receive and analyze the packets individually from each terminal to extract the video streams. If it is assumed that all received streams are used (all received streams will be transmitted to at least one of the terminals), the buffers 231 to 23n are not used and the streams are decoded individually by the decoders 241 to 24n to create one to n images. Then, the selection/composition parts 261 to 26n select/compose images according to the instructions of the control part 250 and the encoders 281 to 28n perform the coding processing for each terminal. A video stream created by the encoding is packetized before being transmitted individually to the terminals 101 to 10n by the video transmission parts 291 to 29n. The terminals 101 to 10n can change the video stream received from the server 200 by transmitting a control signal to the control signal reception part 220 of the server 200 to carry a request to the server 200. The operations when not all received streams are transmitted, in which an effect of the present invention becomes fully apparent, will be described in the following. The operation is the same as that of the aforementioned case up to the point where the video reception parts 211 to 21n of the server 200 individually extract video streams after receiving and analyzing packets for each terminal.
Next, the flags of the decoders 241 to 24n are referred to. Here, if the flags of the decoders 241 to 24n are active (to be decoded), the video streams are decoded similarly to the aforementioned case. If, on the other hand, the flags of the decoders 241 to 24n are inactive (not to be decoded), processing branches so that the video streams are temporarily stored in the buffers 231 to 23n. FIGURE 4 is a flowchart showing the operations of the decoders 241 to 24n when an activation instruction is received from the control part 250 in the inactive state (not to be decoded). After receiving the activation instruction, the decoders 241 to 24n check whether any video stream is stored in the buffers 231 to 23n (step S001). Here, if any video stream is stored in the buffers 231 to 23n, the decoders 241 to 24n decode the stored data (stream data) (step S003). As will be described later, a reference frame (an intra-coded frame, hereafter referred to as an I frame) is always stored first in the buffers 231 to 23n, so decoding starts from the I frame.
The portion of the data that has been decoded is deleted from the buffers and, if data is still stored in the buffers 231 to 23n, the above steps S001 and S003 are repeated. Meanwhile, the decoders 241 to 24n ignore the time information and decode the streams stored in the buffers 231 to 23n all at once. The last image among the plurality of images generated by the decoding is used by the selection/composition parts 261 to 26n. If, on the other hand, the buffers 231 to 23n no longer contain data (N in step S001), the decoders 241 to 24n make a transition to the decoding state in which the flag is set active (to be decoded) (step S002). FIGURE 5 is a flow diagram showing the operations of the decoders 241 to 24n when an inactivation instruction is received from the control part 250 in the active state (to be decoded). After receiving the inactivation instruction, instead of immediately stopping the decoding, the decoders 241 to 24n decide their behavior based on the data received by the video reception parts. If the video stream of the packets received in step S101 is not I frame data (N in step S102), the decoders 241 to 24n perform the decoding similarly to the aforementioned active state (to be decoded) (step S103). If, on the other hand, the video stream of the received packets is I frame data (Y in step S102), the decoders 241 to 24n store the data in the buffers 231 to 23n without decoding it (step S104). Since the size of the data of an I frame is large, it is sometimes divided into a plurality of packets; thus, the decoders 241 to 24n check whether the received data is the last data of the I frame (step S105) and, if the stored data is not the last data of the I frame, return to step S101 to receive subsequent divided data of the I frame.
If, on the other hand, the received data is the last data of the I frame (Y in step S105), the decoders 241 to 24n stop the decoding processing and transition to the non-decoding state in which the flag is set inactive (not to be decoded) (step S106). In this way, data is stored in the buffers 231 to 23n so that the stored data always begins with the start of an I frame and, when the data of a new I frame is to be stored, the previous data is deleted. FIGURE 6 is a diagram illustrating the frame storage control in the buffers 231 to 23n performed by the aforementioned method. The terms 23x_T0 to 23x_T5 on the left side of FIGURE 6 represent changes of the internal state of the same buffer 23x according to the time flow (T0 to T5). The terms P_T0 to P_T4 on the right side of FIGURE 6 represent the video stream data arriving at each point in time. The term Ix (x is the order of arrival) represents stream data of an I frame and the term Px (x is the order of arrival) represents stream data other than an I frame. The buffer is empty in the state 23x_T0 of FIGURE 6 and then the data P_T0, which is not an I frame, arrives. Since a control operation is performed to first store an I frame in the buffers 231 to 23n, the data P_T0 is discarded in this case. The buffer is empty in the state 23x_T1 of FIGURE 6, similar to the previous point in time, and then, when the data P_T1, which is an I frame, arrives, the data P_T1 is stored, entering the state 23x_T2. When the data P_T2 arrives in the state 23x_T2 of FIGURE 6, the I frame data P_T1 is already stored and thus the data P_T2 is subsequently stored, entering the state 23x_T3. When the data P_T3 further arrives in the state 23x_T3 of FIGURE 6, similarly the data P_T3 is subsequently stored, entering the state 23x_T4 of FIGURE 6.
If, in the state 23x_T4 of FIGURE 6, the data P_T4, which is a new I frame, arrives, all the previous data is discarded and the data P_T4 is stored as the first data, entering the state 23x_T5. As already described above, since the size of the data of an I frame becomes large, it is sometimes divided into a plurality of packets. FIGURE 7 is a diagram illustrating the frame storage control when an I frame divided into a plurality of packets arrives. The terms 23x_T10 to 23x_T13 on the left side of FIGURE 7 represent changes of the internal state of the same buffer 23x according to the time flow (T10 to T13). The terms P1_T10 to P_T12 on the right side of FIGURE 7 represent the video stream data arriving at each point in time. The term Ixy (x is the order of arrival and y is the division number) represents the stream data of an I frame and Px represents data other than an I frame. The data P1_T10 and P2_T10 arriving in the state 23x_T10 of FIGURE 7 are data (I2a, I2b) of an I frame divided into two parts, one after the other. First, the data P1_T10 of the first half is stored in the buffer and, in this phase, the existing data is not yet discarded despite the arrival of a new I frame, and the state 23x_T11 is entered. Then, when the data P2_T10 of the second half is further stored in the buffer in the state 23x_T11 of FIGURE 7, all the data prior to the new I frame data (I2a, I2b) is discarded, entering the state 23x_T12. Then, in the state 23x_T12 of FIGURE 7, as already described, when the data P_T12, which is not an I frame, arrives, the data P_T12 is subsequently stored, entering the state 23x_T13. The operation after the decoding performed by the decoders 241 to 24n will be described again with reference to FIGURE 3. Based on the instructions of the control part 250, the selection/composition parts 261 to 26n acquire decoded images from the decoders 241 to 24n.
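The buffer storage control of FIGURES 6 and 7 can be sketched as a single update function. This is an illustrative sketch under assumed names (`store`, a `pending` list for fragments of a divided I frame, and packets as dicts with `is_i`/`last` keys); none of these come from the patent.

```python
def store(buffer, pending, packet):
    """Buffer update for a non-decoded stream.
    `packet` is a dict with keys: data, is_i (I frame data), and last
    (final fragment of a possibly divided I frame).  Fragments of a new
    I frame collect in `pending`; the old buffer content is discarded only
    once the I frame's last fragment has arrived (FIGURE 7)."""
    if packet["is_i"]:
        pending.append(packet)
        if packet["last"]:              # the whole I frame is now present
            buffer.clear()              # discard everything before it
            buffer.extend(pending)
            pending.clear()
    elif buffer:
        buffer.append(packet)           # non-I data after a stored I frame
    # non-I data arriving into an empty buffer is discarded (state 23x_T0)
```

With this update rule the buffer always starts with an I frame, which is what lets decoding begin immediately when a switching request arrives.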
Then, according to the settings of the terminals 101 to 10n, the selection/composition parts 261 to 26n perform processing (composition processing) to compose a plurality of images horizontally and vertically. Also, if the size of the acquired or composited image and that of the video stream transmitted to the terminals 101 to 10n are different, the resizing parts 271 to 27n perform image scaling processing based on instructions from the control part 250. Then, the encoders 281 to 28n encode the images in conformity with the bit rates and parameters of the transmission destination terminals 101 to 10n to convert the images into a video stream. In addition, the video transmission parts 291 to 29n packetize the converted video stream to transmit the packets to the terminals 101 to 10n via the network 500. In accordance with the present embodiment, as described above, it is sufficient to decode m video streams (1 ≤ m ≤ n), which may be fewer than the number n of terminals, so that it becomes possible to control the increase in computational complexity in the server and increase the number of channels that can be processed per machine. This is because a request for switching the video stream occurs only occasionally, and thus unnecessary decoding can be prevented. In addition, according to the present embodiment, while the multipoint conference system has a configuration capable of controlling the increase in computational complexity, it is possible to respond quickly to a request from the terminals for switching of the video stream. This is because the unused stream data is stored in the buffers and kept in a state such that the stream data can be decoded at any time. In addition, when a switching request arrives, the decoding starts with an I frame by going back in time, inhibiting degradation of the image quality.
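The quick response to a switching request rests on the activation flow of FIGURE 4 (steps S001 to S003): drain the buffer, which always starts with an I frame, then go live. A minimal sketch follows; `decode` is a stand-in for the real decoder and is an assumption of this example, not the patent's implementation.

```python
def activate(buffer, decode):
    """Drain the buffer on activation (steps S001/S003), then the decoder
    transitions to the live decoding state (S002).  Timestamps are ignored:
    all buffered data is decoded at once and only the most recent image is
    handed to the selection/composition part."""
    last_image = None
    while buffer:                  # step S001: is data still stored?
        data = buffer.pop(0)       # decoded data is deleted from the buffer
        last_image = decode(data)  # step S003: decode, starting at the I frame
    return last_image              # flag is now set active (step S002)
```

Returning only the last image matches the description above: earlier buffered frames exist solely so that decoding can start from a reference frame.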
Next, a second embodiment, in which the present invention is applied to a multipoint conference system based on MPEG-4 streams, will be described in greater detail with reference to the drawings. FIGURE 8 is a diagram showing a detailed configuration of a server 300 of the multipoint conference system according to the second embodiment of the present invention. Referring to FIGURE 8, the server 300 can communicate, in addition to a DTMF (Dual Tone Multi-Frequency) reception part 320 and a control part 350, by means of n RTP (Real-time Transport Protocol) reception parts 311 to 31n, n buffers 331 to 33n, n MPEG-4 decoders 341 to 34n, n selection/composition parts 361 to 36n, n resizing parts 371 to 37n, n MPEG-4 encoders 381 to 38n, and n RTP transmission parts 391 to 39n to support n terminals. The DTMF reception part 320 is a means corresponding to the control signal reception part 220 in the first embodiment and is a means for receiving a DTMF signal from each terminal and carrying the DTMF signal to the control part 350. The control part 350 is a means, in addition to controlling the complete server 300, for determining the MPEG-4 streams that are distributed to each terminal based on the DTMF signal and for instructing each unit including the MPEG-4 decoders 341 to 34n. The RTP reception parts 311 to 31n are means corresponding to the video reception parts 211 to 21n in the first embodiment and are means for receiving/analyzing RTP packets including MPEG-4 streams from the terminals via the network 500 to extract the MPEG-4 streams. The buffers 331 to 33n are destinations for storing video streams in a memory of the server 300. The MPEG-4 decoders 341 to 34n are means corresponding to the decoders 241 to 24n in the first embodiment and are means for decoding the video streams to create images.
Similar to the aforementioned first embodiment, the MPEG-4 decoders 341 to 34n hold a flag that indicates by active/inactive whether a video stream received from each terminal is currently to be decoded or not. The selection/composition parts 361 to 36n are means for selecting, according to instructions of the control part 350, an image produced by the MPEG-4 decoders 341 to 34n, or a plurality of images from the MPEG-4 decoders 341 to 34n for composition in a state in which the images are placed vertically and horizontally. The resizing parts 371 to 37n are means for scaling images produced by the selection/composition parts 361 to 36n to the size that fits each terminal. The MPEG-4 encoders 381 to 38n are means corresponding to the encoders 281 to 28n in the first embodiment and are means for encoding images according to the coding method, coding settings, and parameters that are set for each terminal, to convert such images into an MPEG-4 stream. The RTP transmission parts 391 to 39n are means corresponding to the video transmission parts 291 to 29n in the first embodiment and are means for packetizing with RTP an MPEG-4 stream created by the MPEG-4 encoders 381 to 38n to transmit the packets to each of the correlated terminals 101 to 10n via the network 500. Although not illustrated, to facilitate the understanding of the present invention, the multipoint conference server 300 is also equipped with various processing means for manipulating voice streams. Next, operations of the server 300 will be described with reference to FIGURE 8. When each terminal transmits an MPEG-4 stream as RTP packets to the server 300, the RTP reception parts 311 to 31n of the server 300 receive and analyze each packet individually from each terminal to extract the MPEG-4 streams. The MPEG-4 decoders 341 to 34n change their operation depending on whether the held flag is active or not, as shown in the following.
The MPEG-4 decoders 341 to 34n in the active state decode the MPEG-4 streams transmitted from each terminal to create images. If the flag is changed from active to inactive, instead of stopping the decoding immediately, the MPEG-4 decoders 341 to 34n continue the decoding processing until an I frame arrives and, after the I frame arrives, rewrite the flag to make a transition to the non-decoding state. After making the transition to the non-decoding state, the MPEG-4 decoders 341 to 34n store the MPEG-4 stream data from the I frame onward in the buffers 331 to 33n. Similar to the aforementioned first embodiment, the content of the buffers 331 to 33n is retained until a whole new I frame arrives (if the I frame is divided, the last data is awaited) and is cleared when the new I frame arrives. If the flag is changed from inactive to active, the MPEG-4 decoders 341 to 34n decode the content by going back to the last reference frame (I frame) accumulated in the buffers. On the other hand, after the selection/composition parts 361 to 36n select/compose images according to instructions of the control part 350 and the resizing parts 371 to 37n perform the scaling processing, the MPEG-4 encoders 381 to 38n perform the coding processing for each terminal. An MPEG-4 stream created by the encoding is packetized with RTP by the RTP transmission parts 391 to 39n before being transmitted individually to the terminals. The terminals can also change the video received from the server 300 by transmitting a control signal as a DTMF signal to the DTMF reception part 320 of the server 300 to carry a request to the server 300. In the second embodiment described above, an example of using the DTMF signal as a control signal was described, but instead of the DTMF signal, SIP (Session Initiation Protocol), RTSP (Real Time Streaming Protocol), or the like can also be used.
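The deferred deactivation just described (keep decoding until an I frame arrives, then buffer from that I frame onward) is a small state machine. The sketch below uses invented names (`Mpeg4DecoderState`, `on_frame`) as an assumption of this example; it is not the patent's implementation.

```python
class Mpeg4DecoderState:
    """Decoder flag of the second embodiment: an inactivation request does
    not stop decoding until the next I frame boundary, at which point the
    I frame itself is buffered rather than decoded."""
    def __init__(self):
        self.active = True
        self.stop_requested = False

    def request_inactivate(self):
        self.stop_requested = True      # keep decoding until an I frame

    def on_frame(self, is_i_frame):
        """Return True if this frame should be decoded, False if buffered."""
        if self.active and self.stop_requested and is_i_frame:
            self.active = False         # transition at the I frame boundary
            self.stop_requested = False
        return self.active
```

Deferring the transition to an I frame boundary is what guarantees the buffer content always begins with a reference frame.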
In addition, each of the above embodiments was described assuming that the server maintains the data from the last I frame onward in its buffer and, when a switch request is made, decodes from the beginning of the buffer (i.e., the last I frame). However, the present invention can naturally be carried out with various modifications and replacements without departing from the spirit of the present invention, particularly with respect to which video streams are stored in the buffer memory and how, when a switching request is made, decoding is done by going back to a predetermined time in the past. For example, apart from the update logic of the buffer, reading logic for the buffer memory (I frame search) can naturally be provided.
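One such alternative reading logic, an I frame search over a buffer that may hold more than one reference frame, could be sketched as a backward scan. The helper below is an assumption of this example (the patent only names the idea), with `is_i` a caller-supplied predicate.

```python
def last_i_frame_index(buffer, is_i):
    """Scan backward for the most recent I frame so that decoding on a
    switching request can start there instead of at the buffer's start."""
    for idx in range(len(buffer) - 1, -1, -1):
        if is_i(buffer[idx]):
            return idx
    return None   # no I frame buffered: decoding cannot start yet
```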
BRIEF DESCRIPTION OF THE DRAWINGS FIGURE 1 is a diagram showing a preliminary configuration of a multipoint conference system according to an embodiment of the present invention.
FIGURE 2 is a diagram showing a connection between each terminal and a server in the multipoint conference system according to an embodiment of the present invention. FIGURE 3 is a diagram showing a detailed configuration of a multipoint conference server according to an embodiment of the present invention. FIGURE 4 is a diagram for illustrating multipoint conference server operations according to an embodiment of the present invention. FIGURE 5 is a diagram for illustrating multipoint conference server operations according to one embodiment of the present invention. FIGURE 6 is a diagram for illustrating the transition of the buffer state of the multipoint conference server according to an embodiment of the present invention. FIGURE 7 is another diagram for illustrating the transition of the buffer state of the multipoint conference server according to an embodiment of the present invention. FIGURE 8 is a diagram for illustrating operations of a multipoint conference server according to a second embodiment of the present invention.
Explanation of Reference Numbers 101 to 10n: Terminal 200, 300: Multipoint conference server (Server) 500: Network 210, 211 to 21n: Video reception part 220: Control signal reception part 231 to 23n, 331 to 33n: Buffer 23x_T0 to 23x_T5, 23x_T10 to 23x_T13: Buffer 241 to 24n: Decoder 250, 350: Control part 261 to 26n, 361 to 36n: Selection/composition part 271 to 27n, 371 to 37n: Resizing part 281 to 28n: Encoder 290, 291 to 29n: Video transmission part (Transmission part) 311 to 31n: RTP reception part 320: DTMF reception part 341 to 34n: MPEG-4 decoder 381 to 38n: MPEG-4 encoder 391 to 39n: RTP transmission part P_T0 to P_T4, P1_T10, P2_T10, P_T11, P_T12: Video stream data

Claims (13)

  1. CLAIMS 1. A multipoint conference server connected to a plurality of terminals that transmit a video stream, the server encoding the video stream requested by each of the terminals before the video stream is transmitted to each of the terminals, comprising: decoders that decode only a portion of the video streams that are transmitted to each of the terminals; buffers that accumulate, without decoding them, the video streams that will not be transmitted, starting with a reference frame; and a switching control part which, in accordance with a request from a terminal to change the video stream, selects a video stream accumulated in the buffers, decodes the video stream starting with the reference frame by going back to a predetermined time in the past, and changes the video stream that is transmitted to the terminal. 2. The multipoint conferencing server according to claim 1, wherein the switching control part performs the decoding by going back to a last reference frame accumulated in the buffer. 3. The multipoint conference server according to claim 1 or 2, further comprising: a buffer update means for erasing the accumulated content in the buffer each time a reference frame is entered. 4. The multipoint conference server according to any of claims 1 to 3, further comprising: a selection/composition part that links a plurality of video streams requested by the terminal to compose a video stream for transmission. 5. A multipoint conferencing system, comprising: the multipoint conferencing server according to any of claims 1 to 3, and a plurality of terminals connected to it for exchanging video streams with the multipoint conferencing server. 6.
A program to be executed by a computer that constitutes a multipoint conference server connected to a plurality of terminals that transmit a video stream, the server encoding the video stream requested by each of the terminals before the video stream is transmitted to each of the terminals, the program causing the computer to perform: processing to decode, among the video streams received from each of the terminals, only the portion of the video streams that are transmitted to each of the terminals; processing to accumulate, in buffers and without decoding them, the video streams that will not be transmitted to each of the terminals, starting with a reference frame; and processing, in accordance with a request from a terminal for switching the video stream, to select a video stream accumulated in the buffers, decode the video stream starting with the reference frame by going back to a predetermined time in the past, and change the video stream that is transmitted to the terminal. 7. The program according to claim 6, comprising: transmitting the video stream after going back, for decoding, to a last reference frame accumulated in the buffer, to process the switching of the video stream that is transmitted to the terminal. 8. The program according to claim 6 or 7, which further causes the computer to perform processing to erase the accumulated content in the buffer each time a reference frame is entered. 9. The program according to any of claims 6 to 8, which further causes the computer to perform processing to link a plurality of video streams requested by the terminal to compose a video stream for transmission. 10.
A multipoint conferencing method performed by using a plurality of terminals that transmit a video stream and a multipoint conferencing server that encodes a requested video stream from each of the terminals before the video stream is transmitted to each of the terminals, which comprises: a decoding stage, where the multipoint conference server decodes only a portion of the video streams that are transmitted to each of the terminals; an accumulation step, where the multipoint conference server accumulates video streams that are not transmitted starting with a reference frame in buffers without decoding them; and a switching stage, wherein in accordance with a request for switching the video stream from the terminal, the multipoint conference server selects an accumulated video stream in the buffers, decodes the video stream starting with a frame of reference when returning to a predetermined time in the past, and change the video stream that is transmitted to the terminal. The multipoint conferencing method according to claim 10, wherein in the step to change the video stream that is transmitted to the terminal, the multipoint conferencing server performs the decoding upon returning to a last "frame" of accumulated reference in the buffer memory and transmits the video stream to the terminal. The multipoint conferencing method according to claim 10 or 11, wherein the multipoint conferencing server further comprises a step for erasing the accumulated content in the buffer each time a reference frame is entered. The multipoint conferencing method according to any of claims 10 to 12, wherein the multipoint conferencing server further comprises a step for linking a plurality of requested video streams from the terminal to compose a video stream for the broadcast.
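The buffering behavior recited in the claims can be illustrated with a minimal sketch. This is not part of the patent and all names (`StreamBuffer`, `EncodedFrame`, `feed`) are invented for illustration; actual decoding is mocked by returning frame payloads. It shows the accumulation step (buffer undecoded frames starting at a reference frame), the buffer update (erase the buffer each time a new reference frame arrives), and the switching step (decode from the last buffered reference frame on a request).

```python
# Hypothetical illustration of the claimed buffering scheme; not from the patent.
from dataclasses import dataclass
from typing import List

@dataclass
class EncodedFrame:
    is_reference: bool   # True for an I-frame, False for a predicted (P/B) frame
    payload: str         # stands in for compressed bits

class StreamBuffer:
    """Accumulates a non-transmitted stream without decoding it."""
    def __init__(self) -> None:
        self.frames: List[EncodedFrame] = []

    def feed(self, frame: EncodedFrame) -> None:
        # Buffer update: erase the accumulated content each time a
        # reference frame is entered, so the buffer always begins at
        # the most recent I-frame.
        if frame.is_reference:
            self.frames.clear()
        self.frames.append(frame)

    def decode_from_reference(self) -> List[str]:
        # Switching step: go back to the last buffered reference frame
        # and decode forward from there ("decoding" is mocked here).
        return [f.payload for f in self.frames]

# Usage: a stream no one is watching is only buffered, never decoded.
buf = StreamBuffer()
for f in [EncodedFrame(True, "I1"), EncodedFrame(False, "P1"),
          EncodedFrame(True, "I2"), EncodedFrame(False, "P2")]:
    buf.feed(f)

# A terminal now requests this stream: catch up from the last I-frame.
print(buf.decode_from_reference())
```

Because predicted frames are only decodable relative to a reference frame, starting the buffer at each new I-frame is what lets the server respond to a switch request without having decoded the stream continuously.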
MX2008000117A 2005-07-12 2006-06-08 Multi-point conference system, multi-point conference method, and program. MX2008000117A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2005202965 2005-07-12
JP2006011549 2006-06-08

Publications (1)

Publication Number Publication Date
MX2008000117A true MX2008000117A (en) 2008-03-18

Family

ID=40328185

Family Applications (1)

Application Number Title Priority Date Filing Date
MX2008000117A MX2008000117A (en) 2005-07-12 2006-06-08 Multi-point conference system, multi-point conference method, and program.

Country Status (1)

Country Link
MX (1) MX2008000117A (en)

Similar Documents

Publication Publication Date Title
US8253775B2 (en) Multipoint conference system, multipoint conference method, and program
US11956472B2 (en) Video data stream concept
EP1815684B1 (en) Method and apparatus for channel change in dsl system
US9148694B2 (en) Method and apparatus enabling fast channel change for DSL system
US8760492B2 (en) Method and system for switching between video streams in a continuous presence conference
US8228363B2 (en) Method and system for conducting continuous presence conferences
JP2017022715A (en) Network streaming of coded video data
US20080018803A1 (en) Fast Channel Change in Digital Video Broadcast Systems over Dsl Using Redundant Video Streams
CN101370139A (en) Method and device for switching channels
CN102067551A (en) Media stream processing
US8281350B2 (en) Content distribution system, conversion device, and content distribution method for use therein
KR101396948B1 (en) Method and Equipment for hybrid multiview and scalable video coding
US20110191448A1 (en) Subdivision of Media Streams for Channel Switching
KR101433168B1 (en) Method and Equipment for hybrid multiview and scalable video coding
EP2557780A2 (en) Method and system for switching between video streams in a continuous presence conference
TWI491218B (en) Media relay video communication
MX2008000117A (en) Multi-point conference system, multi-point conference method, and program.
JP2003199062A (en) Transmitter and receiver

Legal Events

Date Code Title Description
FA Abandonment or withdrawal