JP2004228968A - Streaming controller - Google Patents

Streaming controller

Info

Publication number
JP2004228968A
Authority
JP
Japan
Prior art keywords: traffic, IGMP, receiver, control device, multicast
Prior art date
Legal status
Pending
Application number
JP2003014884A
Other languages
Japanese (ja)
Inventor
Hong Chen
Jun Haneda
Shuji Inoue
Peku Yau Tan
Original Assignee
Matsushita Electric Ind Co Ltd (松下電器産業株式会社)
Application JP2003014884A filed by Matsushita Electric Ind Co Ltd
Publication of JP2004228968A
Application status: Pending

Abstract

Provided is a streaming control device capable of efficiently reducing traffic on a subnet.
The streaming control device has an engine that handles IGMP messages according to the IGMP protocol and performs the router-side processing of that protocol. The engine manages a multicast traffic transfer information database containing receiver information. More specifically, each time a receiver sends an "IGMP Membership_Report message" to the streaming control device, the engine records that receiver. From the recorded receiver information, the engine can immediately determine whether any other receiver desires the same traffic. Therefore, when an "IGMP Leave_Group message" shows that no receiver desires the traffic any longer, the engine stops the transfer immediately.
[Selection diagram] Fig. 1

Description

[0001]
TECHNICAL FIELD OF THE INVENTION
The present invention relates to a streaming control device capable of efficiently reducing traffic on a network.
[0002]
[Prior art]
Multicast technology is an important technology for delivering audiovisual content to a plurality of receivers on an IP network in real time. Unlike the conventional unicast technique, the multicast technique does not need to make a new copy of the content each time a new receiver requests it, so valuable network resources are saved. This effect is particularly useful for delivering content that requires wide bandwidth, such as MPEG content, where a single stream can consume most of the available network bandwidth; multicast technology is therefore used for such delivery.
[0003]
A content distribution source using the multicast technology (hereinafter, "multicast distribution source") transmits its data packets to a group address chosen from the class D IP address space (224.0.0.0 to 239.255.255.255) instead of to a unicast IP address. When a receiver requesting content distribution transmits an "IGMP (Internet Group Management Protocol) Membership_Report message" to a multicast router (hereinafter simply "router"), the router forwards to that receiver's subnet the packets whose destination address is the group address. When the receiver no longer requests the content, it transmits an "IGMP Leave_Group message" to the router. The router that has received this message sends an "IGMP Group_Query message" to check whether any receiver of this group's traffic still exists on the same subnet. If there is no response within a predetermined time, the router stops forwarding the group's traffic to the subnet.
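For reference, whether an IPv4 address lies in the class D group address space can be checked from its top four bits (1110). A minimal C sketch, not part of the patent's disclosure:

#include <stdint.h>

/* An IPv4 address is a class D (multicast) group address when its top
   four bits are 1110, i.e. it lies in 224.0.0.0 - 239.255.255.255. */
static int is_class_d(uint32_t addr_host_order)
{
    return (addr_host_order & 0xF0000000u) == 0xE0000000u;
}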
[0004]
The IGMP protocol is the basis of multicast technology. Improvements to the IGMP protocol have recently been made, focused primarily on letting the receiver inform routers not only of the group but also of the multicast distribution source from which it wants to receive traffic. With such an improved IGMP protocol, unnecessary traffic forwarded from a multicast distribution source onto a subnet with no receiver desiring it can be reduced. This is important for the development of multicast technology in view of its wider deployment and the limits of the group address space.
[0005]
[Problems to be solved by the invention]
As described above, the process using the "IGMP Leave_Group message" of the IGMP protocol involves some delay. That is, the multicast router does not stop forwarding the group's traffic to the subnet until the time obtained by multiplying the last member query count by the last member query interval has elapsed (for example, with the RFC 2236 defaults of a count of 2 and an interval of 1 second, roughly 2 seconds). This delay results in extra traffic on the subnet that no receiver has requested.
[0006]
This can cause serious problems when the receiver performs fast group scanning, such as leaving one group and immediately joining another. During the delay, the router still forwards the old group's traffic to the subnet while, immediately after receiving the "IGMP Membership_Report message" for the new group, it also forwards the new group's traffic there, so the traffic on the subnet is doubled. Moreover, when groups are switched frequently, this extra traffic congests the subnet and degrades service to the other hosts on it.
[0007]
Since these problems are caused by a process using the “IGMP Leave_Group message” of the IGMP protocol, any multicast method based on the IGMP protocol inherits the problem.
[0008]
The present invention has been made in view of the above-described conventional problems, and has as its object to provide a streaming control device capable of efficiently reducing traffic on a subnet.
[0009]
[Means for Solving the Problems]
In order to solve the above problems and achieve the above object, a streaming control device according to the present invention is a streaming control device that distributes content by multicast in response to a request from a receiver, and comprises: traffic conversion means for converting input unicast traffic into multicast traffic; and traffic distribution means for distributing the multicast traffic converted by the traffic conversion means, based on predetermined rules and on the information stored in a multicast traffic transfer information database that records the transfer status of the content and the receivers requesting its delivery.
[0010]
Therefore, from the receiver information recorded in the multicast traffic transfer information database, it is possible to know immediately whether another receiver is requesting the same traffic, and when there is none, distribution of the traffic can be stopped immediately. As a result, traffic is reduced and handled more efficiently.
[0011]
Further, the streaming control device according to the present invention comprises message transmitting means for creating and transmitting an IGMP message at a predetermined interval, information updating means for updating the multicast traffic transfer information based on the IGMP message transmitted from the message transmitting means and the responses to it, and interval setting means for configuring the interval at which the IGMP message is transmitted.
[0012]
Further, in the streaming control device according to the present invention, the traffic conversion means may include data packet selection means for selecting data packets of the unicast traffic input from a data distribution source, and data conversion means for converting the data packets of the unicast traffic selected by the data packet selection means into data packets of the multicast traffic.
[0013]
Further, the streaming control device according to the present invention comprises database updating means for updating the multicast traffic transfer information database as follows: when an IGMP message is received on a multicast port and the message is a "Membership_Report message", forwarding of traffic for the corresponding multicast group is enabled and the receiver is registered in the database for that group; when the message is a "Leave_Group message", the receiver is removed from the database for that group, and forwarding of the group's traffic is disabled when no receiver remains linked to the group.
[0014]
In the streaming control device according to the present invention, the traffic distribution unit and the database updating unit share the multicast traffic transfer information database.
[0015]
The streaming control device according to the present invention includes a plurality of output links for outputting multicast traffic data packets and a plurality of input links for inputting unicast traffic data packets.
[0016]
Also, in the streaming control device according to the present invention, an input link may also serve as an output link and vice versa, with the input links connected to a plurality of data distribution sources and the output links connected to a plurality of receivers.
[0017]
Further, a network system according to the present invention is a network system including the streaming control device according to the present invention, wherein a receiver can receive a data stream from any data distribution source in any cluster. The streaming control device is connected to a switch to which the data distribution sources are connected, and each output of the streaming control device is connected to the receivers via a switch, so that the network supports a plurality of clusters of data distribution sources and receivers.
[0018]
In this way, a network structure is provided in which a plurality of streaming control devices are interconnected to form a framework that can accommodate any number of data distribution sources and receivers regardless of hardware limitations.
[0019]
Further, in the network system according to the present invention, the streaming control device is used for switching between the data distribution sources at a high frequency.
[0020]
BEST MODE FOR CARRYING OUT THE INVENTION
The streaming control device according to the present invention handles IGMP messages according to the IGMP protocol and performs the router-side processing of that protocol. In addition, the streaming control device manages a multicast traffic transfer information database containing information on the receivers. More specifically, each time a receiver transmits an "IGMP Membership_Report message" or an "IGMP Leave_Group message" to the streaming control device, the streaming control device records or deletes, respectively, the receiver that transmitted the message. Since the streaming control device can immediately determine from the recorded receiver information whether another receiver is requesting the same traffic, it can stop the transfer immediately when no receiver remains.
[0021]
Further, the streaming control device has an engine that converts unicast traffic to multicast traffic. This moves the starting point of the multicast traffic closer to the receiver, so that the influence of the delay caused by the "IGMP Leave_Group message" is minimized. A network structure in which a plurality of streaming control devices are interconnected is also possible, forming a framework that can accommodate an arbitrary number of data distribution sources and receivers irrespective of hardware limitations.
[0022]
Hereinafter, embodiments of a streaming control device according to the present invention will be described in detail with reference to the drawings. The streaming control device of the present embodiment provides a device that supports high-speed group switching in, for example, a digital security system based on IP.
[0023]
The streaming control device according to the present embodiment is arranged on an IP network. For the streaming control device to function properly, the receiver needs to have an IGMP host-side stack. That is, the receiver uses IGMP messages to request multicast traffic from the streaming control device or to leave a predetermined multicast group.
[0024]
Further, since the streaming control device of the present embodiment has an engine for processing IGMP messages conforming to RFC 2236, other network nodes can use IGMP for multicast signaling without any modification to their stacks. When the streaming control device of the present embodiment is deployed in a network, a data distribution source transmits unicast traffic addressed to the device's U2M address, and when the streaming control device receives this traffic, it converts it to multicast.
[0025]
In addition, when the receiver requests reception of traffic of a predetermined group, the receiver transmits an "IGMP Membership_Report message" for that group to the streaming control device of the present embodiment. When the streaming control device receives this message, it forwards the group's traffic to the port to which the receiver is connected and simultaneously records the receiver's information in the multicast traffic transfer information database. Conversely, when the receiver wants to stop receiving traffic of a predetermined group, it transmits an "IGMP Leave_Group message" for that group to the streaming control device of the present embodiment. When the streaming control device receives this message, it checks the multicast traffic transfer information database and stops forwarding the group's traffic to that port unless another receiver has requested the same group's traffic on the same port.
[0026]
In practice, a single streaming control device can support only a limited number of data distribution sources due to hardware constraints. Several streaming control devices can therefore be interconnected in a network architecture to support more data distribution sources. The interconnected devices can thus support arbitrary clusters of data distribution sources and receivers.
[0027]
The streaming control device of the present embodiment is set up and controlled manually or by software. This software runs on the streaming control device itself or on a remote host, and by sending messages to the device it can control the streaming control device in real time.
[0028]
The streaming control device records statistical information on the traffic passing through it, and the information is stored in an external storage device. The software running on the streaming control device or on a remote host can therefore retrieve this information in real time with a specific type of message.
[0029]
In addition, each engine provided in the streaming control device of the present embodiment can be enabled or disabled during operation by a setting tool. When some of the engines are disabled, the streaming control device behaves as an ordinary network node. For example, when the unicast/multicast conversion engine is disabled, the device operates as a multicast router with fast group switching.
[0030]
Hereinafter, an example in which a streaming control device that distributes audiovisual content is used in an IP-based digital security system will be described. The streaming control device of the present embodiment performs conversion from unicast to multicast and transfers packets, and uses the "IGMP Membership_Report message", the "IGMP Leave_Group message", and the "IGMP General_Query message" according to the IGMP (Internet Group Management Protocol) protocol. In the following description, these terms are used as follows.
[0031]
First, a "packet" is the constituent unit of data distributed on a network. A "stream" is a collection of packets with common attributes transferred over a network. "Traffic" is a collection of streams transmitted on a network. A "flow" is the data path used to deliver a stream. "QoS" refers to the "quality of service" of a data stream or of traffic. A "QoS probe" is a device that measures and reports the QoS of a flow at one of the network components. "NE" refers to a network element functioning as a network router, gateway, or intelligent network hub. The "upper layer" refers to any entity above the device that processes packets handed up by the streaming control device of the present embodiment.
[0032]
FIG. 1 is a block diagram illustrating a streaming control apparatus that converts unicast data packet traffic into multicast data packet traffic and transfers the traffic to a receiver based on a request from the receiver. The streaming control apparatus of the present embodiment shown in the figure has four main components: a U2M conversion and transfer engine 101, a query and state update engine 102, a sub-IGMP processing engine 103, and a transfer and receiver registration table 104.
[0033]
Note that the structure of the streaming control device shown in the figure is simplified for explanation, and does not always reflect the actual physical architecture. Also, in practice, the device may have a more complex architecture and special engines. Further, FIG. 1 shows only main data paths.
[0034]
The data paths shown in FIG. 1 are: data path A from the query and state update engine 102 to the U2M conversion transfer engine 101; data path B between the query and state update engine 102 and the transfer and receiver registration table 104; data path C between the U2M conversion transfer engine 101 and the transfer and receiver registration table 104; data path D between the sub-IGMP processing engine 103 and the transfer and receiver registration table 104; data path E between the U2M conversion transfer engine 101 and the sub-IGMP processing engine 103; and data path F from the sub-IGMP processing engine 103 to the query and state update engine 102. As shown in FIG. 1, data paths A and F are unidirectional, and the other data paths B, C, D and E are bidirectional. The direction of a data path indicates the direction of information flow on it.
[0035]
In practice, there are additional data paths and special control and signaling paths. The U2M conversion transfer engine 101 and the sub-IGMP processing engine 103 shown in FIG. 1 are implemented either in hardware, for example in an FPGA, or as software modules. When these engines are implemented in hardware, the data paths physically connect them; when they are implemented as software modules, the data paths shown in FIG. 1 indicate the interfaces between the modules.
[0036]
As shown in FIG. 1, unicast traffic from a data distribution source is input to the U2M conversion transfer engine 101 from a unicast port. The U2M conversion and transfer engine 101 processes traffic based on a predetermined rule, and transfers the obtained traffic to an output port based on the information stored in the transfer and receiver registration table 104 and the predetermined rule.
[0037]
FIG. 2 is a state transition diagram of the U2M conversion transfer engine 101. As shown in the figure, the U2M conversion transfer engine 101 can take one of six states: an initialization state 201 that performs initialization, an idle state 202 that transitions to the other states according to various conditions, a U2M conversion state 203 that performs unicast-to-multicast conversion of a packet, a transfer state 204 that transfers a packet, a forward unicast state 205 that transfers a unicast packet, and an upper layer state 206 that hands a packet to the upper layer for processing.
[0038]
The U2M conversion transfer engine 101 starts from the initialization state 201. FIG. 3 is a flowchart illustrating the operation performed by the U2M conversion transfer engine 101 in the initialization state 201. As shown in FIG. 3, the U2M conversion transfer engine 101 reads the hardware configuration (S2101). In step S2101 the number of ports, the unicast port addresses, and the multicast port addresses are read, and further attributes can also be read. Next, the U2M conversion transfer engine 101 reads the user configuration (S2102). Step S2102 includes, but is not limited to, reading the unicast/multicast conversion scheme, the unicast address for U2M conversion, the IP address of the device, the multicast groups for conversion, and the number of receivers that can watch the same camera at the same time.
[0039]
Next, the U2M conversion transfer engine 101 creates and initializes a "forward_condition_tbl table" and a "receiver_register_tbl table" corresponding to the configuration read in steps S2101 and S2102 (S2103). These tables constitute the transfer and receiver registration table 104 shown in FIG. 1, and can be held in hardware as ROM or memory blocks. The initial values of the two tables can be chosen arbitrarily; for example, the entries of the "forward_condition_tbl table" are set to -1, and the entries of the "receiver_register_tbl table" are set to 0.
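As an illustration, the two tables and the initial values mentioned above could be set up as follows in C; the table dimensions are assumed here, since the patent leaves the number of ports and groups to the hardware:

#include <stdint.h>

#define NUM_PORTS  8    /* assumed port count */
#define NUM_GROUPS 256  /* assumed group count, e.g. one per low-8-bit index */

/* forward_condition_tbl: per-port, per-group forwarding state.
   receiver_register_tbl: per-port, per-group record of receivers. */
static int      forward_condition_tbl[NUM_PORTS][NUM_GROUPS];
static uint32_t receiver_register_tbl[NUM_PORTS][NUM_GROUPS];

static void init_tables(void)
{
    for (int p = 0; p < NUM_PORTS; p++)
        for (int g = 0; g < NUM_GROUPS; g++) {
            forward_condition_tbl[p][g] = -1; /* -1: forwarding disabled */
            receiver_register_tbl[p][g] = 0;  /* no receivers recorded  */
        }
}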
[0040]
After the initialization state 201, the engine enters the idle state 202. FIG. 4 is a flowchart illustrating the operation performed by the U2M conversion transfer engine 101 in the idle state 202. The steps described below are executed when the U2M conversion transfer engine 101 transitions from the idle state 202 to another state. It is assumed here that the event triggering a transition is the arrival of a packet; otherwise, a default transition back to the idle state 202 occurs.
[0041]
As shown in FIG. 4, the U2M conversion transfer engine 101 obtains the packet that caused the event (S2201). Next, the U2M conversion transfer engine 101 determines whether the packet obtained in step S2201 is from a unicast port (S2202). If the packet is from a unicast port, the process proceeds to step S2203; otherwise, it proceeds to step S2204. In step S2203, the destination address of the packet obtained in step S2201 is acquired. Next, the destination address is compared with the configured U2M address to determine whether the two addresses are equal (S2205). If they are equal, the process proceeds to step S2210; if not, it proceeds to step S2208. In step S2210, the "unicast_pkt_destined_u2m_address condition" (a unicast packet addressed to the U2M address is to be converted) is set, and the U2M conversion transfer engine 101 transitions to the U2M conversion state 203.
[0042]
On the other hand, in step S2208 (when the destination address is not equal to the U2M address), it is determined whether the destination address acquired in step S2203 is equal to the IP address of the device itself. If so, the process proceeds to step S2209; otherwise, it proceeds to step S2211. In step S2209, the "Other_packet_destined_for_U2Mbox condition" is set, and the engine transitions to the upper layer state 206. In step S2211, the "Unicast_not_for_U2M condition" is set, and the U2M conversion transfer engine 101 transitions to the forward unicast state 205.
[0043]
If the result of determination in step S2202 is that the packet obtained in step S2201 is not from a unicast port, the flow advances to step S2204, where it is determined whether the packet is from the sub-IGMP processing engine 103. If the packet is from the sub-IGMP processing engine 103, the process proceeds to step S2206; otherwise, it proceeds to step S2207. In step S2206, the "Pkt_from_Sub-IGMP_engine condition" is set, and the U2M conversion transfer engine 101 transitions to the forward unicast state 205. In step S2207, the "Query_pkt_from_Query_engine condition" is set, and the U2M conversion transfer engine 101 transitions to the transfer state 204.
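The classification in steps S2201 to S2211 can be summarized by the following C sketch; the function and parameter names are illustrative, not taken from the patent:

#include <stdint.h>

enum next_state { U2M_CONVERT, TRANSFER, FORWARD_UNICAST, UPPER_LAYER };

/* Decide the next state of the U2M conversion transfer engine for one packet. */
enum next_state classify_packet(int from_unicast_port, int from_sub_igmp,
                                uint32_t dst, uint32_t u2m_addr, uint32_t own_ip)
{
    if (from_unicast_port) {
        if (dst == u2m_addr) return U2M_CONVERT;  /* S2210 */
        if (dst == own_ip)   return UPPER_LAYER;  /* S2209 */
        return FORWARD_UNICAST;                   /* S2211 */
    }
    if (from_sub_igmp)
        return FORWARD_UNICAST;                   /* S2206 */
    return TRANSFER;  /* query packet from the query engine, S2207 */
}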
[0044]
As described above, when the "unicast_pkt_destined_u2m_address condition" is set as a transition condition from the idle state 202, the U2M conversion transfer engine 101 transitions to the U2M conversion state 203. FIG. 5 is a flowchart illustrating the operation performed by the U2M conversion transfer engine 101 in the U2M conversion state 203. As shown in the figure, the U2M conversion transfer engine 101 first obtains the distribution source address of the packet that triggered the transition (S2301). Next, unicast-to-multicast conversion (U2M conversion) is applied to the packet according to the configured scheme (S2302).
[0045]
Several conversion schemes are conceivable, and the U2M conversion transfer engine 101 follows whichever one is configured as the current working scheme. For example, the U2M conversion transfer engine 101 can convert all traffic into one configured multicast group by changing the destination address field of every packet to the same multicast group address. The U2M conversion transfer engine 101 can also convert traffic based on the traffic's source address. One possible method keeps the low 8 bits of the original destination address and sets the high 24 bits of the destination address to the multicast group address prefix, but numerous schemes are possible.
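The low-8-bit scheme just described amounts to a single masking operation; a C sketch, with the prefix value an assumed example:

#include <stdint.h>

/* Keep the low 8 bits of the original destination and replace the high
   24 bits with a configured multicast prefix (e.g. 239.0.0.0, assumed). */
static uint32_t u2m_convert(uint32_t unicast_dst, uint32_t mcast_prefix)
{
    return (mcast_prefix & 0xFFFFFF00u) | (unicast_dst & 0x000000FFu);
}

/* Example: u2m_convert(0x0A000105, 0xEF000000) == 0xEF000005,
   i.e. 10.0.1.5 with prefix 239.0.0.0 maps to 239.0.0.5. */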
[0046]
Next, after the conversion in step S2302, the U2M conversion transfer engine 101 writes the new destination address into the packet's destination address field (S2303). The U2M conversion transfer engine 101 then transitions to the transfer state 204 after finishing the processing of the U2M conversion state 203; this transition is unguarded (it has no guard condition).
[0047]
As described above, when the "Query_pkt_from_Query_engine condition" is set as a transition condition from the idle state 202, the U2M conversion transfer engine 101 transitions to the transfer state 204. FIG. 6 is a flowchart illustrating the operation performed by the U2M conversion transfer engine 101 in the transfer state 204. As shown in the drawing, the U2M conversion transfer engine 101 first calculates the "grp_index" table index from the destination address of the packet (S2401). The mapping from the destination address to "grp_index" is configurable. One simple case is to use the low 8 bits of the destination address as the index, but numerous mapping schemes are possible and this is not the only one.
[0048]
After obtaining the "grm_index", the "forward_condition_tbl" table is checked to determine whether there is a port having a request for packets of this group, and whether the destination address is all_DRP address: 224.0.0.1. (S2402). If there is a port that has a request for traffic in this group, or if the destination address is "224.0.0.1", the packet is duplicated and forwarded to that port. These processes are performed in parallel when the U2M conversion transfer engine 101 is implemented in hardware. If no port has a request for this group's traffic, the packet is discarded and the associated source is released. After the transfer state 204, the U2M conversion transfer engine 101 transitions back to the idle state 202.
[0049]
As described above, if the "Pkt_from_Sub-IGMP_engine condition", the "unicast_not_for_U2M condition" or the "broadcast_pkt condition" is set as a transition condition from the idle state 202, the U2M conversion transfer engine 101 transitions to the forward unicast state 205. FIG. 7 is a flowchart illustrating the operation performed by the U2M conversion transfer engine 101 in the forward unicast state 205. As shown in the figure, the U2M conversion transfer engine 101 first obtains the packet and calculates the output port index for forwarding (S2501). This output port index is calculated based on the rules set in the box; the rules are configurable and will be described later. Next, the U2M conversion transfer engine 101 transfers the unicast packet to the output port (S2502). However, if the rules indicate that no port needs the packet, the packet is dropped and the associated resources are released.
[0050]
As described above, when the "Other_packet_destined_for_U2Mbox condition" is set as a transition condition from the idle state 202, the U2M conversion transfer engine 101 transitions to the upper layer state 206. FIG. 8 is a flowchart illustrating the operation performed by the U2M conversion transfer engine 101 in the upper layer state 206. As shown in the figure, the U2M conversion transfer engine 101 hands the packet to the upper layer for processing (S2601). The upper layer is any hardware or software module that reads the contents of the packet and acts according to them.
[0051]
FIG. 9 is a state transition diagram of the query and state update engine 102 shown in FIG. 1. The query and state update engine 102 has a function of periodically generating an "IGMP General_Query message" and a function of maintaining the transfer and receiver registration table 104.
[0052]
The query and state update engine 102 begins in an initialization state 301. FIG. 10 is a flowchart illustrating the operation performed by the query and state update engine 102 in the initialization state 301. As illustrated in FIG. 10, the query and state update engine 102 sets "Enable_Query" in the streaming control device of the present embodiment (S3101). If there are multiple devices in the same network, only one device acts as the querier that sends the "IGMP General_Query message". The decision as to which device becomes the querier is, as described below, made manually or by a predetermined protocol.
[0053]
Next, the query and state update engine 102 determines whether querying is enabled (S3102). If it is enabled (the device is configured as a querier), the process proceeds to step S3103; if not, the series of processing ends. In step S3103, the query and state update engine 102 obtains the configured query interval from the streaming control device of the present embodiment. A predetermined default value is built into the device and is optionally modified manually or through a specific protocol. Basically, the shorter the interval, the more IGMP messages the sub-IGMP processing engine 103 must process within the same period. On the other hand, the longer the interval, the longer it takes before the streaming control device of the present embodiment stops forwarding multicast traffic when an "IGMP Leave_Group message" is lost on the link.
[0054]
Next, the query and state update engine 102 forms an "IGMP General_Query message" (S3104). The message is formed according to RFC 2236, with the maximum response time derived from the query interval. Since the "IGMP General_Query message" always stays the same, the formed message is reused later. Next, the query and state update engine 102 starts a query timer (S3105), whose length is set to the query interval as described above. The query and state update engine 102 then transitions to the idle state 302; this transition is unguarded. The action associated with this transition is to set the query timer (Query_Timer).
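For reference, an IGMPv2 General Query per RFC 2236 is an 8-byte message: type 0x11, maximum response time in tenths of a second, an Internet checksum, and a group field of 0. A C sketch of forming it, assuming a byte-aligned struct:

#include <stdint.h>
#include <string.h>

/* IGMPv2 message layout per RFC 2236 (8 bytes, no padding on common ABIs). */
struct igmpv2_msg {
    uint8_t  type;          /* 0x11 = Membership Query          */
    uint8_t  max_resp_time; /* in units of 1/10 second          */
    uint16_t checksum;      /* Internet checksum of the 8 bytes */
    uint32_t group;         /* 0 for a General Query            */
};

/* Standard ones'-complement Internet checksum. */
static uint16_t inet_checksum(const void *data, size_t len)
{
    const uint16_t *w = data;
    uint32_t sum = 0;
    while (len > 1) { sum += *w++; len -= 2; }
    if (len) sum += *(const uint8_t *)w;
    while (sum >> 16) sum = (sum & 0xFFFFu) + (sum >> 16);
    return (uint16_t)~sum;
}

static void build_general_query(struct igmpv2_msg *q, uint8_t max_resp_tenths)
{
    memset(q, 0, sizeof *q);             /* group = 0, checksum = 0 */
    q->type = 0x11;
    q->max_resp_time = max_resp_tenths;  /* derived from the query interval */
    q->checksum = inet_checksum(q, sizeof *q);
}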
[0055]
When the "Query_timer_expire condition" is set while the streaming control device of the present embodiment is configured as the querier, the query and state update engine 102 transitions from the idle state 302 to the update state table state 303; otherwise, the transition is triggered by a signal from the sub-IGMP processing engine 103. FIG. 11 shows a flowchart describing the operation performed by the query and state update engine 102 in the update state table state 303. As shown in FIG. 11, in step S3301 the value of Forward_status_tbl[port_no][grp_index] is decremented for every multicast port and group whose counter value is not 0 (zero); when a counter value reaches 0, receiver_register_tbl[port_no][grp_index] is reset. This step is performed for all ports and all groups, and can be executed in parallel in hardware.
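Step S3301 amounts to the following aging pass over the two tables, sketched in C with the assumed dimensions from earlier (the text's Forward_status_tbl appears to denote the same per-port, per-group table as forward_condition_tbl; in hardware the cells would be updated in parallel rather than in a loop):

#include <stdint.h>

extern int      forward_status_tbl[8][256];    /* per-port, per-group counters  */
extern uint32_t receiver_register_tbl[8][256]; /* per-port, per-group receivers */

/* S3301: decrement every non-zero counter; when a counter reaches zero,
   clear the receiver record for that port/group cell. */
static void age_tables(void)
{
    for (int port_no = 0; port_no < 8; port_no++)
        for (int grp_index = 0; grp_index < 256; grp_index++)
            if (forward_status_tbl[port_no][grp_index] > 0 &&
                --forward_status_tbl[port_no][grp_index] == 0)
                receiver_register_tbl[port_no][grp_index] = 0;
}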
[0056]
The query and state update engine 102 transitions from the update state table state 303 to the query state 304 when the streaming control device of the present embodiment is configured as the querier; otherwise, it transitions to the idle state 302. FIG. 12 shows a flowchart describing the operation performed by the query and state update engine 102 in the query state 304. As shown in FIG. 12, in step S3201 the formed "IGMP General_Query message" is transmitted to the U2M conversion transfer engine 101 via data path A. The query and state update engine 102 then transitions from the query state 304 to the idle state 302. On this transition, the query timer (Query_Timer) is set to the value given in advance.
[0057]
FIG. 13 is a state transition diagram of the sub-IGMP processing engine 103 shown in FIG. 1. The sub-IGMP processing engine 103 processes the IGMP messages received at the multicast ports and updates the transfer and receiver registration table 104 accordingly. The sub-IGMP processing engine 103 also forwards unicast packets it receives, either to the unicast port or via the U2M conversion transfer engine 101, according to rules preset in the streaming control device of the present embodiment.
[0058]
The sub-IGMP processing engine 103 starts from the initialization engine 401 state. FIG. 14 is a flowchart illustrating the operation performed by the sub-IGMP processing engine 103 in the initialization engine 401 state. As illustrated in FIG. 14, the sub-IGMP processing engine 103 reads the hardware settings of the streaming control device of the present embodiment (S4101). The settings include the unicast port address, the multicast port addresses, the number of multicast ports, and so on.
[0059]
Next, the sub-IGMP processing engine 103 obtains the software settings from the streaming control device of the present embodiment (S4102). Possible settings include the IP address of the device, the network robustness setting, whether the device is the querier, and the address of the control port. Next, the sub-IGMP processing engine 103 initializes the unicast transfer table when required by the settings obtained in the above steps (S4103).
[0060]
Next, the sub-IGMP processing engine 103 transitions from the initialization engine 401 state to the idle 402 state. In the present embodiment, it is assumed that a transition out of the idle state 402 is triggered by the arrival of a packet, and that all other events fall into a "default" category that transitions back to the idle state 402 itself. There are many types of events, and this assumption is made only for convenience of explanation.
[0061]
FIG. 15 is a flowchart illustrating the operation performed by the sub-IGMP processing engine 103 in the idle 402 state. The process described below is executed when the sub-IGMP processing engine 103 transitions from the idle state 402 to another state. As shown in FIG. 15, the sub-IGMP processing engine 103 obtains the packet that triggered the event (S4201). Next, the sub-IGMP processing engine 103 determines whether the packet acquired in step S4201 is an IP packet (S4202). If the packet is an IP packet, the process proceeds to step S4203; otherwise, it proceeds to step S4207.
[0062]
In step S4207 (when the packet is not an IP packet), the "Other_packet condition" is set, and the sub-IGMP processing engine 103 transitions to the drop packet 406 state. On the other hand, in step S4203 (when the packet is an IP packet), it is determined by checking the packet's header information whether the packet acquired in step S4201 is an IGMP packet. If the packet is not an IGMP packet, the process proceeds to step S4204; if it is an IGMP packet, the process proceeds to step S4205. In the header information, an IGMP packet of interest carries a class D destination IP address.
[0063]
In step S4204 (when the packet is not an IGMP packet), it is determined whether the packet is a unicast packet. If it is a unicast packet, the process proceeds to step S4206; if not, the process proceeds to step S4207 described above and the engine transitions to the drop packet 406 state. In step S4206 (when the packet is a unicast packet), the "unicast_pkt_received condition" is set, and the sub-IGMP processing engine 103 transitions to the unicast forward 405 state.
[0064]
If it is determined in step S4203 that the packet is an IGMP packet, the process proceeds to step S4205, where the IGMP type and group fields of the packet are obtained. Next, in step S4208, the group address is checked against the settings in the streaming control device of the present embodiment, and it is determined whether the IGMP group matches the multicast group prefix and the type field equals "Membership_Report" or "Leave_Group", that is, whether the message is a "Membership_Report message" or a "Leave_Group message" for a handled group. If so, the process proceeds to step S4209; if not, it proceeds to step S4210.
[0065]
In step S4209 (if the conditions match), the "Received_IGMP_membership_report_pkt condition" or the "Receive_IGMP_Leave_group_pkt condition" is set according to the type field, and the sub-IGMP processing engine 103 transitions to the combined report 403 state or the leave report 404 state, respectively. On the other hand, in step S4210 (if they do not match), it is determined whether the message is an "IGMP General_Query message" and whether the streaming control device of the present embodiment is a non-querier. If both conditions are satisfied, the process proceeds to step S4211; if not, it proceeds to step S4207, and the sub-IGMP processing engine 103 transitions to the drop packet 406 state. In step S4211, the "non-Querier && IGMP_general_query condition" is set, and the sub-IGMP processing engine 103 transitions to the trigger inquiry engine 407 state.
[0066]
Next, upon receiving an "IGMP Membership_Report message", the sub-IGMP processing engine 103 transitions from the idle 402 state to the combined report 403 state. FIG. 16 is a flowchart illustrating the operation performed by the sub-IGMP processing engine 103 in the combined report 403 state. As shown in FIG. 16, the sub-IGMP processing engine 103 obtains the port number "port_No" of the port on which the packet was received (S4301). Next, the group address in the packet is mapped to a group index "grp_index" by the same hash function used in the transfer state of the U2M conversion transfer engine 101 (S4302). Next, the sub-IGMP processing engine 103 sets the corresponding cell of the "Forward_status_tbl[port_no][grp_index] table" to a value that enables forwarding of the multicast traffic (S4303).
[0067]
Next, the sub-IGMP processing engine 103 obtains the source address of the IGMP message (S4304) and sets the corresponding bit of the corresponding cell in the receiver registration table "Receiver_register_tbl[port_no][grp_index]" (S4305). The record need not be a bitmap; it can be kept in any form, and the receiver registration table can be multidimensional. When the above steps have been performed, the message packet is destroyed and the associated resources are released (S4306). Then, the sub-IGMP processing engine 103 transitions back from the combined report 403 state to the idle 402 state.
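In C, the join handling of steps S4301 to S4305 might look as follows; the hash and the one-bit-per-receiver encoding are illustrative, since the text allows any mapping and any record form:

#include <stdint.h>

extern int      forward_status_tbl[8][256];
extern uint32_t receiver_register_tbl[8][256];

#define FORWARD_TTL 2  /* assumed refresh value; must outlast one query interval */

static void handle_membership_report(int port_no, uint32_t group_addr,
                                     uint32_t src_addr)
{
    int grp_index = (int)(group_addr & 0xFFu);  /* same hash as the transfer state */
    /* S4303: enable forwarding of this group's traffic on this port. */
    forward_status_tbl[port_no][grp_index] = FORWARD_TTL;
    /* S4304-S4305: record the receiver; here one bit per host, keyed by the
       low 5 bits of its source address (illustrative only). */
    receiver_register_tbl[port_no][grp_index] |= 1u << (src_addr & 0x1Fu);
}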
[0068]
Similarly, upon receiving an "IGMP Leave_Group message", the sub-IGMP processing engine 103 transitions from the idle 402 state to the leave report 404 state. FIG. 17 is a flowchart illustrating the operation performed by the sub-IGMP processing engine 103 in the leave report 404 state. Steps S4401, S4402, and S4403 are the same as steps S4301, S4302, and S4304 performed by the sub-IGMP processing engine 103 in the combined report 403 state shown in FIG. 16, and the hash function and the mapping method are the same for both. After these three steps, the sub-IGMP processing engine 103 obtains the corresponding bit of the receiver registration table "Receiver_register_tbl[port_no][grp_index]" (S4404). Next, it is determined whether the corresponding bit is set (S4405); this checks whether the source previously sent a "Membership_Report message" for this group. If the source has not requested the group, the sub-IGMP processing engine 103 does not process the "IGMP Leave_Group message".
[0069]
If the corresponding bit was set in step S4405, that is, if the source had transmitted a "Membership_Report message", the process advances to step S4406, where the corresponding bit is cleared. Next, the sub-IGMP processing engine 103 determines whether this is the last receiver to leave the group on this port (S4407). If it is the last receiver, the process proceeds to step S4408, where the corresponding cell Forward_status_tbl[port_no][grp_index] of the transfer status table is set to a value that disables forwarding of the group's traffic by the U2M conversion transfer engine 101. When the above steps have been performed, the message packet is destroyed and the associated resources are released (S4409). Then, the sub-IGMP processing engine 103 transitions back from the leave report 404 state to the idle 402 state; this transition is unguarded.
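The complementary leave handling (S4404 to S4408) then becomes, continuing the sketch above:

#include <stdint.h>

extern int      forward_status_tbl[8][256];
extern uint32_t receiver_register_tbl[8][256];

static void handle_leave_group(int port_no, uint32_t group_addr, uint32_t src_addr)
{
    int grp_index = (int)(group_addr & 0xFFu);
    uint32_t bit = 1u << (src_addr & 0x1Fu);

    /* S4404-S4405: ignore a leave from a source that never joined. */
    if (!(receiver_register_tbl[port_no][grp_index] & bit))
        return;

    receiver_register_tbl[port_no][grp_index] &= ~bit;   /* S4406: clear the bit  */
    if (receiver_register_tbl[port_no][grp_index] == 0)  /* S4407: last receiver? */
        forward_status_tbl[port_no][grp_index] = -1;     /* S4408: stop forwarding */
}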
[0070]
In addition, when the sub IGMP processing engine 103 receives a unicast packet and transfers it, it transitions from the idle 402 state to the unicast forward 405 state. FIG. 18 is a flowchart illustrating an operation performed by the sub IGMP processing engine 103 in the unicast forward 405 state. As shown in FIG. 18, the sub IGMP processing engine 103 obtains a unicast packet (S4501). Next, the transfer rule and the device status are checked to determine a multicast port to which the packet should be transferred (S4502). Next, the packet is transferred to the unicast port and the multicast port determined in step S4502 (S4503). Then, the sub IGMP processing engine 103 transitions from the unicast forward 405 state to the idle 402 state.
[0071]
Further, when the “Other_packet condition” is set in the idle 402 state, the sub IGMP processing engine 103 transitions from the idle 402 state to the drop packet 406 state. FIG. 19 is a flowchart illustrating an operation performed by the sub IGMP processing engine 103 in the state of the drop packet 406. As shown in FIG. 19, the sub-IGMP processing engine 103 drops the packet and releases the related distribution source (S4601). Then, the sub-IGMP processing engine 103 transitions from the state of the drop packet 406 to the state of the idle 402.
[0072]
Further, when the "non_Querier && IGMP_general_query condition" is set in the idle 402 state, the sub-IGMP processing engine 103 transitions from the idle 402 state to the trigger inquiry engine 407 state. FIG. 20 is a flowchart illustrating the operation performed by the sub-IGMP processing engine 103 in the trigger inquiry engine 407 state. As shown in FIG. 20, the sub-IGMP processing engine 103 sends a signal to the query and state update engine 102 to indicate the arrival of an "IGMP General_Query message" (S4701). Since multiple copies of the same "General_Query message" may be received on different ports, the identification number of the IP packet is used to determine whether this is a new query message; only new query messages trigger the signal. Then, the sub-IGMP processing engine 103 transitions from the trigger inquiry engine 407 state back to the idle 402 state.
[0073]
The "Forward_status_tbl table" and the "Receiver_register_tbl table", which make up the transfer and receiver registration table 104, can be realized in hardware or in software. When implemented in software, the tables are created from the device settings and modified dynamically. As shown in FIG. 1, these tables are accessible from all three engines (data paths B, C, D), but only the query and state update engine 102 and the sub-IGMP processing engine 103 change the table contents. Therefore, control that arbitrates simultaneous access to the tables is required. This control is performed with a lock signal in a hardware implementation and with, for example, a "mutex" in a software implementation.
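For a software implementation, the arbitration could be the usual mutex pattern; a minimal sketch using POSIX threads, with the accessor name illustrative:

#include <pthread.h>

extern int forward_status_tbl[8][256];

/* One lock serializing all access to the shared tables. */
static pthread_mutex_t tbl_lock = PTHREAD_MUTEX_INITIALIZER;

static void set_forward_status(int port_no, int grp_index, int value)
{
    pthread_mutex_lock(&tbl_lock);
    forward_status_tbl[port_no][grp_index] = value;
    pthread_mutex_unlock(&tbl_lock);
}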
[0074]
As described above, the architecture of the streaming control device of the present embodiment does not limit the number of unicast and multicast ports. The streaming control device of the present embodiment can therefore easily be implemented with multiple multicast ports and unicast ports using the same engines; the number of ports is determined by hardware limitations rather than by the above algorithms. This architecture also allows a unicast port to become a multicast port, and vice versa, simply by combining the U2M conversion and transfer engine 101 and the sub-IGMP processing engine 103.
[0075]
The functionality of each engine of the streaming control device of the present embodiment may be enabled or disabled. When the engines are enabled, the device operates normally as described above. When an engine is disabled, the device is transparent to the corresponding traffic and acts like a bridge or switch. Enabling and disabling are straightforward: for example, a lock signal can be used when an engine is implemented in hardware, and a flag when it is implemented in software. These operations can also be performed by remote control.
[0076]
FIG. 21 is an explanatory diagram showing a scenario in which a service is provided to a monitoring system using the streaming control device of the present embodiment. As shown in FIG. 21, a plurality of clusters of cameras (data sources) are connected to the device by layer 2 switches, which correspond to the switches in the claims, and the devices exist in the same network. This network structure makes it easy to extend the network to any number of cameras. Due to hardware limitations such as link capacity, one device can connect only up to a certain number of cameras (data sources); with this structure the problem is solved by providing a plurality of streaming control devices of the present embodiment. A monitoring device connected to the device can monitor traffic from any camera in the camera clusters, and can also switch between cameras at high speed. The switching speed is determined only by the processing speed of the sub-IGMP processing engine 103.
[0077]
As shown in FIG. 21, when one or more streaming control devices of the present embodiment are in a network, the layer 2 switches and the devices form loops, and careless handling can therefore cause packet looping. The techniques described below provide a solution to this problem.
[0078]
As described above, when there are a plurality of streaming control devices of the present embodiment in the same network, one of them acts as the querier and periodically sends out the "IGMP General_Query message". The selection of this querier is performed manually or by a predetermined protocol. Because of the symmetry of the network architecture, any one of the devices can be made the querier without affecting the performance of the network. The selection can also be made by an election exchange: for example, every device initially considers itself the querier and starts sending "IGMP General_Query messages"; when a device receives a "General_Query message" from another device, it checks the sender's address, and if the sender's address is lower than its own, it becomes a non-querier and performs the non-querier routine. Existing well-defined election messages can also be used for the same purpose.
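The election rule sketched above ("lowest address wins") reduces to a few lines of C; the state variable and function names are illustrative:

#include <stdint.h>

static int is_querier = 1;  /* every device initially assumes the querier role */

/* Called whenever a General_Query is received from another device. */
static void on_general_query(uint32_t own_addr, uint32_t sender_addr)
{
    if (sender_addr < own_addr)
        is_querier = 0;  /* defer to the device with the lower address */
}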
[0079]
When operating the streaming control device of the present embodiment with a plurality of multicast ports, one of the ports is configured as the "control_port", which forwards packets separately. The selection of the "control_port" is made manually at deployment time or by a specific protocol. In the manual case, the operator can select any layer 2 switch connected to the monitor and configure the multicast port connected to that switch as the "control_port" of the device. With a protocol, this is a simple routine that can be accomplished as follows: after the querier has been identified, any multicast port can be selected as the "control_port" based on a predetermined arrangement; the querier transmits a "control_port_msg" from its control port, and a non-querier configures the port on which it received this message as its "control_port".
[0080]
The following is an example of the rules used to prevent packet looping in the network.
For the U2M conversion transfer engine 101: (1) if the packet is from a unicast port, it is forwarded to the multicast ports when the streaming control device of the present embodiment is the querier, and to the "control_port" when it is a non-querier; (2) if the packet is from the sub-IGMP processing engine 103, it is forwarded to all ports except the receiving port when the device is the querier, and dropped when it is a non-querier. The sub-IGMP processing engine 103, on the other hand, drops all multicast packets other than IGMP messages, forwards unicast packets to the unicast port, and passes IGMP packets to the U2M conversion transfer engine 101.
[0081]
<Rule 1> Rules for transferring packets when a plurality of devices exist in the same network
These rules are either set at deployment time or set remotely by a protocol at run time. One possible protocol is to run a process on the device that listens over TCP and accepts control messages of the form specified below.
struct Rule_msg {
    int enable_u2m;
    int enable_igmp;
    int querier_flag;
    int control_port_index;
    int u2m_unicast_querier_pt;
    int u2m_unicast_non_querier_pt;
    int u2m_igmp_querier_pt;
    int u2m_igmp_non_querier_pt;
    int igmp_multicast_pt;
    int igmp_unicast_pt;
    int igmp_other_pt;
};
[0082]
<Data structure 1> Message that conveys rules for packet transfer
The fields of the data structure are defined as follows:
"enable_u2m" - This field indicates whether the U2M conversion and transfer engine 101 of the device is enabled. Values: 1, enable; -1, disable.
"enable_igmp" - This field indicates whether the sub-IGMP processing engine 103 of the device is enabled. Values: 1, enable; -1, disable.
"querier_flag" - This field indicates whether this device is configured as the querier. Values: 1, set as querier; -1, set as non-querier.
"control_port_index" - This field indicates which port to set as the control port.
"u2m_unicast_querier_pt" - This field conveys the bitmap of the ports to which a packet should be forwarded when it arrives from a unicast port of the U2M conversion and transfer engine 101 and the device is the querier.
"u2m_unicast_non_querier_pt" - This field conveys the bitmap of the ports to which a packet should be forwarded when it arrives from a unicast port of the U2M conversion and transfer engine 101 and the device is a non-querier.
"u2m_igmp_querier_pt" - This field conveys the bitmap of the ports to which a packet should be forwarded when it arrives at the U2M conversion and transfer engine 101 from the sub-IGMP processing engine 103 and the device is the querier.
"u2m_igmp_non_querier_pt" - This field conveys the bitmap of the ports to which a packet should be forwarded when it arrives at the U2M conversion and transfer engine 101 from the sub-IGMP processing engine 103 and the device is a non-querier.
"igmp_multicast_pt" - This field conveys the bitmap of the ports to which a multicast packet arriving at the sub-IGMP processing engine 103 should be forwarded. All zeros means the packet should be dropped.
"igmp_unicast_pt" - This field indicates whether unicast packets received by the sub-IGMP processing engine 103 are to be forwarded. Values: 1, forward; -1, drop the packet.
"igmp_other_pt" - This field indicates whether other packets (neither unicast nor IGMP message packets) are to be forwarded.
[0083]
The above rule message data type is an illustrative example. In practice, messages can take a variety of forms, may have more or fewer fields, and may include special message types for signaling and acknowledgment.
[0084]
The streaming control device of this embodiment can log real-time statistics on the traffic passing through it and make them available locally or remotely. This is done by running a process on the device that, on request over a TCP port, transmits a log report message of the following form.
[0085]
struct log_report {
    int time;
    int device_identity;
    int port_num;
    int grp_num;
    struct grp_log grp_report[port_num][grp_num];  /* variable-size array */
};
struct grp_log {
    int port_index;
    int grp_index;
    int pkt_forwarded;
    int giga_pkt_forwarded;
    int pkt_dropped;
    int giga_pkt_dropped;
    int bits_forwarded;
    int giga_bits_forwarded;
    int bits_dropped;
    int giga_bits_dropped;
    int avg_bandwidth;
    int receiver_num;
    struct receiver_log receiver_report[receiver_num];  /* variable-size array */
};
struct receiver_log {
    int receiver_identity;
    int join_time;
    int leave_time;
    int avg_bandwidth;
};
[0086]
<Data structure 2> Data structure for log report message
The fields of the data structures are defined as follows.
"time" - This field conveys information about when this report was generated. The meaning of the value depends on the actual processing system.
"device_identity" - This field conveys the identity of the device that this report concerns. The meaning of the value depends on the processing system.
"port_num" - This field indicates the number of ports included in this report.
"grp_num" - This field indicates the number of groups included in this report.
"grp_report" - This is a variable-size array that contains information about a group on a given port; its element type is struct grp_log.
"port_index" - This indicates the port index for this group log record.
"grp_index" - This indicates the group index for this group log record.
"pkt_forwarded", "giga_pkt_forwarded" - These two fields indicate the number of packets forwarded for this group's traffic on this port.
"pkt_dropped", "giga_pkt_dropped" - These two fields indicate the number of packets dropped for this group's traffic on this port.
"bits_forwarded", "giga_bits_forwarded" - These two fields indicate the number of bits forwarded for this group's traffic on this port.
"bits_dropped", "giga_bits_dropped" - These two fields indicate the number of bits dropped for this group's traffic on this port.
"avg_bandwidth" - This field is the average bandwidth of this group's traffic on this port.
"receiver_num" - This field indicates the number of receiver records included in the report.
"receiver_report" - This is a variable-size array containing receiver_num receiver records; its element type is struct receiver_log.
"receiver_identity" - This indicates the identity of the receiver for this receiver record.
"join_time" - This field indicates the time the receiver joined the group.
"leave_time" - This field indicates the time the receiver left the group.
"avg_bandwidth" - This field indicates the average bandwidth of the receiver.
[0087]
These message types are for illustration only and should not be considered the only ones usable with the device; the logged information can also be handled in other ways. The above information is collected by all three engines and stored in the device or in an external storage device as a database for later retrieval.
[0088]
As described above, the streaming control device of the present embodiment can provide quality of service to the traffic passing through it. For example, throughput parameters can be set for a given receiver, so that traffic is shaped when packets are forwarded to a port of the device. Further, deploying the device in a network requires no special changes to the other network nodes, so the impact and cost of deployment are minimized and the device can easily be added to an existing system.
[0089]
The device also supports remote configuration and reporting; that is, a remote or centralized controller can manage distributed devices. Large networks can be supported by interconnecting several devices to form a framework. The device likewise facilitates a remote or centralized monitor that provides real-time statistics on local traffic for remote access. Further, a framework for recording QoS information and a method for incorporating a special QoS framework are provided.
[0090]
In addition, a dedicated port can be allocated to the above-mentioned camera or surveillance device. When there are several such devices, predetermined attributes are assigned to each device, and traffic, in particular predetermined packets, is forwarded to an output port based on predetermined configuration rules; this can prevent looping in the network. The device and the device attributes of such a link or network service port can also be self-configured.
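A minimal sketch of such rule-based forwarding, assuming a simple table of permitted input/output port pairs; the table layout, sizes, and names are assumptions made for this example.

#define MAX_PORTS 8

/* Hypothetical configuration rule table: rules[in][out] is nonzero when
 * traffic arriving on port `in` may be forwarded to port `out`. Keeping
 * the diagonal at zero (never echoing a packet back onto its ingress
 * port) is one simple way to avoid a loop in the network. */
static unsigned char rules[MAX_PORTS][MAX_PORTS];

static int may_forward(int in_port, int out_port)
{
    if (in_port == out_port)
        return 0;                      /* never reflect traffic */
    return rules[in_port][out_port];   /* apply the configured rule */
}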
[0091]
【The invention's effect】
As described above, according to the streaming control device of the present invention, it can be known immediately, from the receiver information recorded in the multicast traffic transfer information database, whether another receiver is requesting the same traffic. When no such receiver remains, distribution of the traffic can therefore be stopped immediately. As a result, traffic on the subnet can be reduced efficiently.
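A minimal sketch of this bookkeeping, assuming a fixed-size receiver list per multicast group; all sizes and identifiers are assumptions for illustration. Because the receiver count is maintained incrementally, the Leave_Group path can stop forwarding at once, without a Group_Query round trip.

#define MAX_RECEIVERS 32

/* Sketch of one entry of the multicast traffic transfer information
 * database: the receivers currently registered for a group. */
struct group_entry {
    unsigned int group_addr;           /* class D group address */
    int receivers[MAX_RECEIVERS];      /* registered receiver ids */
    int receiver_count;
    int forwarding;                    /* 1 while traffic is forwarded */
};

/* Membership_Report: record the receiver and enable forwarding. */
void on_membership_report(struct group_entry *g, int receiver_id)
{
    if (g->receiver_count < MAX_RECEIVERS)
        g->receivers[g->receiver_count++] = receiver_id;
    g->forwarding = 1;
}

/* Leave_Group: remove the receiver; the database tells us immediately
 * whether anyone else still wants this traffic. */
void on_leave_group(struct group_entry *g, int receiver_id)
{
    for (int i = 0; i < g->receiver_count; i++) {
        if (g->receivers[i] == receiver_id) {
            g->receivers[i] = g->receivers[--g->receiver_count];
            break;
        }
    }
    if (g->receiver_count == 0)
        g->forwarding = 0;   /* last receiver left: stop transfer at once */
}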
[Brief description of the drawings]
FIG. 1 is a block diagram illustrating a streaming control device that converts unicast data packet traffic into multicast data packet traffic and transfers the traffic to a receiver based on a request from the receiver.
FIG. 2 is a state transition diagram of the U2M conversion transfer engine 101.
FIG. 3 is a flowchart illustrating the operation performed by the U2M conversion transfer engine 101 in the initialization state 201.
FIG. 4 is a flowchart illustrating the operation performed by the U2M conversion transfer engine 101 in the idle state 202.
FIG. 5 is a flowchart illustrating the operation performed by the U2M conversion transfer engine 101 in the U2M conversion state 203.
FIG. 6 is a flowchart illustrating the operation performed by the U2M conversion transfer engine 101 in the transfer state 204.
FIG. 7 is a flowchart illustrating the operation performed by the U2M conversion transfer engine 101 in the forward unicast state 205.
FIG. 8 is a flowchart illustrating the operation performed by the U2M conversion transfer engine 101 in the upper layer state 206.
FIG. 9 is a state transition diagram of the query and state update engine 102.
FIG. 10 is a flowchart illustrating the operation performed by the query and state update engine 102 in the initialization state 301.
FIG. 11 is a flowchart illustrating the operation performed by the query and state update engine 102 in the stable state 303.
FIG. 12 is a flowchart illustrating the operation performed by the query and state update engine 102 in the query state 304.
FIG. 13 is a state transition diagram of the sub IGMP processing engine 103.
FIG. 14 is a flowchart illustrating the operation performed by the sub IGMP processing engine 103 in the initialization state 401.
FIG. 15 is a flowchart illustrating the operation performed by the sub IGMP processing engine 103 in the idle state 402.
FIG. 16 is a flowchart illustrating the operation performed by the sub IGMP processing engine 103 in the join report state 403.
FIG. 17 is a flowchart illustrating the operation performed by the sub IGMP processing engine 103 in the leave report state 404.
FIG. 18 is a flowchart illustrating the operation performed by the sub IGMP processing engine 103 in the unicast forward state 405.
FIG. 19 is a flowchart illustrating the operation performed by the sub IGMP processing engine 103 in the drop packet state 406.
FIG. 20 is a flowchart illustrating the operation performed by the sub IGMP processing engine 103 in the trigger query engine state 407.
FIG. 21 is an explanatory diagram showing a scenario in which a service is provided to a monitoring system using the streaming control device of the embodiment.
[Explanation of symbols]
101 U2M conversion transfer engine
102 Query and state update engine
103 Sub IGMP processing engine
104 Transfer and receiver registration table

Claims (9)

  1. A streaming control device that distributes content by multicast in response to a request from a receiver, the device comprising:
    traffic conversion means for converting unicast traffic input from a data distribution source into multicast traffic; and
    traffic distribution means for distributing the multicast traffic converted by the traffic conversion means, based on a predetermined rule and on information stored in a multicast traffic transfer information database that stores the transfer status of the content and the receivers requesting delivery of the content.
  2. The streaming control device according to claim 1, further comprising:
    message transmission means for creating and transmitting IGMP messages at predetermined intervals;
    information updating means for updating the information for multicast traffic transfer, based on the IGMP messages transmitted from the message transmission means and on the responses to those messages; and
    interval setting means for setting the interval at which the IGMP messages are transmitted.
  3. The streaming control device according to claim 1, wherein the traffic conversion means comprises:
    data packet selection means for selecting data packets of the unicast traffic input from the data distribution source; and
    data conversion means for converting the data packets of the unicast traffic selected by the data packet selection means into data packets of the multicast traffic.
  4. The streaming control device according to claim 2 or 3, further comprising database updating means for updating the multicast traffic transfer information database when an IGMP message is received on a multicast port, such that:
    if the IGMP message is a “Membership_Report message” for an Internet group, transfer of the traffic for the multicast group corresponding to the message is enabled and the receiver is registered in the database for that multicast group; and
    if the IGMP message is a “Leave_Group message”, the receiver registered in the database for the corresponding multicast group is removed and, when no receiver remains linked to the group, transfer of the traffic for that multicast group is disabled.
  5. The streaming control device according to claim 4, wherein the traffic distribution means and the database updating means share the multicast traffic transfer information database.
  6. The streaming control device according to any one of claims 1 to 5, further comprising:
    a plurality of output links for outputting multicast traffic data packets; and
    a plurality of input links for inputting unicast traffic data packets.
  7. The streaming control device according to claim 6, wherein
    the input links also have the function of the output links,
    the output links also have the function of the input links, and
    the input links are connected to a plurality of data distribution sources while the output links are connected to a plurality of receivers.
  8. A network system comprising the streaming control device according to any one of claims 1 to 7, wherein
    the receiver can receive a data stream from any data distribution source in any cluster,
    the streaming control device is connected to a switch that is connected to the data distribution sources, and
    the network is configured to support a plurality of clusters of the data distribution sources and the receivers by connecting each output of the streaming control device to the receivers via the switch.
  9. The network system according to claim 8, wherein the streaming control device is used to switch between the data distribution sources at a high frequency.
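
For illustration only, and not as part of the claims, the following C sketch shows one way the traffic conversion means of claims 1 and 3 could be realized for IPv4: a data packet selected from the unicast traffic has its destination address rewritten to a class D group address. The header layout is simplified, the selection rule is a stand-in, the checksum update is omitted, and all names here are assumptions.

#include <stdint.h>

/* Simplified IPv4 header; real code would keep network byte order and
 * update the header checksum after the rewrite. */
struct ipv4_hdr {
    uint8_t  ver_ihl, tos;
    uint16_t total_len, id, frag_off;
    uint8_t  ttl, proto;
    uint16_t checksum;
    uint32_t src_addr, dst_addr;
};

/* Stand-in for the data packet selection means: here, select by the
 * address of the data distribution source. */
static int selected_for_conversion(const struct ipv4_hdr *h,
                                   uint32_t distribution_src)
{
    return h->src_addr == distribution_src;
}

/* Stand-in for the data conversion means: rewrite the destination to a
 * multicast group address, e.g. one in 224.0.0.0/4. */
void u2m_convert(struct ipv4_hdr *h, uint32_t group_addr,
                 uint32_t distribution_src)
{
    if (!selected_for_conversion(h, distribution_src))
        return;              /* other unicast traffic passes untouched */
    h->dst_addr = group_addr;
    /* checksum recomputation omitted for brevity */
}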
JP2003014884A 2003-01-23 2003-01-23 Streaming controller Pending JP2004228968A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2003014884A JP2004228968A (en) 2003-01-23 2003-01-23 Streaming controller

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2003014884A JP2004228968A (en) 2003-01-23 2003-01-23 Streaming controller

Publications (1)

Publication Number Publication Date
JP2004228968A (en) 2004-08-12

Family

ID=32902796

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2003014884A Pending JP2004228968A (en) 2003-01-23 2003-01-23 Streaming controller

Country Status (1)

Country Link
JP (1) JP2004228968A (en)


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008510432A (en) * 2004-08-16 2008-04-03 Qualcomm Flarion Technologies, Inc. Method and apparatus for managing group membership for group communication
JP2010183587A (en) * 2004-08-16 2010-08-19 Qualcomm Inc Group communication signal methods and apparatus
US8488602B2 (en) 2004-08-16 2013-07-16 Qualcomm Incorporated Methods and apparatus for transmitting group communication signals
US8565801B2 (en) 2004-08-16 2013-10-22 Qualcomm Incorporated Methods and apparatus for managing group membership for group communications
US9503866B2 (en) 2004-08-16 2016-11-22 Qualcomm Incorporated Methods and apparatus for managing group membership for group communications
CN107484037A * 2017-09-22 2017-12-15 Shanghai Phicomm Data Communication Technology Co., Ltd. Method and system for enabling a wireless receiving device to control a video stream
