EP2903223A1 - Method of transmitting image information and packet communication system - Google Patents

Method of transmitting image information and packet communication system

Info

Publication number
EP2903223A1
Authority
EP
European Patent Office
Prior art keywords
packets
packet
image information
image
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP13841733.2A
Other languages
German (de)
English (en)
Other versions
EP2903223A4 (fr)
Inventor
Kazunori Ozawa
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NEC Corp
Original Assignee
NEC Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corp
Publication of EP2903223A1 publication Critical patent/EP2903223A1/fr
Publication of EP2903223A4 publication Critical patent/EP2903223A4/fr
Withdrawn legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/12 Shortest path evaluation
    • H04L45/121 Shortest path evaluation by minimising delays
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0852 Delays
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60 Network streaming of media packets
    • H04L65/75 Media network packet handling
    • H04L65/762 Media network packet handling at the source
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/80 Responding to QoS
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 Network traffic management; Network resource management
    • H04W28/02 Traffic management, e.g. flow control or congestion control
    • H04W28/0231 Traffic management, e.g. flow control or congestion control based on communication conditions
    • H04W28/0236 Traffic management, e.g. flow control or congestion control based on communication conditions radio quality, e.g. interference, losses or delay
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W72/00 Local resource management
    • H04W72/50 Allocation or scheduling criteria for wireless resources
    • H04W72/52 Allocation or scheduling criteria for wireless resources based on load
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/28 Flow control; Congestion control in relation to timing considerations
    • H04L47/283 Flow control; Congestion control in relation to timing considerations in response to processing delays, e.g. caused by jitter or round trip time [RTT]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/36 Flow control; Congestion control by determining packet size, e.g. maximum transfer unit [MTU]
    • H04L47/365 Dynamic adaptation of the packet size

Definitions

  • This invention relates to transmission of image information through use of packet communication.
  • More particularly, this invention relates to transmission of image information through use of packet communication via a data communication network including, in at least a part thereof, a wireless communication section such as a mobile communication network.
  • In packet communication, a packet delay occurs in some cases depending on the traffic congestion situation of the packet communication network.
  • In mobile communication such as mobile phone communication, the traffic congestion situation varies greatly depending on the locations of terminals and on the time.
  • When transmission is performed at a bit rate set in advance on the assumption of a certain congestion situation, a bit rate suitable for the actual traffic congestion situation is not necessarily achieved.
  • When the actual traffic is more congested than the assumed one, a packet delay occurs and the real-time characteristic thus deteriorates.
  • Conversely, when the actual traffic is less congested than the assumed one, the opportunity to transmit at a higher bit rate, at which the data could have been transmitted without a delay under the actual traffic situation, is missed.
  • The thin client is a technology in which a virtual client on a server is operated from a terminal as if an actual terminal were being operated; an application is run through the virtual client to generate screen information, and the screen information is transferred to the terminal to be displayed on its screen.
  • The thin client has the advantage that, because no data remains on the terminal, there is no fear of secret information, corporate information, and the like leaking to the outside even if the terminal is lost.
  • Patent Document 1: JP-A-2011-193357
  • Unless the amount of data transferred with the use of the thin client is suppressed to an amount manageable by the available bandwidth or less, the data remains in the middle of the network; as a result, the delay time that elapses before the data arrives at the terminal becomes longer, the screen of the terminal freezes owing to the delayed arrival of data for updating the screen, or the response speed of the terminal decreases.
  • Patent Document 1 is given as a document in which the art related to this invention is disclosed.
  • Patent Document 1 discloses a server machine configured to transmit, when transmitting first encoded image data to a client terminal and then transmitting second encoded image data having higher image quality than the first encoded image data to the client terminal, a piece of image data corresponding to a part that differs from the image data constituting the first encoded image data, among the plurality of pieces of image data constituting the second encoded image data.
  • This invention has been made in view of the above-mentioned circumstances, and an object of this invention is to transmit image information via a packet communication network without causing a delay and as higher-quality data, in response to temporal variations in the traffic of the packet communication network.
  • a packet communication system including: a first node; and a second node, the first node including: packet generation means for encoding image information to be transmitted to generate a plurality of packets P 1 , P 2 , ..., P m , the plurality of packets P 1 , P 2 , ..., P m each corresponding to the image information and having data amounts q 1 , q 2 , ..., q m , respectively, that satisfy a relationship of q 1 < q 2 < ... < q m , where m is a natural number of 2 or more; and packet transmission means for transmitting the plurality of packets P 1 , P 2 , ..., P m to the second node, which is different from the first node, via a packet communication network, the second node including: delay time measurement means for measuring delay times t 1 , t 2 , ..., t m of the plurality of packets P 1 , P 2 , ..., P m , respectively; and packet selection means for selecting any one of the plurality of packets P 1 , P 2 , ..., P m based on the delay times t 1 , t 2 , ..., t m .
  • a packet communication device including: packet reception means for receiving, via a packet communication network, a plurality of packets P 1 , P 2 , ..., P m generated by encoding image information to be transmitted, the plurality of packets P 1 , P 2 , ..., P m each corresponding to the image information and having data amounts q 1 , q 2 , ..., q m , respectively, that satisfy a relationship of q 1 < q 2 < ... < q m , where m is a natural number of 2 or more; delay time measurement means for measuring delay times t 1 , t 2 , ..., t m of the plurality of packets P 1 , P 2 , ..., P m , respectively; and packet selection means for selecting any one of the plurality of packets P 1 , P 2 , ..., P m based on the delay times t 1 , t 2 , ..., t m .
  • a packet communication device including: packet generation means for encoding image information to be transmitted to generate a plurality of packets P 1 , P 2 , ..., P m , the plurality of packets P 1 , P 2 , ..., P m each corresponding to the image information and having data amounts q 1 , q 2 , ..., q m , respectively, that satisfy a relationship of q 1 < q 2 < ... < q m , where m is a natural number of 2 or more; and packet transmission means for transmitting the plurality of packets P 1 , P 2 , ..., P m to a destination packet communication device, which is different from the packet communication device, via a packet communication network.
  • the destination packet communication device is configured to: measure delay times t 1 , t 2 , ..., t m of the plurality of packets P 1 , P 2 , ..., P m , respectively; and select any one of the plurality of packets P 1 , P 2 , ..., P m based on the delay times t 1 , t 2 , ..., t m .
  • a method of transmitting image information including, when transmitting image information from a first node to a second node via a packet communication network: a packet generation step of encoding, by the first node, image information to be transmitted to generate a plurality of packets P 1 , P 2 , ..., P m , the plurality of packets P 1 , P 2 , ..., P m each corresponding to the image information and having data amounts q 1 , q 2 , ..., q m , respectively, that satisfy a relationship of q 1 < q 2 < ... < q m , where m is a natural number of 2 or more; a packet transmission step of transmitting the plurality of packets P 1 , P 2 , ..., P m from the first node to the second node via the packet communication network; a delay time measurement step of measuring, by the second node, delay times t 1 , t 2 , ..., t m of the plurality of packets P 1 , P 2 , ..., P m , respectively; and a packet selection step of selecting, by the second node, any one of the plurality of packets P 1 , P 2 , ..., P m based on the delay times t 1 , t 2 , ..., t m .
  • the node on the transmission side transmits the one piece of image information as the plurality of packets having the data amounts that are different from one another, and the node on the reception side selects the packet having the largest data amount from among the packets that have been received without a delay or within the allowable delay time and decodes the image information of the selected packet. Accordingly, it is possible to transmit the image information at a higher bit rate within such a range as to enable the transmission without a delay under the congestion situation of the packet communication network at a given time.
  • the image information transmission system 1 includes a transmission node 2 and a reception node 3.
  • the transmission node 2 is a packet communication device for encoding and packetizing image information X 4 input thereto and transmitting the resultant image information to the reception node 3 via a packet communication network.
  • the transmission node 2 is preferably a wireless communication device for performing packet data communication, such as a mobile phone terminal, but may also be a server machine or a client device installed on a network such as the Internet.
  • the transmission node 2 includes an encoder 5, a variable-length packet generation unit 6, and a packet transmission unit 7.
  • the encoder 5 encodes the image information X 4, and in encoding the image information X 4, generates a plurality of pieces of data d 1 , d 2 , ..., d m (where m is a natural number of 2 or more) corresponding to one piece of image information X 4.
  • the encoder 5 generates the pieces of data d 1 , d 2 , ..., d m so that their respective data amounts q 1 , q 2 , ..., q m satisfy a relationship of q 1 < q 2 < ... < q m .
  • the encoder 5 encodes the image information X 4 at bit rates of 128 kbps, 256 kbps, 512 kbps, and 1 Mbps to generate pieces of data d 1 , d 2 , ..., d 4 , respectively.
  • the variable-length packet generation unit 6 generates variable-length packets each having a packet length corresponding to the data amount.
  • the variable-length packet generation unit 6 generates packets P 1 , P 2 , ..., P m corresponding to the pieces of data d 1 , d 2 , ..., d m , respectively.
  • the generated packets are variable-length packets, and hence a magnitude relation among data amounts of the packets P 1 , P 2 , ..., P m inherits a magnitude relation among the pieces of data d 1 , d 2 , ..., d m as it is.
  • the packet transmission unit 7 transmits the packets P 1 , P 2 , ..., P m to the packet communication network in this stated order.
  • the packet transmission unit 7 transmits a packet set 8 that corresponds to the image information X 4 and includes m packets whose data amounts are different from one another to the reception node 3 in ascending order of the data amounts.
  • An order relation of the transmitted packets is illustrated as the packet set 8.
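  • As an illustration of the transmission-side behavior described above, the following is a minimal Python sketch: one piece of image information is encoded at several bit rates, wrapped into variable-length packets that carry a transmission timestamp, and sent in ascending order of data amount over UDP/IP (one of the protocols named later in this description). The function encode_at() and the header format are hypothetical stand-ins, not part of the patent.
```python
import json
import socket
import time

BIT_RATES_KBPS = [128, 256, 512, 1024]  # example rates given for the encoder 5

def encode_at(image: bytes, kbps: int) -> bytes:
    """Hypothetical encoder stand-in: truncates the raw bytes to roughly one
    frame's worth at the given bit rate (30 fps assumed), so that the resulting
    data amounts satisfy q1 < q2 < ... < qm."""
    return image[: kbps * 1000 // 8 // 30]

def send_packet_set(image: bytes, dest=("127.0.0.1", 5004)) -> None:
    """Send one packet set: m variable-length packets, smallest first."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP/IP
    for seq, kbps in enumerate(BIT_RATES_KBPS, start=1):     # ascending data amounts
        payload = encode_at(image, kbps)
        header = json.dumps({"seq": seq, "kbps": kbps,
                             "sent_at": time.time()}).encode() + b"\n"
        sock.sendto(header + payload, dest)
        time.sleep(0.005)  # short interval between consecutive packets
    sock.close()

if __name__ == "__main__":
    send_packet_set(bytes(2_000_000))  # dummy stand-in for image information X
```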
  • The reception node 3 is preferably a server machine or a client device installed on a network such as the Internet. Alternatively, the reception node 3 may be a wireless communication device for performing packet data communication, such as a mobile phone terminal.
  • a delay time measurement unit 10 measures a delay time for each packet.
  • the packet transmission unit 7 transmits the packets P 1 , P 2 , ..., P m in this stated order, and hence the packet reception unit 8 basically receives the packets P 1 , P 2 , ..., P m in this stated order.
  • a packet selection unit 11 selects and outputs a packet having the largest data amount from among the packets each having an allowable delay time based on the delay times t 1 , t 2 , ..., t m and the data amounts of the corresponding packets.
  • the delay time on the network of the packet having a smaller data amount is conceivably shorter, and in contrast, the delay time on the network of the packet having a larger data amount is conceivably longer.
  • the packet selection unit 11 sequentially determines the delay times of the packets P 1 , P 2 , ..., P m , which have been received in this stated order, and when determining that the delay time of a given packet exceeds an allowable range, selects a packet received immediately before the given packet. In this case, packets received afterwards may be discarded without being subjected to the determination based on their delay times.
  • the packet selection unit 11 selects the packet P 2 , which has been received immediately before the packet P 3 .
  • the delay time of the packet having a smaller data amount is conceivably shorter. It is thus conceivable that unless a traffic congestion situation suddenly changes, the fact that, within the packet set corresponding to the image information X 4, the packets P 1 and P 2 received earlier are not detected to be significantly delayed and the packet P 3 is detected to be significantly delayed means that the packets P 4 , P 5 , ..., P m to be received afterwards are significantly delayed.
  • the determination based on the delay time may be omitted for the packet P 4 and packets to be received afterwards, or instead, the packets themselves may be discarded.
  • the packets P 1 , P 2 , ..., P m are transmitted in ascending order of their data amounts, and hence the packet received immediately before the packet determined as being significantly delayed has the largest data amount among the packets that have been received with a small delay.
  • both of the packets P 1 and P 2 received before the packet P 3 have arrived at the reception node 3 without being significantly delayed, and the packet P 2 , which has been received immediately before the packet P 3 , has the largest data amount between the packets P 1 and P 2 .
  • a decoder 12 decodes data stored in the selected packet and outputs image information X' 13. With this, as compared with a case where only the packet generated at a single data rate is transmitted, the reception node 3 can decode the image information X' 13 based on the data encoded at a larger data rate that is determined depending on the congestion situation of the packet communication network.
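  • The packet selection of the reception node can be sketched as follows, as a counterpart to the earlier transmission sketch. The allowable delay value is an assumption (the description does not fix one), and the delay computation assumes that the clocks of the two nodes are synchronized.
```python
import json
import socket
import time

ALLOWABLE_DELAY_S = 0.15   # assumed allowable delay; the description does not fix a value

def receive_and_select(listen=("0.0.0.0", 5004), expected=4):
    """Receive the packet set sent by the transmission sketch and return the
    packet with the largest data amount among those not significantly delayed."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(listen)
    sock.settimeout(1.0)
    selected = None
    for _ in range(expected):
        try:
            datagram, _addr = sock.recvfrom(65535)
        except socket.timeout:
            break                     # a missing packet is treated like an excessive delay
        header_line, payload = datagram.split(b"\n", 1)
        header = json.loads(header_line)
        # Delay t_i of this packet; assumes sender and receiver clocks are synchronized.
        delay = time.time() - header["sent_at"]
        if delay > ALLOWABLE_DELAY_S:
            break                     # this packet and all later ones are discarded
        selected = (header, payload)  # largest data amount received without delay so far
    sock.close()
    return selected                   # handed to the decoder 12 in the real system

if __name__ == "__main__":":"
    print(receive_and_select())
```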
  • the reception node 3 may transfer the packet selected by the packet selection unit 11 to another packet communication device via a packet transmission unit 14.
  • a third node is a general packet communication device here. More specifically, the third node is preferably a wireless communication device for performing packet data communication, such as a mobile phone terminal, but may also be a server machine or a client device installed on the network such as the Internet. The third node does not need to select the packet unlike the second node, and decodes the received packet as it is.
  • Fig. 2 illustrates an example in which a mobile network 150 is used as the network in the remote mobile communication system 100. Further, Fig. 2 illustrates a configuration adopted in a case where an SGSN/GGSN device is used as a packet transfer device.
  • the SGSN/GGSN device herein refers to a device formed by integrating a serving GPRS support node (SGSN) device and a gateway GPRS support node (GGSN) device.
  • Fig. 2 illustrates as an example a configuration in which a server machine 110 of a thin client is disposed in a cloud network 130 and the cloud network 130 and the mobile network 150 are connected to each other.
  • an end user connects the portable terminal 170 to a virtual client of the server machine 110 disposed in the cloud network 130 to operate the virtual client as if operating an actual terminal.
  • a packet storing an operation signal is transmitted from client software installed in the portable terminal 170 to the server machine 110 via a base station 194, an RNC device 195, and an SGSN/GGSN device 190 on the mobile network 150.
  • the operation signal herein refers to a signal transmitted from the client software of the portable terminal 170 to the server machine 110 through operations performed on the portable terminal 170, such as a key operation, a touch operation on a screen, a character input, and scrolling.
  • the operation signal packet is transmitted from a packet transmission unit of the client software installed in the portable terminal 170, and arrives at the server machine 110 on the cloud network 130 via the base station 194, the RNC device 195, and the SGSN/GGSN device 190 on the mobile network 150, and the server machine 110 receives the operation signal.
  • A well-known protocol can be used here as the protocol for transmitting the operation signal; it is assumed here that TCP/IP and HTTP, which is a higher-layer protocol above TCP/IP, are used. Note that Session Initiation Protocol (SIP) or the like may also be used instead of HTTP.
  • Fig. 3 is a block diagram illustrating a configuration of the server machine 110.
  • An operation signal packet reception unit 182 receives the packet storing the operation signal from the client software of the portable terminal 170 via the base station 194, the RNC device 195, and the SGSN/GGSN device 190.
  • the operation signal packet reception unit 182 extracts the operation signal from the received operation signal TCP/IP packet and outputs the extracted operation signal to a virtual client unit 211.
  • the virtual client unit 211 includes application software capable of providing various services, a control unit, a screen generation unit, a cache memory, and others. Further, the virtual client unit 211 has such a configuration that the application software can be updated with ease from the outside of the server machine 110. Note that the virtual client unit builds a virtualized environment on a host OS, runs a guest OS in the built virtualized environment, and runs the virtual client on the guest OS (this structure is not shown in Fig. 3). Arbitrary OSes can be used here as the host OS and the guest OS.
  • the virtual client unit 211 analyzes the operation signal input from the operation signal packet reception unit 182 and activates the application software designated by the operation signal. A screen created by the application software is generated at a predetermined screen resolution and the generated screen is output to a screen capturing unit 180.
  • the screen capturing unit 180 captures and outputs the screen at a predetermined screen resolution and a predetermined frame rate.
  • The entire screen may be compressed and encoded by one image encoder, or the screen may be divided into a plurality of (for example, two) regions and each of the regions may be compressed and encoded by a different image encoder. Described below is an example in which the screen is divided into two types of regions and different image encoders are used for the respective types of regions. It is assumed here that, as an example, the regions include a video region and other regions.
  • a division unit 184 divides the captured screen into a plurality of blocks each having a predetermined size. It is assumed here that the size of each block is, for example, 16 pixels x 16 lines, but another size such as 8 pixels x 8 lines may also be used. When a smaller block size is used, an accuracy of discrimination by a discrimination unit is enhanced, but a processing amount of the discrimination unit increases.
  • the division unit 184 outputs the blocks obtained by division to a discrimination unit 185.
  • Fig. 4 illustrates a configuration of the discrimination unit 185.
  • the discrimination unit 185 discriminates between two types of regions of the screen. In this case, those two types include a video region and the other regions. Further, it is assumed that a motion vector is used as an image feature amount to be used by the discrimination unit.
  • a motion vector calculation unit 201 calculates, for each block, such a motion vector Vk(dx, dy) as to minimize Dk of the following Expression 1, for example.
  • D_k = \sum_i \sum_j \left| f_{n,k}(X_i, Y_j) - f_{n-1,k}(X_i + d_x, Y_j + d_y) \right|   (Expression 1)
  • f_{n,k}(X_i, Y_j) and f_{n-1,k}(X_i, Y_j) represent the pixels included in the k-th block of the n-th frame and the pixels included in the k-th block of the (n-1)th frame, respectively.
  • the motion vector calculation unit 201 next calculates, for each block, the magnitude and direction of the motion vector in accordance with the following Expression 2 and Expression 3, respectively.
  • Vk represents the magnitude of the motion vector in the k-th block
  • ⁇ k represents the angle (direction) of the motion vector in the k-th block.
  • a region discrimination unit 202 retrieves Vk and ⁇ k for a plurality of consecutive blocks, and when the values of Vk exceed a predetermined threshold value and the values of ⁇ k vary in the plurality of consecutive blocks, determines those blocks as the video regions. It is assumed here that a first region means the video region.
  • When the values of Vk exceed the threshold value but the values of θk do not vary, the region discrimination unit 202 does not determine those blocks to be the video region but determines them to be a movement region, which is caused by screen scrolling or the like.
  • the region discrimination unit 202 outputs to an image encoding unit 186 of Fig. 3 a discrimination flag indicating whether or not there is a video region and a range of the region when there is a video region. It is assumed here that a region obtained by shaping the blocks into a rectangular region is used as the video region, and that the range of the region includes the number of pixels in a horizontal direction and the number of lines in a vertical direction of the rectangular region and the numbers and sizes of the blocks included in the region.
  • For the regions other than the video region, the region discrimination unit 202 further discriminates between, for example, the movement region and a still image region, and outputs a discrimination flag and the range of each region to the image encoding unit 186 of Fig. 3.
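  • A rough sketch of this block-based discrimination is given below. Expression 1 is implemented as an exhaustive block-matching search; since Expressions 2 and 3 are not reproduced in this text, the usual Euclidean magnitude and atan2 angle are assumed, and the search range, thresholds, and "varies" criterion are illustrative values, not taken from the patent.
```python
import numpy as np

BLOCK = 16          # block size from the description (16 pixels x 16 lines)
SEARCH = 4          # assumed motion search range in pixels
V_THRESHOLD = 2.0   # assumed threshold on the motion-vector magnitude

def motion_vector(cur_blk, prev, y0, x0):
    """Exhaustive search for (dx, dy) minimizing the sum of absolute differences
    between the current block and the shifted previous frame (Expression 1)."""
    best, best_cost = (0, 0), np.inf
    h, w = prev.shape
    for dy in range(-SEARCH, SEARCH + 1):
        for dx in range(-SEARCH, SEARCH + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + BLOCK > h or x + BLOCK > w:
                continue
            cost = np.abs(cur_blk - prev[y:y + BLOCK, x:x + BLOCK]).sum()
            if cost < best_cost:
                best_cost, best = cost, (dx, dy)
    return best

def discriminate(cur, prev):
    """Return (mask of moving blocks, label): large motion with varied directions
    is treated as the video region, uniform motion as a movement region."""
    rows, cols = cur.shape[0] // BLOCK, cur.shape[1] // BLOCK
    mags = np.zeros((rows, cols))
    angs = np.zeros((rows, cols))
    for by in range(rows):
        for bx in range(cols):
            y0, x0 = by * BLOCK, bx * BLOCK
            dx, dy = motion_vector(cur[y0:y0 + BLOCK, x0:x0 + BLOCK], prev, y0, x0)
            mags[by, bx] = np.hypot(dx, dy)      # magnitude |V_k| (assumed form of Expression 2)
            angs[by, bx] = np.arctan2(dy, dx)    # angle theta_k (assumed form of Expression 3)
    moving = mags > V_THRESHOLD
    varied = bool(moving.any()) and np.std(angs[moving]) > 0.5   # assumed "varies" criterion
    label = "video region" if varied else "movement region (e.g. scrolling)"
    return moving, label

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prev = rng.integers(0, 256, (64, 64)).astype(np.float64)
    cur = np.roll(prev, shift=(2, 3), axis=(0, 1))   # synthetic uniform motion
    mask, label = discriminate(cur, prev)
    print(mask.sum(), "blocks classified as", label)
```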
  • A reduction processing unit 225 and a second image encoder 228 input the captured image from the screen capturing unit 180, the size of each block for division from the division unit 184, and the discrimination flag, the range of the video region, and the ranges of the other regions (for example, the movement region and the still image region) from the discrimination unit 185.
  • the reduction processing unit 225 determines whether or not the number of pixels in the horizontal direction and the number of lines in the vertical direction, that is, the size of the video region exceeds a predetermined size. It is assumed here that the predetermined size is, for example, the QVGA size.
  • When the size of the video region exceeds the predetermined size, the reduction processing unit 225 applies an image reduction filter to reduce the image included in the video region so that the video region has the QVGA size, and outputs the reduced image to a first image encoder 227. In this case, because the reduction processing unit 225 has reduced the size of the first region, it outputs the size before the reduction to the first image encoder 227 as the range of the video region.
  • When the size of the video region does not exceed the predetermined size, the reduction processing unit 225 outputs the image of the video region to the first image encoder 227 as it is, without applying the image reduction filter, and also outputs the size of the video region to the first image encoder 227 as it is.
  • the first image encoder 227 inputs the image signal of the video region and uses a predetermined video encoder to compress and encode the image signal into bit streams having a plurality of bit rates and outputs the bit streams having the plurality of bit rates to a first packet transmission unit 176 of Fig. 3 .
  • the following configuration may be adopted for the selection of the plurality of bit rates, for example.
  • the plurality of bit rates is selected from among predetermined bit rates based on information such as the image size of the terminal or the type of network to be used, or the above-mentioned information is received from the terminal at the time of initiating a session and is used for the selection. It is assumed here as an example that four types of bit rates are used.
  • the first image encoder 227 further outputs information on the video region to the first packet transmission unit 176 of Fig. 3 .
  • the second image encoder 228 inputs information on the other regions, and in a case of a still image, uses a still image codec to compress and encode the image at a plurality of bit rates and outputs bit streams having the plurality of bit rates to a first packet transmission unit 176 of Fig. 3 .
  • A wavelet encoder or JPEG 2000 is used as the still image codec, but another well-known codec such as JPEG may also be used. Note that, when the wavelet encoder or the JPEG 2000 encoder is used, the characteristics of the wavelet transform used in those encoders can be exploited: with respect to the coefficients obtained after the wavelet transform, as shown in the corresponding figure, compressed and encoded bit streams B1, B2, B3, and B4 are acquired from the four types of sub-band regions LL, LH, HL, and HH, respectively, in a range of from a low frequency to a high frequency.
  • bit streams of B1, B1+B2, B1+B2+B3, and B1+B2+B3+B4 may be output as the bit streams having four types of bit rates from the second image encoder 228. With this configuration, an image quality degradation can be made less conspicuous on the terminal.
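  • The layered bit-stream construction described above can be sketched as follows, with a one-level Haar transform standing in for the wavelet transform of the real still-image codec and zlib standing in for its entropy coding; both substitutions are assumptions made only for illustration.
```python
import zlib
import numpy as np

def haar_subbands(img: np.ndarray):
    """One-level 2D Haar transform; img must have even height and width."""
    a = img.astype(np.float64)
    lo = (a[0::2, :] + a[1::2, :]) / 2            # vertical low-pass
    hi = (a[0::2, :] - a[1::2, :]) / 2            # vertical high-pass
    LL = (lo[:, 0::2] + lo[:, 1::2]) / 2
    LH = (lo[:, 0::2] - lo[:, 1::2]) / 2
    HL = (hi[:, 0::2] + hi[:, 1::2]) / 2
    HH = (hi[:, 0::2] - hi[:, 1::2]) / 2
    return LL, LH, HL, HH                         # from low frequency to high frequency

def layered_bitstreams(img: np.ndarray):
    """Build B1, B1+B2, B1+B2+B3 and B1+B2+B3+B4 from the four sub-bands."""
    coded = [zlib.compress(band.astype(np.float32).tobytes())
             for band in haar_subbands(img)]      # B1..B4 (zlib stands in for the codec)
    streams, acc = [], b""
    for b in coded:
        acc += b
        streams.append(acc)
    return streams

if __name__ == "__main__":
    img = np.arange(64 * 64, dtype=np.float64).reshape(64, 64)
    for i, s in enumerate(layered_bitstreams(img), start=1):
        print(f"stream {i}: {len(s)} bytes")      # sizes grow with the bit rate
```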
  • the second image encoder 228 further outputs the information on the other regions as well to the first packet transmission unit 176 of Fig. 3 .
  • In the case of the movement region, the second image encoder 228 outputs, to the first packet transmission unit 176 of Fig. 3, the bit stream obtained by compressing and encoding the image before the movement with the still image codec, together with one representative motion vector.
  • the second image encoder 228 further outputs the information on the other regions as well to the first packet transmission unit 176 of Fig. 3 .
  • an audio encoding unit 187 of Fig. 3 inputs an audio signal accompanying the screen from the screen capturing unit 180, uses an audio encoder to compress and encode the audio signal, and outputs the resultant audio signal to a second packet transmission unit 177 of Fig. 3 .
  • MPEG-4 AAC is used as the audio encoder, but another well-known audio encoder may also be used.
  • the first packet transmission unit 176 inputs the region information from the first image encoder 227 and the second image encoder 228 of Fig. 5 , and in the case of the video region, the first packet transmission unit 176 inputs the compressed and encoded bit streams having the four types of bit rates from the first image encoder of Fig. 5 and forms four types of packets storing the corresponding bit streams.
  • the first packet transmission unit 176 stores the respective pieces of bit stream data in payloads of the packets of a predetermined protocol, arranges the four types of packets in a predetermined order within a predetermined time section, and consecutively transmits the four types of packets at short time intervals to the SGSN/GGSN device 190 of Fig. 2 .
  • It is assumed here that the predetermined order is ascending order of the bit rates; in the above-mentioned example of the bit rates, this is the order of 128 kbps, 256 kbps, 384 kbps, and 512 kbps.
  • In the case of the other regions, the first packet transmission unit 176 inputs the bit streams having the four types of bit rates from the second image encoder 228 of Fig. 5 and forms four types of packets. Specifically, the first packet transmission unit 176 stores the respective bit streams in payloads of the packets of a predetermined protocol, arranges the four types of packets in a predetermined order within a predetermined time section, and consecutively transmits the four types of packets at short time intervals to the SGSN/GGSN device 190 of Fig. 2. It is assumed here that the predetermined order is, again, ascending order of the bit rates.
  • UDP/IP can be used as the predetermined protocol, for example.
  • a well-known protocol other than UDP/IP, such as RTP/UDP/IP, may also be used.
  • a time section of from several tens of ms to 100 ms may be used as the predetermined time section.
  • a time interval of from several ms to several tens of ms may be used as the short time interval.
  • the region information may be stored in an RTP header or a UDP header, or in the payload.
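  • One possible packet layout consistent with the above, assuming a plain UDP/IP transport with the region information carried at the head of the payload, is sketched below; the field names and sizes are illustrative and not taken from the patent.
```python
import struct
import time

# Illustrative header: sequence number (1..4), bit rate in kbps, send time in microseconds.
HEADER = struct.Struct("!BHQ")
# Illustrative region information: pixels in the horizontal direction, lines in the vertical direction.
REGION = struct.Struct("!HH")

def build_packet(seq: int, kbps: int, region: tuple, bitstream: bytes) -> bytes:
    """Assemble one datagram payload: header, region information, then the bit stream."""
    header = HEADER.pack(seq, kbps, int(time.time() * 1_000_000))
    return header + REGION.pack(*region) + bitstream

def parse_packet(datagram: bytes):
    """Reverse of build_packet, as the terminal-side reception unit would do."""
    seq, kbps, sent_us = HEADER.unpack_from(datagram, 0)
    w, h = REGION.unpack_from(datagram, HEADER.size)
    bitstream = datagram[HEADER.size + REGION.size:]
    return {"seq": seq, "kbps": kbps, "sent_us": sent_us, "region": (w, h)}, bitstream

if __name__ == "__main__":
    pkt = build_packet(1, 128, (320, 240), b"\x00" * 100)
    print(parse_packet(pkt)[0])
```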
  • the second packet transmission unit 177 stores the compressed and encoded bit stream obtained by compressing and encoding the audio signal in the payload of the packet, forms the packet of a predetermined protocol, and outputs the packet to the SGSN/GGSN device 190.
  • a well-known protocol such as RTP/UDP/IP, UDP/IP, or TCP/IP is used as the predetermined protocol, but it is assumed here that UDP/IP is used as an example.
  • the SGSN/GGSN device 190 transfers the packet received from the server machine 110 to the RNC device 195 by tunneling under the GTP-U protocol.
  • the RNC device 195 wirelessly transmits the packet to the portable terminal 170 via the base station 194.
  • client software 171 is installed in the portable terminal 170.
  • the client software 171 is for transmitting to the server the operation signal issued when the user operates the terminal and for receiving the packet from the server and decoding the compressed and encoded stream for display.
  • Fig. 7 illustrates a configuration of the client software 171.
  • Fig. 8 illustrates a configuration of a first packet reception/delay measurement/selection unit 250 of Fig. 7 .
  • a packet reception unit 270 receives a plurality of consecutive packets for each of the video region and the other regions.
  • the packet reception unit 270 extracts, in the case of the video region, the information on the video region stored in the four types of consecutive packets, which have been received in ascending order of the bit rates, reception time information R(j), and transmission time information S(j) (1 ≤ j ≤ 4), extracts the bit stream information from the payloads of the four packets, and outputs those extracted pieces of information to a delay measurement unit 271_1.
  • the delay measurement unit 271_1 uses S(j) and R(j) of each packet to calculate, for each of the four packets, a delay time D(j) in accordance with the following Expression 4.
  • D(j) = R(j) - S(j)   (Expression 4)
  • D(j) represents a delay time of a j-th packet.
  • the delay measurement unit 271_1 outputs to a selection unit 272_1 the calculated delay times D(j), the extracted four types of bit streams, and the information on the video region.
  • the selection unit 272_1 compares the values of D(j) with one another, and selects the bit stream stored in the packet that has been received immediately before the delay time D(j) suddenly increases.
  • For example, assume that D(1) = 100 ms, D(2) = 120 ms, D(3) = 118 ms, and D(4) = 250 ms.
  • In this example, the delay time that suddenly increases is D(4), corresponding to the fourth packet, and the third packet is the packet that has been received immediately before the delay time suddenly increases.
  • the selection unit 272_1 thus selects the bit stream stored in the payload of the third packet, that is, the packet having the bit rate of 384 kbps.
  • the selection unit 272_1 then outputs the selected bit stream and the information on the video region to a first image decoder 252 of Fig. 7 .
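  • The selection rule just described can be sketched as follows using the example delays above. The description gives no numeric criterion for a "sudden" increase, so a jump threshold relative to the running average of the earlier delays is assumed here.
```python
JUMP_FACTOR = 1.5   # assumed: a delay above 1.5x the average of the earlier ones is "sudden"

def select_index(delays_ms):
    """Return the 0-based index of the packet whose bit stream should be used,
    i.e. the last packet before the first sudden increase in delay."""
    chosen = None
    for j, d in enumerate(delays_ms):
        if j > 0 and d > JUMP_FACTOR * (sum(delays_ms[:j]) / j):
            break                     # sudden increase: keep the previous packet
        chosen = j
    return chosen

if __name__ == "__main__":
    delays = [100, 120, 118, 250]            # D(1)..D(4) from the example above, in ms
    rates_kbps = [128, 256, 384, 512]
    idx = select_index(delays)
    print(f"selected packet {idx + 1} ({rates_kbps[idx]} kbps)")   # packet 3, 384 kbps
```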
  • the packet reception unit 270 receives the plurality of consecutive packets for the other regions.
  • the packet reception unit 270 extracts, in the case of the other regions, the information on the other regions stored in the four types of consecutive packets, which have been received in ascending order of the bit rates, reception time information R'(m), and transmission time information S'(m) (1 ≤ m ≤ 4), extracts the bit stream information from the payloads of the four packets, and outputs those extracted pieces of information to a delay measurement unit 271_2.
  • the delay measurement unit 271_2 uses S'(m) and R'(m) of each packet to calculate, for each of the four packets, a delay time D'(m) in accordance with the following Expression 5.
  • D'(m) = R'(m) - S'(m)   (Expression 5)
  • D'(m) represents a delay time of an m-th packet.
  • the delay measurement unit 271_2 outputs to a selection unit 272_2 the calculated delay times D'(m), the extracted four types of bit streams, and the information on the other regions.
  • the selection unit 272_2 compares the values of D'(m) with one another, and selects the bit stream stored in the packet that has been received immediately before the delay time D'(m) suddenly increases. The selection unit 272_2 then outputs the selected bit stream and the information on the other regions to a second image decoder 253 of Fig. 7.
  • the first image decoder 252 inputs the information on the video region and the bit stream having the bit rate selected by the first packet reception/delay measurement/selection unit 250, decodes the bit stream, and outputs the decoded bit stream to an enlargement processing unit 254.
  • the first image decoder 252 further outputs the information on the video region as well to the enlargement processing unit 254.
  • An H.264 decoder is used here as the first image decoder 252, but another well-known image decoder such as an H.264 SVC decoder, an MPEG-4 SVC decoder, or an MPEG-4 decoder may also be used.
  • The decoder to be used is of the same type as the first image encoder 227 of the server.
  • the enlargement processing unit 254 inputs the image signal obtained after decoding and the information on the video region.
  • the enlargement processing unit 254 first uses the image signal after the decoding to calculate the size of the region of the image signal after the decoding (hereinafter referred to as "A"), and compares A with the size of the video region based on the information on the video region (hereinafter referred to as "B").
  • When A is different from B, the enlargement processing unit 254 performs enlargement processing on the decoded image signal by well-known filter calculation so that A matches B, and outputs the image signal enlarged to the size B to a screen display unit 256.
  • When A matches B, the enlargement processing unit 254 skips the enlargement processing and outputs the decoded image signal to the screen display unit 256 as it is.
  • the enlargement processing unit 254 further outputs the information on the video region to the screen display unit 256.
  • the second image decoder 253 inputs the information on the other regions and the bit stream selected by the first packet reception/delay measurement/selection unit 250, decodes the bit streams relating to the other regions, and outputs the decoded bit streams to the screen display unit 256.
  • the second image decoder 253 further outputs the information on the other regions to the screen display unit 256.
  • the screen display unit 256 inputs the information on the video region and the image signal of the video region from the enlargement processing unit 254, and inputs the information on the other regions and the image signal of the other regions from the second image decoder 253.
  • the screen display unit 256 uses the information on the first region to display the image output from the enlargement processing unit 254 in the first region, and uses the information on the other regions to display the images output from the second image decoder 253 in the other regions.
  • the screen display unit 256 generates a display screen by combining the image signals of the respective regions in this manner, and outputs the generated display screen.
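  • A minimal sketch of this display-side processing follows: nearest-neighbour scaling stands in for the "well-known filter calculation", and the screen size, region positions, and sizes are illustrative assumptions.
```python
import numpy as np

def enlarge_if_needed(decoded: np.ndarray, target_hw) -> np.ndarray:
    """Enlarge the decoded image (size A) to the region size B when they differ;
    nearest-neighbour indexing stands in for the filter calculation."""
    if decoded.shape == tuple(target_hw):
        return decoded                               # A matches B: pass through
    th, tw = target_hw
    ys = np.arange(th) * decoded.shape[0] // th
    xs = np.arange(tw) * decoded.shape[1] // tw
    return decoded[np.ix_(ys, xs)]

def compose_screen(screen_hw, video_img, video_pos, other_img, other_pos):
    """Paste the video region and the other regions into one display screen."""
    screen = np.zeros(screen_hw, dtype=video_img.dtype)
    for img, (y, x) in ((video_img, video_pos), (other_img, other_pos)):
        screen[y:y + img.shape[0], x:x + img.shape[1]] = img
    return screen

if __name__ == "__main__":
    decoded_video = np.ones((240, 320))                      # QVGA-sized decoder output (A)
    video = enlarge_if_needed(decoded_video, (480, 640))     # region information says B is larger
    others = np.full((120, 640), 0.5)                        # decoded image of the other regions
    screen = compose_screen((600, 640), video, (0, 0), others, (480, 0))
    print(screen.shape)
```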
  • a second packet reception unit 251 receives the packet, extracts the compressed and encoded bit stream relating to the audio data stored in the packet, and outputs the obtained bit stream to an audio decoder 255.
  • the audio decoder 255 inputs and decodes the compressed and encoded stream and outputs the decoded stream in synchronization with the image signals of the screen.
  • MPEG-4 AAC can be used as the audio decoder here, but another well-known audio decoder may also be used. It should be understood, however, that the audio decoder to be used is of the same type as the audio encoder of the server.
  • An operation signal generation unit 257 detects operations input to the portable terminal 170 by the user, such as screen touching, screen scrolling, icon touching, and a character input, generates the operation signal for each of the operations, and outputs the generated operation signal to a packet transmission unit 258.
  • the packet transmission unit 258 inputs the operation signal, stores the operation signal in a packet of a predetermined protocol, and transmits the packet to the network.
  • TCP/IP, UDP/IP, or the like can be used here as the predetermined protocol.
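  • As an illustration of the operation-signal path, the following sketch serialises a hypothetical touch event and sends it over HTTP on top of TCP/IP, as named earlier in the description; the URL path and JSON field names are assumptions, not taken from the patent.
```python
import http.client
import json

def send_operation_signal(host: str, port: int, signal: dict) -> None:
    """POST one operation signal from the client software toward the server machine."""
    body = json.dumps(signal)
    conn = http.client.HTTPConnection(host, port, timeout=2.0)
    try:
        conn.request("POST", "/operation", body=body,
                     headers={"Content-Type": "application/json"})
        print("server answered:", conn.getresponse().status)
    except OSError as exc:               # no server machine 110 is running in this sketch
        print("would have POSTed:", body, f"({exc})")
    finally:
        conn.close()

if __name__ == "__main__":
    send_operation_signal("127.0.0.1", 8080,
                          {"type": "touch", "x": 120, "y": 430})
```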
  • the following effect is achieved.
  • The bit streams encoded at the plurality of bit rates at which they may actually be transferred are packetized and transferred from the server.
  • The respective delay times of the packets received on the terminal side are calculated, and the bit stream stored in the packet whose delay time has not increased is selected and decoded for display. It is therefore possible to use the thin client without an increase in the delay time or freezing of the screen, even in a network having a narrow bandwidth or when the bandwidth of the network varies.
  • the types of regions of the screen among which the discrimination unit discriminates may be three or more. Further, an image feature amount other than the motion vector may also be used as the image feature amount to be used for the discrimination among the regions, or a plurality of types of image feature amounts may be combined for use.
  • the following configuration may also be adopted. Only one type of region is used and the division into the regions and the discrimination among the regions are not performed, and only one type of image encoder and only one type of image decoder are used. When only one type of encoder/decoder is used, a video encoder/decoder or a still image encoder/decoder may be used as the image encoder/decoder.
  • a mobile LTE/EPC network may also be used, or a WiMAX network or a Wi-Fi network may also be used.
  • a fixed network, an NGN, or the Internet may also be used. Note that, in those cases, the network is connected from a fixed terminal or a PC, instead of from the mobile terminal.
  • the server machine is disposed in the cloud network, but may also be disposed in the Internet. Further, when the server of the thin client is disposed in an enterprise, the server machine may also be disposed in an enterprise network. Further, as another configuration, when a telecommunications carrier itself disposes the thin client server, the server machine 110 may also be disposed in the mobile network 150, the fixed network, or the NGN.
  • a packet communication system including:
  • the packet transmission means transmits the plurality of packets P 1 , P 2 , ..., P m in ascending order of the data amounts
  • the packet selection means determines, every time each of the plurality of packets is received, whether or not the each of the plurality of packets is valid based on the delay time of the each of the plurality of packets, and when determining that the each of the plurality of packets is invalid, selects one of the plurality of packets that has been received immediately before the each of the plurality of packets.
  • a system in which the system divides one image into a plurality of image regions and transmits one of the plurality of image regions as the image information.
  • a system classifies each of the plurality of image regions into any one of a plurality of types of image regions based on an image feature amount relating to the each of the plurality of image regions, and transmits one of the plurality of image regions that has been classified into a predetermined type of image region as the image information.
  • a packet communication device including:
  • a packet communication device in which the packet reception means receives the plurality of packets P 1 , P 2 , ..., P m in ascending order of the data amounts, and in which the packet selection means determines, every time each of the plurality of packets is received, whether or not the each of the plurality of packets is valid based on the delay time of the each of the plurality of packets, and when determining that the each of the plurality of packets is invalid, selects one of the plurality of packets that has been received immediately before the each of the plurality of packets.
  • a packet communication device in which the packet communication device divides one image into a plurality of image regions and transmits one of the plurality of image regions as the image information.
  • a packet communication device classifies each of the plurality of image regions into any one of a plurality of types of image regions based on an image feature amount relating to the each of the plurality of image regions, and transmits one of the plurality of image regions that has been classified into a predetermined type of image region as the image information.
  • a packet communication device including:
  • a packet communication device in which the packet transmission means transmits the plurality of packets P 1 , P 2 , ..., P m in ascending order of the data amounts, and in which the destination packet communication device determines, every time each of the plurality of packets is received, whether or not the each of the plurality of packets is valid based on the delay time of the each of the plurality of packets, and when determining that the each of the plurality of packets is invalid, selects one of the plurality of packets that has been received immediately before the each of the plurality of packets.
  • a packet communication device in which the packet communication device divides one image into a plurality of image regions and transmits one of the plurality of image regions as the image information.
  • a packet communication device classifies each of the plurality of image regions into any one of a plurality of types of image regions based on an image feature amount relating to the each of the plurality of image regions, and transmits one of the plurality of image regions that has been classified into a predetermined type of image region as the image information.
  • the packet reception means receives the plurality of packets P 1 , P 2 , ..., P m in ascending order of the data amounts
  • the packet selection means determines, every time each of the plurality of packets is received, whether or not the each of the plurality of packets is valid based on the delay time of the each of the plurality of packets, and when determining that the each of the plurality of packets is invalid, selects one of the plurality of packets that has been received immediately before the each of the plurality of packets.
  • a program according to Supplementary Note 15 in which the program classifies each of the plurality of image regions into any one of a plurality of types of image regions based on an image feature amount relating to the each of the plurality of image regions, and transmits one of the plurality of image regions that has been classified into a predetermined type of image region as the image information.
  • the packet transmission means transmits the plurality of packets P 1 , P 2 , ..., P m in ascending order of the data amounts
  • the destination packet communication device determines, every time each of the plurality of packets is received, whether or not the each of the plurality of packets is valid based on the delay time of the each of the plurality of packets, and when determining that the each of the plurality of packets is invalid, selects one of the plurality of packets that has been received immediately before the each of the plurality of packets.
  • a program according to Supplementary Note 19 in which the program classifies each of the plurality of image regions into any one of a plurality of types of image regions based on an image feature amount relating to the each of the plurality of image regions, and transmits one of the plurality of image regions that has been classified into a predetermined type of image region as the image information.
  • a method of transmitting image information including, when transmitting image information from a first node to a second node via a packet communication network:
  • the packet transmission step includes transmitting the plurality of packets P 1 , P 2 , ..., P m in ascending order of the data amounts
  • the packet selection step includes determining, every time each of the plurality of packets is received, whether or not the each of the plurality of packets is valid based on the delay time of the each of the plurality of packets, and when determining that the each of the plurality of packets is invalid, selecting one of the plurality of packets that has been received immediately before the each of the plurality of packets.
  • a method further including classifying each of the plurality of image regions into any one of a plurality of types of image regions based on an image feature amount relating to the each of the plurality of image regions, and transmitting one of the plurality of image regions that has been classified into a predetermined type of image region as the image information.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Environmental & Geological Engineering (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Information Transfer Between Computers (AREA)
EP13841733.2A 2012-09-27 2013-09-04 Procédé de transmission d'informations d'image et système de communication de paquet Withdrawn EP2903223A4 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012214170 2012-09-27
PCT/JP2013/074443 WO2014050547A1 (fr) 2012-09-27 2013-09-04 Procédé de transmission d'informations d'image et système de communication de paquet

Publications (2)

Publication Number Publication Date
EP2903223A1 true EP2903223A1 (fr) 2015-08-05
EP2903223A4 EP2903223A4 (fr) 2016-05-04

Family

ID=50387954

Family Applications (1)

Application Number Title Priority Date Filing Date
EP13841733.2A Withdrawn EP2903223A4 (fr) 2012-09-27 2013-09-04 Procédé de transmission d'informations d'image et système de communication de paquet

Country Status (7)

Country Link
US (1) US20150256443A1 (fr)
EP (1) EP2903223A4 (fr)
JP (1) JP5854247B2 (fr)
KR (1) KR101640851B1 (fr)
CN (1) CN104685840A (fr)
TW (1) TWI566550B (fr)
WO (1) WO2014050547A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107079044A (zh) * 2014-09-25 2017-08-18 交互数字专利控股公司 用于内容感知缓存的过程和用于多点协作传输的无线电资源管理
EP3235259A4 (fr) * 2014-12-16 2018-05-23 Hewlett-Packard Development Company, L.P. Traitement de données dans un terminal client léger
KR102594608B1 (ko) * 2016-12-06 2023-10-26 주식회사 알티캐스트 하이브리드 유저 인터페이스 제공 시스템 및 그 방법
JP6725733B2 (ja) * 2018-07-31 2020-07-22 ソニーセミコンダクタソリューションズ株式会社 固体撮像装置および電子機器

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH03240981A (ja) * 1990-02-15 1991-10-28 Nisshin Steel Co Ltd 鋼板変色防止剤
US6804403B1 (en) * 1998-07-15 2004-10-12 Digital Accelerator Corporation Region-based scalable image coding
US7327761B2 (en) * 2000-02-03 2008-02-05 Bandwiz Inc. Data streaming
US7539130B2 (en) * 2000-03-28 2009-05-26 Nokia Corporation Method and system for transmitting and receiving packets
JP4480119B2 (ja) * 2000-03-30 2010-06-16 キヤノン株式会社 画像処理装置及び画像処理方法
US7068683B1 (en) * 2000-10-25 2006-06-27 Qualcomm, Incorporated Method and apparatus for high rate packet data and low delay data transmissions
JP2002135322A (ja) * 2000-10-30 2002-05-10 Nec Corp 音声符号化自動選択器
EP1359722A1 (fr) * 2002-03-27 2003-11-05 BRITISH TELECOMMUNICATIONS public limited company Système et procédé pour transmettre des flux de données
US7668968B1 (en) * 2002-12-03 2010-02-23 Global Ip Solutions, Inc. Closed-loop voice-over-internet-protocol (VOIP) with sender-controlled bandwidth adjustments prior to onset of packet losses
US20040240390A1 (en) * 2003-05-30 2004-12-02 Vidiator Enterprises Inc. Method and apparatus for dynamic bandwidth adaptation
US7054774B2 (en) * 2003-06-27 2006-05-30 Microsoft Corporation Midstream determination of varying bandwidth availability
US8503538B2 (en) * 2004-01-28 2013-08-06 Nec Corporation Method, apparatus, system, and program for content encoding, content distribution, and content reception
JP2008099261A (ja) * 2006-09-12 2008-04-24 Yamaha Corp 通信装置およびプログラム
WO2010112074A1 (fr) * 2009-04-02 2010-10-07 Nokia Siemens Networks Oy Procédé et dispositif de traitement de données dans un réseau de communications
JP5538792B2 (ja) * 2009-09-24 2014-07-02 キヤノン株式会社 画像処理装置、その制御方法、及びプログラム
JP5685913B2 (ja) * 2009-12-11 2015-03-18 日本電気株式会社 ネットワーク帯域計測システム、ネットワーク帯域計測方法およびプログラム
JP5434570B2 (ja) * 2009-12-22 2014-03-05 日本電気株式会社 ストリーム配信装置
JP2011193357A (ja) 2010-03-16 2011-09-29 Nec Personal Products Co Ltd サーバ装置、クライアント端末、通信システム、制御方法及びプログラム
WO2011132783A1 (fr) * 2010-04-23 2011-10-27 日本電気株式会社 Système de mesure de largeur de bande utile, dispositif émetteur, procédé de mesure de largeur de bande utile, et support d'enregistrement
CN101888544B (zh) * 2010-06-30 2012-05-30 杭州海康威视数字技术股份有限公司 一种低带宽的视频数据传输方法和硬盘录像机
KR20120058763A (ko) * 2010-11-30 2012-06-08 삼성전자주식회사 영상 장치에서 영상 데이터를 송신하기 위한 장치 및 방법
WO2014050546A1 (fr) * 2012-09-27 2014-04-03 日本電気株式会社 Procédé de transmission d'informations audio et système de communication de paquet

Also Published As

Publication number Publication date
JP5854247B2 (ja) 2016-02-09
KR101640851B1 (ko) 2016-07-19
JPWO2014050547A1 (ja) 2016-08-22
EP2903223A4 (fr) 2016-05-04
WO2014050547A1 (fr) 2014-04-03
KR20150040942A (ko) 2015-04-15
TWI566550B (zh) 2017-01-11
US20150256443A1 (en) 2015-09-10
CN104685840A (zh) 2015-06-03
TW201424300A (zh) 2014-06-16

Similar Documents

Publication Publication Date Title
KR101667970B1 (ko) 서버 장치, 단말, 씬 클라이언트 시스템, 화면 송신 방법 및 프로그램
EP2879337B1 (fr) Système de communication, procédé et programme
WO2011142311A1 (fr) Système de communication mobile à distance, dispositif de serveur et procédé de commande de système de communication mobile à distance
EP2903223A1 (fr) Procédé de transmission d'informations d'image et système de communication de paquet
KR20150034778A (ko) 리모트 통신 시스템, 서버 장치, 리모트 통신 방법, 및 프로그램
EP2903224B1 (fr) Procédé de transmission d'informations audio et système de communication de paquet
JP5505507B2 (ja) サーバ装置、映像品質計測システム、映像品質計測方法およびプログラム
EP2882164A1 (fr) Système de communication, appareil de serveur, procédé de commande d'appareil de serveur et programme
CN117411998B (zh) 基于网络传输的监控系统、方法、设备和介质
WO2014057809A1 (fr) Système et méthode de transmission de vidéo de mouvement
WO2014042137A1 (fr) Système et procédé de communication, et dispositif de serveur et terminal
CN117560522A (zh) 码率确定方法、装置、设备及存储介质
CN117440155A (zh) 虚拟桌面图像的编码方法、装置及相关设备

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20150428

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAX Request for extension of the european patent (deleted)
RA4 Supplementary search report drawn up and despatched (corrected)

Effective date: 20160406

RIC1 Information provided on ipc code assigned before grant

Ipc: H04L 12/70 20130101ALI20160331BHEP

Ipc: H04L 12/811 20130101AFI20160331BHEP

Ipc: H04L 12/26 20060101ALI20160331BHEP

17Q First examination report despatched

Effective date: 20170607

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

INTG Intention to grant announced

Effective date: 20191001

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20200212